PageRank: An Essay | Website Publisher Blog

PageRank: An Essay

November 17th, 2006 by Chris

It is my firm belief that most people, even many who would call themselves Search Engine Optimizers, do not “get” PageRank. They know that it is about your incoming links, but they do not understand what made it so revolutionary, or why it is so useful. This lack of understanding or perspective probably stems from a lack of experience, and of course some old-fashioned lack of higher brain function. You see, 2003 was not the start of the Internet, and in this industry people have a habit of thinking that the day they got into SEO is the day SEO started. So they forget, or ignore, all that came before.

I love PageRank for its elegance, its power, and, most of all, for saving the Internet. Yes, I do believe PageRank saved the Internet. Before Brin & Page there was crap, a lot of crap. Search engines relied entirely on on-page features to rank a page, and they were not very good at weeding out garbage or recognizing realistic language patterns. In short, if you just repeated your keyword enough times you could rank well for it.

The solution, of course, was to design an algorithm that could judge the quality of a web page. Quality, though, is subjective; you would have as much luck asking a computer to pick a favorite color as asking it for an opinion on a web page’s usefulness. What Brin & Page hit on was that they didn’t need to give a computer the ability to have an opinion. All they needed was a way to poll real humans on a massive scale: thus the concept of the incoming link. They theorized that if someone linked to your page, they must be recommending it. Furthermore, the text they used to create the link must have something to do with your site. Thus was born the use of off-page factors for ranking websites in search engines.

PageRank became all the rage; people believed it capable of many things, many of them outside its scope. At the height of the PageRank pandemonium, on December 24th, 2002, I published an article on SitePoint.com called “10 Google Myths Revealed.” It was one of the most controversial articles I’ve ever written, and many people disputed the claims I made in it. The article was basic in some regards; rather than go into technical specifics, I made up a few terms, such as “specific PageRank,” to illustrate the concept that incoming links need a contextual basis to help your site. Mostly, however, it has proven to be spot on, and it showed PageRank hysteria for what it was without diminishing the actual role PageRank played, and still plays, in search engines.

It has amazed me, though, that despite the article being nearly four years old now, many people still believe in the myths I wrote about. For instance, the supposed special bonuses for Yahoo & DMOZ listings. This myth is a very old one, and it has since expanded to include links from .edu, .gov, .mil, or whatever “special” sites the myth-mongers are fixated upon at the moment.

For instance, just this year SEOMoz released a tool that calculates an arbitrary score based on mostly arbitrary factors, some of which are the so-called “special” links from “special” sources that I refuted back in 2002. In their defense, they don’t actually say such things help, at least not within the tool itself, but many people believe they do and frequently use the tool as “proof” backing up their theories. The tool is neat, but it makes a few incorrect inferences that serve to propagate these theories.

This is where the issues I discussed in my first paragraph come in. People who have not been in this industry long lack the perspective to make sound judgments on new search engine theories, and other people just plain do not understand the purpose behind PageRank.

This fall Matt Cutts, a Google engineer who is more or less their contact to the webmaster industry, made a series of video blogs in which he explicitly confirmed what I had said in 2002: that so-called special bonuses for certain links do not exist. While this was news to many, it certainly wasn’t news to me.

I mentioned previously that I love PageRank for its elegance, and here is where that elegance comes into play. What people do not realize is that PageRank is not just a measure of incoming links; it is a measure of usefulness. Web pages that are useful get more incoming links than web pages that are not. This can be, and has been, gamed by search engine optimizers for years through the buying, selling, and manipulation of links. In the end, though, no one can buy links from the entire Internet, and the good and useful sites always rise to the top given enough time.

Because PageRank is a measure of usefulness, it is unnecessary to add any other feature to a search engine’s algorithm to represent usefulness. This is where many people trip up: they think of what they find useful in a page and assume a search engine must value those same things and so give a bonus for them. They fail to realize that if a feature they consider helpful, such as outgoing links, truly adds to a web page’s usefulness, that will be reflected in an increase in incoming links. The search engine has no need to guess whether feature X truly makes a page more useful, and no need to make general global assumptions either; it can fall back on PageRank.

Likewise, with the passing of PageRank from one website to another, it is unnecessary to give a special bonus to certain types of sites, such as .edu domains or major directories. PageRank passed is a direct result of PageRank obtained: the more useful directories or .edu sites will have more PageRank and thus pass more of it on to those they link to, without the need for a special bonus. Meanwhile, the less useful .edu pages (like a student’s personal web space) or directories (such as link farms) will not pass much PageRank, because they are not useful and do not have much to pass. The system works; no special assumptive bonus is required.
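That passing of PageRank can be sketched numerically. Here is a toy implementation of the idea, nothing more; the link graph, the 0.85 damping factor, and the iteration count are illustrative assumptions on my part, not Google’s actual code. Each page splits its rank evenly among the pages it links to, so a well-linked directory naturally passes on more than a lone personal page:

```python
# Toy PageRank: each page divides its rank evenly among the pages it
# links to, so pages with more incoming rank pass more rank along.
# The damping factor and link graph below are illustrative only.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page starts each round with a small baseline rank.
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue  # dangling page: its rank is not redistributed
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

# A directory ("dir") that three pages link to accumulates rank and
# passes more of it to "site" than a lone personal page ("student") can.
graph = {
    "a": ["dir"], "b": ["dir"], "c": ["dir"],
    "dir": ["site"], "student": ["site"],
    "site": [],
}
ranks = pagerank(graph)
```

Run it and the well-linked directory ends up with far more rank to pass on than the unlinked personal page, with no special bonus coded in anywhere; exactly the point above.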

The concept illustrated above, PageRank as a single, all-encompassing quality modifier, is elegant and powerful, and it is the reason I love it.

It is important in the study of search engines to realize one thing: relevancy and fairness are not the same thing. Many webmasters operate under the incorrect notion that search engines strive to treat all webmasters equally in some even-handed, fair way. Search engines do not exist to police or arbitrate the webmaster population. Their goal is not a fair marketplace for webmaster competition; their goal (aside from making money, of course) is providing relevant results to user queries.

So webmasters have something of a love-hate relationship with PageRank. There has been a significant reaction, really the epitome of a knee-jerk reaction, to PageRank in recent years. Many webmasters see it as unfair, favoring older established sites, and think that this unfairness is enough to justify a change. Others simply reacted against the PageRank mania and now say it is completely unimportant. This, again, is likely due to lack of experience.

PageRank was never, ever, ever the sole part of Google’s algorithm. It was never, ever, ever anything more than a quality metric. In the end you need to know two things to rank pages in search results, quality and topic, and PageRank has always been just half of that. So the people who believed it was the whole shebang eventually realized it wasn’t, and instead of admitting they had been wrong, they decided that Google had made a change; and if Google made a change to downtweak PageRank’s importance, then that must indicate an overall trend, and PageRank must be lessening in importance or no longer important at all. Yes, it’s true, people actually did, and still do, believe that.

In truth, PageRank today is what it has always been: a representation of the total weight of a web page’s incoming links. This value has only ever been an approximation of the quality of a web page and has never had anything to do with measuring the topical relevance of a web page. Topical relevance is measured with link context and on-page factors such as keyword density, the title tag, and everything else.
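To make that quality-and-topic split concrete, here is a purely hypothetical sketch; the scoring functions, weights, and pages are my own inventions for illustration, not Google’s formula. The point is only that an on-page relevance score and a link-based quality score are separate inputs to the final ranking:

```python
# Hypothetical sketch: ranking = on-page relevance * link-based quality.
# Neither function reflects Google's real algorithm; they illustrate
# that topic (on-page) and quality (PageRank) are independent signals.

def relevance(query, page_text):
    """Crude topical score: fraction of query words present in the page."""
    words = query.lower().split()
    text = page_text.lower()
    return sum(w in text for w in words) / len(words)

def rank_pages(query, pages, pageranks):
    """pages: {url: text}; pageranks: {url: link-based quality score}."""
    scored = [(relevance(query, text) * pageranks[url], url)
              for url, text in pages.items()]
    return [url for score, url in sorted(scored, reverse=True)]

pages = {
    "stuffed.html": "widgets widgets widgets widgets widgets",
    "useful.html": "a practical guide to choosing widgets",
}
# The keyword-stuffed page matches the on-page factors just as well,
# but its lack of incoming links (low PageRank) sinks it.
order = rank_pages("widgets", pages,
                   {"stuffed.html": 0.01, "useful.html": 0.4})
```

Under this toy scoring the keyword-stuffed page loses despite a perfect on-page match, because its quality signal is separate and weak; that is the pre-PageRank spam problem solved in miniature.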

So do not disdain PageRank, do not scorn it as yesterday’s news. It is still what it has always been, use it as such. Be thankful as well, for without this innovation the Internet would not be what it is today.

Also, when the next new, or even recycled, theory du jour is floated around, remember what I’ve said, and ask yourself whether such a thing would really add relevancy, or whether the PageRank algorithm, in its infinite elegance, already takes that feature into account indirectly through its appreciation of the implications of an incoming link.


18 Responses to “PageRank: An Essay”

  1. Dreamsubmitting  Says:

    Very good writing.

  2. Chromate  Says:

    Glad you posted this. It really winds me up every time I read someone coming out with the whole “pagerank is dead” type post.

    However, I’m not so sure you’re absolutely correct about links from .edu / dmoz / yahoo not having any extra significance.

    One of Google’s top software engineers (working there since 2000) gave a talk at my brother’s university last week about the technology behind Google search and how they crack down on spam etc. He actually said that there are MANUALLY picked out sites such as .edu, .gov, bbc etc that he referred to as “trusted party sites” and that links from these manually picked out sites are more highly valued. How much more highly valued? I don’t know.

    Matt Cutts didn’t actually say that this wasn’t the case in that video. He actually said that they “don’t really have much in the way to say ‘Oh this is a link from the ODP, or .gov, or .edu, so give that some sort of special boost.’” Which isn’t the same as saying they don’t have ANYTHING to differentiate between a link from a .gov site and a normal link.

    Also, remember that Matt Cutts has such a strong webmaster readership now that he’s got to be pretty calculated in what he says. Read him like a politician.

    I’m trying to get my brother to make a full post about the talk, because the Google software engineer came out with some other interesting stuff too.

  3. Chris  Says:

    Manually adjusting results is a far cry from blanket bonuses which some people claim, and those manually chosen sites aren’t chosen based upon their TLD but by the institution behind them.

Plus, remember to take it with a grain of salt. There have been numerous instances where Google employees made off-the-cuff remarks that did not end up being right. Google is a huge organization, and it would be wrong to assume that everyone working there knows everything about what goes on there.

However, with Matt Cutts, his remarks are not off-the-cuff asides or anything like that. His are answers to direct questions, and more so, answers that he has had time to research and double-check before posting.

Finally, remember that a ranking algorithm is not the same thing as an anti-spam algorithm; if Google uses some things for anti-spam measures, that does not mean it uses those same things for rankings.

Not that I’m saying you’re wrong; there is no way to prove or disprove that they manually tinker with things. However, even if they did, it isn’t that relevant, because a manual change cannot possibly apply to more than a tiny fraction of the total pages listed on the Internet.

  4. Dave White  Says:

When it comes to Google and its algorithms, it is really difficult to know whether to believe Google reps or not. I don’t think they will talk in public about their important secrets.

  5. Dan  Says:

    Refreshing yet nostalgic morning read.

No blog entries for around a month, then three at once!

  6. Ken Barbalace  Says:

Another great article, Chris. Like you said in your reply above, there is no way to prove or disprove exactly how things work, but you make some very compelling points in your essay.

    Personally, I’m from the school of thought that says the more free back links one can get the better. Whether there is some manual or automated bonus for certain types of sites is irrelevant to my link building strategy.

  7. Hi-Tech  Says:

I personally prefer the Page Strength tool to PageRank, as it takes more factors into consideration than *just* Google’s perception of a site. However, until the webmaster community adopts PS over PR as a standard, we will be stuck with the PR yardstick.

  8. Chris  Says:

    Oh don’t get me started on that Page Strength tool. It is curious, and interesting, but completely arbitrary. Most of the factors are meaningless to actual SERP performance. Taking it too seriously is a bad thing.

    Also… I don’t know what you mean by “adopts.” What is there to adopt? Do you feel people currently measure their success by PageRank and that should change or something? I don’t know anyone who does. PageRank is a representation of the total weight of your incoming links, nothing more, nothing less. To use it as anything else is foolish.

  9. adsense4dummies  Says:

PageRank from a different perspective. It’s nice to see a bit of the background to PR instead of the usual “strategies” for increasing your value.

  10. Web Designer  Says:

    Excellent essay, Chris! It really helped me understand PageRank better and that the algorithm is actually a whole lot simpler than I used to think… Thanks for this!

  11. seo  Says:

    1000 Hard Links

  12. smallnice  Says:

    small nice baby, small nice funny pictures and video

  13. SEO  Says:

The only problem I have with PageRank is the time it takes to achieve. I could write one of the best articles on the Internet, and yet Google won’t give me credit for months. Hopefully Google Caffeine will change this.

  14. Douglas Adams  Says:

    “In the end you need to know two things to rank pages in search results, quality and topic, and PageRank has always been just half of that.”

I totally agree!

  15. crash zone  Says:

A very informative and great article about the PageRank criteria and its background. Google really uses very complicated algorithms for calculating PageRank, and who knows when the whole confusion over nofollow tags will be solved.

  16. Rob  Says:

Google interprets a link from page A to page B as a vote by page A for page B. Google looks not only at the sheer volume of votes; among 100 other aspects, it also analyzes the page that casts the vote. However, these aspects don’t count when PageRank is calculated.
PageRank is based on incoming links, but not just on the number of them; the quality of the linking pages matters too (in terms of the PageRank of the sites that link to a given site).
PR(A) = (1-d) + d(PR(t1)/C(t1) + … + PR(tn)/C(tn)). That’s the equation that calculates a page’s PageRank.
Not all links weigh the same when it comes to PR.

  17. Mitch  Says:

I guess PageRank is dynamic in nature. I have seen page ranks change; maybe backlinks determine how Google ranks a page. Any comments?

  18. edi  Says:

thanks
