Why .edu links shouldn’t be (and aren’t) given bonuses

December 8th, 2006 by Chris

Many people believe .edu links carry special bonuses. Fewer believe it now that Google has flat-out said they don’t, but some still do.

Check out these Google results (though hurry, I did fill out a spam report so they may be removed). Notice that 4 of the top 5 results are spam, notice they are on .edu domains, and notice that it is some of the worst and most blatant spam I’ve ever seen at the top of a SERP. Notice as well that the spam actually spans 3 different .edu domains. If .edu domains had arbitrary blanket bonuses, these pages would benefit. Now why would Google do that?

Valueclick Acquires Shopping.net

December 4th, 2006 by Chris

This’ll be a quick post.

It was announced today that Valueclick would acquire Shopping.net for $13.3 million. Valueclick estimates Shopping.net will contribute $2 million in profit to its business.

Two things I want to mention here are:

1. So much for those people trying to value large established sites at something ridiculous like 12 months of profit. This valuation works out to more than six years of profit ($13.3 million / $2 million per year ≈ 6.65 years).

2. An ad network buying a network of content sites? Makes sense from the business perspective, and many ad networks grew out of large content networks, but I can’t remember any acquisitions like this. The ad network still gets to serve the ads, only now they keep all of the revenue. Can anyone remember any similar acquisition? I imagine there will be more.

Amazon Beta Content Links Program

November 29th, 2006 by Chris

Amazon is launching another beta content links program. I say another because they have already been experimenting with contextual text links through their successful (I assume; I can’t see how it couldn’t be successful) Omakase program.

Dear Amazon Associate:

Are you looking for additional revenue streams to monetize your content? Are you interested in providing relevant information to your site visitors at the very instant they are looking for it? If so, you should participate in the exclusive Context Links beta program!

The Context Links tool automatically identifies and links contextually relevant phrases within your content to Amazon products. When a user hovers over a Context Link they will see a preview window that displays a summary of the product. We believe this product will unlock new ad inventory for you by identifying linking opportunities that you previously had not identified, while also allowing you to control the number of links on each page. In addition, Context Links can save you the time needed to manually create text links within your content.

Before we make Context Links available to the public, we would like to pilot this innovation with a small group of hand-selected Associates. We would like you to be part of that group. As a beta participant you have complete control on which pages you want to include Context Links. To participate you will be asked to:

1) Agree to the terms and conditions applicable to the Associates Context Links Beta Program
2) Obtain the required code snippet from Associates Central and deploy it on pages where you want Context Links to appear
3) Provide feedback via email
4) Complete an evaluation survey

Your referral fees from Context Links will be handled in the same way as in your current agreement. In addition, if you fully participate and provide us feedback, we will send you a $50 gift certificate to spend at Amazon.com at the end of the beta program as a show of our thanks for your participation.

Our beta is scheduled to begin now, and space is limited. So, if you’d like to participate please visit Associates Central, read the agreement, and sign up. After you sign up you will be able to view the Context Links configuration page, where you can choose simple options and obtain the code to add to your Web pages.

Thank you for your continued support of the Amazon Associates Program, and we look forward to your participation!

I haven’t decided yet if I will use this program. It is unclear, as usual, whether it would be against the AdSense TOS to run this on the same page as AdSense, and I certainly cannot stop using AdSense as it makes me too much money. All in all this sounds very similar to Kontera.com’s program, which I have never been much of a fan of, as it always linked unrelated words in my tests. Actually, it sounds most like this program is a hybrid of Chitika and Kontera.

Niche Web Acquisitions

November 25th, 2006 by Chris

If you haven’t heard of Demand Media, it is a new company by Richard Rosenblatt (former CEO of Intermix/MySpace) that more or less aims to create hundreds or thousands of niche content websites based mostly around a user community.

Sound familiar? It should; it is what I, and many of the readers of this blog, already do (albeit without hundreds of millions in VC funding).

You can check out this article for more or read a profile in the most recent issue of Business 2.0.

The reason I’m blogging about this is, well, that with this company planning to acquire many existing niche content sites to build its network, there is a good chance that one or more readers of this blog could end up being bought, especially those with large forums. What’s more, where there is one company aiming to buy niche content sites, there are bound to be more as people play copycat and catch-up.

It could be a good next couple of years for those of us with niche properties.

Top 5 Worst SEO Mistakes

November 23rd, 2006 by Chris

SitePoint published a pretty crappy article yesterday about the worst SEO mistakes. Not only does it mention some things that will not hurt you, it also fails to mention the most common and most hurtful SEO mistakes people regularly make. I wonder how many months the author has actually been doing SEO.

So here is my list:

1. Inaccessible Navigation.
The #1 mistake webmasters make is to render their navigation inaccessible to search engines, whether through Flash, JavaScript, or just plain bad coding. If a search engine cannot use your navigation it cannot crawl your site, period. Any page on your site without an external incoming link will not be crawled; even if you have 1,000 pages on your site, likely only the homepage will be indexed, and your traffic potential will be only a minute fraction of what it could be. (See the sketch after this list.)

2. Non-unique Title Tags.
The #2 mistake webmasters make is to repeat the same title tag on every page of their site. Some software, whether content management systems or shopping carts, does this out of the box. The title tag is the single most important on-page element for search engine optimization, and it needs to be uniquely tailored to each page of content; failure to do this is a huge hindrance to your efforts. If your software does this you either need to hack the software or get different software. (See the sketch after this list.)

3. Putting Session IDs in the URL
Search engines cannot crawl sites that append session IDs to the URL. While the crawl happens these IDs continually change, which changes the URLs and sends the crawler through ever more duplicate pages in a never-ending loop. As such, when encountering a session ID, or something that looks like one, most search engines stop crawling. Depending on your backend code this can also be a problem when a page with a session ID in its URL does get indexed: someone following that link from a search engine can be assigned that stale session, which was never theirs. (See the sketch after this list.)

4. Mucking up your Robots.txt File
Having a robots.txt file is a good idea; otherwise your error log will be full of requests for it. However, the most common cause of a “ban” or “penalty” from a search engine is really just a person putting the wrong thing in their robots.txt file and accidentally banning the search engine (not the other way around). With Google Sitemaps or Yahoo Site Explorer a webmaster can verify their site is crawlable and that robots.txt is not interfering, among other things. These services are so vital and helpful that not using them could itself be called a major SEO mistake. (See the example after this list.)

5. Using Meaningless Identifiers in the Anchor Text of Internal or Incoming Links
Incoming links are extremely important, and the anchor text inside them is doubly so. One major mistake people make is to use meaningless anchor text in their internal or incoming links; “Next Page” or “Click Here” do absolutely nothing to help you. Never use such anchor text within your own site unless you really do not care how the linked-to page ranks. And while you cannot control how others link to you, if they ask for your input, always ask for something more descriptive that uses your keywords. (See the example after this list.)
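
To make item 1 concrete, here is a minimal sketch of the difference; the URLs and markup are placeholders:

    <!-- Crawlable: plain anchors a spider can follow -->
    <ul id="nav">
      <li><a href="/articles/">Articles</a></li>
      <li><a href="/forums/">Forums</a></li>
    </ul>

    <!-- Not crawlable: the destination URL exists only inside JavaScript -->
    <a href="#" onclick="window.location='/articles/'; return false;">Articles</a>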
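
For item 2, a minimal sketch of generating a unique title per page in PHP; $page and the site name are assumed placeholders from your own code:

    <?php
    // Build a unique, page-specific title instead of repeating one sitewide title.
    $siteName = 'Example Site'; // placeholder
    $title    = htmlspecialchars($page['title']) . ' - ' . $siteName;
    ?>
    <title><?php echo $title; ?></title>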
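
For item 3, if you are on PHP, two settings keep session IDs out of your URLs entirely (set them in php.ini or before session_start()); visitors without cookies, including spiders, then simply browse without a session ID appearing in the URL:

    <?php
    // Never pass the session ID in the URL; rely on cookies only.
    ini_set('session.use_only_cookies', 1);
    // Turn off PHP's automatic rewriting of links to include the session ID.
    ini_set('session.use_trans_sid', 0);
    session_start();
    ?>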
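
For item 4, the difference between a harmless robots.txt and a self-inflicted ban can be a single character:

    # Safe default: allows every crawler to crawl everything
    User-agent: *
    Disallow:

    # One stray slash later: every crawler is banned from the entire site
    User-agent: *
    Disallow: /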
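
For item 5, compare the two links below (the URL is a placeholder); only the second tells the engine anything about the page being linked to:

    <!-- Meaningless anchor text -->
    <a href="/ipod-speakers/">Click here</a>

    <!-- Descriptive anchor text using the target page's keywords -->
    <a href="/ipod-speakers/">iPod speakers</a>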

There, those are what I consider the 5 most common and most hurtful SEO mistakes. Sure, things like leaving out an image’s alt attribute are bad, but not as bad as any of the above, and if you are worrying at that level you should also remember other accessibility details like title attributes on anchor tags. Really, at the level of importance of alt attributes there are dozens and dozens of things you could do wrong.

PageRank: An Essay

November 17th, 2006 by Chris

It is my firm belief that most people, even many who would call themselves search engine optimizers, do not “get” PageRank. They know it is about your incoming links, but they do not understand what made it so revolutionary or why it is so useful. It is my opinion that this lack of understanding or perspective probably stems from a lack of experience, and of course some old-fashioned lack of higher brain function. You see, 2003 was not the start of the Internet, and in this industry people have a habit of thinking that the day they got into SEO is the day SEO started. So they forget, or ignore, all that came before.

I love PageRank for its elegance, its power, and, most of all, for saving the Internet. Yes, I do believe PageRank saved the Internet. Before Brin & Page there was crap, a lot of crap. Search engines relied entirely on on-page features to rank a page, and they were not very good at weeding out garbage or finding realistic language patterns. In short, if you just repeated your keyword enough times you could rank well for it.

The solution of course was to design an algorithm that could judge the quality of a web page. Quality, though, is subjective; you would have as much luck asking a computer to pick a favorite color as you would asking it for an opinion on a web page’s usefulness. What Brin & Page hit on was that they didn’t need to give a computer the ability to have an opinion; all they needed was a way to poll real humans on a massive scale, and that is what an incoming link represents. They theorized that if someone linked to your page they must be recommending it, and furthermore that the text they used to create the link must have something to do with your site. Thus was born the use of off-page factors for ranking websites in search engines.
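
For reference, this is the formula as published in Brin & Page’s original paper, where T1 through Tn are the pages linking to page A, C(T) is the number of outgoing links on page T, and d is a damping factor usually set to 0.85:

    PR(A) = (1 - d) + d * ( PR(T1)/C(T1) + ... + PR(Tn)/C(Tn) )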

PageRank became all the rage; people believed it capable of many things, many of them outside its scope. At the height of the PageRank pandemonium, on December 24th, 2002, I published an article on SitePoint.com called “10 Google Myths Revealed.” This was one of the most controversial articles I’ve ever written, and many people disputed the claims I made within it. The article was basic in some regards; rather than go into technical specifics I made up a few terms, such as “specific PageRank,” to illustrate the concept of incoming links needing a contextual basis to help your site. Mostly, however, it has proven to be spot on, and it showed PageRank hysteria for what it was while not diminishing the actual role PageRank played, and still plays, in search engines.

It has amazed me, though, that despite the article being nearly 4 years old now, many people still believe in the myths I wrote about. Take, for instance, the issue of special bonuses for Yahoo & DMOZ listings. That myth was a very old one, and since then it has expanded to include links from .edu, .gov, .mil, or whatever “special” sites the myth-mongers are fixated upon at the moment.

For instance, SEOMoz released a tool just this year that calculates an arbitrary score based on mostly arbitrary factors, and some of those factors are the so-called “special” links from “special” sources whose supposed bonuses I refuted back in 2002. In their defense they don’t actually say such things help, at least not within the tool itself, but many people believe they do and frequently use the tool as “proof” backing up their theories. The tool is neat, but it makes a few incorrect inferences that serve to propagate these theories.

This is where the issues I discussed in my first paragraph come in. People who have not been in this industry long lack the perspective to make sound judgments on new search engine theories, and other people just plain do not understand the purpose behind PageRank.

This fall Matt Cutts, a Google engineer who is more or less their contact to the webmaster industry, made a series of video blogs in which he explicitly confirmed what I had said in 2002: that so-called special bonuses for certain links do not exist. While this was news to many, it certainly wasn’t news to me.

I mentioned previously that I loved PageRank for its elegance, and here is where that elegance comes into play. What people do not realize is that PageRank is not just a measure of incoming links; it is a measure of usefulness. Web pages that are useful get more incoming links than web pages that are not useful. This can be, and has been, gamed by search engine optimizers for years through the buying, selling, and manipulation of links. In the end, however, no one can buy links from the entire Internet, and the good and useful sites always rise to the top given enough time.

Because PageRank is a measure of usefulness, it is unnecessary to add any other feature to a search engine’s algorithm as a representation of usefulness. This is where many people trip up: they think of what they find useful in a page and assume a search engine must also value those same things and so give a bonus for them. They fail to realize that if the feature they consider helpful, such as, say, outgoing links, truly adds to a web page’s usefulness, then that will be reflected in an increase in incoming links. The search engine has no need to guess whether feature X truly makes a page more useful, and no need to make general global assumptions either; it can fall back on PageRank.

The same goes for the passing of PageRank from one website to another: it is unnecessary to give a special bonus to certain types of sites such as .edu domains or major directories. PageRank passed is a direct result of PageRank obtained, and the more useful directories or .edu sites will have more PageRank and thus pass more of it on to those they link to, without the need for a special bonus. Meanwhile the less useful (or not at all useful) .edu pages, like a student’s personal web space, or directories, such as link farms, will not pass much PageRank because they are not useful and do not have much to pass. The system works; no special assumptive bonus is required.
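
To see how rank flows without any special bonuses, here is a toy sketch in PHP: a simplified power iteration over a made-up three-page link graph, not Google’s actual implementation.

    <?php
    // Hypothetical link graph: which pages each page links to.
    $links = array(
        'a' => array('b', 'c'),
        'b' => array('c'),
        'c' => array('a'),
    );
    $d  = 0.85;                                      // damping factor
    $pr = array('a' => 1.0, 'b' => 1.0, 'c' => 1.0); // start everyone equal

    for ($i = 0; $i < 50; $i++) {
        $next = array('a' => 1 - $d, 'b' => 1 - $d, 'c' => 1 - $d);
        foreach ($links as $page => $targets) {
            // Each page passes its rank to its targets, split evenly
            // among its outgoing links; it has nothing else to give.
            foreach ($targets as $t) {
                $next[$t] += $d * $pr[$page] / count($targets);
            }
        }
        $pr = $next;
    }
    print_r($pr); // pages with more (and better) incoming links end up higher
    ?>

A page can only pass on what it has accumulated, which is the whole point: no domain gets to hand out rank it never earned.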

The concept illustrated above, of PageRank as a single, all-encompassing quality measure, is elegant and powerful, and it is the reason I love it.

It is important in the study of search engines to realize one thing: relevancy and fairness are not the same. Many webmasters operate under the incorrect notion that search engines strive to treat all webmasters equally in some even-handed way. Search engines do not exist to police or arbitrate the webmaster population. Their goal is not a fair marketplace for webmaster competition; their goal (aside from making money, of course) is providing relevant results to user queries.

So webmasters have somewhat of a love-hate relationship with PageRank. There has been significant reaction to PageRank in recent years, really the epitome of a knee-jerk reaction. Many webmasters see it as unfair, favoring older established sites, and think that this unfairness is enough to justify a change. Others simply had a knee-jerk reaction to the PageRank mania and now say it is completely unimportant. This again is likely due to lack of experience.

PageRank was never, ever, ever the sole part of Google’s algorithm. It was never, ever, ever anything more than a quality metric. In the end you need to know two things to rank pages in search results, quality and topic, and PageRank has always been just half of that. So people who believed it was the whole shebang eventually realized it wasn’t, and instead of admitting they had been wrong before, decided that they weren’t wrong and Google must have made a change; and if Google made a change to downtweak PageRank’s importance, that must be indicative of an overall trend, with PageRank lessening in importance or no longer important at all. Yes, it’s true: people actually did, and still do, believe that.

In truth PageRank today is as it always has been: a representation of the combined weight of a web page’s incoming links. This value has only ever been an approximation of the quality of a web page and has never had anything to do with measuring the topical relevance of a web page. Topical relevance is measured with link context and on-page factors such as keyword density, the title tag, and everything else.

So do not disdain PageRank, and do not scorn it as yesterday’s news. It is still what it has always been; use it as such. Be thankful as well, for without this innovation the Internet would not be what it is today.

Also, when the next new, or even recycled, theory du jour is floated around, remember what I’ve said, and ask yourself whether such a thing would really add relevancy, or whether the PageRank algorithm, in its infinite elegance, already takes that feature into account indirectly through its appreciation of what an incoming link implies.

The Need for Speed

November 17th, 2006 by Chris

Never doubt the power of a quickly loading site.

My literature site is the 800-pound gorilla of my web empire. It is database heavy and very popular. I’ve gone through a myriad of ways to optimize the main part of the site. First I used phpCache for a number of years, but it wasn’t a perfect solution, as it still relied on a good deal of PHP code for every page view. Later I developed a custom system to literally write static HTML files for a few thousand of the most heavily hit pages on my site; you can read about it here.

You may think asking the web server to write so many files at once would bring it to its knees, but actually it isn’t a big deal. I write the files once every couple of days and that is that. This is an extremely easy caching system to install and I highly recommend it.
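
In case it helps, here is a minimal sketch of that kind of static-file caching; the page IDs, paths, and render_page.php script are placeholders, not my actual setup:

    <?php
    // Render each heavy page once and write the finished HTML to disk;
    // the web server then serves the .html file with no PHP or database work.
    $pages = array(101, 102, 103); // hypothetical list of heavily hit page IDs

    foreach ($pages as $id) {
        ob_start();
        include 'render_page.php'; // assumed script that outputs the page for $id
        file_put_contents("/var/www/cache/page-$id.html", ob_get_clean());
    }
    ?>

Run something like that from cron every couple of days and the job is done.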

However, while the main part of the site was loading fast, the forums were really really slow. There was just too much going on with 400-600 total people online at one time.

I did a variety of server tweaks on the advice of the people at the vbulletin.com support forums (an excellent place to find MySQL or Apache tweaks for making your vBulletin run faster), and it helped, but not enough.
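
For what it’s worth, these are the kinds of my.cnf values that get tuned in that situation; the numbers here are purely illustrative, not my actual settings:

    # my.cnf -- typical knobs for a busy forum database of that era
    key_buffer_size  = 256M   # MyISAM index cache
    query_cache_size = 64M    # cache for repeated SELECTs
    table_cache      = 1024   # keep frequently used tables open
    max_connections  = 400    # headroom for peak traffic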

So finally I bought a new server: I went from a P4 3.2 with HT to a dual dual-core Opteron 64. Finally, it loads fast.

The moral of this story is this. My forum was averaging around 800-1000 new registrations monthly. In October, the first full month with the new server, I passed 2,500. I should do so again this month, unless the Thanksgiving holiday puts too large a dent in my traffic. The typical number of guests online hasn’t changed, but obviously, with the slow page loads, people previously didn’t always have the patience to complete the registration process. Getting all those people to post and confirm their emails is another problem I have yet to tackle, but they are definitely registering in greater numbers.

Links for your Homepage or Subpage?

November 15th, 2006 by Chris

In reading forums today I came across a topic I do not often see discussed and I thought it would make a good blog post.

When doing link building do you work on links for your homepage alone, or your subpages? If you can get a link from one source, how do you decide which page you would like that link to point to?

Assuming you have a normal internal linking structure any incoming link will generate the same overall PR throughout your site no matter which page on your site it points to. So then, your concern is which keywords you are targeting.

If your main page targets the keywords “iPod accessories” and a subpage targets the keywords “iPod speakers” you need to decide which one you want to rank for. Consider the following:

1. Are you already #1 on either keyword (or as high as you think is possible with your competition)? If so, go with the other one.

2. Do you think you would make more money with a higher ranking on one keyword or the other?

3. What is the topic of the page you are getting the link from? If it substantially matches the topic of either page in question, have it link to the closest match.

Ideally you will perform SEO with the same vigor for all your subpages as you do for your homepage, but in cases where you need to pick which one to promote I believe the above items will help guide you to the correct decision.

Help! My Website is Gone from Google!

October 18th, 2006 by Chris

I sometimes am amazed at how easy some articles can be to write. I was reading a forum thread about yet another webmaster being dropped from Google, and as I started responding I realized there was so much I could say that I might as well write an article. So I sat down (well, technically I was already seated, but I like the phrase) and started writing, and an hour later I had 1800 words. When you’ve dealt with a topic so often it ends up being easy to write about.

So anyways, here is the article: Help! My Website is Gone from Google!. Be sure to link to it next time you encounter such a situation, links = good karma.

Doing Link Research

October 17th, 2006 by Chris

Researching links is an important activity that every webmaster must do. This encompasses not only researching who links to you, but also who links to your competitors, or where your competitors link, or even finding places where you can get new links.

Google was the first search engine to provide a link search that would let you learn who was linking to a page, and they included easy access to this search in their toolbar, which most webmasters use.

However, Google’s backlink search has never been entirely accurate; they purposely do not include all known links, only a sample. So when using it you aren’t getting the full picture. Yet I’m willing to bet that most webmasters out there still use Google for this type of research.

Instead I recommend trying Yahoo. Yahoo’s new Site Explorer search engine, which is in some ways a response to Google’s Sitemaps (now known as Google Webmaster Central), is a search engine just for webmasters, and it rocks. It includes great drill-down tools for researching links to your homepage or even to individual subsections of your site. Additionally, you can specify whether you want all links pointing to a URL, only links not from your own site, or only links from a specific site. For instance, if you want to know how many places a page is linked from Wikipedia or DMOZ, you can search and limit the results to those domains. What is also nice is that you can indicate whether you want results that link only to a specific URL, or to that URL plus all other pages deeper than it in your directory structure. This is great if you run a site where subsections or subpages garner a majority of your incoming links; now you can get a true full-site measure.
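
If you prefer the search box to the drill-down interface, the same research can be done with Yahoo’s operators (example.com is a placeholder):

    linkdomain:example.com -site:example.com    every external link to any page on the site
    link:http://example.com/page.html           links to one specific URL
    linkdomain:example.com site:wikipedia.org   links to the site coming from Wikipedia only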

Yahoo Site Explorer also has an easy way to add a feed like Google Sitemaps to make it easier for them to crawl your site.

Another neat new tool for link research is MSN’s LinkFromDomain operator, which tells you not what links point to a site, but where a site as a whole links to. Do you want to know where your competitors link to? Or even where you link to, if you don’t know? Maybe you run a large forum and you want to know what types of links your visitors are leaving. In any case there is a plethora of uses for this type of search, and I’m surprised it took this long for someone to create it.
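
For example (again with placeholder domains):

    linkfromdomain:competitor.com                    everything competitor.com links out to
    linkfromdomain:example.com site:wikipedia.org    pages on Wikipedia that example.com links to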

So if you’re still using Google for your link researching, broaden your horizons, there are better tools out there.
