From affiliate program best practices to search engine optimization, and all points in between, CAP Marketing Guru Bob Rains knows Internet marketing! Make sure to take advantage of Bob’s years of Internet experience right here at the CAP Learning Center.
Bob Rains Online Marketing Chat, March 24, 2009: SEO and Meta-Tag Strategies
I’ve received many different questions all touching on small items that I would lump into the category of on-page SEO. I figured it would be a good time to cover a bunch of on-page issues, and answer many of these questions in one fell swoop.
First, I want to get down to the reason for performing any form of on-page SEO in the first place. Search engine robots, or "crawlers", are the eyes and ears of search engines, and as shocking as it may sound, many people are not as aware of this fact as you might think. There seems to be a suspicion that Google is a group of evil scientists in long white lab coats with super X-ray vision, who can look not only deep into your website but also into your soul, to find your inner motivation for every line of code.
Needless to say, this is not the case.
Nor do search engine crawlers travel about the web at random. The paths they follow obey rules, and often schedules, handed down from defined control centers. In the past, the webmaster could play the part of control center: the main methods of giving instructions to search engine bots were meta tags and the robots.txt file.
Meta elements provide information about a web page that helps search engines find and categorize its content, or ignore it entirely.
The major search engines of the mid-1990s relied heavily on meta tags, and from what I hear, many online marketers think they still do. Back in the day, savvy web geeks realized that the search engine results pages (SERPs) could be easily manipulated for lucrative commercial purposes. This is why search engines don't pay as much attention to meta information as they once did.
But this doesn't mean you should ignore meta tags today. As a matter of fact, there are some very important ones you must pay attention to:
Title Tag.
The title tag (strictly speaking its own HTML element rather than a meta tag, but it belongs in this discussion) has been, and probably always will be, one of the most important on-page factors in achieving high search engine rankings.
In fact, fixing just the title tags of your pages can often produce quick and appreciable improvements in your rankings. And because the words in the title tag are what appear in the clickable link on the SERP, changing them may result in more click-throughs.
Title tags are definitely one of the "big three" as far as the algorithmic weight given to them by search engines; they are equally as important as your visible text copy and the links pointing to your pages, perhaps even more so. Whatever text you place in the title tag (between <title> and </title> in the head of the document) will appear in the title bar of someone's browser when they view the web page.
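To make that concrete, here is what a title tag might look like; the wording and domain are just placeholders:
Example: <title>Affiliate Marketing Tips | Example.com</title>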
Description Meta-Tag.
Example: <meta name="description" content="Yo! This is the description of the page, dawg">
Any search engine worth caring about supports the description meta tag. It lets you write a proper, human-readable summary of your page's content. This text is often used in the SERPs, so good descriptive text may increase the page's click-through rate. The W3C does not set a length limit for the description meta tag; however, most SEO experts advise using no more than 200 characters of plain text.
Robots Meta-Tag.
Example: <meta name="robots" content="noindex, nofollow">
This tag is used to tell bots to screw off: that a page shouldn't be indexed and that its links shouldn't be followed. Robotstxt.org reminds us that "robots can ignore your <META> tag, especially malware robots that scan the web for security vulnerabilities, and email address harvesters used by spammers will pay no attention"!
The "nofollow" directive only applies to the on-page links. This doesn’t mean that a robot might find and follow the same links on some other page without a "nofollow", and so arrive at the page that you don’t want them to find, or that you’re working on. For important pages, a better idea is exclusion with the help of robots.txt.
In addition to the "noindex" and "nofollow" values, "noarchive" and "nosnippet" are used to tell bots that a page should not be cached and that no description snippet should be shown in the SERPs.
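Following the same pattern, a page you want crawled but neither cached nor shown with a snippet might carry:
Example: <meta name="robots" content="noarchive, nosnippet">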
Do not confuse the robots meta tag with the rel="nofollow" link attribute (which is set on an HTML <a> link tag). This attribute was introduced by Google and later supported by the other search engines. It tells bots that PageRank should not be passed through the link. Thus it only affects ranking, and does not stop bots from following the link and indexing the page it points to.
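For reference, the attribute looks like this; the URL and anchor text are just placeholders:
Example: <a href="http://www.example.com/" rel="nofollow">some link</a>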
Speaking of link attributes, the new rel="canonical" should be mentioned. At the beginning of February, Google, Yahoo, and Microsoft announced support for this new link attribute. It was created to give webmasters more control over pages that have the same content.
Google gives the following example of the attribute usage on the Webmaster Central blog:
<link rel="canonical" href="http://www.example.com/product.php?item=fish"/>
All this was done to reduce the amount of duplicate content that gets indexed. For years people were confronted with serious www vs. non-www duplicate content issues, and now it's very easy to tell the search engines which version of the content you want indexed. To use the tag, simply place it in the head section of the duplicate content URLs. The tag can only be used on pages within a single site. The search engines recommend using absolute links, though relative links are also acceptable.
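So if, say, the fish product page above also existed at a session-ID URL (a hypothetical duplicate such as http://www.example.com/product.php?item=fish&sessionid=12345), the head section of that duplicate page would contain:
<link rel="canonical" href="http://www.example.com/product.php?item=fish"/>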
Sitemaps.
Sitemaps are the easiest way to inform search engines about website pages available for crawling and indexing. A Sitemap is a simple XML file that lists URLs for a site along with additional metadata about each URL (when it was last updated, how often it usually changes, and how important it is, relative to other URLs in the site) so that search engines can more intelligently crawl the site.
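To give you an idea, here is a minimal one-URL Sitemap; the URL and values are just placeholders:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2009-03-01</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>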
Using a Sitemap will not guarantee that web pages are included in the SERPs, but it will provide the bots with directions. You can find more information on sitemaps at Sitemaps.org.
Robots.txt.
There is one more way to instruct bots: the Robots Exclusion Protocol, also known as the /robots.txt file.
All major bots check what is written in www.example.com/robots.txt and if it says:
User-agent: *
Disallow:
… then nothing is disallowed, and the site may be crawled and indexed by each and every bot. You can read more about robots.txt at Robotstxt.org.
In regard to SEO, the robots.txt file is practically mandatory, as it helps steer bots toward the pages you want indexed. You can exclude pages that are available only to registered users, and you can exclude pages with duplicate content to prevent them from outranking the original pages.
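As an example, to keep all bots out of a members-only area (the directory name here is just a placeholder), your robots.txt could say:
User-agent: *
Disallow: /members/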
That should just about cover bots and tags for bots that matter. Sure, there are tons of other tags, and more ways to direct bots, but I don’t recommend you toy with other methods until you are 100 percent in line with the above.
Send Bob your questions! Email your online marketing questions to Bob Rains at expert@casinoaffiliateprograms.com today.