The Ghost of Search Engine’s Past
As the owner of several web sites, I’ve had to learn, unlearn, and teach numerous people how to let go of the “common knowledge” of search engine optimization. There is more information on the web than ever before regarding search engines, and while some of it is pure gold, most of it is simply fodder to feed unwarranted superstition.
Too much (search engine) information
The web contains several generations of documents, ranging from the beginnings of the web in 1995 or so up to the present day. The Internet has changed so much over the last decade that many web pages created just a few years ago have little relevance in today’s world, especially when it comes to search engines.
In the early days, it was easy to trick search engines into siphoning off some quality search engine traffic, and many people did. This led to bad search results and unpleasant surprises when, for example, searchers found themselves in the red light district of the web after doing a search for t-shirts.
Search engines vs. spam
The search engines quickly took to the task of battling spam, and the spam-filled search engines of yesteryear were gradually replaced with the robust engines that produce fairly decent results today – and they’re only getting better.
Information on the web has a tendency to spread extremely fast – especially when it is helpful or outrageous – so the older a piece of information is, the more likely you are to run into it, simply because it has had more time to spread around.
Information about search engines is no exception, meaning that most of the documents you will find on the web regarding search engine optimization are outdated and may not be as relevant as they once were. For many, it is difficult to sift through all of this material and isolate what is actually useful, which is one of the reasons these lessons were written.
The Truth About META Tags
Most search engine optimization tips are not based in actual experience
Search engines purposefully make it difficult for the average web site owner to predict search results, and they make it even more difficult to deduce how particular changes to a site will affect its ranking on search engine result pages (or SERPs, as they are commonly known). While many find this frustrating, search engines only do this to protect their systems from abuse. Search engine indexes change, algorithms are tweaked, and explanations offered by search engine companies on how their engines work are typically vague and brief.
Because of this lack of definitive knowledge, tips and suggestions get passed around the web ad infinitum until ideas and suspicions come to be considered concrete truths, even though most are not based on actual experience. These suggestions may make sense, but not all of them are true.
Unlearn this: Meta tags matter
For those just getting started, meta tags are short descriptive tags that appear in the head section of a typical web page.
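A typical set of meta tags might look like this – a minimal sketch, where the site name and all of the content values are invented for illustration:

```html
<head>
  <title>Acme Dog Toys</title>
  <!-- "keywords": a comma-separated list of terms describing the page
       (this hypothetical site and its terms are made up for this example) -->
  <meta name="keywords" content="dog toys, chew toys, pet supplies">
  <!-- "description": a short human-readable summary of the page;
       some engines display this text on their result pages -->
  <meta name="description" content="Durable chew toys and supplies for dogs of all sizes.">
</head>
```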
The purpose of meta tags is to give search engines and other indexing tools a clear snapshot of what a web page is about without having to search through the actual page for data. There are numerous kinds of meta tags, but the most commonly used are the “keywords” and “description” tags, as shown above.
In the early days of the web, search engines used meta tags to determine the content of a web page. They were quick to process and easy to store in a database. The problem search engines eventually ran into was that meta tags are invisible to the average web surfer, which meant a site could easily put up a false front. A search for “dog toys” might land you at a lingerie site rife with scantily clad humans and a distinct absence of dogs.
While there are still meaningful uses for meta tags, search engines have on the whole stopped using them for indexing purposes. Many will display the “description” meta tag on a SERP, but it will have little or no effect on your search engine rankings.
Is Search Engine Submission Enough?
Unlearn this: Submitting a site to a search engine will get you indexed
Search engines rely on recommendations from other sites to determine whether a site is worth showing on their SERPs. On the web, recommendations come in the form of hyperlinks. If you submit your site to a search engine and you don’t have any “inbound links” (other web sites linking to yours), you might not be indexed at all, or it might take quite a while before the search engine robots make their way over to your site.
The key here is getting inbound links. Once you have links from some well-positioned sites, the search engines will find you on their own.
Unlearn this: Using search engine submission software will get you ranked higher
Most search engine submission software works under the premise that submitting your web site to a search engine often will get it indexed more regularly and/or ranked higher.
Once your site is indexed, however, submitting it again to search engines will have little effect. Once you’ve been indexed, the search engine spiders will come back to your site on a regular basis to get a feel for how often your content changes. Eventually, they will determine a visitation schedule based on when they think your content might change again.
In this lesson, we covered some of the most common misunderstandings about how search engines work, including the current ineffectiveness of meta tags, the necessity of inbound links to get your site indexed, and the myth of search engine submission software.
Once you understand that most of what you read about search engine optimization is based on speculation or outdated standards, you will be way ahead of the game. In some cases, taking the wrong advice can get you in trouble and cause your site to be dropped from search engines altogether.