Search Engine Optimization History


Webmasters and content providers began optimizing sites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all webmasters needed to do was to submit the URL of a page to the various engines, which would send a "spider" to "crawl" that page and return information found on the page to be indexed. The process involves a search engine spider downloading a page and storing it on the search engine's own server. A second program, known as an indexer, then extracts information about the page, such as the words it contains, where they are located, and any weight given to specific words, as well as all the links the page contains, which are then placed into a scheduler for crawling at a later date.
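The crawl-and-index loop described above can be sketched in a few lines of Python. This is only an illustration built on the standard library; the names (PageParser, crawl) are hypothetical and do not correspond to any real engine's implementation.

    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class PageParser(HTMLParser):
        """Collects words and outbound links from one HTML page."""
        def __init__(self, base_url):
            super().__init__()
            self.base_url = base_url
            self.words, self.links = [], []

        def handle_data(self, data):
            self.words.extend(data.split())

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(urljoin(self.base_url, value))

    def crawl(seed_urls, max_pages=10):
        index = {}                    # word -> list of (url, position) pairs
        scheduler = deque(seed_urls)  # links found on a page are queued for later crawling
        seen = set()
        while scheduler and len(seen) < max_pages:
            url = scheduler.popleft()
            if url in seen:
                continue
            seen.add(url)
            # The "spider" step: download the page and keep a copy.
            html = urlopen(url).read().decode("utf-8", errors="ignore")
            parser = PageParser(url)
            parser.feed(html)
            # The "indexer" step: record each word and where it occurs.
            for position, word in enumerate(parser.words):
                index.setdefault(word.lower(), []).append((url, position))
            # The "scheduler" step: extracted links are crawled later.
            scheduler.extend(parser.links)
        return index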

Site owners started to recognize the value of having their sites highly ranked and visible in search engine results, creating an opportunity for both white hat and black hat SEO practitioners. According to industry analyst Danny Sullivan, the phrase "SEO" probably came into use in 1997. The first documented use of the term was by John Audette and his company Multimedia Marketing Group, as evidenced by a web page from the MMG site dated August 1997.

Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag, or index files in engines like ALIWEB. Meta tags provide a guide to each page's content. Using metadata to index pages was found to be less than reliable, however, because the webmaster's choice of keywords in the meta tag could potentially be an inaccurate representation of the site's actual content. Incomplete and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches. Web content providers also manipulated a number of attributes within the HTML source of a page in an attempt to rank well in search engines.

By relying so heavily on factors such as keyword density, which were exclusively within a webmaster's control, early search engines suffered from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt to ensure their results pages showed the most relevant search results, rather than unrelated pages stuffed with numerous keywords by unscrupulous webmasters. Since the success and popularity of a search engine is determined by its ability to produce the most relevant results for any given search, allowing those results to be manipulated could drive users to other search sources. Search engines responded by developing more complex ranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate. PageRank, the number calculated by one such algorithm, is a function of the quantity and strength of inbound links. PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web and follows links from one page to another. In effect, this means that some links are stronger than others, as a page with a higher PageRank is more likely to be reached by the random surfer.
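The random-surfer model can be illustrated with a short power-iteration sketch in Python. This is a simplified teaching example, not Google's actual formula; the damping factor of 0.85 is simply the value commonly cited in the PageRank literature.

    def pagerank(links, damping=0.85, iterations=50):
        """links maps each page to the list of pages it links to."""
        pages = set(links) | {p for targets in links.values() for p in targets}
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            # With probability (1 - damping) the surfer jumps to a random page;
            # otherwise it follows one of the current page's outbound links.
            new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, targets in links.items():
                if not targets:
                    continue
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new_rank[target] += share  # each link passes on a share of its page's rank
            rank = new_rank
        return rank

    # "b" ends up with the highest rank because both other pages link to it.
    print(pagerank({"a": ["b"], "b": ["c"], "c": ["b"]}))

Because an inbound link from a high-ranking page contributes a larger share of rank, some links are worth more than others, which is the sense in which links differ in strength.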

Page and Brin founded Google in 1998. Google attracted a loyal following among the growing number of Internet users, who liked its simple design. Off-page factors were considered as well as on-page factors to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings. Although PageRank was more difficult to game, webmasters had already developed link building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaming PageRank. Many sites focused on exchanging, buying, and selling links, often on a massive scale. Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming.

By 2004, search engines had incorporated a wide range of undisclosed factors in their ranking algorithms to reduce the impact of link manipulation. The leading search engines, Google, Bing, and Yahoo, do not disclose the algorithms they use to rank pages. Some SEO practitioners have studied different approaches to search engine optimization and have shared their opinions. Patents related to search engines can also provide information to better understand how they work.

In 2005, Google began personalizing search results for each user. Depending on their history of previous searches, Google crafted results for logged-in users. Some commentators opined that it would become meaningless to discuss how a website ranked, because its rank would potentially be different for each user and each search.

In 2007, Google announced a campaign against paid links that transfer PageRank. On June 15, 2009, Google disclosed that it had taken measures to mitigate the effects of PageRank sculpting by use of the nofollow attribute on links. Matt Cutts, a well-known software engineer at Google, announced that Googlebot would no longer treat nofollowed links in the same way, in order to prevent SEO service providers from using nofollow for PageRank sculpting. As a result of this change, using nofollow led to the evaporation of PageRank. To avoid this, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscated JavaScript and thus permit PageRank sculpting. Additionally, several solutions have been suggested that include the use of iframes, Flash, and JavaScript.

Google Instant, real-time search, was introduced in late 2010 in an attempt to make search results more timely and relevant. Historically, site administrators had spent months or even years optimizing a website to increase its search rankings. With the growth in popularity of social media sites and blogs, the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.

In February 2011, Google announced the Panda update, which penalizes websites containing content duplicated from other websites and sources. Historically, websites had copied content from one another and benefited in search rankings by engaging in this practice; however, Google implemented a new system that punishes sites whose content is not unique.

In April 2012, Google launched the Google Penguin update, the goal of which was to penalize websites that used manipulative techniques to improve their rankings on the search engine.