How do search engines gather all the information they use to build the results you see on a search engine results page (SERP)? They use a program called a “crawler,” also sometimes called a “bot,” that works 24/7 scanning the Internet. The crawler searches for web pages, blog posts, PDF documents, and even images. For each item it finds, it notes the exact location (URL), the domain it is associated with, the hosting server’s IP address, the words on the page, any links it finds on the page, and several other details. Understanding the “crawl” is an important part of your search engine optimization strategy: Google and other search engines can’t index and rank what they can’t crawl (e.g., Flash content).
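
To make the crawl concrete, here is a minimal sketch in Python of what a crawler does. It is illustrative only: the `requests` and `beautifulsoup4` packages and the starting URL `https://example.com` are assumptions, and real crawlers like Googlebot are far more sophisticated (they respect robots.txt, render JavaScript, and much more).

```python
# A minimal, illustrative crawler sketch (assumes the `requests` and
# `beautifulsoup4` packages are installed; "https://example.com" is a
# placeholder starting URL, not a real crawl target).
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl(start_url, max_pages=10):
    """Fetch pages breadth-first, recording URL, domain, words, and links."""
    queue, seen, records = [start_url], set(), []
    while queue and len(records) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            response = requests.get(url, timeout=5)
        except requests.RequestException:
            continue  # skip pages that fail to load
        soup = BeautifulSoup(response.text, "html.parser")
        links = [urljoin(url, a["href"]) for a in soup.find_all("a", href=True)]
        records.append({
            "url": url,                             # exact location
            "domain": urlparse(url).netloc,         # associated domain
            "words": soup.get_text().split()[:50],  # words on the page (sample)
            "links": links,                         # links found on the page
        })
        # follow discovered links to find new pages
        queue.extend(link for link in links if link.startswith("http"))
    return records

if __name__ == "__main__":
    for page in crawl("https://example.com"):
        print(page["url"], len(page["links"]), "links found")
```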

What you need to know:

  • Crawling is how Google and other search engines learn about the pages you publish
  • Crawling takes place at your website and blog
  • Your website design must be “crawler” friendly, or it will hurt your ability to earn top rankings!
  • Search engines can’t index information they have difficulty crawling (e.g., Flash)
  • You can help Google crawl your website by using a Google Sitemap (a minimal example of generating one follows this list)
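
Because the last point matters so much in practice, here is a minimal sketch of generating a basic sitemap.xml file in Python, using only the standard library. The URLs listed are hypothetical placeholders; a real sitemap would list your own pages.

```python
# A minimal sketch of generating a sitemap.xml file (the URLs below are
# hypothetical placeholders; a real sitemap would list your own pages).
import xml.etree.ElementTree as ET

def build_sitemap(urls, path="sitemap.xml"):
    """Write a basic XML sitemap listing each URL once."""
    # The sitemap protocol's standard XML namespace.
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url
    ET.ElementTree(urlset).write(path, encoding="utf-8", xml_declaration=True)

build_sitemap([
    "https://www.example.com/",
    "https://www.example.com/blog/",
])
```

Once the file is uploaded to your site, you can submit its address to Google through Google Search Console so the crawler knows where to find it.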