GETTING TECHNICAL: HOW IT ALL COMES TOGETHER
Now, for the technical part. But don’t worry! I’m not asking you to go make changes to the servers yourself! Chapter 7 contains more details about working with Web developers and ensuring that they’re using search-friendly best practices.
Crawling
Before a search engine can evaluate the content on your site to determine whether it's relevant to a searcher's query, the engine has to know that each page exists and be able to extract that content for analysis.
1. Discovering the pages: Search engines generally find out about pages on the Web by following links from other sites and by following a site's internal links. The most important thing to remember about the discovery process is that you should build a great site that makes others want to link to it, and that you should have a comprehensive site navigation structure. Of course, you'd want both of these things on your site even if search engines didn't exist.
2. Crawling the pages: Once a search engine such as Google learns about pages on the Web, it uses a "bot" to crawl those pages. You most likely want your entire site crawled, but crawling inefficiencies and infrastructure issues that make URLs inaccessible to the bots can get in the way.
3. Extracting content: Once a crawler has accessed a page, it has to be able to extract the content from that page and store it. As with crawling, a number of obstacles may keep a search engine from extracting content from a page.
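If you're curious what this discover, crawl, and extract loop looks like in practice, here is a deliberately simplified sketch in Python. It is not how Google's crawler actually works (real crawlers handle robots.txt, politeness delays, rendering, deduplication, scheduling, and much more); the function name, parameters, and the use of the third-party requests and beautifulsoup4 libraries are just illustrative assumptions to make the three steps above concrete.

```python
# A simplified sketch of the discover -> crawl -> extract loop.
# Assumes the third-party "requests" and "beautifulsoup4" packages.

from urllib.parse import urljoin, urldefrag

import requests
from bs4 import BeautifulSoup


def crawl(start_url, max_pages=10):
    to_visit = [start_url]   # 1. Discovery: a queue of URLs the crawler knows about
    seen = {start_url}
    extracted = {}           # URL -> extracted text

    while to_visit and len(extracted) < max_pages:
        url = to_visit.pop(0)

        # 2. Crawling: fetch the page. URLs that can't be reached are simply
        # skipped, which is exactly how crawling problems cost you visibility.
        try:
            response = requests.get(url, timeout=10)
            response.raise_for_status()
        except requests.RequestException:
            continue

        soup = BeautifulSoup(response.text, "html.parser")

        # 3. Extraction: pull the visible text out of the HTML for analysis.
        extracted[url] = soup.get_text(separator=" ", strip=True)

        # Back to step 1: discover new URLs by following this page's links.
        for link in soup.find_all("a", href=True):
            absolute, _ = urldefrag(urljoin(url, link["href"]))
            if absolute not in seen:
                seen.add(absolute)
                to_visit.append(absolute)

    return extracted
```

Notice that the loop can only reach pages that some already-known page links to, which is why both inbound links and a comprehensive internal navigation structure matter so much for discovery.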