Search technology involves these main processes:
• Crawling: The purpose of crawling is to identify relevant pages for indexing and assess whether they have changed. It is performed by robots or spiders: simple software processes running on a search engine’s servers which request pages, follow the links contained on those pages and so build up a series of page references with associated URLs.
• Indexing: an index is created to enable the search engine to rapidly find the most relevant pages containing the terms typed by the searcher
• Ranking or scoring: documents are ranked to identify the most relevant set for the query, and these are then displayed on the search results page
• Query request and results serving: The familiar search engine interface accepts the searcher’s query. The user’s location is estimated from their IP address, and the query is then passed to a relevant data centre for processing.
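To make these processes concrete, here is a minimal sketch in Python of a toy search engine. It is purely illustrative: the miniature "web" of three pages, the URLs and all the text are invented examples, and real engines use far more sophisticated crawling policies, index structures and ranking signals. The sketch crawls from a seed page by following links, builds an inverted index (word → set of URLs), and serves queries ranked by a naive term-frequency score.

```python
from collections import defaultdict

# A toy "web": URL -> (page text, outgoing links). Hypothetical data.
WEB = {
    "http://example.com/a": ("search engines crawl the web", ["http://example.com/b"]),
    "http://example.com/b": ("an index makes search fast", ["http://example.com/c"]),
    "http://example.com/c": ("ranking orders search results", []),
}

def crawl(seed):
    """Crawling: follow links from a seed page, collecting page text."""
    seen, frontier, pages = set(), [seed], {}
    while frontier:
        url = frontier.pop()
        if url in seen or url not in WEB:
            continue
        seen.add(url)
        text, links = WEB[url]
        pages[url] = text
        frontier.extend(links)  # queue the links found on this page
    return pages

def build_index(pages):
    """Indexing: build an inverted index mapping each word to the URLs containing it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.split():
            index[word].add(url)
    return index

def search(index, pages, query):
    """Query serving + ranking: pages matching every query term,
    ranked by a naive score (how often the terms appear)."""
    terms = query.split()
    candidates = set(pages)
    for term in terms:
        candidates &= index.get(term, set())
    return sorted(candidates,
                  key=lambda u: -sum(pages[u].split().count(t) for t in terms))

pages = crawl("http://example.com/a")
index = build_index(pages)
print(search(index, pages, "ranking"))
```

The inverted index is the key design choice: instead of scanning every page at query time, the engine looks up each term directly, which is what makes results fast even over billions of documents.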
Now that you have a clearer picture about how search engines function, you might be interested in reading my next post on what SEO is and how it is influenced. Check it out.