How search engines work

1. Web Discovery (Crawling)

Google's first job is to crawl the web. Spiders (or bots) are automated programs that crawl the web and "scan" web pages, recording the headlines and all the content of your pages to learn more about who you are, what you do, what you write about and who might be interested in reading it. As Google states: "Before you search for something, web crawlers collect information from hundreds of billions of web pages and organize it into the Search index. The crawling process begins with a list of web addresses from previous crawls and sitemaps provided by site owners." Web crawlers pay attention to new websites, changes to existing websites and dead links.
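To make that crawling step concrete, here is a minimal sketch of a breadth-first crawl that starts from a list of seed addresses, records each page's text and queues any new links it discovers. It is an illustration only, not Google's crawler: the function name crawl, the page limit and the use of the third-party requests and BeautifulSoup libraries are assumptions for the example.

```python
# A minimal crawl loop: start from seed URLs (e.g. from sitemaps or past
# crawls), fetch each page, record its text, and queue any new links found.
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(seed_urls, max_pages=50):
    """Breadth-first crawl starting from a list of seed web addresses."""
    queue = deque(seed_urls)
    seen = set(seed_urls)
    pages = {}                      # url -> extracted page text

    while queue and len(pages) < max_pages:
        url = queue.popleft()
        try:
            response = requests.get(url, timeout=5)
        except requests.RequestException:
            continue                # skip dead or unreachable links
        soup = BeautifulSoup(response.text, "html.parser")
        pages[url] = soup.get_text(separator=" ", strip=True)

        # Discover new pages through the links on this one.
        for anchor in soup.find_all("a", href=True):
            link = urljoin(url, anchor["href"])
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return pages
```

A real crawler also respects robots.txt, revisits pages on a schedule and decides how deeply to crawl each site; this sketch only shows the basic discover-and-fetch cycle the article describes.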
"There are computer programs that determine which websites should be crawled, how often, and how many pages should be retrieved from each website." This may sound simple, but it is not, given that 300-500 new websites are created every minute of the day.

2. Organization of information through indexing

When Googlebots locate a web page, Google's systems note its key identifying features (from keywords to how recently the website was updated) and track them in the Search index (indexing). The Google Search index contains hundreds of billions of web pages and is over 100,000,000 gigabytes in size. "When we index a web page, we add it to the listings for all the words it contains," Google notes.
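Those "listings for all the words it contains" describe an inverted index: a map from every word to the pages that contain it. A toy sketch of that data structure follows; the function name, the tokenization and the sample pages are assumptions for illustration, not Google's implementation.

```python
# A toy inverted index: map every word to the set of pages containing it,
# which lets a search engine jump straight from a query term to candidate pages.
import re
from collections import defaultdict

def build_index(pages):
    """pages: dict mapping url -> page text (e.g. the output of a crawl)."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in re.findall(r"\w+", text.lower()):
            index[word].add(url)
    return index

# Usage: every URL listed under a word is a candidate result for that word.
index = build_index({
    "https://example.com/a": "How search engines crawl the web",
    "https://example.com/b": "Indexing organizes the web for search",
})
print(sorted(index["search"]))   # both sample pages contain the word "search"
```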

3. Page Ranking

Google's next job is to figure out how to "serve" results, that is, how to rank the best results from its database when someone types in a search query (ranking). "When you do a search, at the most basic level, our algorithms look for your search terms in the index to find relevant pages. They analyze how often and where these keywords appear on a page, whether in titles or headings or in the body of the text." "In addition to keyword matching, algorithms look for clues to measure the extent to which various potential search results give users what they're looking for," Google emphasizes, regarding search matching. It adds: