How Bots and Crawlers Work in Search Engines

A search engine is, in essence, software that collects data about websites. To do so, it uses special programs known as spiders, crawlers, or bots. These programs traverse the Web by following its hyperlink structure, revisiting pages periodically and capturing any changes made since the last visit. For each page, the collected data includes the website URL, keywords or keyword groups that describe its content, the code structure that forms the page, and the links it provides.

The data obtained by these programs is stored in a very large database called the index of the search engine, and the operation of building it is called "indexing". When a user submits a query to retrieve information, the query is matched against the search engine's index and the results are shown to the user. The essential competitive factor among search engines appears in the process of presenting and sorting relevant results: once the pages matching a query have been determined, they must be shown to the user as a ranked list. At this point search engine algorithms play an important role, trying to present the most relevant results first (øyiler, 2009). In short, search robots collect data about each URL and store it in a database; when a user connects to the search engine for a search session, the references in that database are evaluated and the results are returned to the user (Atay et al., 2010).
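The crawl-index-query cycle described above can be sketched in a few lines of Python. This is a minimal illustration, not a real crawler: the "web" here is an invented in-memory dictionary of pages and links (a real crawler would fetch documents over HTTP), and keyword extraction is reduced to naive whitespace splitting.

```python
from collections import defaultdict

# Hypothetical in-memory "web": URL -> (page text, outgoing links).
# In practice these documents would be fetched over the network.
WEB = {
    "http://a.example": ("search engines index web pages",
                         ["http://b.example"]),
    "http://b.example": ("crawlers follow hyperlinks between pages",
                         ["http://a.example", "http://c.example"]),
    "http://c.example": ("an index maps keywords to urls", []),
}

def crawl(start_url):
    """Traverse the hyperlink structure and build an inverted index."""
    index = defaultdict(set)      # keyword -> set of URLs containing it
    visited = set()
    frontier = [start_url]
    while frontier:
        url = frontier.pop()
        if url in visited or url not in WEB:
            continue
        visited.add(url)
        text, links = WEB[url]
        for word in text.split():  # simplistic keyword extraction
            index[word].add(url)
        frontier.extend(links)     # follow hyperlinks to new pages
    return index

def query(index, keyword):
    """Look up a keyword in the index, as a user's search session would."""
    return sorted(index.get(keyword, set()))

index = crawl("http://a.example")
print(query(index, "hyperlinks"))  # URLs whose text contains the keyword
```

Real systems differ mainly in scale and ranking: the index is distributed across many machines, and the sorted list returned to the user is ordered by relevance algorithms rather than alphabetically as in this sketch.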