Search engines have two major functions: crawling the web and building an index of what the crawl finds. The engine then ranks websites according to its own criteria for what is most relevant to the user's query.
You could use the analogy of the World Wide Web (WWW) being a network of calling points in a large city subway. Each of these calling points holds unique content (typically a web page, an image, a PDF, etc.). The search engines need a way to 'crawl' the whole city and find every calling point along the way, and they do so by using the best network available: links.
The link structure of the WWW is what binds all of these calling points together.
Search engines use robots, usually referred to as 'spiders', 'bots' or 'crawlers', to reach the billions of interconnected web pages on the WWW by following links.
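The crawl-and-index loop described above can be sketched in a few lines of Python. This is only an illustrative toy, not how any real search engine is built: the `fetch` function is a hypothetical stand-in for an HTTP request, and the 'index' here simply maps each URL to the raw page it returned. The crawler starts from one page, extracts every link, and follows each link it has not seen before.

```python
from collections import deque
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href target of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch, limit=100):
    """Breadth-first crawl: visit each URL at most once,
    following links from page to page. `fetch(url)` is assumed
    to return the page's HTML, or None if it cannot be fetched."""
    seen = {start_url}
    queue = deque([start_url])
    index = {}  # url -> raw HTML: a toy stand-in for a search index
    while queue and len(index) < limit:
        url = queue.popleft()
        html = fetch(url)
        if html is None:
            continue
        index[url] = html
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            if link not in seen:   # never re-crawl a calling point
                seen.add(link)
                queue.append(link)
    return index
```

For example, crawling a tiny in-memory 'web' of three pages starting from `/a` discovers `/b` and `/c` through the links alone, just as a spider discovers new calling points only via the link network.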