Search engines work by storing information about a large number of web pages, which they retrieve from the Web itself. These pages are retrieved by a Web crawler, an automated web browser that follows every link it sees. The contents of each page are then analyzed to determine how it should be indexed (for example, words are extracted from the titles, headings, or special fields called meta tags).
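As a rough illustration of this crawl-and-index step, here is a minimal sketch in Python. It assumes the third-party requests and beautifulsoup4 packages are available; the function and variable names are hypothetical and not taken from any real engine.

```python
# Minimal crawler/indexer sketch (illustrative names, not a real engine).
from collections import defaultdict
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

index = defaultdict(set)  # word -> set of URLs whose indexed text contains it

def crawl(start_url, max_pages=100):
    seen, frontier = set(), [start_url]
    while frontier and len(seen) < max_pages:
        url = frontier.pop()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue  # skip unreachable pages
        page = BeautifulSoup(html, "html.parser")
        # Index words from the title and headings.
        for tag in page.find_all(["title", "h1", "h2", "h3"]):
            for word in tag.get_text().lower().split():
                index[word].add(url)
        # Index keywords from meta tags, a "special field" mentioned above.
        for meta in page.find_all("meta", attrs={"name": "keywords"}):
            for word in meta.get("content", "").lower().split(","):
                index[word.strip()].add(url)
        # Follow every link the crawler sees.
        for link in page.find_all("a", href=True):
            frontier.append(urljoin(url, link["href"]))
```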
When a user comes to the search engine and makes a query (typically by giving some keywords), the engine looks up its index and returns a listing of the web pages it judges most relevant, usually with a short summary that includes at least the document's title.
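Against the toy index sketched above, this lookup step could be as simple as intersecting the sets of pages recorded for each query word; again, this is a hedged sketch, not how any production engine ranks results.

```python
def search(query):
    """Return the URLs indexed under every word of the query."""
    words = query.lower().split()
    if not words:
        return []
    # index is the defaultdict built by the crawler sketch above;
    # unknown words simply yield empty sets, so they match nothing.
    return sorted(set.intersection(*(index[w] for w in words)))

# Example: search("web crawler") lists pages indexed under both words.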
Google's recent success is based on the concept of PageRank, in which each page is ranked by how many quality pages link to it. Both the PageRank of the linking pages and the number of links on those pages contribute to the PageRank of the linked page. This makes it possible for Google to present first the pages that are heavily linked to by quality websites.
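One common statement of the formula, following the original Brin and Page paper, makes this dependence explicit. If pages $T_1, \dots, T_n$ link to page $A$, then

$$PR(A) = (1 - d) + d \sum_{i=1}^{n} \frac{PR(T_i)}{C(T_i)}$$

where $C(T_i)$ is the number of outbound links on page $T_i$ and $d$ is a damping factor (commonly set around 0.85). Each linking page thus passes on a share of its own rank, divided among all the pages it links to.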
Challenges faced by search engines
One of the first Web search engines was Lycos, which started as a university research project in 1994.
There were other, non-Web search engines before 1994; Archie, for instance.