Travel remains the single largest component of e-commerce, according to Forrester Research, a consulting firm in Cambridge, Mass. But despite the dominance of such online travel agency heavyweights as Expedia.com, Hotwire.com, Orbitz.com, Priceline.com and Travelocity.com, most users consult multiple Web sites when shopping online for travel. The average consumer visits 3.6 sites when shopping for an airline ticket online, according to PhoCusWright, a Sherman, CT-based travel technology firm. Yahoo claims 76% of all online travel purchases are preceded by some sort of search function, according to Malcolmson, director of product development for Yahoo Travel. The 2004 Travel Consumer Survey published by Jupiter Research noted that "nearly two in five online travel consumers say they believe that no one site has the lowest rates or fares." Thus a niche was created for aggregate travel search engines such as Kayak.com, Lowfares.com, Dohop.com or Trabber.com, which seek to find the lowest rates from multiple travel sites, obviating the need for consumers to cross-shop from site to site. Even in emerging markets such as China and India, Qunar.com and Ixigo.com have adopted this model with considerable success. Within the class of travel search engines are several subcategories of sites that offer a range of services and search methods:
Several of the leading generic search and information aggregator sites also offer travel components. In the broadest sense, virtually any search engine could be considered a travel search engine. However, some generic search engines should also be ranked as TSEs, since they include both paid and unpaid links to travel sites and maintain "travel" pages, often accompanied by original editorial content. This category of generic search sites includes About.com, AOL, MSN, and Yahoo.
These sites use technological tools to generate an aggregate result from other travel sites, including third-party travel agency sites such as Expedia.com, Orbitz.com, and Travelocity.com, and branded sites maintained by individual travel companies, such as Delta.com, Hilton.com, or Hertz.com.
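The aggregation step described above can be illustrated with a small sketch. The provider names and fare data here are hypothetical stand-ins; a real metasearch engine would query each site's API or scrape its results, often concurrently, before merging.

```python
# Sketch of fare aggregation across multiple travel sites.
# Provider names and fares below are illustrative, not real data.

def aggregate_fares(providers, origin, dest):
    """Query each provider and merge all fares, cheapest first.

    providers: dict mapping site name -> callable(origin, dest)
               that returns a list of fare dicts with a "price" key.
    """
    results = []
    for name, query in providers.items():
        for fare in query(origin, dest):
            # Tag each fare with the site it came from.
            results.append({"site": name, **fare})
    return sorted(results, key=lambda r: r["price"])


# Usage with two hypothetical providers:
providers = {
    "ExampleAir": lambda o, d: [{"price": 220, "airline": "EA"}],
    "DemoFares": lambda o, d: [{"price": 180, "airline": "DF"}],
}
cheapest = aggregate_fares(providers, "JFK", "LHR")[0]
```

In practice each provider call would run in parallel and be normalized (currency, taxes, fare class) before sorting, but the merge-and-rank core is the same.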
These sites collect and publish bargain rates, advising consumers where to find them online (sometimes but not always through a direct link). Rather than providing detailed search tools, these sites generally focus on offering advertised specials, such as last-minute sales from travel suppliers eager to deplete unused inventory; therefore, these sites often work best for consumers who are flexible about destinations and other key itinerary components. This category includes sites such as Cheapflights.com, Travelzoo.com, Kayak.com, TripSchedule.com, and USAToday.com's travel listings.
These tasks are becoming increasingly difficult as the Web grows. However, hardware performance and cost have improved dramatically to partially offset the difficulty. There are, however, several notable exceptions to this progress such as disk seek time and operating system robustness. In designing Google, we have considered both the rate of growth of the Web and technological changes. Google is designed to scale well to extremely large data sets. It makes efficient use of storage space to store the index. Its data structures are optimized for fast and efficient access (see section 4.2). Further, we expect that the cost to index and store text or HTML will eventually decline relative to the amount that will be available (see Appendix B). This will result in favorable scaling properties for centralized systems like Google.
Another important design goal was to build systems that reasonable numbers of people can actually use. Usage was important to us because we think some of the most interesting research will involve leveraging the vast amount of usage data that is available from modern web systems. For example, there are many tens of millions of searches performed every day. However, it is very difficult to get this data, mainly because it is considered commercially valuable.
Our final design goal was to build an architecture that can support novel research activities on large-scale web data. To support novel research uses, Google stores all of the actual documents it crawls in compressed form. One of our main goals in designing Google was to set up an environment where other researchers can come in quickly, process large chunks of the web, and produce interesting results that would have been very difficult to produce otherwise. In the short time the system has been up, there have already been several papers using databases generated by Google, and many others are underway. Another goal we have is to set up a Spacelab-like environment where researchers or even students can propose and do interesting experiments on our large-scale web data.
We assume page A has pages T1...Tn which point to it (i.e., are citations). The parameter d is a damping factor which can be set between 0 and 1. We usually set d to 0.85. There are more details about d in the next section. Also C(A) is defined as the number of links going out of page A. The PageRank of a page A is given as follows:

PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))

PageRank, or PR(A), can be calculated using a simple iterative algorithm, and corresponds to the principal eigenvector of the normalized link matrix of the web. A PageRank for 26 million web pages can be computed in a few hours on a medium size workstation. There are many other details which are beyond the scope of this paper.
Note that the PageRanks form a probability distribution over web pages, so the sum of all web pages' PageRanks will be one.
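The simple iterative algorithm mentioned above can be sketched as follows. This is an illustrative implementation of the PR(A) recurrence, not code from the paper; the tiny two-page graph in the usage note is a made-up example, and the damping factor d = 0.85 matches the value the paper suggests.

```python
# Iterative PageRank following PR(A) = (1-d) + d * sum(PR(Ti)/C(Ti)).
# Illustrative sketch; the paper's production system differs in scale
# and detail. Dangling pages (no outlinks) simply contribute nothing.

def pagerank(links, d=0.85, iterations=100):
    """links: dict mapping page -> list of pages it links to."""
    pages = set(links)
    for targets in links.values():
        pages.update(targets)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}            # uniform starting guess
    out_count = {p: len(links.get(p, [])) for p in pages}  # C(p)

    for _ in range(iterations):
        new = {p: (1 - d) for p in pages}       # the (1-d) baseline term
        for p, targets in links.items():
            if out_count[p]:
                share = d * pr[p] / out_count[p]  # d * PR(p)/C(p)
                for t in targets:
                    new[t] += share
        pr = new
    return pr


# Usage on a toy two-page graph where A and B cite each other;
# by symmetry both ranks converge to the same fixed point.
ranks = pagerank({"A": ["B"], "B": ["A"]})
```

Because each update only redistributes rank along links plus the fixed (1-d) term, the iteration converges geometrically at rate d, which is why a modest number of passes suffices even for large graphs.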