
Google data centers

Facilities containing Google servers



Google uses large data center facilities to provide its services. These facilities combine large storage drives, compute nodes organized in aisles of racks, internal and external networking, environmental controls (mainly cooling and humidity control), and operations software (especially for load balancing and fault tolerance).

There is no official data on how many servers are in Google data centers, but Gartner estimated in a July 2016 report that Google at the time had 2.5 million servers. This number is changing as the company expands capacity and refreshes its hardware.

Locations

The locations of Google's various data centers by continent are as follows:

| Continent | Location | Products location | Cloud location | Timeline | Description |
|---|---|---|---|---|---|
| North America | Arcola (VA), USA | Loudoun County | N. Virginia (us-east4) | - | - |
| North America | Atlanta (GA), USA | Douglas County | - | 2003 - launched | 350 employees |
| South America | Cerrillos, Santiago, Chile | - | Santiago (southamerica-west1) | 2020 - announced | - |
| Asia | Changhua County, Taiwan | Changhua County | Taiwan | 2011 - announced | 60 employees |
| North America | Clarksville (TN), USA | Montgomery County | - | 2015 - announced | - |
| North America | Columbus (OH), USA | - | Columbus (us-east5) | 2022 - launched | - |
| North America | Council Bluffs (IA), USA | Council Bluffs | - | 2007 - announced | 130 employees |
| North America | Council Bluffs (IA), USA | - | Iowa (us-central1) | - | - |
| Asia | Delhi, India | - | Delhi (asia-south2) | 2020 - announced | - |
| Middle East | Doha, Qatar | - | Doha (me-central1) | 2023 - launched | - |
| Europe | Dublin, Ireland | Dublin | - | 2011 - announced | 150 employees |
| Europe | Eemshaven, Netherlands | Eemshaven | Netherlands (europe-west4) | 2014 - announced | 200 employees |
| Europe | Frankfurt, Germany | - | Frankfurt (europe-west3) | - | - |
| Europe | Fredericia, Denmark | Fredericia | - | 2018 - announced | €600M building costs |
| Europe | Ghlin, Hainaut, Belgium | Saint-Ghislain | Belgium (europe-west1) | 2007 - announced | 12 employees |
| Europe | Hamina, Finland | Hamina | Finland | 2009 - announced | 6 buildings, 400 employees |
| North America | Henderson (NV), USA | Henderson | Las Vegas (us-west4) | 2019 - announced | 64 acres; $1.2B building costs |
| Asia | Hong Kong | - | Hong Kong (asia-east2) | 2017 - announced | - |
| Asia | Inzai, Japan | Inzai | - | 2023 - launched | - |
| Asia | Jakarta, Indonesia | - | Jakarta (asia-southeast2) | 2020 - launched | - |
| Asia | Koto-Ku, Tokyo, Japan | - | Tokyo | 2016 - launched | - |
| North America | Leesburg (VA), USA | Loudoun County | N. Virginia (us-east4) | 2017 - announced | - |
| North America | Lenoir (NC), USA | Lenoir | - | 2007 - announced | over 110 employees |
| Asia | Lok Yang Way, Pioneer, Singapore | Singapore | Singapore (asia-southeast1) | 2022 - launched | - |
| Europe | London, UK | - | London | 2017 - launched | - |
| North America | Los Angeles (CA), USA | - | Los Angeles (us-west2) | - | - |
| Europe | Madrid, Spain | - | Madrid (europe-southwest1) | 2022 - launched | - |
| Pacific | Melbourne, Australia | - | Melbourne | 2021 - launched | - |
| Europe | Middenmeer, Noord-Holland, The Netherlands | Middenmeer | Netherlands (europe-west4) | 2019 - announced | - |
| North America | Midlothian (TX), USA | Midlothian | Dallas (us-south1) | 2019 - announced | 375 acres; $600M building costs |
| Europe | Milan, Italy | - | Milan (europe-west8) | 2022 - launched | - |
| North America | Moncks Corner (SC), USA | Berkeley County | South Carolina (us-east1) | 2007 - launched | 150 employees |
| North America | Montreal, Quebec, Canada | - | Montréal (northamerica-northeast1) | 2018 - launched | 62.4 hectares; $600M building costs |
| Asia | Mumbai, India | - | Mumbai (asia-south1) | 2017 - launched | - |
| North America | New Albany (OH), USA | New Albany | - | 2019 - announced | 400 acres; $600M building costs |
| Asia | Osaka, Japan | - | Osaka | 2019 - launched | - |
| South America | Osasco, São Paulo, Brazil | - | São Paulo (southamerica-east1) | 2017 - launched | - |
| North America | Papillion (NE), USA | Papillion | - | 2019 - announced | 275 acres; $600M building costs |
| Europe | Paris, France | - | Paris (europe-west9) | 2022 - launched | - |
| North America | Pryor Creek (OK), USA | Mayes County | - | 2007 - announced | over 400 employees; land at MidAmerica Industrial Park |
| South America | Quilicura, Santiago, Chile | Quilicura | - | 2012 - announced | up to 20 employees expected; a million-dollar investment plan to increase capacity was announced in 2018 |
| North America | Reno (NV), USA | Storey County | - | 2017 | 1,210 acres of land bought in the Tahoe Reno Industrial Center |
| North America | Salt Lake City (UT), USA | - | Salt Lake City (us-west3) | 2020 - launched | - |
| Asia | Seoul, South Korea | - | Seoul | 2020 - launched | - |
| Pacific | Sydney, Australia | - | Sydney | 2017 - launched | - |
| Middle East | Tel Aviv, Israel | - | Tel Aviv (me-west1) | 2022 - launched | - |
| North America | The Dalles (OR), USA | The Dalles | Oregon (us-west1) | 2006 - launched | 80 full-time employees |
| North America | Toronto, Canada | - | Toronto (northamerica-northeast2) | 2021 - launched | - |
| Europe | Turin, Italy | - | Turin (europe-west12) | 2023 - launched | - |
| South America | Vinhedo, São Paulo, Brazil | - | São Paulo (southamerica-east1) | - | - |
| Europe | Warsaw, Poland | - | Warsaw (europe-central2) | 2019 - announced | - |
| Asia | Wenya, Jurong West, Singapore | Singapore | Singapore (asia-southeast1) | 2011 - announced | - |
| North America | Widows Creek (Bridgeport) (AL), USA | Jackson County | - | 2018 - broke ground | - |
| Europe | Zürich, Switzerland | - | Zurich (europe-west6) | 2018 - announced | - |
| Europe | Austria | - | - | - | - |
| Europe | Berlin, Germany | - | Berlin (europe-west10) | - | - |
| Middle East | Dammam, Saudi Arabia | - | - | 2021 - announced | - |
| Europe | Athens, Greece | - | - | 2022 - announced | - |
| North America | Kansas City, Missouri | - | - | 2019 - announced | - |
| Middle East | Kuwait | - | - | 2023 - announced | - |
| Asia | Malaysia | - | - | - | - |
| Pacific | Auckland, New Zealand | - | - | 2022 - announced | - |
| Europe | Oslo, Norway | - | - | 2022 - announced | - |
| North America | Querétaro, Mexico | - | - | 2022 - announced | - |
| Africa | Johannesburg, South Africa | - | Johannesburg (africa-south1) | 2022 - announced; 2024 - launched | - |
| Europe | Sweden | - | - | 2022 - announced | - |
| Asia | Tainan City, Taiwan | - | Taiwan | - | - |
| Asia | Thailand | - | - | 2022 - announced | - |
| Asia | Yunlin County, Taiwan | - | Taiwan (asia-east1) | - | - |
| North America | Mesa (AZ), USA | - | Phoenix (us-west8) | 2023 - construction started | - |
| Europe | Waltham Cross, Hertfordshire, UK | - | - | January 2024 - announced | - |
| South America | Canelones, Uruguay | - | - | 2024 - construction started | - |
| Asia | Visakhapatnam, Andhra Pradesh, India | - | - | July 2025 - announced | US$15 billion planned investment in a 1 GW data center campus, including US$2 billion for renewable energy infrastructure; expected to be the company's largest data center project in Asia |

Hardware

Original hardware

Google's first production server rack, circa 1998

The original hardware (circa 1998) that was used by Google when it was located at Stanford University included:

  • Sun Microsystems Ultra II with dual 200 MHz processors, and 256 MB of RAM. This was the main machine for the original Backrub system.
  • 2 × 300 MHz dual Pentium II servers donated by Intel; between them they included 512 MB of RAM and 10 × 9 GB hard drives. The main search ran on these.
  • F50 IBM RS/6000 donated by IBM, included 4 processors, 512 MB of memory and 8 × 9 GB hard disk drives.
  • Two additional boxes included 3 × 9 GB hard drives and 6 × 4 GB hard disk drives respectively (the original storage for Backrub). These were attached to the Sun Ultra II.
  • SDD disk expansion box with another 8 × 9 GB hard disk drives donated by IBM.
  • Homemade disk box which contained 10 × 9 GB SCSI hard disk drives.

Google Cluster

The state of Google infrastructure in 2003 was described in a report by Luiz André Barroso, Jeff Dean, and Urs Hölzle as a "reliable computing infrastructure from clusters of unreliable commodity PCs".

At the time, an average single search query read ~100 MB of data and consumed ~10^10 CPU cycles. During peak time, Google served ~1,000 queries per second. To handle this peak load, they built a compute cluster of ~15,000 commodity-class PCs rather than expensive supercomputer-class hardware, and compensated for the lower hardware reliability with fault-tolerant software.

The cluster's structure consists of five parts. Google Web servers (GWS) face the public Internet. Upon receiving a user request, a GWS communicates with a spell checker, an advertisement server, many index servers, and many document servers. Each of these four services answers part of the request, and the GWS assembles their responses and serves the final result to the user.
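
The fan-out-and-assemble flow described above can be sketched as follows. The four service functions are hypothetical stand-ins for illustration, not Google APIs:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the four backend services a GWS consults.
def spell_check(q):    return q                       # spell checker
def fetch_ads(q):      return ["ad-1"]                # advertisement server
def index_lookup(q):   return [101, 205]              # index servers -> docids
def doc_snippets(ids): return {i: f"snippet-{i}" for i in ids}  # doc servers

def handle_query(q):
    """Fan the request out in parallel, then assemble one response."""
    with ThreadPoolExecutor() as pool:
        spell = pool.submit(spell_check, q)
        ads = pool.submit(fetch_ads, q)
        docids = pool.submit(index_lookup, q).result()
        snippets = pool.submit(doc_snippets, docids).result()
        return {"query": q, "spelling": spell.result(), "ads": ads.result(),
                "results": [(d, snippets[d]) for d in docids]}
```

The key property is that the spell check, ad lookup, and index lookup are independent and can run concurrently; only the snippet fetch depends on the index results.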

The raw documents comprised ~100 TB, and the index files ~10 TB. The index files were sharded, with each shard served by a "pool" of index servers; the raw documents were sharded similarly. Each query against the index produced a list of document IDs, which were then sent to the document servers to retrieve titles and keyword-in-context snippets.

There were several CPU generations in use, ranging from single-processor 533 MHz Intel Celeron-based servers to dual 1.4 GHz Intel Pentium III machines. Each server contained one or more 80 GB hard drives; index servers had less disk space than document servers. Each rack had two Ethernet switches, one per side, and the servers on each side interconnected via 100 Mbps Ethernet. Each switch had a ~250 MB/s uplink to a central switch that connected all racks.

The design objectives include:

  • Use low-reliability consumer hardware and make up for it with fault-tolerant software.
  • Maximize parallelism, such as by splitting a single document match lookup in a large index into a MapReduce over many small indices.
  • Partition index data and computation to minimize communication and evenly balance the load across servers, rather than treating the cluster as one large shared-memory machine.
  • Minimize system management overheads by developing all software in-house.
  • Pick hardware that maximizes performance/price, not absolute performance.
  • Pick hardware that favors high throughput over low latency. Queries are served with massive parallelism, very few dependent steps, and minimal communication between servers, so the latency of any individual machine matters little.

Due to the massive parallelism, adding hardware scales throughput linearly: doubling the compute cluster doubles the number of queries servable per second.

The cluster was built from server racks in two configurations: 40 × 1U servers per side (two sides per rack), or 20 × 2U servers per side. Each rack drew about 10 kW at a density of 400 W/ft², consuming roughly 10 MWh per month at a cost of about $1,500 per month.

Production hardware

As of 2014, Google used a heavily customized version of Debian Linux, having migrated incrementally from a Red Hat-based system in 2013.

The customization goal is to purchase CPU generations that offer the best performance per dollar, not the best absolute performance. How this is measured is unclear, but it likely incorporates the running costs of the entire server, in which CPU power consumption could be a significant factor. Servers as of 2009–2010 were custom-made, open-top systems containing two processors (each with several cores), a considerable amount of RAM spread over 8 DIMM slots housing double-height DIMMs, and at least two SATA hard disk drives connected through a non-standard ATX-sized power supply unit. The servers were open-top so that more of them could fit into a rack. According to CNET and a book by John Hennessy, each server had a novel built-in 12-volt battery to reduce costs and improve power efficiency.

In 2013, the press revealed the existence of Google's floating data centers along the coasts of California (Treasure Island's Building 3) and Maine. The development project was maintained under tight secrecy. The barges are 250 feet long, 72 feet wide, and 16 feet deep. Google bought a patent for an in-ocean data center cooling technology in 2009 (along with a wave-powered, ship-based data center patent in 2008). Shortly thereafter, Google declared that the two massive, secretly built structures were merely "interactive learning centers, [...] a space where people can learn about new technology."

Google halted work on the barges in late 2013 and began selling off the barges in 2014.

Software

Most of the software stack that Google uses on its servers was developed in-house. According to a well-known former Google employee speaking in 2006, C++, Java, Python and (more recently) Go are favored over other programming languages. For example, the back end of Gmail is written in Java and the back end of Google Search in C++. Google has acknowledged that Python has played an important role from the beginning, and that it continues to do so as the system grows and evolves.

The software that runs the Google infrastructure includes:

  • Google Web Server (GWS): custom Linux-based web server that Google uses for its online services.
  • Storage systems:
    • Google File System (GFS) and its successor, Colossus
    • Bigtable: structured storage built upon GFS/Colossus
    • Spanner: planet-scale database supporting externally consistent distributed transactions
    • Google F1: a distributed, quasi-SQL DBMS based on Spanner, replacing a custom version of MySQL
  • Chubby lock service
  • MapReduce and the Sawzall programming language
  • Indexing/search systems:
    • TeraGoogle: Google's large search index (launched in early 2006)
    • Caffeine (Percolator): continuous indexing system (launched in 2010)
    • Hummingbird: major search index update, including complex search and voice search
  • Borg: declarative process scheduling software
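
As an illustration of the MapReduce model named above, here is a toy, single-process word count. The real system distributes map and reduce tasks across thousands of machines; this sketch only shows the programming model:

```python
from itertools import groupby

# Toy, single-process sketch of the MapReduce model.
def map_phase(docs, mapper):
    """Apply the mapper to every input, collecting (key, value) pairs."""
    pairs = []
    for doc in docs:
        pairs.extend(mapper(doc))
    return pairs

def reduce_phase(pairs, reducer):
    """Group pairs by key (the 'shuffle'), then reduce each group."""
    pairs.sort(key=lambda kv: kv[0])
    return {k: reducer(k, [v for _, v in group])
            for k, group in groupby(pairs, key=lambda kv: kv[0])}

# Classic word count.
mapper = lambda doc: [(word, 1) for word in doc.split()]
reducer = lambda word, counts: sum(counts)

counts = reduce_phase(map_phase(["the cat", "the dog"], mapper), reducer)
# counts == {"cat": 1, "dog": 1, "the": 2}
```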

Google has developed several abstractions which it uses for storing most of its data:

  • Protocol Buffers: "Google's lingua franca for data", a binary serialization format which is widely used within the company.
  • SSTable (Sorted Strings Table): a persistent, ordered, immutable map from keys to values, where both keys and values are arbitrary byte strings. It is also one of the building blocks of Bigtable.
  • RecordIO: a sequence of variable-sized records.
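
The SSTable idea can be sketched in a few lines. This in-memory version is only an illustration (the real format is an on-disk file of sorted key/value blocks plus a block index), but it shows why lookups in an immutable sorted map reduce to binary search:

```python
import bisect

class SSTable:
    """Minimal in-memory sketch of an SSTable: an immutable map whose
    keys are stored in sorted order, so lookups are binary searches."""

    def __init__(self, items):
        pairs = sorted(items.items())        # sort once, then freeze
        self._keys = [k for k, _ in pairs]
        self._values = [v for _, v in pairs]

    def get(self, key, default=None):
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            return self._values[i]
        return default

table = SSTable({b"row2": b"v2", b"row1": b"v1"})
# table.get(b"row1") == b"v1"
```

Immutability is the design point: because a table never changes after it is written, it needs no locks for reads and can be shared freely between servers.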

Software development practices

Most operations are read-only. When an update is required, queries are redirected to other servers, so as to simplify consistency issues. Queries are divided into sub-queries, which may be sent to different servers in parallel, thus reducing latency.

To lessen the effects of unavoidable hardware failure, software is designed to be fault tolerant. Thus, when a system goes down, data is still available on other servers, which increases reliability.
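
A minimal sketch of this failover pattern, with hypothetical replica functions standing in for real servers:

```python
import random

# Hypothetical sketch of replica failover: each shard is served by a
# pool of replicas, and a dead replica is simply skipped.
def query_shard(replicas, request):
    """Try replicas in random order; the first healthy one wins."""
    last_error = None
    for replica in random.sample(replicas, len(replicas)):
        try:
            return replica(request)
        except ConnectionError as err:
            last_error = err                 # dead replica: try the next
    raise RuntimeError("all replicas failed") from last_error

def dead(request):    raise ConnectionError("replica down")
def healthy(request): return f"results for {request!r}"

print(query_shard([dead, healthy, dead], "hello"))
```

Randomizing the order also spreads load across the pool, so no single replica becomes a hot spot.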

Search infrastructure

Google data center in The Dalles, Oregon

Index

Like most search engines, Google indexes documents by building a data structure known as an inverted index, which maps each query word to a list of the documents containing it. The index is very large due to the number of documents stored on the servers.

The index is partitioned by document ID into many pieces called shards, and each shard is replicated onto multiple servers. Initially, the index was served from hard disk drives, as in traditional information retrieval (IR) systems. Google handled increasing query volume by increasing the number of replicas of each shard, and thus the number of servers. Soon they found they had enough servers to keep a copy of the whole index in main memory (although with low replication or none at all), and in early 2001 Google switched to an in-memory index system. This switch "radically changed many design parameters" of their search system, allowing a significant increase in throughput and a large decrease in query latency.
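
A toy version of a sharded inverted index, partitioned by document ID as described above. The shard count and function names are illustrative only:

```python
from collections import defaultdict

NUM_SHARDS = 4  # illustrative; real deployments use many more

def build_shards(docs):
    """docs: {docid: text}. Returns one word -> [docid] map per shard."""
    shards = [defaultdict(list) for _ in range(NUM_SHARDS)]
    for docid, text in sorted(docs.items()):
        for word in set(text.lower().split()):
            shards[docid % NUM_SHARDS][word].append(docid)
    return shards

def search(shards, word):
    """Query every shard in turn and merge the posting lists."""
    hits = []
    for shard in shards:
        hits.extend(shard.get(word.lower(), []))
    return sorted(hits)

shards = build_shards({1: "cheap flights", 2: "cheap hotels", 3: "flights"})
# search(shards, "cheap") == [1, 2]
```

Because each shard holds a disjoint slice of the documents, shards can be searched in parallel and the partial posting lists merged afterwards.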

In June 2010, Google rolled out a next-generation indexing and serving system called "Caffeine", which can continuously crawl and update the search index. Previously, Google updated its search index in batches using a series of MapReduce jobs. The index was separated into several layers, some of which were updated faster than others; the main layer would not be updated for as long as two weeks. With Caffeine, the entire index is updated incrementally on a continuous basis. Google later revealed "Percolator", a distributed data-processing system said to be the basis of the Caffeine indexing system.
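
The batch-versus-incremental distinction can be illustrated with a toy index: instead of rebuilding everything, only the postings of the changed document are touched. (Caffeine's actual machinery, Percolator, is far more elaborate; this is just the idea.)

```python
from collections import defaultdict

def build_index(docs):
    """Batch mode: rebuild the whole index from scratch."""
    index = defaultdict(set)
    for docid, text in docs.items():
        for word in text.lower().split():
            index[word].add(docid)
    return index

def update_document(index, docs, docid, new_text):
    """Incremental mode: re-index a single changed document in place."""
    for word in docs.get(docid, "").lower().split():
        index[word].discard(docid)           # remove the old postings
    docs[docid] = new_text
    for word in new_text.lower().split():
        index[word].add(docid)               # add the new postings

docs = {1: "old news"}
index = build_index(docs)
update_document(index, docs, 1, "fresh news")
# index["fresh"] == {1}; index["old"] is now empty
```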

Server types

Google's server infrastructure is divided into several types, each assigned to a different purpose:

  • Web servers coordinate the execution of queries sent by users, then format the result into an HTML page. The execution consists of sending queries to index servers, merging the results, computing their rank, retrieving a summary for each hit (using the document server), asking for suggestions from the spelling servers, and finally getting a list of advertisements from the ad server.
  • Data-gathering servers are permanently dedicated to crawling the Web. Google's web crawler is known as Googlebot. They update the index and document databases and apply Google's algorithms to assign ranks to pages.
  • Each index server contains a set of index shards. They return a list of document IDs ("docid"), such that documents corresponding to a certain docid contain the query word. These servers need less disk space, but suffer the greatest CPU workload.
  • Document servers store documents. Each document is stored on dozens of document servers. When performing a search, a document server returns a summary for the document based on query words. They can also fetch the complete document when asked. These servers need more disk space.
  • Ad servers manage advertisements offered by services like AdWords and AdSense.
  • Spelling servers make suggestions about the spelling of queries. There are also "canary requests": a request is first sent to one or two leaf servers to check that the response time is reasonable, and it is only fanned out to the full set if the check succeeds. If the canary fails, the request is rejected, which protects the serving fleet from problematic requests.
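
A minimal sketch of the canary-request pattern described in the last bullet, with hypothetical leaf-server functions:

```python
def fan_out(leaves, request, canaries=2):
    """Probe a couple of leaves first; fan out only if the probe succeeds."""
    for leaf in leaves[:canaries]:
        try:
            leaf(request)                      # canary request
        except Exception as err:
            raise RuntimeError("canary failed; request rejected") from err
    return [leaf(request) for leaf in leaves]  # safe: query every leaf

def ok_leaf(request):
    return len(request)                        # stand-in for real serving work

results = fan_out([ok_leaf, ok_leaf, ok_leaf], "query")
```

The point of the probe is containment: a request that crashes or hangs servers takes down at most one or two canaries instead of the entire fleet.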

Security

In October 2013, The Washington Post reported that the U.S. National Security Agency had intercepted communications between Google's data centers as part of a program named MUSCULAR. This wiretapping was possible because, at the time, Google did not encrypt data passed inside its own network; it was rectified when Google began encrypting traffic between its data centers later in 2013.

Environmental impact

Some of Google's most efficient data centers in 2012 ran at 35 °C (95 °F) using only fresh-air cooling, requiring no electrically powered air conditioning.

In December 2016, Google announced that—starting in 2017—it would purchase enough renewable energy to match 100% of the energy usage of its data centers and offices. The commitment will make Google "the world's largest corporate buyer of renewable power, with commitments reaching 2.6 gigawatts (2,600 megawatts) of wind and solar energy".

In 2025, Google agreed to fund the restart of the 600 MW Duane Arnold nuclear power station in Iowa, targeted for 2029.

References


  1. (March 16, 2017). "How Many Servers Does Google Have?".
  2. "Google data centers, locations".
  3. "ISO/IEC 27001 - Compliance".
  4. Report, Times-Mirror Staff. (June 19, 2019). "Google 'caps off' $600M investment in Loudoun County".
  5. (November 29, 2017). "Google Plans 2 Loudoun Data Centers".
  6. Arellano (CIPER), Alberto. (2020-05-25). "Las zonas oscuras de la evaluación ambiental que autorizó "a ciegas" el megaproyecto de Google en Cerrillos".
  7. "Google instalará un nuevo data center en Chile".
  8. (2021-12-01). "Una nueva Región de Google Cloud en Santiago, todo el potencial de la nube ahora más cerca".
  9. "A Google Cloud region now available in Columbus, Ohio".
  10. "Namaste, India. Our new cloud region in Delhi NCR is now live.".
  11. "New Google Cloud region now open in Qatar".
  12. "Dublin, Ireland – Data Centers – Google".
  13. (September 2, 2021). "PROJEKT IN HANAU: RECHENZENTRUM FÜR GOOGLE ENTWICKELT".
  14. "Google Cloud investing in Germany with new infrastructure and sustainable energy".
  15. (2018-11-20). "Breaking ground for Google's first data center in Denmark".
  16. (September 11, 2020). "Google gets green light for its sixth data center in Hamina, Finland".
  17. "Investing in Google infrastructure, investing in Nevada.".
  18. Torres-Cortez, Ricardo. (September 16, 2020). "Google to invest additional $600M at Henderson data center – Las Vegas Sun Newspaper".
  19. Baxtel. "Google Henderson NV Data Center".
  20. "Coming soon: GCP's Hong Kong region".
  21. "Growing our presence in Asia Pacific: New GCP regions in Hong Kong and Jakarta".
  22. "The new Google Cloud region in Jakarta is now open".
  23. "Google Cloud Platform Tokyo region now open for business".
  24. "Google - Singapore".
  25. "Google Cloud Platform now open in London".
  26. "New Google Cloud region in Madrid, Spain now open".
  27. "The Google Cloud region in Melbourne is now open".
  28. "Welcome".
  29. (June 24, 2019). "Google to Spend $1.1 Billion on New Data Centers in Netherlands".
  30. "A Google Cloud region now available in Dallas, Texas".
  31. (June 14, 2019). "Google's massive $600M data center takes shape in Ellis County as tech giant ups Texas presence".
  32. "New Google Cloud region in Milan, Italy now open".
  33. "Project to expand Google's activities in Quebec - Future computer data center in Beauharnois".
  34. "GCP arrives in Canada with launch of Montréal region".
  35. "Google's ongoing commitment in Quebec".
  36. Stiver, Dave. (November 1, 2017). "GCP arrives in India with launch of Mumbai region".
  37. Williams, Mark. "Google joins New Albany high-tech crowd with $600 million data center".
  38. "New Albany, Ohio – Data Centers – Google".
  39. "Google Cloud launches new Osaka region to support growing customer base in Japan".
  40. "GCP arrives in South America with launch of São Paulo region!".
  41. (October 4, 2019). "Google confirms it is behind $600m Papillion data center project".
  42. "Papillion, Nebraska – Data Centers – Google".
  43. "Google Cloud region in Paris France now open".
  44. Dawn-Hiscox, Tanwen. (February 20, 2018). "Google to spend m on Pryor data center expansion".
  45. (September 28, 2018). "Google ha decido de invertir millones de dólares en su centro de datos en Chile".
  46. Tanwen Dawn-Hiscox. (April 18, 2017). "Google is planning a massive data center in Nevada".
  47. Jason Hidalgo. (November 16, 2018). "Nevada approves Google's M data center near Las Vegas, M in tax incentives".
  48. Jason Hidalgo. (September 16, 2020). "Google to invest $600 million in data center near Reno, gets tax break".
  49. "Google Cloud region in Salt Lake City now open".
  50. "New GCP Region in Seoul".
  51. "Google Cloud expands to Australia with new Sydney region".
  52. (October 19, 2022). "Google launches GCP region in Tel Aviv, Israel".
  53. "Google Cloud region in Tel Aviv Israel now open".
  54. "Google Cloud Toronto region now open".
  55. "New Google Cloud region in Turin, Italy now open".
  56. (September 27, 2019). "Google to Build Cloud Data Centers in Poland".
  57. (April 9, 2018). "Google kicks off construction on M Alabama data center".
  58. "Die Schweizer Google Cloud Platform zieht zu Green in den Aargau".
  59. (June 2021). "Google". [dead link]
  60. "Introducing 5 new Google Cloud regions".
  61. (2022-09-30). "Google plant ein großes Rechenzentrum südlich des Flughafens BER". Der Spiegel.
  62. "Berlin-Brandenburg Google Cloud region is now open".
  63. "Google Cloud Platform region updates".
  64. "Google affiliate's latest move signals selection of KC for $600M data center".
  65. "Bringing a new Google Cloud region to Kuwait".
  66. "Announcing new Google Cloud regions in Asia Pacific".
  67. "Announcing a new Google Cloud region in Mexico".
  68. (September 12, 2019). "Google purchases land for new data center in Tainan".
  69. (September 11, 2019). "Google to set up data center in Tainan".
  70. (September 11, 2019). "Google to set up second data center in Taiwan".
  71. (September 3, 2020). "Google confirms plans to build 3rd data center in Taiwan".
  72. (2024-03-14). "Our work to build a more sustainable future in Arizona".
  73. (2024-01-18). "Our $1 billion investment in a new UK data centre".
  74. (2024-08-29). "A new data center in Latin America".
  75. (2024-08-30). "Google y la piedra fundamental (comenzó la construcción del centro de datos en Parque de las Ciencias)".
  76. (2025-07-30). "Google to invest $15 billion in India for Asia's biggest AI datacentre project: Report". The Times of India.
  77. (2025-07-30). "Google to invest $6 billion in southern India data centre, sources say". Reuters.
  78. "Google Stanford Hardware".
  79. (March 2003). "Web search for a planet: The Google cluster architecture". IEEE Micro.
  80. Merlin, Marc. (2013). "Case Study: Live upgrading many thousand of servers from an ancient Red Hat distribution to a 10 year newer Debian based one".
  81. (2004). "Strategies for E-business". Pearson Education.
  82. "Google's secret power supplies". YouTube (M5wfv7RE_J4).
  83. Hennessy, John L.; Patterson, David A. Computer Architecture, Fifth Edition: A Quantitative Approach.
  84. (April 1, 2009). "Google uncloaks once-secret server". CNET.
  85. "Google Sustainability".
  86. "Analytics Press Growth in data center electricity use 2005 to 2010".
  87. (May 20, 2008). "Google Surpasses Supercomputer Community, Unnoticed?". Archived December 5, 2008.
  88. (2010). "Research".
  89. Lam, Cedric F.. (2010). "FTTH look ahead — technologies & architectures".
  90. "Peering DB".
  91. "Speakers". Open Network Summit.
  92. Barroso, Luiz André; Dean, Jeffrey; Hölzle, Urs. "Web Search for a Planet: The Google Cluster Architecture".
  93. "Warehouse size computers".
  94. Abts, Dennis. "High Performance Datacenter Networks: Architectures, Algorithms, and Opportunities".
  95. Fiach Reid. (2004). "Network Programming in .NET". Digital Press.
  96. Rich Miller. (March 27, 2008). "Google Data Center FAQ". Data Center Knowledge.
  97. Brett Winterford. (March 5, 2010). "Found: Google Australia's secret data network". ITNews.
  98. (2015). "Proceedings of the 2015 ACM Conference on Special Interest Group on Data Communication".
  99. Markoff, John; Hansell, Saul. (June 14, 2006). "Hiding in Plain Sight, Google Seeks More Power". New York Times. Retrieved October 15, 2008.
  100. "The Dalles, Oregon Data Center". Google. Retrieved January 3, 2011.
  101. (February 12, 2009). "Stora Enso divests Summa Mill premises in Finland for million". Stora Enso.
  102. (November 2017). "Stooora yllätys: Google ostaa Summan tehtaan". Kauppalehti.
  103. (February 4, 2009). "Google investoi 200 miljoonaa euroa Haminaan". Taloussanomat.
  104. "Hamina, Finland".
  105. "Finland – First Choice for Siting Your Cloud Computing Data Center". Archived July 6, 2013; accessed August 4, 2010.
  106. Rory Carroll. (October 30, 2013). "Google's worst-kept secret: floating data centers off US coasts".
  107. Rich Miller. (April 29, 2009). "Google Gets Patent for Data Center Barges".
  108. Martin Lamonica. (September 8, 2008). "Google files patent for wave-powered floating data center".
  109. (September 7, 2008). "Google's ship based datacenter patent application surfaces".
  110. (November 6, 2013). "Google barge mystery solved: they're for 'interactive learning centers'".
  111. Brandon Bailey. (August 1, 2014). "Google confirms selling a mystery barge". San Jose Mercury News.
  112. Chris Morran. (November 7, 2014). "What Happened To Those Google Barges?". Consumerist.
  113. Mark Levene. (2005). "An Introduction to Search Engines and Web Navigation". Pearson Education.
  114. (January 10, 2006). "Python Status Update". Artima.
  115. "Warning". Blog-city.
  116. "Quotes about Python". Python.
  117. (November 22, 2008). "Google Architecture". High Scalability.
  118. Fikes, Andrew. (April 2019). "TechTalk".
  119. (November 29, 2012). "Colossus: Successor to the Google File System (GFS)". SysTutorials.
  120. Dean, Jeffrey 'Jeff'. (2009). "Ladis". Cornell.
  121. (2012). "Research".
  122. (2008-07-28). "Google alums rev up a new search engine".
  123. "Google official release note".
  124. (August 18, 2009). "Google Developing Caffeine Storage System | TechWeekEurope UK". Eweekeurope.co.uk.
  125. "Developer Guide – Protocol Buffers – Google Code".
  126. (2006). "Bigtable: A Distributed Storage System for Structured Data".
  127. Windley, Phil. (June 24, 2008). "Velocity 08: Storage at Scale". Phil Windley's Technometria (Windley.com).
  128. "Message limit – Protocol Buffers | Google Groups".
  129. "Jeff Dean's keynote at WSDM 2009".
  130. Peng, Daniel; Dabek, Frank. (2010). "Large-scale Incremental Processing Using Distributed Transactions and Notifications". Proceedings of the 9th USENIX Symposium on Operating Systems Design and Implementation.
  131. "Google Caffeine jolts worldwide search machine". The Register.
  132. "Google Percolator – global search jolt sans MapReduce comedown". The Register.
  133. Chandler Evans. (2008). "Future of Google Earth". Madison Publishing Company.
  134. Chris Sherman. (2005). "Google Power". McGraw-Hill Professional.
  135. Michael Miller. (2007). "Googlepedia". Pearson Technology Group.
  136. (February 2013). "The tail at scale". Communications of the ACM.
  137. (October 30, 2013). "NSA infiltrates links to Yahoo, Google data centers worldwide, Snowden documents say". The Washington Post.
  138. (October 30, 2013). "N.S.A. Said to Tap Google and Yahoo Abroad".
  139. Gallagher, Sean. (October 31, 2013). "How the NSA's MUSCULAR tapped Google's and Yahoo's private networks". Condé Nast.
  140. Miller, Claire Cain. (October 31, 2013). "Angry Over U.S. Surveillance, Tech Giants Bolster Defenses".
  141. Humphries, Matthew. (March 27, 2012). "Google's most efficient data center runs at 95 degrees".
  142. Hölzle, Urs. (December 6, 2016). "We're set to reach 100% renewable energy — and it's just the beginning".
  143. Statt, Nick. (December 6, 2016). "Google just notched a big victory in the fight against climate change". Vox Media.
  144. Etherington, Darrell. (December 7, 2016). "Google says it will hit 100% renewable energy by 2017". AOL.
  145. (27 October 2025). "NextEra Energy partners with Google to restart Iowa nuclear plant". Reuters.
Info: Wikipedia Source

This article was imported from Wikipedia and is available under the Creative Commons Attribution-ShareAlike 4.0 License. Content has been adapted to SurfDoc format. Original contributors can be found on the article history page.
