[electronic music]
Sandeep: Hi. My name is Sandeep, a developer advocate on the Google Cloud Platform. Welcome to the Google data center at The Dalles, Oregon. Take a look around. Before we go inside, we need to make sure that we have the appropriate security clearance. Most Google employees can't even get in here.
So let's go on a special behind-the-scenes tour.
[keypad beeps, door opens]
I'm here with Noah from the site reliability engineering team. Noah, can you tell us a little bit more about the SRE role at Google?
Noah: Yeah, SREs write and maintain the software systems designed to keep our services running.
Sandeep: So what happens if one of these systems goes down?
Noah: We've designed our systems from the ground up to be able to handle any unexpected failures that might occur. We have highly redundant power, networking, and serving domains, so that even if we do lose an entire cluster, we're able to redirect those workloads and live-migrate data in order to minimize any impact. In addition, we have a team of SREs on call 24/7 who can tackle any problems that might arise.
Sandeep: Thanks, Noah. Now that we've learned more about the systems that manage our fleet at Google, let's take a deeper look at the data center infrastructure itself. Before we can continue further, we need to go through the biometric iris scan and circle lock. These only allow one person in at a time and require dual authentication to continue further. I'll see you on the other side.
[control beeps]
Computer voice: Please situate your eyes to begin the procedure. Please come a little closer to the camera.
[beep]
Sandeep: Welcome to the data center floor.
As you can tell, we have a lot of servers, and this is a single cluster on a single floor in a single building. Managing all of these servers on a global scale is quite a challenge. To utilize our fleet, we use tools such as Borg, Colossus, and Spanner. You may be familiar with similar tools, such as Kubernetes, Google Cloud Storage, and BigQuery. These tools allow Google engineers and cloud customers to more easily manage infrastructure, allowing everyone to build innovative and scalable applications. Here at Google, a lot of our infrastructure is custom-made. This gives us the flexibility and performance we need to run all of our services at scale.
Oh, hey, it's Virginia, one of our network engineers.
Virginia: Hey, Sandeep.
Sandeep: Virginia, what are you working on today?
Virginia: Today I'm working with hardware ops to expand this data center network to deploy additional machines in this building. Our fleet is constantly growing to support new capacity for Google products and our cloud customers.
Sandeep: That sounds like a lot of work, to be constantly adding capacity around the globe.
Virginia: Well, we designed our network so that this kind of capacity growth isn't very hard. Jupiter, our current data center network technology, is a hierarchical design using software-defined networking principles. So just like with our servers, we've abstracted away the specific details of our network and can manage them like they're software programs and data.
Sandeep: Abstracting seems to be a common theme here at Google. I've also noticed there's a lot of fiber running in our data centers.
Virginia: That's right. A single building can support 75,000 machines and carry over one petabit per second of bandwidth, which is actually more than the entire Internet.
Sandeep: Wow.
Virginia: This allows us to reliably access storage and compute resources with low latency and high throughput.
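As a rough back-of-envelope check on those figures, here is a minimal sketch that divides the quoted per-building bandwidth across the quoted machine count. It assumes the bandwidth is spread evenly across machines, which is a simplification for illustration only.

```python
# Back-of-envelope sketch using the figures quoted above: ~75,000 machines per
# building and over one petabit per second of bandwidth.
# Assumes bandwidth is spread evenly across machines (a simplification).

machines_per_building = 75_000
building_bandwidth_bps = 1e15  # 1 petabit per second

per_machine_gbps = building_bandwidth_bps / machines_per_building / 1e9
print(f"~{per_machine_gbps:.1f} Gb/s of fabric bandwidth per machine")  # roughly 13 Gb/s
```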
Sandeep: So how is this data center connected to all our other data centers around the globe?
Virginia: Google runs B4, our own private, highly efficient backbone network, which is actually growing faster than our Internet-facing network. It connects all our data centers together and allows services to efficiently access resources in any location.
Sandeep: Nice.
I finally know what all this Google fiber is really used for. Thanks, Virginia.
Virginia: No problem.
Sandeep: So now that you've seen all the compute and networking horsepower required to run your workloads in the cloud, let's take a look at where your data is safely and securely stored. Let's go. Whether you're querying terabytes of data on BigQuery or storing petabytes in Google Cloud Storage, all of your data needs to be stored on a physical device. Our data center infrastructure allows us to access our storage quickly and securely. At our scale, we need to handle hard drive and SSD failures on a daily basis. While your data is replicated and safe, we need to destroy or recycle used hard drives so no one can access your data.
From the time a disk is removed from the server to the time it's decommissioned, we maintain a very strict chain of custody. The disks are completely wiped and then destroyed in a huge shredder. Let's go shred some hard drives.
[beeping]
We've looked at a lot of the hardware that runs in our data centers, but it doesn't end there.
We need to cool and power our infrastructure in an environmentally sustainable and reliable way. Let's take a look at how we cool our servers. Welcome to the mechanical equipment room. Looks pretty cool, doesn't it? Oh, hey, it's Brian, one of our data center facilities technicians!
Brian: Hey, Sandeep.
Sandeep: Hey, Brian. Brian, can you tell us a little bit more about this room?
Brian: Sure. This is a cooling plant for one of the data centers that we have on site. A lot of heat is generated on the server floor, and it all has to be removed, and that starts right here in the cooling plant. It's basically two loops: we have the condenser water loop and we have the process water loop. The process water loop is these blue and red pipes over here. They take the heat off the server floor and transfer it to these heat exchangers here. The condenser water loop is these green and yellow pipes here. They take the cold water from the basin underneath us, transfer it to these heat exchangers here, and send it up to the cooling towers on the roof.
Sandeep: I notice our pipes are Google colors. It's pretty cool.
So how efficient is our data center?
Brian: Well, Google has some of the most efficient data centers in the world. In fact, when we started reporting our power usage effectiveness, or PUE, in 2008, most data centers were around 100% overhead. At that point in time, Google was at 20% overhead, but since then, we've reduced it to just 12%, and that even includes our cafeterias.
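To put those overhead figures in context: PUE is the ratio of total facility power to the power actually delivered to IT equipment, so an overhead of X% corresponds to a PUE of 1 + X/100. Here is a minimal sketch of that arithmetic, using only the numbers quoted above as illustrations rather than official measurements.

```python
# Sketch of what the "overhead" figures above mean in PUE terms.
# PUE (power usage effectiveness) = total facility power / IT equipment power,
# so X% overhead corresponds to a PUE of 1 + X/100.

def pue_from_overhead(overhead_percent: float) -> float:
    """Convert an overhead percentage into a PUE value."""
    return 1.0 + overhead_percent / 100.0

# Overhead figures quoted in the tour (illustrative, not official measurements):
examples = {
    "typical data center, 2008": 100.0,  # ~100% overhead -> PUE ~2.0
    "Google, 2008": 20.0,                # 20% overhead   -> PUE 1.20
    "Google, current": 12.0,             # 12% overhead   -> PUE 1.12
}

for label, overhead in examples.items():
    print(f"{label}: PUE ~{pue_from_overhead(overhead):.2f}")
```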
Sandeep: Whoa! That is so low! Also, what's this big green machine for?
Brian: Oh, well, this is a chiller. We very rarely use them, but it helps keep the process water temperature in the desired range when it gets really hot outside, basically helping the cooling tower do its job. And some of our newer data centers have no chillers at all.
Sandeep: I love how our new data centers are even more efficient. By the way, can we go up and take a look at a cooling tower?
Brian: Sure. Let's go.
Sandeep: Wow, what a view up here!
Brian: So, Sandeep, this is a cooling tower. It uses evaporation to rapidly cool the water from the condenser loop and sends it back down to the basin. You could say we're making actual clouds with the cloud.
Sandeep: Clouds making actual clouds. Welcome to Google! So, Brian, how do we power the cloud?
Brian: Well, that all starts at Google's power substation. Let's go take a look. So this is the Google-owned power substation. This is where the high-voltage power enters the site. The voltage is stepped down and then sent to multiple power distribution centers, such as this one right here.
Sandeep: What happens if a power distribution center loses power?
Brian: If it loses power, we have multiple generator and utility backup sources available to maintain power to those servers.
Sandeep: And where does all the power come from?
Brian: It actually comes from multiple hydroelectric power plants that are nearby.
Sandeep: I love how Google uses reliable green energy whenever possible.
Brian: We are 100% carbon neutral, actually.
Sandeep: That's pretty cool. You know, it seems like Google builds reliability from the ground up, from the power and cooling all the way to the software systems that manage our fleet. Thanks for showing me around, Brian.
Brian: No problem. Have a great day.
Sandeep: Thank you for joining me on this special behind-the-scenes tour. Please check out cloud.google.com to learn how you can build what's next.