Facebook in Prineville, a slightly different view

Oooh! Pretty blinkenlights!

On Friday, Facebook’s Senior Open Programs Manager, David Recordon, took a group of us from the OSL on a fantastic behind-the-scenes tour of the new Facebook data center in Prineville, Oregon. It was an amazing experience that prompted me to think about things I haven’t thought about in quite a few years. You see, long before I was ever a server geek I spent my summers and school holidays working as an apprentice in my family’s heating and air conditioning company. As we were walking through the data center looking at the ground-breaking server technology, I found myself thinking about terms and technologies I hadn’t considered much in years – evaporative cooling, plenums, airflow, blowers. The computing technology is fascinating, but it’s been covered exhaustively elsewhere. I’d like to spend some time talking about something a bit less sexy but equally important: how Facebook keeps all those servers from melting down from all the heat they generate.

First, though, some scale. They’re still building the data center – only one of the three buildings has been built so far, and it has less than half of its server rooms completed – but even at a fraction of its proposed capacity the data center was reportedly able to handle 100% of Facebook’s US traffic for a while when they tested it last week. The students we brought with us did a bit of back-of-the-envelope calculation: when the facility is fully built out, we suspect it’ll be able to hold on the order of hundreds of thousands of servers. It’s mind-boggling to think how much heat that many servers must generate. It’s hard enough to keep the vastly smaller OSL data center cool; the idea of scaling that up is daunting, to say the least. As the tour progressed, I found myself more and more fascinated by the airflow and cooling.
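
Just to put rough numbers on that intuition, here’s the kind of back-of-the-envelope math we were doing. The server count and per-server wattage below are my own assumptions for illustration, not figures from Facebook or the tour:

```python
# Rough heat-load estimate with assumed (not official) numbers.
servers = 200_000          # "on the order of hundreds of thousands"
watts_per_server = 300     # assumed average draw per server, in watts

total_watts = servers * watts_per_server
total_megawatts = total_watts / 1_000_000

# Essentially every watt a server draws ends up as heat that the
# cooling system has to move back out of the building.
print(f"~{total_megawatts:.0f} MW of heat to remove at full build-out")
# -> ~60 MW of heat to remove at full build-out
```

Even if my assumed numbers are off by a factor of two in either direction, that’s an enormous amount of heat to get rid of without traditional air conditioning.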

The bottom floor of the facility is all data center floor and offices, while the upper floors are essentially giant plenums (the return air directly above the main floor, and the supply above the return). There is no ductwork, just huge holes (10′x10′) in the ceiling of the data center floor that bring the cool air down from the “penthouse”, and open ceilings above the “hot” side of the racks to move the hot air out. A lot of the air movement is passive/convective – hot air rises from the hot side of the racks through the ceiling to the second floor, and the cooled air drops down from the third floor onto the “cool” side of the server racks, where it’s pulled back through the servers. The air flow is certainly helped along by the fans in the servers and blowers up in the “penthouse”, but it’s clearly designed to take advantage of the fact that hot air rises and cold air sinks. They pull off a bit of the hot air to heat the offices, and split the rest between exhausting it outside and mixing it with outside air to recirculate.
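
To make that “split the rest” part concrete, here’s a minimal sketch of the return-air accounting. The fractions are illustrative assumptions on my part; in the real building they’re adjusted continuously rather than fixed:

```python
# Minimal sketch of the return-air split described above.
# The fractions are assumptions for illustration, not measured values.

def split_return_air(return_cfm, office_fraction=0.05, recirc_fraction=0.3):
    """Divide hot return air between office heating, recirculation
    (mixing with outside air), and exhaust. Values in CFM."""
    to_offices = return_cfm * office_fraction
    remainder = return_cfm - to_offices
    recirculated = remainder * recirc_fraction   # mixed with outside air
    exhausted = remainder - recirculated         # dumped outside
    return to_offices, recirculated, exhausted

offices, recirc, exhaust = split_return_air(1_000_000)
print(f"offices: {offices:,.0f} CFM, "
      f"recirculated: {recirc:,.0f} CFM, exhausted: {exhaust:,.0f} CFM")
```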


OK, enough with the talking, here are some pictures. Click on the images to enlarge them. Walking through the flow, we start at the “cool” side of the server racks:
  
Notice there are no faceplates to restrict the airflow. The motherboards, power supplies, processor heat sinks, and RAM are all completely exposed.

Then we move on to the “hot” side of the racks:
    
The plastic panels you can see on top of the racks and in the middle image guide the hot air coming out of the servers up through the open ceiling to the floor above. No ductwork needed. There are plastic doors at the ends of the rows to completely seal the hot side from the cold side. It was surprisingly quiet even here. The fans are larger than standard and run at low speed, so while it was uncomfortably warm, it was not very loud at all. We could speak normally and be heard easily, very unlike the almost-deafening roar of a typical data center.

The second “floor” is basically just a big open plenum that connects the exhaust (“hot”) side of the server racks to the top floor in a couple of places (recirculating and/or exhaust, depending on the temperature). It’s a sort of half-floor between the ground floor and the “penthouse” that isn’t walkable, so we climbed straight up to the top floor – a series of rooms (30′ high and very long) that do several things:

First, outside air is pulled in (the louvers to the right):

The white block/wall on the left is the return air plenum bringing the hot air from the floor below. The louvers above it bring the outside air into the next room.

Next, the outside air is mixed with the return air and filtered:

The upper louvers on the right are outside air, lower are return air bringing the hot air up from the servers. The filters (on the left) look like standard disposable air filters. Behind them are much more expensive high-tech filters.

The air is then humidified and cooled with rows and rows of tiny atomizers (they use surprisingly little water, and it was weird walking through a building-sized swamp cooler):
    
The left image shows the back of the air filters. The middle image shows the other side of the room with the water jets. The right image is a closer shot of the water jets/atomizers.
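
For anyone who hasn’t worked around swamp coolers: evaporating water can only cool air down toward its wet-bulb temperature, and how close you get depends on the effectiveness of the mist and media. Here’s a minimal sketch of that relationship; the effectiveness figure and the example conditions are assumptions of mine, not measurements from the building:

```python
# Minimal sketch of direct evaporative ("swamp cooler") cooling.
# Effectiveness and example temperatures are illustrative assumptions.

def evaporative_outlet_temp(dry_bulb_f, wet_bulb_f, effectiveness=0.9):
    """Approximate supply-air temperature leaving an evaporative stage.

    dry_bulb_f:    incoming air temperature (deg F)
    wet_bulb_f:    incoming wet-bulb temperature (deg F); the theoretical
                   limit of evaporative cooling
    effectiveness: how close the mist/media gets to that limit (0-1)
    """
    return dry_bulb_f - effectiveness * (dry_bulb_f - wet_bulb_f)

# Example: a warm, dry high-desert afternoon (assumed conditions).
print(evaporative_outlet_temp(dry_bulb_f=95, wet_bulb_f=63))  # -> 66.2
```

Dry high-desert air like Prineville’s has a big spread between dry-bulb and wet-bulb temperatures, which is exactly why this approach works so well there.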

Blowers pull the now-cooled air through the sponges (for lack of a better word) in front of the atomizers and pass it on to be sent down to the servers:

They were remarkably quiet. We could easily speak and be heard over them and it was hard to tell how many (if any) were actually running.

Finally the air is dumped back into the data center through giant holes in the floor:
    
The first image shows the back of the blowers (the holes in the floor are to the right). The middle image shows the openings down to the server floor (the blowers are off to the left). The third image is looking down through the opening to the server room floor. The orange devices are smoke detectors.

The last room on the top floor is where the unused hot return air is exhausted outside:

None of the exhaust fans were actually running; the passive airflow was sufficient without any assistance. The grates in the floor open down to the intermediate floor connecting to the hot side of the racks.

No refrigerant is used at all, just evaporative cooling (and even that only when needed). The only electricity used in the cooling system is for the fans and the water pumps. All of it – the louvers, the water atomizers, and the fans – is automatically controlled to maintain a constant temperature and humidity down on the data center floor. When we were there, none of the fans (neither intake nor exhaust) appeared to be running; it was cool enough outside that the building was passively exhausting all of the air from the data center and pulling in 100% outside air on the supply side. As best I could tell, the only fans actually running were the tiny 12V fans mounted on the servers themselves.
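
If I were to sketch that control logic in code, it would look something like this. The setpoints, thresholds, and mode names are entirely my own assumptions (the real controls also weigh humidity, not just temperature), but it captures the basic decision we saw in action:

```python
# Toy sketch of the economizer-style decision described above.
# Setpoints and mode names are assumptions for illustration only.

def choose_cooling_mode(outside_f, supply_setpoint_f=70):
    if outside_f <= supply_setpoint_f - 20:
        # Too cold to use directly: blend hot return air back in.
        return "mix return air with outside air"
    elif outside_f <= supply_setpoint_f:
        # What we saw on the tour: 100% outside air, fans mostly idle.
        return "100% outside air, passive exhaust"
    else:
        # Warm day: run the atomizers to knock the supply temperature down.
        return "100% outside air + evaporative cooling"

for outside_f in (35, 62, 88):
    print(outside_f, "->", choose_cooling_mode(outside_f))
```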

This design makes great sense. It’s intuitive – hot air rises, cool air falls – and it obviously efficiently takes advantage of that fact. I kept thinking, “this is so simple! Why haven’t we been doing this all along?”
