Facebook’s New Data Center – What can we learn from it?
When I read about a data center like Facebook's new facility in Prineville, Oregon being commissioned and learn of all the innovations, I'm heartened by the headway our industry is making, but I'm also forced to think of an analogy. The Facebook facility is very much like the NASA space program: lots of great tech gets created, but it takes a while before Tang is in everyone's fridge. Ew, I can still remember the taste of that vile orange-colored brew.
This blog is in no way a knock on what Facebook has done; quite the contrary. This new facility is an excellent example of how real innovation can occur when you break down the assumptions most of us operate under: that high temperatures or outside air are a problem, that what our vendors tell us is "all there is," and so on.
I led a team that built a very efficient facility for VMware in Wenatchee, Washington two years ago. Many of the basic characteristics of the Facebook facility (not its IT equipment) mirror what we did in Wenatchee: we used outside air, we conserved water through a grey-water system, and we heated the offices with hot air from the servers without using ducting. We also had hot-air containment, no raised floor, and a modular design for building out the larger pods and the smaller containment units. I'm no longer at VMware, so I don't know what the PUE is today, but during commissioning and first use we were seeing 1.25 or less as the expected efficiency. The point, however, is that Facebook has taken several known opportunities and improved on them, and they've pushed the boundaries of equipment design with their partners and suppliers.
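For readers newer to the metric: PUE (Power Usage Effectiveness) is simply total facility energy divided by the energy delivered to IT equipment, so 1.25 means 25% overhead on top of the IT load. Here's a minimal sketch of the arithmetic; the load figures are illustrative assumptions, not measurements from Wenatchee or Prineville.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.
    1.0 is the theoretical ideal (every watt goes to IT gear)."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative numbers only: a 1 MW IT load carrying 250 kW of cooling,
# power distribution, and lighting overhead gives the ~1.25 figure.
print(round(pue(total_facility_kw=1250, it_equipment_kw=1000), 2))  # 1.25
```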
Why won't the Facebook design apply to everyone?
The Facebook design won't apply to everyone, just as it probably doesn't apply to some of Facebook's own IT application environments. The variety of hardware, legacy applications, and physical architectures in most large IT shops means it's a non-starter to consider building something that is one-size-fits-all. That said, it doesn't mean there aren't one-size-fits-all environments; they just aren't designed to the same efficiency ratings being claimed by Facebook. Beyond the fact that Facebook can buy servers in large numbers (thousands at a time) with every order, they can also buy the same kind of server. That level of homogeneity is still extremely elusive for enterprise IT environments.
What are the positive lessons to take away from the Facebook solution?
- Higher temperatures in the data center are OK. If you're still running your facility at the traditional 68-72 degrees F, you're wasting a bunch of energy.
- Using outside air has been proven out yet again. Early adopters began using it 5-6 years ago, and we're finally starting to accept it as fact.
- You can and should push back on your suppliers to give you gear that does the job without being wasteful:
  - Reduce packaging
  - Eliminate unnecessary additions to servers that don't add to functionality, efficiency, or availability
  - Demand higher-efficiency power supplies (see the sketch after this list for why this matters)
- Look for modularity in virtually everything you implement:
  - Server design
  - Data center building design
  - Power distribution
  - UPS capacity
  - Network design
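To make the power-supply point concrete, here's a back-of-the-envelope comparison of what different PSU efficiency levels cost you over a year. Every input here (the 500 kW load, the $0.07/kWh rate, the efficiency tiers) is an illustrative assumption, not a figure from Facebook or anyone else; plug in your own numbers.

```python
# Back-of-the-envelope: annual energy wasted by power supplies at
# different conversion efficiencies. All inputs are illustrative.

HOURS_PER_YEAR = 8760

def annual_waste_kwh(it_load_kw: float, psu_efficiency: float) -> float:
    """kWh lost per year converting utility power at the given PSU efficiency."""
    input_kw = it_load_kw / psu_efficiency   # power drawn to deliver the IT load
    return (input_kw - it_load_kw) * HOURS_PER_YEAR

it_load_kw = 500        # assumed aggregate server load
rate_per_kwh = 0.07     # assumed utility rate, $/kWh

for eff in (0.80, 0.90, 0.94):   # commodity vs. higher-efficiency supplies
    waste = annual_waste_kwh(it_load_kw, eff)
    print(f"{eff:.0%} efficient: {waste:,.0f} kWh/yr wasted "
          f"(${waste * rate_per_kwh:,.0f})")
```

Even at these modest assumed numbers, the gap between an 80% and a 94% supply is six figures of kilowatt-hours a year, which is exactly the kind of leverage you get by pushing suppliers.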
When you push your suppliers, you'll be surprised at what you can get. But remember: you have to know what you need and why you need it, or others will define what you need for you.
Most of us know what to do; we just have to decide to do it. Just remember that even the coolest-sounding efficiency benefit can sometimes cost more to implement than you'll get back in reduced energy or management costs, so do your homework.
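One way to do that homework is a simple payback check. The sketch below uses made-up numbers for a hypothetical containment retrofit; the function is the standard upfront-cost-over-annual-savings calculation, nothing more.

```python
def simple_payback_years(upfront_cost: float, annual_savings: float) -> float:
    """Years to recoup an efficiency investment from energy/management savings."""
    return upfront_cost / annual_savings

# Hypothetical: a $120k retrofit saving $30k/yr pays back in 4 years.
# The same retrofit saving only $10k/yr takes 12 years, likely longer
# than the refresh cycle of the equipment it serves.
print(simple_payback_years(120_000, 30_000))  # 4.0
print(simple_payback_years(120_000, 10_000))  # 12.0
```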
I'd like to close by saying that this Facebook data center generally supports the message in several of my previous blogs (Manufactured Data Center and Cookie Cutter Data Center). As data center builders, many of us hold on to our creations like they are our personal Frankenstein; it's time to let go. The complexity of building, owning, and operating your own facility effectively is just too much risk and overhead for the average IT organization and for the enterprise itself.