Facebook’s New Data Center – What can we learn from it?

 

When I read about a data center like the new Facebook facility in Prineville, Oregon being commissioned, and learn of all the innovations, I'm heartened by the headway our industry is making, but I'm also forced to think of an analogy. The Facebook facility is very much like the NASA space program: lots of great tech gets created, but it takes a while before Tang is in everyone's fridge. Eww, I can still remember the taste of that vile orange-colored brew.

This blog is in no way a knock on what Facebook has done; quite the contrary. This new facility is an excellent example of how real innovation can occur when you break down the assumptions most of us operate under: that high temperatures or outside air are a problem, that what our vendors tell us is "all there is," and so on.

I led a team that built a very efficient facility for VMware in Wenatchee, Washington two years ago. Many of the basic characteristics of the Facebook facility (not their IT equipment) mirror what we did in Wenatchee. We used outside air, we conserved water through a grey water system, and we heated the offices with hot air from the servers without using ducting. We also had hot air containment, no raised floor, and a modular design for build-out of the larger pods and the smaller containment units. I'm no longer at VMware, so I don't know what the PUE is today, but during commissioning and first use we were seeing 1.25 or less as the expected efficiency. The point, however, is that Facebook has taken several known opportunities and improved on them, and it has pushed the boundaries of equipment design with its partners and suppliers.
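For readers who don't live and breathe efficiency metrics: PUE (Power Usage Effectiveness) is simply total facility power divided by the power that actually reaches the IT equipment, so 1.0 is the theoretical ideal. Here's a quick sketch of the arithmetic, with made-up numbers purely for illustration:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal; everything above it is cooling,
    power-distribution losses, lighting, and other overhead.
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical numbers for illustration only:
# 2,500 kW at the utility meter, 2,000 kW delivered to the IT load.
print(pue(2500, 2000))  # 1.25 -> 0.25 W of overhead per watt of IT load
```

At 1.25, only a quarter watt of overhead is spent for every watt of useful IT load; a traditional facility running at 2.0 spends a full watt.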

Why won't the Facebook design apply to everyone?

The Facebook design won't apply to everyone, just as it probably doesn't apply to some of Facebook's own IT application environments. The variety of hardware, legacy applications, and physical architectures in most large IT shops means it's a non-starter to consider building something that is one-size-fits-all. That said, it doesn't mean there aren't one-size-fits-all environments; they just aren't designed to the efficiency ratings Facebook is claiming. Also, besides the fact that Facebook can buy servers in large numbers (thousands at a time) with every order, it can also buy the same kind of server. The goal of homogeneity is still extremely elusive for enterprise IT environments.

What are the positive lessons to take away from the Facebook solution?

  • Higher temps in the data center are OK. If you're still running your facility at the traditional 68-72 degrees F, you're wasting a lot of energy. (ASHRAE's 2008 guidelines allow server inlet temperatures up to 80.6 degrees F.)
  • Using outside air has been proven out yet again. Early adopters began using it 5-6 years ago, and we're finally starting to accept it as a fact.
  • You can and should push back on your suppliers to give you gear that does the job without being wasteful.
    • Reduce packaging
    • Eliminate additions to servers that don't contribute to functionality, efficiency, or availability
    • Demand higher-efficiency power supplies
  • Look for modularity in virtually everything you implement
    • Server design
    • Data Center building design
    • Power distribution
    • UPS capacity
    • Network design
    • Etc.

When you push your suppliers, you'll be surprised at what you can get. But remember: you have to know what you need and why you need it, or others will define your needs for you.

Most of us know what to do; we just have to decide to do it. But remember that even the coolest-sounding efficiency benefit can sometimes cost more to implement than it will ever return in reduced energy or management costs, so do your homework.
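A back-of-the-envelope way to do that homework is a simple payback calculation: divide the up-front cost by the annual savings it produces. A minimal sketch, with hypothetical numbers:

```python
def simple_payback_years(upfront_cost: float,
                         annual_kwh_saved: float,
                         price_per_kwh: float,
                         annual_mgmt_savings: float = 0.0) -> float:
    """Years to recoup an efficiency investment from energy and management savings.

    Deliberately crude: ignores discounting, maintenance, and demand
    charges, so treat it as a first-pass filter, not a business case.
    """
    annual_savings = annual_kwh_saved * price_per_kwh + annual_mgmt_savings
    return upfront_cost / annual_savings

# Hypothetical: a $150,000 containment retrofit that saves
# 500,000 kWh/year at $0.08/kWh.
print(round(simple_payback_years(150_000, 500_000, 0.08), 1))  # ~3.8 years
```

If the payback lands past the expected life of the equipment, the coolest-sounding benefit has failed the homework.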

I'd like to close by saying that this Facebook data center generally supports the message of several of my previous blogs (Manufactured Data Center and Cookie Cutter Data Center). As data center builders, many of us hold on to our creations like they are our personal Frankenstein; it is time to let go. The complexity of building, owning and operating your own facility effectively is just too much risk and overhead for the average IT organization and for the enterprise itself.

 

Comments

Learning from those who have explored before us

Mark:


I wholeheartedly agree with everything you said about the Facebook design until the last sentence.


Facebook, Google, Microsoft, Yahoo and the other industry leaders are, like NASA, explorers with a great desire to be the best and the deep pockets to carry through with proofs of concept.


Unfortunately, the bad news is that much of the rest of the industry does not have the deep pockets, so they are not listening, nor do they make energy efficiency a priority. As a data center construction manager, far too often we are asked to build designs based on 20-year-old engineering concepts and to provide high-density power/cooling for clients that are still below 5 kW/cabinet. The real-world mantra is still very much "overbuild capacity because we will eventually use it." This is particularly true for those with data center needs under 20,000 sq ft and those who built their last data center more than a decade ago.


The good news is that in the past year I have had the good fortune to work with three clients (from concept through construction) who decided to go that extra mile and allowed us to introduce them to the worlds of efficiency, modularity and measured results. These clients spanned enterprise, higher education and colocation. Each went all out during the early concept and planning phases to understand their options and how each option affects CapEx, OpEx and reliability. It has been rewarding to see these clients get lower PUEs in flexible, scalable designs for the same or less capital cost than the old data center designs. Yes, choices were made, but in the process the clients became more knowledgeable about what they were getting and, more importantly, what they really needed.


As an industry, we urgently need to collect and promote real data on the multitude of solutions available today. We need to demonstrate to the great masses out there that they do not need deep pockets and that they can get reliable solutions for the same or less expense. Our clients are smart, but they have a difficult time securing reliable data amid all the vendor marketing hype and the false rumors that quickly spread about how the new solutions perform. So to all you industry leaders out there, I say: keep those case histories coming, keep sharing your successes (and failures), and the world will be a better place for it.


With respect to your last sentence: "The complexity of building, owning and operating your own facility effectively is just too much risk and overhead for the average IT organization and for the enterprise itself."


Let me say that colocation facilities are usually great, and there is certainly a wealth of services, so you can buy just what you need. However, there will always be a large segment of the marketplace whose business model will require a company-owned enterprise data center. Data centers are not complex and are not too much risk for those who seek professional and informed partners.


 Dennis Cronin

Response to Comment from Dennis Cronin

Hi Dennis,

Thanks for the comments.

I have to respectfully disagree with your comment about my last sentence.

Data centers are in fact very complex; otherwise we would have perfected them years ago. They may not seem complex to someone who has been building them for years (you), but they are the most complex environment in the average enterprise. Most data centers are not optimized, with issues that include, but aren't limited to, the following:

- Stranded capacity
- Inappropriate tier design and/or all one tier
- Not monitored effectively
- No single owner for the entire data center stack
- Limited external exposure for staff to changing tech
- Limited understanding of where costs are
- Limited understanding of what's real vs. myth
- No "working" disaster avoidance & recovery plans
- Ineffective security
- Etc.

While it's true that professional data center companies like Facebook are doing much better these days, these are not examples of data centers that the average company operates. I come from the perspective of the owner/operator more than the builder. I've experienced the difficulty of managing data center space effectively, even when I'm the one pushing the business for the improvement. I can only imagine what happens in companies where there isn't anyone who actually "owns" the lifecycle of the data center from top to bottom.

As I suggested in my blog "The Manufactured Data Center," we should no longer need to build "one-off" facilities for each company. There are designs available that can be adapted to any regional or technical requirement and still deliver a PUE of 1.3 or better. So the only decision a company should have to make is whether to continue owning the capacity itself or let professional operators do it. I would argue that, as with anything, if you don't have a plan to own it effectively, you shouldn't buy it.

I hope we can continue the debate, as only through debate can we find the truth.