Google builds its own custom servers but discloses few important details about them. Facebook, by contrast, is trying to provide as much information as possible about its own custom servers, in the hope that this will benefit the whole community.
What both big companies currently seem to be missing is the emerging shift to water-cooled servers, a change already under way in the HPC world. The open hardware movement will allow reuse of the best engineering ideas, including field-proven implementations of water cooling.
I was inspired to write this post after seeing "The Story of Send" yesterday, linked from Google's main page. What grabbed my attention was a video about Google's custom server technology: "Google Data Center Efficiency Best Practices. Part 5 — Optimize Power Distribution".
There, Tracy Van Dyk, a power engineer at Google, explains how they power their servers: instead of having one big UPS somewhere in the machine room, Google puts a small battery into every server. See for yourself in the following screenshot from the video:
The video explains why this design avoids the two conversions typical of traditional UPS systems: AC to DC and then DC back to AC. Only one conversion remains: the server's power supply converts AC to DC, and that DC output both powers the server and charges the on-board battery.
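To see why eliminating conversion stages matters, here is a back-of-envelope sketch. The per-stage efficiencies are illustrative assumptions of mine, not figures quoted in the video or anywhere else:

```python
# Back-of-envelope comparison of cascaded conversion losses.
# The per-stage efficiencies below are illustrative assumptions,
# not figures quoted by Google or The Green Grid.

def end_to_end(*stage_efficiencies):
    """End-to-end efficiency is the product of the stage efficiencies."""
    result = 1.0
    for eta in stage_efficiencies:
        result *= eta
    return result

# Traditional double-conversion UPS in front of the server PSU:
# AC->DC (rectifier), DC->AC (inverter), then AC->DC again in the PSU.
traditional = end_to_end(0.95, 0.95, 0.92)

# Google-style: only the server PSU's AC->DC conversion remains;
# the on-board battery sits on the DC side and adds no extra stage.
per_server_battery = end_to_end(0.92)

print(f"traditional UPS chain : {traditional:.1%}")        # ~83%
print(f"per-server battery    : {per_server_battery:.1%}")  # ~92%
```

With these assumed numbers, dropping the two UPS stages buys roughly ten percentage points of end-to-end efficiency; the exact figure depends entirely on the real per-stage efficiencies.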
If Google procures servers at massive scale and opted for a custom server architecture despite its higher cost, why didn't they go one step further and exploit economies of scale by powering their servers with DC?
The scheme is not new: one big AC/DC converter powers many servers at once and simultaneously charges a bank of batteries. If the converter goes down, the batteries take over.
A single big converter is cheaper than a multitude of small power supply units, one per server. Bigger (and therefore cheaper) batteries near the converter could likewise replace the many small batteries inside each server. DC power, at 12Vdc or 48Vdc, would then go directly into the servers.
I am sure Google investigated this approach, so why did they reject it? For 12Vdc there is a plausible answer: distributing power at 12Vdc is less efficient than at the more typical 48Vdc, because delivering the same power at a quarter of the voltage requires four times the current, and resistive losses in the wires grow with the square of the current. It may therefore be more efficient overall to distribute 48Vdc and convert it to 12Vdc locally, at the motherboard level, accepting a small conversion loss, than to distribute 12Vdc and suffer much larger ohmic losses. Very well, but then Google could have used 48Vdc. Why didn't they?
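To make the ohmic-loss argument concrete, here is a tiny sketch. The server power and cable resistance are made-up numbers for illustration, not measurements of any real installation:

```python
# Resistive (ohmic) loss in the distribution wiring: P_loss = I^2 * R,
# where I = P_load / V for a given load power and distribution voltage.
# The load power and cable resistance below are assumptions for
# illustration only, not measurements of any real installation.

LOAD_W = 300.0      # assumed power drawn by one server, in watts
CABLE_OHMS = 0.02   # assumed round-trip resistance of the feed cable

for volts in (12.0, 48.0):
    current = LOAD_W / volts              # current needed at this voltage
    loss = current ** 2 * CABLE_OHMS      # I^2 * R loss in the cable
    print(f"{volts:4.0f} Vdc: I = {current:5.1f} A, "
          f"cable loss = {loss:6.2f} W ({loss / LOAD_W:.1%} of the load)")

# At 12 Vdc the current is 4x higher than at 48 Vdc, so the ohmic loss
# in the same cable is 16x higher.
```

Whatever the actual numbers, the factor of sixteen between the two voltages is what makes 48Vdc distribution attractive.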
On 1 December 2008, The Green Grid published White Paper #16: "Quantitative Analysis of Power Distribution Configurations for Data Centers". It compared eight contemporary power distribution configurations and concluded that they are all roughly equally efficient, and that no single option is best for every usage scenario. The paper says:
“The end-to-end efficiencies of all of the contemporary implementations are generally within about 5% of each other at loads above 20%”
The 480Vac to 48Vdc scheme is described on page 19 as "Distribution Configuration 5". If all configurations are approximately equally efficient, that may explain why Google didn't bother powering their servers with DC.
Facebook went further than Google and made its custom hardware designs open source. At the Open Compute Project website they currently offer drawings and technical specifications for a rack, a server chassis, power supplies, a special battery cabinet, a storage device ("Open Vault"), and plenty of other useful information.
Their Open Rack is designed to house some of the power conversion equipment. The rack accepts either AC or high-voltage DC input (such as 360Vdc to 400Vdc) and delivers 12.5Vdc to the IT equipment via bus bars.
A battery cabinet provides backup 48Vdc power, which the rack draws on when the mains AC fails. Multiple power supply scenarios are possible. Very flexible, indeed.
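To get a feel for the currents involved in the scheme described above, here is a tiny sketch. Only the 12.5Vdc bus bar voltage and the 48Vdc backup voltage come from the Open Compute specifications; the rack load is an assumed figure, and conversion losses are ignored:

```python
# Rough sketch of the currents implied by the Open Rack power scheme.
# The rack load below is an assumed, illustrative figure; only the
# 12.5 Vdc bus bar and 48 Vdc backup voltages come from the Open
# Compute specifications. Conversion losses are ignored.

RACK_LOAD_W = 10_000.0   # assumed total IT load of one rack, in watts
BUS_BAR_VDC = 12.5       # Open Rack output voltage on the bus bars
BACKUP_VDC = 48.0        # battery cabinet backup voltage

bus_bar_current = RACK_LOAD_W / BUS_BAR_VDC   # current the bus bars carry
backup_current = RACK_LOAD_W / BACKUP_VDC     # current drawn from the batteries

print(f"Bus bars at {BUS_BAR_VDC} Vdc carry about {bus_bar_current:.0f} A")
print(f"Battery cabinet at {BACKUP_VDC} Vdc supplies about {backup_current:.0f} A")
```

Hundreds of amps at 12.5Vdc suggest why solid bus bars, rather than ordinary cabling, are the sensible way to distribute power inside the rack.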
The Open Compute Project is one year old now, and from my point of view it is a very promising initiative. With time, it will standardise servers so that they fit into the standard Open Rack.
With yet more time, it will standardise blade servers and blade chassis. Then you will be able to mix blade servers from any manufacturer inside a single blade chassis. No more vendor lock-in: blade servers from HP, IBM, Dell, SuperMicro and others will all work together seamlessly.
But one thing is missing, and I cannot even tell whether anybody has thought of it yet: water-cooled servers. It is by now an established fact, especially in the HPC community, that water cools hot racks more efficiently (and therefore more cheaply) than air. There are already server products that use (hot) water cooling.
One is the AURORA solution from RSC/Eurotech. Another is IBM's Aquasar cooling technology, employed in the 3 petaflop/s SuperMUC system. T-Platforms is also working on a solution of its own. (Update: Bull announced their water-cooled blade servers in November 2011.)
In other words, the technology is already mature. But for it to be compatible with the Open Compute Project, measures must be taken early to ensure mechanical compatibility (so that water hoses don't get in the way of rack walls or network cables) and electrical compatibility (so that a leaking water hose cannot short-circuit an entire data centre).
I hope the Open Compute Project will consider the possibility of using water-cooled servers with its hardware. I believe that if we start discussing this early, server manufacturers will have more incentive to design standards-compliant water-cooled commodity servers. And once something becomes a commodity, it becomes cheaper, to the benefit of all of us.
To learn more about open-source hardware in general, refer to this Wikipedia article. You may also be interested to know that the 8-core OpenSPARC CPU by Sun Microsystems is available as open source (and an earlier version has been available since 2005)! Open-source hardware is clearly conquering the world!