When Facebook first announced the Open Compute Project it was a very exciting prospect for people like me. Ever since virtualization became the de facto standard for servers in the data center, hardware density has been the name of the game. Client after client I worked for was seeking ways to reduce their server fleet’s footprint, both by consolidating through virtualization and by taking advantage of technology like blade servers. However, whilst the past half decade has seen a phenomenal increase in the amount of computing power available, and thus an increase in density, there hasn’t been another revolution on the scale that blades brought. That was until Facebook went open kimono on their data center strategies.
The designs proposed by the Open Compute Project are pretty radical if you’re used to traditional computer hardware, primarily because they’re so minimalistic and because they expect a 12.5V DC input rather than the 240/120V AC that’s typical of modern data centers. Other than that they look very similar to your typical blade server, and indeed the first revisions achieved pretty comparable densities. The savings at scale were tremendous, however, as you gained a lot of efficiency by not running a power supply in every server, and the simple design greatly improved cooling. Apart from Facebook, though, I wasn’t aware of any other big providers using ideas like this until Microsoft announced today that it was joining the project and contributing its own designs to the effort.
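To give a feel for where those savings come from, here’s a back-of-the-envelope sketch. The efficiency figures and per-server draw are my own assumptions for illustration, not numbers from the Open Compute specifications:

```python
# Rough illustration of removing the per-server power supply.
# All figures below are assumptions made for the sake of the example.
servers_per_rack = 96        # assumed rack density
watts_per_server = 250       # assumed average draw per server (W)
per_server_psu_eff = 0.88    # assumed efficiency of an individual server PSU
shared_supply_eff = 0.94     # assumed efficiency of a shared rack-level supply

it_load = servers_per_rack * watts_per_server
draw_individual = it_load / per_server_psu_eff
draw_shared = it_load / shared_supply_eff

print(f"IT load per rack:               {it_load / 1000:.2f} kW")
print(f"Wall draw with per-server PSUs: {draw_individual / 1000:.2f} kW")
print(f"Wall draw with a shared supply: {draw_shared / 1000:.2f} kW")
print(f"Saving per rack:                {(draw_individual - draw_shared) / 1000:.2f} kW")
```

Multiply a saving like that across thousands of racks and the appeal at Facebook’s scale becomes obvious.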
On the surface they look pretty similar to the current Open Compute standards, although the big differences come from the chassis. Instead of doing away with a power supply completely (as the current Open Compute servers advocate), it has a dedicated power supply in the base of the chassis feeding all the servers. Whilst I can’t find any details on it, I’d expect this means it could operate in a traditional data center with an AC power feed rather than requiring the more specialized 12.5V DC. At the same time, the density they can achieve with their cloud servers is absolutely phenomenal: 96 of them in a standard rack. For comparison, the densest blade system I’ve ever supplied topped out at 64 servers, and most wouldn’t go past 48.
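To put those figures side by side, here’s a quick sketch. The server counts are the ones quoted above; the 42U rack height is an assumption on my part:

```python
# Density comparison using the figures quoted above.
# The 42U rack height is an assumption; server counts come from the post.
rack_units = 42
configs = {
    "Microsoft cloud servers": 96,
    "Densest blade system I've supplied": 64,
    "Typical blade deployment": 48,
}

baseline = configs["Typical blade deployment"]
for name, servers in configs.items():
    print(f"{name}: {servers} servers "
          f"({servers / rack_units:.2f} per U, "
          f"{servers / baseline:.1f}x the typical blade count)")
```

That 1.5x jump over the densest blades I’ve dealt with is a big part of what makes the design so interesting.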
This then raises the question: when will we start to see server systems like this trickle down to the enterprise and consumer market? Whilst we rarely have requirements at the scale these servers are typically deployed, I can guarantee there’s a market for servers of this nature as enterprises continue their never-ending quest for higher density and better efficiency. Indeed, this feels like something the larger server manufacturers should pursue: if these large companies are investing in developing their own hardware platforms, it shows there’s a niche the manufacturers haven’t yet filled.
Indeed, if the system can also accommodate non-compute blades (as the Microsoft one shows with its JBOD expansion), such designs would go toe to toe with system-in-a-box solutions like the Cisco UCS which, to my surprise, quickly pushed its way to the #2 spot for x86 blade servers last year. Of course there are already similar systems on the market from others, but in order to draw people away from that platform other manufacturers are going to have to offer something more, and I think the answer lies within the Open Compute designs.
If I’m honest, I think the real answer to the question posited in the title of this blog is no. Whilst it would be possible for anyone working at Facebook or Microsoft levels of scale to engage in something like this, unless a big manufacturer gets on board, Open Compute based solutions just won’t be feasible for the clients I service. It’s a shame, because I think there are some definite merits to the platform, something validated by Microsoft joining the project.