Do BYO data centers make sense anymore?

In this era of cheap and reliable rent-a-data-centers from Amazon, Rackspace and others, does it make sense for a company to build a new data center on its own anymore?

Amazon's data center guru James Hamilton is pretty clear that he sees no reason for most companies to build new data centers from scratch now. But if they have a huge compute load and really have to, they should go overboard: build way more capacity than they need and sell off the excess, a la Amazon itself.

While Hamilton has a vested interest in people moving their compute loads to Amazon's infrastructure, his "build big or don't build at all" mantra resonates with other IT experts. The consensus: it makes sense for most companies to trust their data center needs to the real experts in data centers. More companies will start trusting their new compute loads, though not necessarily all the mission-critical stuff, to the big cloud operators. That roster includes the aforementioned players as well as Google, Microsoft, IBM, Hewlett-Packard, Oracle (ORCL) and others that are building out more of their own data center capacity for use by customers.

And for startup companies, the decision not to build is a no-brainer. Connectivity to the cloud is the real issue for these companies. "If I was starting a greenfield company, the data center would be the size of my bathroom. There wouldn't necessarily even be a server, maybe a series of switches, and all my back-office apps, my salesforce automation, my storage would be handled in the cloud," said Dave Nichols, CIO services leader for Ernst & Young, the global IT consultancy.

David Ohara, GigaOM Pro analyst and co-founder of GreenM3, holds a more nuanced view. Companies with mid-sized loads really have to think things through, he said. "Once you get to a 5 to 7.5MW data center, that's just big enough to be super complex, but the economics are weird. At that point you should probably build a 15MW data center and sell off the other 7.5MW to someone else, or partner with Digital Realty Trust or some other company to share costs," Ohara said. (Data center size is typically described in terms of megawatt (MW) consumption.)

"It's in that 5 to 7.5MW area where the company starts having to know about the niceties of chillers and power systems," he said.

When you break through the 10,000-server barrier, that's when you start needing 3 to 5MW of power, and now you're getting into major facility costs: multiple diesel generators, plus complex power and cooling systems. And it's in that 10,000-to-100,000-server zone where costs soar. At that point, there aren't many companies on the planet that can achieve the scale of an Amazon, a Rackspace, a Google or a Microsoft. So why not trust your loads to the experts?
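For a rough sense of where those megawatt figures come from, here is a back-of-envelope sketch. The per-server wattage and the PUE (power usage effectiveness) figure are illustrative assumptions, not numbers from Hamilton or Ohara:

```python
# Back-of-envelope facility power estimate (illustrative assumptions only).
def facility_power_mw(servers, watts_per_server=300, pue=1.5):
    """Estimate total facility draw in megawatts.

    servers          -- number of servers in the facility
    watts_per_server -- assumed average IT draw per server (illustrative)
    pue              -- power usage effectiveness: total facility power / IT power
    """
    it_load_watts = servers * watts_per_server
    return it_load_watts * pue / 1_000_000  # watts -> megawatts

# 10,000 servers under these assumptions lands squarely in the
# "3 to 5MW" range quoted above; 100,000 servers is an order of
# magnitude beyond that.
print(facility_power_mw(10_000))   # ~4.5 MW
print(facility_power_mw(100_000))  # ~45 MW
```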

There will always be pushback on this point, but it's starting to change. Asked what type of data or task should not be entrusted to a cloud provider, the CIO of one big company said, "The formula for Coke." But that's about it.

Database guru Michael Stonebraker, co-founder and CTO of VoltDB, fully backs Hamilton's thesis. There is simply no way for more than a handful of huge companies to achieve Amazon's data center scale: the same low electricity costs, the same experience standing up data centers. As long as those companies are okay with running in the public cloud, their decision is simple. "Sooner or later, if you're a small guy, there will be a huge incentive to move to the public cloud. You've either got to be really big or run on someone else's data center," he said.

Photo courtesy of Flickr user jphilipg
