Top 5 tips from cloud superstars Hamilton and Bechtolsheim
Amazon Web Services infrastructure guru James Hamilton and Sun Microsystems co-founder Andy Bechtolsheim had a lot to say on Thursday about how data centers can be improved, weighing in with words of wisdom at the Open Compute Foundation (OCF) debut.
Their suggestions could ease the construction of more efficient data centers that can scale out for immense cloud computing workloads. Perhaps surprisingly, some of their advice has more to do with common sense than high-tech wizardry.
Here are the top five tips (plus a bonus) from the event.
1: Don't sweat the cold
Common problem: most data centers run way too cold. It's a good problem to have, because it's easy to solve: just raise the temperature, said Hamilton, VP and distinguished engineer at Amazon. Part of the reason facilities remain over-chilled is that ASHRAE, a trade group focused on heating and cooling issues, recommends they run between 61 and 81 degrees F. (Whether ASHRAE might have a vested interest in selling heating and cooling gear is an open question.) But even ASHRAE acknowledges that it is acceptable to run warmer data centers, in the 85 to 95 degree range. Somehow no one got that memo. "Everyone runs way down in the mid-70s," Hamilton said. "You can raise the temp and it's free savings!"
It's understandable why people worry about hot data centers: they hear tales of server mortality. But "there's not a server commercially available today that is not approved to run at temperatures up to 95 degrees," he said.
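To make the headroom concrete, here is a minimal sketch (not from the talk; the function and thresholds are illustrative) that encodes the temperature ranges quoted above:

    # Minimal sketch using the ranges quoted above, in degrees F.
    # Illustrative only; not from Hamilton's talk.
    RECOMMENDED = (61, 81)  # ASHRAE recommended operating range
    ALLOWABLE_MAX = 95      # warmer range ASHRAE still deems acceptable

    def check_setpoint(temp_f):
        lo, hi = RECOMMENDED
        if temp_f < lo:
            return "over-chilled: raising the setpoint is free savings"
        if temp_f <= hi:
            return f"in the recommended range, {hi - temp_f} degrees of headroom left"
        if temp_f <= ALLOWABLE_MAX:
            return "warm but acceptable per ASHRAE"
        return "above the 95-degree limit servers are approved for"

    print(check_setpoint(75))  # the mid-70s setpoint Hamilton says everyone uses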
2: Wall off your servers!
In most data centers, a ton of air leaks around the server racks. For those data center operators, Hamilton had a suggestion: Don't do that!
It's free to put a wall around the hot aisles. "That is far and away the biggest change you can make to take your PUE 3.0 facility and take it down to a 2.0 PUE facility," Hamilton said. PUE, or power usage effectiveness, is a measure of how energy efficient a given facility is: total facility power divided by the power delivered to the IT equipment.
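To see why that one change is so big, a back-of-the-envelope sketch (the 1 MW IT load is an assumed number for illustration):

    # PUE = total facility power / IT equipment power.
    # Assume 1 MW of IT load (illustrative, not a figure from the talk).
    it_load_mw = 1.0
    pue_before, pue_after = 3.0, 2.0

    total_before = it_load_mw * pue_before  # 3.0 MW drawn from the grid
    total_after = it_load_mw * pue_after    # 2.0 MW after hot-aisle containment

    # Overhead (cooling, power distribution, etc.) falls from 2 MW to 1 MW,
    # cutting total draw by a third without touching the servers themselves.
    print(f"saved: {total_before - total_after:.1f} MW "
          f"({(total_before - total_after) / total_before:.0%} of total)")

In other words, at a PUE of 3.0, two watts of overhead ride along with every watt of compute; containment cuts that overhead in half.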
3: Substitute standardization for gratuitous innovation.
Forget about vendor-driven gratuitous differentiation, warned Bechtolsheim, who is now chief development officer at Arista Networks, a newly minted OCF member. For the last ten years, the focus has been on blade servers and chassis with mobile servers plugged in. "Previously companies, my own company included, said my blades are better than your blades, my fans are better than your fans. This is not productive anymore," he said.
The problem with that scenario is that it benefits the vendor rather than the customer, he said, in what could be the mantra of the new OCF. "Open system-level standards take away that gratuitous differentiation so you no longer need to invest to have a better RAID controller or BIOS or other products that are not fully interoperable with each other," he said. The goal of the OCF is to spec out these components and make them standard, so third parties can build on them while retaining base-level interoperability.
4: Build big, and sell off what you don't need.
If you're a big business, think bigger when you build data center capacity. "If you have a big compute load, build for a bigger one," Hamilton said. Why? It's the same principle that lets airlines oversell their seats. "It's just like [Amazon] stole the idea for the spot market: when you have a valuable asset that's difficult to over-utilize, oversell it," he said.
There are ways to mitigate risk if demand surges: you can shed any load that is not vital. "Take your administrative tasks, the periodic scrubbing of your storage; you can do that an hour later and it's not a problem. Or you stop selling on the spot market till things settle down."
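A minimal sketch of that load-shedding idea (the task names, priorities, and numbers are hypothetical, not Amazon's actual mechanism):

    # Hypothetical priority-based load shedding: when demand exceeds
    # capacity, defer non-vital work rather than refusing customer load.
    TASKS = [
        # (name, priority: 0 = vital, load units)
        ("customer traffic",  0, 70),
        ("spot instances",    1, 20),
        ("storage scrubbing", 2, 15),
        ("admin batch jobs",  2, 10),
    ]
    CAPACITY = 100

    def schedule(tasks, capacity):
        running, deferred, used = [], [], 0
        for name, _priority, load in sorted(tasks, key=lambda t: t[1]):
            if used + load <= capacity:
                running.append(name)
                used += load
            else:
                deferred.append(name)  # run it an hour later; not a problem
        return running, deferred

    running, deferred = schedule(TASKS, CAPACITY)
    print("running:", running)    # vital work always fits
    print("deferred:", deferred)  # scrubbing waits for the peak to pass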
Amazon, of course, is the poster child for selling off capacity, and its AWS arm has turned into what looks to be a $1 billion business. According to Hamilton, AWS adds enough server capacity every day to support all of Amazon's global infrastructure as it existed in the company's fifth full year of operation, when it was a $2.76 billion company. (Amazon's annual revenue is now just under $40.3 billion.)
5: Look into evaporative cooling.
Innovative data center designs will tap into, or at least evaluate, evaporative cooling, Hamilton said. "Down south they call them swamp coolers": big fans with mist rolling off them. "It's a lot of water going through a state change. You can use porous media with water dribbling and evaporating, or a water mist."
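That state change is where the cooling comes from: evaporating water absorbs roughly 2,400 kJ per kilogram at ambient temperatures. A rough sketch of the arithmetic (the flow rate is assumed for illustration, not a figure from the talk):

    # Rough arithmetic behind evaporative ("swamp") cooling.
    # Water absorbs ~2,400 kJ/kg when it evaporates at ambient temperature.
    LATENT_HEAT_KJ_PER_KG = 2400

    def cooling_power_kw(water_kg_per_hour):
        """Heat removed by fully evaporating the given water flow."""
        return water_kg_per_hour * LATENT_HEAT_KJ_PER_KG / 3600  # kJ/h -> kW

    # Illustrative: evaporating 1,000 kg (~264 gallons) of water per hour
    # absorbs about 667 kW of heat, with no chillers involved.
    print(f"{cooling_power_kw(1000):.0f} kW of cooling")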
6: Bonus item: Lose the ducts.
Data center buildings themselves have to be redesigned to save energy. Every data center used to have ducts, and there are two things wrong with that: first, you have to pay for them; second, they're not big enough [to do the job right]. So why not use the entire building as one big duct, as Facebook did in its Prineville [Ore.] facility?
In that building, "the whole second floor is a duct," he said.