It is estimated that U.S. servers and data centers consumed about 61 billion kilowatt-hours (kWh) in 2006 (1.5% of total U.S. electricity consumption, or roughly the amount used by 5.8 million average U.S. households). These numbers are only expected to grow.
That energy use is growing at an unsustainable rate. Not only that, but servers are notoriously inefficient: on average they are used at only 6% of their capacity, while data center facilities operate at roughly 65% to 75% efficiency, meaning that 25% to 35% of the energy they draw is wasted (converted to heat).
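To put those percentages in perspective, a back-of-the-envelope calculation using only the figures quoted above shows how much electricity the 25% to 35% waste range represents (a minimal Python sketch; the inputs come straight from the estimates cited here):

```python
# Rough arithmetic based on the figures quoted above: 61 billion kWh of
# consumption in 2006 and facility efficiencies of 65% to 75%.
TOTAL_KWH_2006 = 61e9      # total U.S. server and data center consumption, kWh
HOUSEHOLDS_EQUIV = 5.8e6   # household equivalent cited above

for efficiency in (0.65, 0.75):
    wasted_kwh = TOTAL_KWH_2006 * (1 - efficiency)
    # Scale the household equivalence by the wasted fraction.
    wasted_households = HOUSEHOLDS_EQUIV * (1 - efficiency)
    print(f"At {efficiency:.0%} facility efficiency: "
          f"{wasted_kwh / 1e9:.1f} billion kWh wasted "
          f"(~{wasted_households / 1e6:.1f} million households' worth)")
```

Even at the optimistic end of the range, the waste amounts to more electricity than a million average households use in a year.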
If we are serious about reducing our energy consumption and carbon footprint, the growing demand from our servers must be near the top of the list of targets for improvement. And the Department of Energy agrees.
Researchers at DOE’s Pacific Northwest National Laboratory (PNNL) in Washington and the National Renewable Energy Laboratory (NREL) in Colorado are hard at work figuring out how to make our data storage infrastructure more efficient, in part by running it at lower temperatures. The technology exists to achieve efficiencies of 80% to 90% in conventional server power supplies, but even an efficient supply dissipates heat; moving this heat source away from the server lets the cooling effort be focused on the computing elements themselves.
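As a rough illustration of what those power supply efficiencies mean for a single machine, the sketch below computes how much heat the conversion stage itself dissipates (the 400 W load and the 70% figure for an older supply are assumptions for illustration, not numbers from PNNL or NREL):

```python
# Heat dissipated by a power supply: if the IT load draws load_w watts and the
# supply is eff efficient, the supply itself dissipates load_w * (1/eff - 1)
# watts, all of which the cooling system must remove.
def psu_heat_watts(load_w: float, eff: float) -> float:
    return load_w * (1.0 / eff - 1.0)

LOAD_W = 400.0  # hypothetical per-server IT load

for eff in (0.70, 0.80, 0.90):   # 0.70 is roughly typical of older supplies
    print(f"{eff:.0%} efficient supply -> "
          f"{psu_heat_watts(LOAD_W, eff):.0f} W of heat in or near the server")
```

Relocating the conversion stage does not eliminate that heat, but it does let the airflow through the chassis be devoted to the processors and memory rather than to the power supply.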
Alternative Cooling Approaches
(from PNNL’s Energy Smart Data Center)
* Evolutionary progress is being made with conventional air cooling techniques, which are known for their reliability. Current investigation focuses on novel heat sinks and fan technologies, with the aim of improving contact surface area, conductivity, and heat transfer parameters.
* One of the most effective air cooling options is Air Jet Impingement. The design and manufacture of nozzles and manifolds for jet impingement are relatively simple.
* Liquid Impingement technologies exhibit the same benefits as Air Jet Impingement. In addition, liquid cooling offers higher heat transfer coefficients, at the cost of higher design and operating complexity.
* One of the most interesting liquid cooling technologies is the microchannel heat sink used in conjunction with micropumps, because the channels can be manufactured at the micrometer scale with the same process technologies used for electronic devices.
* Liquid metal cooling, used in cooling nuclear reactors, is becoming an interesting alternative for high-power-density micro devices. Large heat transfer coefficients are achieved by circulating the liquid with electrohydrodynamic or magnetohydrodynamic pumps. The pumping circuit is reliable because no moving parts, other than the liquid itself, are involved in the cooling process. Heat transfer efficiency is also increased by the metal's high conductivity, and the low heat capacity of liquid metals leads to less stringent requirements for heat exchangers.
* Heat extraction with liquids can be increased by several orders of magnitude by exploiting phase changes. Heat pipes and thermosyphons exploit the high latent heat of vaporization to remove large quantities of heat from the evaporator section. The circuit is closed by capillary action in the case of heat pipes, or by gravity in the case of thermosyphons. These devices are therefore very efficient, but they are limited in their temperature range and heat flux capabilities (a rough numerical comparison follows this list).
* Thermoelectric Coolers (TECs) have the ability to provide localized spot cooling, an important capability in modern processor design. Research in this area focuses on improving materials and on distributed control of TEC arrays.
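To give a feel for the magnitudes behind the list above, the sketch below compares convective (sensible) heat removal, q = h * A * dT, for typical air and liquid heat transfer coefficients against heat removal by evaporation (latent heat). The coefficient, area, and temperature values are textbook order-of-magnitude assumptions, not figures from PNNL:

```python
# Sensible (convective) removal: q = h * A * dT
# Latent (phase change) removal: q = mdot * h_fg
AREA_M2 = 0.01        # assumed 10 cm x 10 cm heat sink base
DELTA_T = 40.0        # assumed temperature rise above coolant, K
H_FG_WATER = 2.257e6  # latent heat of vaporization of water, J/kg

# Typical mid-range heat transfer coefficients, W/(m^2*K)
coefficients = {
    "forced air": 100.0,
    "forced liquid (water)": 3000.0,
}

for name, h in coefficients.items():
    print(f"{name:>22}: {h * AREA_M2 * DELTA_T:6.0f} W removed")

# Evaporating just 1 gram of water per second carries away mdot * h_fg:
print(f"{'phase change (1 g/s)':>22}: {0.001 * H_FG_WATER:6.0f} W removed")
```

The exact numbers depend heavily on geometry and flow, but the ordering (air, then liquid, then phase change) is what drives the progression of techniques described above.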
Capturing Waste Heat
Reusing the waste heat from a data center may not make the server room itself more efficient, but depending on how the heat is reused, it can save a company a significant sum of money. In its report to Congress last year on data center energy consumption, the federal Environmental Protection Agency suggested the practice. And the idea has gained traction, according to Mark Fontecchio of SearchDataCenter.com.
For example, at Quebecor, a media company in Winnipeg, Canada, efforts have been made to take the heat from the 2,500-square-foot data center on the ground floor and use it to heat other parts of the building.
Because of Winnipeg's cool climate, engineers decided to make use of the outside air by installing air-side economizers that draw it in. The economizers include baffles that open to varying degrees depending on the outside temperature and how much cooling the data center needs.
The air warms as it cycles through the approximately 100 eight-way servers. It then flows into an overhead plenum, where about 10% of it is recirculated to temper the outside air coming into the data center.
Another duct runs from the exhaust plenum to the intake duct of the editorial office upstairs; Quebecor also added a second thermostat to control the warm air supplied to its editorial offices, while the first continues to control the traditional heating furnaces. That path uses up another 60% of the waste heat, and the data center dumps the remaining 30% into the adjacent warehouse.
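One way to see how the pieces fit together is to follow the energy. The sketch below mixes 10% recirculated exhaust with outside air and then splits the waste heat 60/30 as described above; the heat load, exhaust temperature, and outdoor temperature are hypothetical values, not figures reported by Quebecor:

```python
# Follow the waste heat through the installation described above.
# All numeric inputs except the 10/60/30 split are assumptions for illustration.
DC_HEAT_KW = 150.0      # assumed total heat output of the data center, kW
EXHAUST_TEMP_C = 35.0   # assumed exhaust plenum temperature
OUTDOOR_TEMP_C = -15.0  # assumed Winnipeg winter outdoor temperature
RECIRC_FRACTION = 0.10  # share of exhaust recirculated to temper intake air

# Mass-weighted mixing of outside air with recirculated exhaust
# (equal specific heats assumed).
mixed_intake_c = (RECIRC_FRACTION * EXHAUST_TEMP_C
                  + (1 - RECIRC_FRACTION) * OUTDOOR_TEMP_C)
print(f"Tempered intake air: {mixed_intake_c:.1f} C "
      f"(vs. {OUTDOOR_TEMP_C:.1f} C outdoors)")

# Destination of the waste heat, per the split described above.
for share, destination in [(0.10, "recirculated to temper intake air"),
                           (0.60, "ducted to the editorial offices"),
                           (0.30, "dumped into the adjacent warehouse")]:
    print(f"{share * DC_HEAT_KW:5.1f} kW {destination}")
```

With these assumed values the recirculated air raises the intake temperature only a few degrees, which is presumably one reason the baffles described above modulate how much outside air is admitted.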
AC-DC?
For electricity flowing all the way from power plants to the wall socket, alternating current is far superior. But for the short runs inside the computers themselves, DC power prevails. The search for ways to convert AC to DC more efficiently is leading some data center companies to consider a DC-centric approach.
AC is easier to transmit over long distances; DC requires thick copper cables or bus bars instead of comparatively lightweight wires. But once the power reaches a building, DC distribution becomes a more serious possibility.
Converting from one form of power to another in a computing environment is rarely done efficiently, especially at the server level, and the resulting waste heat may be deposited in the rack or computer room at a point where the air handlers must work harder to remove it. Unfortunately, there is disagreement in the community over how to address these inefficiencies.
* DC advocates argue that plugging servers into AC power is inefficient, and that switching systems to DC would cut down on waste heat and component failure.
* Proponents argue that using DC outside the server removes some of the inefficiencies of power supplies that convert AC electricity to DC. Servers without such power supplies don’t have to contend with as much waste heat and attendant component failure.
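To see why proponents focus on the conversion stages, it helps to compare the two distribution chains end to end. The stage efficiencies below are illustrative assumptions, roughly in line with published estimates for double-conversion UPSes and server power supplies, not measured figures:

```python
# End-to-end efficiency is the product of the stage efficiencies; every
# extra conversion compounds the losses. The numbers are illustrative only.
from functools import reduce
from operator import mul

ac_chain = {                  # conventional AC distribution
    "UPS (double conversion AC-DC-AC)": 0.90,
    "PDU / transformer": 0.98,
    "server power supply (AC-DC)": 0.85,
    "on-board DC-DC regulators": 0.90,
}
dc_chain = {                  # facility-level DC distribution
    "facility rectifier (AC-DC)": 0.95,
    "DC distribution": 0.99,
    "server DC-DC converter": 0.92,
    "on-board DC-DC regulators": 0.90,
}

for name, chain in (("AC chain", ac_chain), ("DC chain", dc_chain)):
    overall = reduce(mul, chain.values(), 1.0)
    print(f"{name}: {overall:.1%} of incoming power reaches the silicon; "
          f"{1 - overall:.1%} becomes waste heat")
```

The specific percentages vary widely with equipment and load; the point is that each conversion stage compounds, so removing or consolidating stages is where the claimed savings come from.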
But according to PNNL, DC power has not yet made significant inroads as a replacement for conventional AC power in data centers, largely because the technology is unfamiliar to many facility engineers.
Despite the widespread use of DC power in telecommunications, there is reluctance within the computer industry to switch to new technologies without field experience showing that the switch can be made safely and will deliver operational and economic benefits without causing unanticipated problems.
If DC is in fact a more efficient form of power within servers themselves, might it be possible to site server farms to take advantage of the DC provided by integrated renewable energy generating systems such as solar PV and wind?