Increasing server room temperature to improve data center energy performance
Temperatures are still kept too low in the vast majority of data centers. Yet according to current standards, they could be raised by a few degrees to save energy in computer rooms and increase cooling capacity without any extra investment, all without compromising IT service continuity.
Data center temperature: the guidelines have evolved
Cooling systems, essential for keeping servers in optimal operating condition, account on average for 30 to 50% of data center energy expenditure. For a long time, server room operators generally kept their facilities at an ambient temperature of around 22°C, which meant air conditioning unit outlet temperatures of 15 to 16°C. This approach demanded considerable cooling power and drove up energy bills.
Since 2015, ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) has recommended rack-level intake temperatures of 18 to 27°C and humidity levels of 8 to 60%. For most recent equipment, the allowable ranges are even wider: 15 to 32°C, with humidity levels of 8 to 80%. In other words, we have known for several years that data center temperatures can be increased without compromising IT service continuity.
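To make these windows concrete, here is a minimal Python sketch that classifies a single rack-level intake reading against them. The thresholds simply restate the figures above; the function name and example readings are illustrative, not part of any standard tooling.

```python
# Classify a rack-level intake reading against the ASHRAE windows
# cited above (18-27°C / 8-60% RH recommended; 15-32°C / 8-80% RH
# allowable for most recent equipment).

RECOMMENDED = {"temp": (18.0, 27.0), "rh": (8.0, 60.0)}
ALLOWABLE = {"temp": (15.0, 32.0), "rh": (8.0, 80.0)}

def classify_intake(temp_c: float, rh_percent: float) -> str:
    """Return 'recommended', 'allowable' or 'out of range' for one reading."""
    def within(window):
        t_lo, t_hi = window["temp"]
        h_lo, h_hi = window["rh"]
        return t_lo <= temp_c <= t_hi and h_lo <= rh_percent <= h_hi

    if within(RECOMMENDED):
        return "recommended"
    if within(ALLOWABLE):
        return "allowable"
    return "out of range"

print(classify_intake(25.0, 45.0))  # -> recommended
print(classify_intake(30.0, 70.0))  # -> allowable
```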
What are the benefits of increasing server room temperatures?
This increase in operating temperatures opens up new possibilities for data center operators. It allows higher densities in server rooms without replacing air conditioning systems, and therefore without additional investment. It also enables significant cuts in energy consumption for the same IT capacity.
Increasing rack-level intake temperatures also enables the use of free cooling or free chilling (systems that use outdoor air to blow fresh air into the room, or to cool water instead of running chilled water production units); these systems offer much lower operating costs, especially in temperate regions like France. With a setpoint of 25°C instead of 15°C in the server room, the periods of the year when free cooling can be used without switching on the air conditioning are considerably extended. This generates substantial energy savings and improves PUE (Power Usage Effectiveness). The same applies to free chilling, which can be used more often during the year to cool the water coils, with water temperatures now set at 15°C rather than 7°C.
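As a rough illustration of the PUE effect, the sketch below compares average annual PUE for two supply-temperature setpoints. The load, power and free-cooling-hours figures are hypothetical placeholders chosen for the example, not measured values from any site.

```python
# Hypothetical comparison of annual PUE at two room supply-temperature
# setpoints. All figures below are illustrative assumptions.

HOURS_PER_YEAR = 8760
IT_LOAD_KW = 500.0        # assumed constant IT load
CHILLER_KW = 150.0        # assumed cooling power with chillers running
FREE_COOLING_KW = 30.0    # assumed fan/pump power in free-cooling mode
OTHER_OVERHEAD_KW = 50.0  # lighting, UPS losses, etc. (assumed)

def annual_pue(free_cooling_hours: float) -> float:
    """Average PUE = total facility energy / IT energy over one year."""
    chiller_hours = HOURS_PER_YEAR - free_cooling_hours
    cooling_kwh = chiller_hours * CHILLER_KW + free_cooling_hours * FREE_COOLING_KW
    it_kwh = HOURS_PER_YEAR * IT_LOAD_KW
    total_kwh = it_kwh + cooling_kwh + HOURS_PER_YEAR * OTHER_OVERHEAD_KW
    return total_kwh / it_kwh

# Suppose free cooling covers 2000 h/year at a 15°C setpoint
# and 6500 h/year at a 25°C setpoint (assumed figures).
print(f"PUE at 15°C setpoint: {annual_pue(2000):.2f}")  # -> 1.35
print(f"PUE at 25°C setpoint: {annual_pue(6500):.2f}")  # -> 1.22
```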
Free cooling or free chilling?
While direct and indirect free air cooling has become much more popular in recent years, especially in large data centers, it still imposes structural constraints because of the space needed for the ducts that bring outside air into the rooms, and it demands a degree of expertise to operate properly. Deploying a free chilling solution often proves a more suitable response, especially when renovating data centers already equipped with a chilled water production unit. For example, in a data center in Lyon, ROI was achieved in less than two years by setting the supply air temperature in the room at 27°C and chilled water temperatures at 15-20°C.
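A payback estimate of this kind comes down to simple arithmetic, as the sketch below shows. The investment, energy and price figures are hypothetical assumptions for illustration only; the Lyon example above reports its own sub-two-year payback for its own numbers.

```python
# Back-of-the-envelope payback estimate for a free-chilling retrofit.
# All figures are hypothetical assumptions for illustration.

RETROFIT_COST_EUR = 120_000.0       # assumed investment
ANNUAL_SAVINGS_KWH = 600_000.0      # assumed cooling energy saved per year
ELECTRICITY_PRICE_EUR_PER_KWH = 0.12  # assumed tariff

annual_savings_eur = ANNUAL_SAVINGS_KWH * ELECTRICITY_PRICE_EUR_PER_KWH
payback_years = RETROFIT_COST_EUR / annual_savings_eur
print(f"Estimated payback: {payback_years:.1f} years")  # -> 1.7 years
```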
Controlling air flows: server room layout and containment
To increase the overall temperature of a data center, you need to fully understand the specific features of the air conditioning systems and IT equipment hosted in the rooms, and to control the flows of air circulating in the facility. The reading that matters is the rack-level intake temperature, not the ambient temperature in the room or the return air temperature at the air conditioning units. Mismanaged air flows, even around a single rack, can cause "thermal pollution" of the whole room. Hence the need to lay out a data center in a cold aisle/hot aisle configuration, so that cool air is channeled correctly to the servers and the hot air generated by the computer equipment does not mix with the cooling air. This entails deploying containment and sealing systems so that air circulation can be managed effectively: blanking panels to cover unused rack spaces or perforated panels, cable grommets to limit air leakage around cable openings, etc.
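To illustrate why intake temperature, rather than room ambient, is the quantity to watch, here is a small monitoring sketch that flags any rack whose inlet reading drifts out of the 18-27°C window cited earlier. The rack names and readings are invented for the example.

```python
# Flag racks whose intake temperature drifts outside the target window.
# Rack names and readings below are invented for illustration.

TARGET_LOW_C = 18.0
TARGET_HIGH_C = 27.0

intake_readings_c = {
    "rack-A01": 24.5,
    "rack-A02": 25.1,
    "rack-B07": 29.3,  # likely hot-air recirculation: check containment
    "rack-C12": 16.2,  # overcooled: possible cold-air bypass
}

for rack, temp in sorted(intake_readings_c.items()):
    if temp > TARGET_HIGH_C:
        print(f"{rack}: {temp}°C above target: check hot-aisle containment and sealing")
    elif temp < TARGET_LOW_C:
        print(f"{rack}: {temp}°C below target: possible cold-air bypass, wasted cooling")
    else:
        print(f"{rack}: {temp}°C within the 18-27°C window")
```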
Warmer data centers? Yes, but not just any way!
Optimizing energy use in a data center is often a complex undertaking because, most of the time, sites need to be upgraded while in operation, in compliance with continuity of service requirements. It requires a detailed audit and design phase covering all the parameters of the data center: surface area, equipment power ratings, load ratio, physical infrastructure, cooling systems, air flows, environmental constraints (dust, humidity levels, etc.).
In any case, the implementation of such a project, and its results in terms of energy savings, depend on the characteristics and infrastructure of each site, on the level of upstream steering (inspection frequency, performance indicators tracked, proximity of project teams, etc.) and on the downstream operations policy adopted (process design, training, resources called upon, etc.).
By Nicolas Miceli, head of energy performance and innovation, APL
Reducing data center energy consumption and costs
APL supports you in reducing the electrical consumption of your IT sites while maintaining their continuity of service.
Find out more