In the era of energy efficiency, plotting a path to agile, responsible data centers
In a global environment where energy efficiency is central to every debate, people often point the finger at energy-hungry data centers. They account for around 4% of world energy consumption, a figure that is growing by nearly 5% per year (1). To cut the economic and environmental bill, firms have for several years been working to make these facilities more energy efficient, while maintaining the high quality and continuity of service standards vital for the provision of IT services.
Continuity of service and energy efficiency: a paradox?
The digital universe is doubling in size every four years. Approximately 2.5 billion GB of data are created every day and have to be processed, stored and delivered to users with increasingly high service quality expectations. Data centers, which host IT services, must therefore be designed and operated in a way that minimises the risk of failure. To this end, their operators have long favoured hardware redundancy and facility security. The main consequence of these priorities is the oversizing of cooling systems, which alone account for nearly 40% of a data center’s total energy bill (2). This over-capacity, still a feature of most data centers, results in power use that far exceeds actual needs. Even if the price of a kWh in France is on average 25% lower than in the rest of Europe, this level of consumption is not environmentally responsible.
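To make the stakes concrete, here is a minimal back-of-envelope sketch of how the cooling share relates to Power Usage Effectiveness (PUE, total facility energy divided by IT energy). The annual consumption and the 50% IT share are assumed, illustrative figures; only the roughly 40% cooling share comes from the source cited above (2).

```python
# Back-of-envelope PUE estimate. All figures are assumed for illustration.
total_energy_kwh = 1_000_000          # assumed annual facility consumption
cooling_share = 0.40                  # ~40% of the bill, per the article's source
it_share = 0.50                       # assumed share actually reaching IT equipment

it_energy_kwh = total_energy_kwh * it_share
pue = total_energy_kwh / it_energy_kwh          # PUE = total energy / IT energy
print(f"PUE ~ {pue:.1f}")                       # 2.0 for this split
print(f"Cooling: {total_energy_kwh * cooling_share:,.0f} kWh/year")
```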
Mastering the energy cascade: the key to agile, responsible data centers
Conscious of these economic and environmental challenges, several years ago data center operators began focusing on the issue of energy efficiency. They are now looking to master the “energy cascade” in its entirety, i.e. to reduce consumption at every level: IT components, technical equipment, layout and the integration of data centers into their geographical ecosystem.
The first area for optimisation relates to IT components. Advances in semiconductor materials, used in processors in particular, reduce the heat given off and make it possible to operate at higher temperatures. The widely accepted server-room temperature is now 23-24°C, compared with 16°C in the late 1980s. Moreover, in many cases servers run at less than 50% capacity while consuming almost as much energy as servers running at 100% capacity. Better sizing, combined with technologies such as virtualisation, can optimise their use and performance. Lastly, it is important that purchasing policies factor in the energy consumption of computer equipment: the oldest items should be replaced by new-generation hardware, which is less energy-hungry, dissipates less heat and can run at higher temperatures.
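The gap between idle and peak power is what makes under-used servers so costly. The sketch below uses an assumed linear power model and invented wattages to illustrate the saving that consolidation through virtualisation can deliver; it is not a measurement of any particular server.

```python
# Assumed linear power model: P = P_idle + (P_max - P_idle) * utilisation.
P_IDLE_W = 200.0   # invented idle draw of one server
P_MAX_W = 300.0    # invented draw at full load

def server_power_w(load: float) -> float:
    """Estimated draw of one server at a given utilisation (0.0 to 1.0)."""
    return P_IDLE_W + (P_MAX_W - P_IDLE_W) * load

# Ten servers at 30% load versus the same work consolidated onto four servers.
before_w = 10 * server_power_w(0.30)
after_w = 4 * server_power_w(0.75)   # 10 x 0.30 / 4 = 0.75 average load
print(f"Before consolidation: {before_w:.0f} W, after: {after_w:.0f} W")
print(f"Saving: {100 * (1 - after_w / before_w):.0f}%")
```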
The second main area involves optimising the technical systems that consume the most energy, i.e. those that cool and deliver power to the data center. For example, new generation cooling and power distribution systems deliver better hardware performance levels and cut energy consumption and waste.
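As a rough illustration of what the power chain alone represents, the following sketch compares the annual losses of an older and a newer UPS and distribution chain. The 90% and 96% efficiencies and the 500 kW IT load are assumptions, not figures from the article.

```python
# Annual losses upstream of the IT load for two assumed power-chain efficiencies.
IT_LOAD_KW = 500.0        # invented constant IT load
HOURS_PER_YEAR = 8760

def annual_loss_kwh(chain_efficiency: float) -> float:
    """Energy lost in the UPS and distribution chain for a given efficiency."""
    drawn_kw = IT_LOAD_KW / chain_efficiency
    return (drawn_kw - IT_LOAD_KW) * HOURS_PER_YEAR

legacy = annual_loss_kwh(0.90)   # assumed efficiency of an older chain
modern = annual_loss_kwh(0.96)   # assumed efficiency of a newer chain
print(f"Older chain losses: {legacy:,.0f} kWh/year")
print(f"Newer chain losses: {modern:,.0f} kWh/year ({legacy - modern:,.0f} kWh saved)")
```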
Data center architecture is the third key energy efficiency parameter. Take the case of a data center operating at 75% of its IT capacity: its operating cost per unit of IT hosted is roughly one sixth of that of a data center with a 10% load rate. However, utilisation rates are still not as high as they could be, and hosting needs can fall as well as rise. To respond to this volatility, data centers are increasingly being designed in modular fashion, with separate, adaptable units put into production as demand increases. For existing facilities, major energy and operational gains can be made by rearranging the rooms: increasing server density per m² (more production from the same surface area) and containing hot and cold aisles. In this respect, mass air delivery is an anachronism, because cool air is not blown directly towards the IT equipment that needs cooling. To maximise the efficiency of cooling systems, it is essential to separate heat flows in order to channel cold air and prioritise delivery to the cabinets with the highest load rates.
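The gap between a well-loaded and a poorly-loaded facility follows from the fact that much of the cost is fixed. The sketch below uses an invented cost model (a fixed overhead plus a cost proportional to the IT load actually hosted); the figures are chosen purely for illustration and happen to give a ratio close to the one quoted above.

```python
# Invented cost model: fixed facility overhead plus a cost proportional
# to the IT load actually hosted. Figures are purely illustrative.
IT_CAPACITY_KW = 1000.0
FIXED_COST_PER_YEAR = 2_500_000.0   # assumed: cooling baseline, staff, maintenance
VARIABLE_COST_PER_KW = 1_200.0      # assumed: cost per kW of IT actually hosted

def cost_per_hosted_kw(load_rate: float) -> float:
    """Annual cost divided by the IT load actually hosted at that load rate."""
    hosted_kw = IT_CAPACITY_KW * load_rate
    total_cost = FIXED_COST_PER_YEAR + VARIABLE_COST_PER_KW * hosted_kw
    return total_cost / hosted_kw

cost_at_10 = cost_per_hosted_kw(0.10)
cost_at_75 = cost_per_hosted_kw(0.75)
print(f"Cost per hosted kW at 10% load: {cost_at_10:,.0f}")
print(f"Cost per hosted kW at 75% load: {cost_at_75:,.0f} "
      f"(ratio ~ {cost_at_10 / cost_at_75:.1f}x)")
```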
The final element in the energy cascade is the geographical location of the data center and its integration into the target environment. Several parameters need to be examined, including temperature, seasonal humidity variation, and atmospheric pollution. Most of the time, systems that use “free” energy (air, water) to cool data centers deliver big energy savings. However, they need to be combined with humidifiers and filtration systems. At the design stage, therefore, the needs and constraints associated with the future operation of the data center need to be assessed carefully in order to make the most appropriate choices.
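One way to weigh up a candidate site is to count how many hours a year outside air alone would be cold enough to cool the rooms. The sketch below does exactly that over an hourly temperature series; the 22°C set point and the sample readings are assumptions standing in for real weather data.

```python
from typing import Iterable

SUPPLY_SETPOINT_C = 22.0   # assumed maximum usable outside-air temperature

def free_cooling_hours(hourly_temps_c: Iterable[float]) -> int:
    """Count the hours in which outside air alone is cold enough to use."""
    return sum(1 for t in hourly_temps_c if t <= SUPPLY_SETPOINT_C)

# Toy readings standing in for a real weather file (8,760 hourly values).
sample_temps_c = [5.0, 12.0, 18.5, 21.0, 24.0, 27.5, 19.0, 9.5]
usable = free_cooling_hours(sample_temps_c)
print(f"{usable}/{len(sample_temps_c)} sample hours usable for free cooling")
```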
Tomorrow: further action for substantial energy savings
The data center sector is going through a transition that is leading operators to combine energy management, flexibility and agility. Ongoing technological innovation is pointing towards new sources of energy savings.
Because hardware can tolerate higher operating temperatures, it is now possible to envisage more routine use of cooling systems that exploit the temperature of the outside air (free-cooling/free-chilling systems), groundwater and waterways (geo-cooling). Furthermore, the potential of Data Center Infrastructure Management (DCIM) solutions has yet to be fully tapped for monitoring data centers and detecting energy-hungry cabinets. There are some very promising initiatives aimed at making better use of dissipated heat by capturing it to warm the rest of the building, or even buildings in the immediate vicinity. Finally, more and more alternative energy sources (photovoltaic panels, etc.) are being used around the world to meet a proportion of data center power needs, with positive energy efficiency results.
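As a small illustration of the kind of analysis a DCIM tool makes possible, the sketch below ranks cabinets by measured power draw to flag the most energy-hungry ones. The cabinet names, readings and alert threshold are invented for the example and do not describe any particular DCIM product.

```python
# Invented per-cabinet power readings, as a DCIM tool might report them.
readings_kw = {
    "rack-A01": 4.2,
    "rack-A02": 7.9,
    "rack-B01": 3.1,
    "rack-B02": 9.6,
}
ALERT_THRESHOLD_KW = 8.0   # assumed alert level per cabinet

# Rank cabinets by draw and flag those above the threshold.
for cabinet, power_kw in sorted(readings_kw.items(), key=lambda kv: kv[1], reverse=True):
    flag = "  <-- investigate" if power_kw >= ALERT_THRESHOLD_KW else ""
    print(f"{cabinet}: {power_kw:.1f} kW{flag}")
```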
In conclusion, there are already numerous tried and tested solutions that cut data center energy bills, at all levels, present no obstacle to continuity of service and do not require prohibitive investment. Each “small” optimisation initiative is a step on the road to environmental and economic progress. It is all about finding the right combination, depending on existing arrangements.
(1) Source: RTE – Réseau de Transport d'Électricité
(2) Source: Gimélec