
Growing energy costs and environmental responsibility have placed the data centre industry under increasing pressure to improve its energy efficiency. The cooling system typically consumes the second largest portion of a data centre’s energy, the first being the IT equipment. For a 1.0 MW data centre, for example, the cooling system can consume about 36% of the energy used by the entire data centre and about 75% of the energy used by the physical infrastructure. Given this large energy footprint, optimising the cooling system provides a significant opportunity to reduce energy costs.
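As a quick consistency check (an illustrative calculation derived from the percentages above, not additional source data), the two figures together imply the physical infrastructure's share of total energy:

```latex
E_{\text{cooling}} = 0.36\,E_{\text{total}}, \qquad
E_{\text{cooling}} = 0.75\,E_{\text{infra}}
\;\Rightarrow\;
E_{\text{infra}} = \tfrac{0.36}{0.75}\,E_{\text{total}} = 0.48\,E_{\text{total}}
```

That is, the physical infrastructure accounts for roughly 48% of total energy, leaving roughly 52% for the IT equipment, consistent with IT being the largest single consumer.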


In general, the following three high-level tasks can be used to establish an efficient cooling system for a new data centre design: selecting an appropriate cooling architecture; adopting effective cooling control systems; and managing airflow in the IT space.

The starting point for selecting an appropriate cooling architecture is the choice of heat rejection method, economiser mode and indoor air distribution method. This choice must be based on key questions about the data centre: Is chilled water or outside air allowed into the IT space? Is a raised floor used for cold air supply or a drop ceiling for hot air return?

An economiser mode can significantly reduce the energy a data centre cooling system consumes by reducing the use of compressor-based mechanical cooling when outdoor air conditions are suitable, especially in locations with a cool climate. Economiser modes for different cooling architectures are discussed in a separate white paper, Economiser Modes of Data Centre Cooling Systems.

Selecting an appropriate cooling architecture is not enough to establish an efficient cooling system without effective cooling controls. For example, in many of our assessments we have found data centres where the cooling system seldom operated in economiser mode. In all cases, the reason was that the system became unstable during periods of partial economiser mode because of cooling control issues. Operators would therefore run the cooling system manually during these periods, only switching over to economiser mode late in the winter season, which wastes significant economiser-hour opportunities.
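A common remedy for this kind of instability is a deadband (hysteresis) around the mode-switch thresholds, so the plant does not oscillate between modes in marginal outdoor conditions. The Python sketch below illustrates the idea; the mode names, thresholds and deadband are illustrative assumptions, not values from the article:

```python
# Minimal sketch of automatic economiser-mode selection with a
# hysteresis deadband, so the system does not cycle rapidly between
# modes during partial-economiser conditions. All values are assumed.

FULL_ECON_MAX_C = 8.0      # below this outdoor temp: full economiser (assumed)
PARTIAL_ECON_MAX_C = 16.0  # below this: partial economiser plus trim chiller (assumed)
HYSTERESIS_C = 1.5         # deadband that delays leaving the current mode (assumed)

def select_mode(outdoor_temp_c: float, current_mode: str) -> str:
    """Return 'full_econ', 'partial_econ' or 'mechanical'."""
    # Widen the threshold of the mode we are already in, so a small
    # temperature rise does not immediately force a switch out of it.
    full_limit = FULL_ECON_MAX_C + (HYSTERESIS_C if current_mode == "full_econ" else 0.0)
    partial_limit = PARTIAL_ECON_MAX_C + (HYSTERESIS_C if current_mode == "partial_econ" else 0.0)

    if outdoor_temp_c <= full_limit:
        return "full_econ"
    if outdoor_temp_c <= partial_limit:
        return "partial_econ"
    return "mechanical"
```

With automatic switching of this kind, the plant can capture economiser hours in spring and autumn that a manual, seasonal changeover would forfeit.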

Another example of an inefficient cooling system caused by a control issue is demand fighting, where some cooling units are cooling while others are heating, or some are humidifying while others are dehumidifying. This happens when there is no group control system. Selecting a cooling system that includes group-level or system-level control can minimise energy consumption while solving the challenges of data centre cooling.
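A minimal Python sketch of the group-control idea, with hypothetical names and gains: a single controller derives one cooling demand from the IT inlet temperatures and issues the same command to every unit, so no unit can heat or dehumidify while a neighbour cools or humidifies.

```python
# Minimal sketch (hypothetical structure) of group-level control: one
# controller computes a single cooling demand for the whole group,
# which eliminates demand fighting between individual units.

from statistics import mean

SETPOINT_C = 24.0  # target IT inlet temperature (assumed)

def group_command(inlet_temps_c: list[float]) -> dict:
    error = mean(inlet_temps_c) - SETPOINT_C
    # Simple proportional demand, clamped to 0..100% (gain is illustrative)
    demand_pct = max(0.0, min(100.0, 50.0 + 25.0 * error))
    # Every unit receives the same command; heating and per-unit
    # humidity control are disabled at the group level.
    return {"cooling_demand_pct": demand_pct, "heating": False, "humidify": False}

# Each unit in the group applies the same command:
cmd = group_command([23.8, 24.5, 24.1])
```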

The last task, managing airflow in the IT space and controlling the IT environment, must be based on the latest ASHRAE thermal guidelines. A best practice for airflow management is to separate the hot and cold air streams by containing the aisle and/or the rack. Rack- or room-level airflow management not only achieves energy savings but also enhances data centre availability, because it minimises hotspots.

Variables influencing cooling performance

Cooling system dynamics are complex. Take an air-cooled packaged chiller design, for example. When the IT temperature setpoint is increased, which in turn raises the chilled water temperature, chiller energy decreases for two reasons: the data centre can operate in economiser mode(s) for a larger portion of the year, and the chiller efficiency increases. However, if the IT supply air temperature is not increased proportionally to the chilled water temperature, the cooling capacity decreases and the fans need to speed up to compensate, which results in greater energy consumption.

Meanwhile, energy consumed by the dry cooler, which operates in economiser mode in place of the chiller, increases because the number of economiser hours increases. As a result, it is difficult to say how much net energy saving is achieved. Furthermore, the total energy savings also depend on the data centre's location, server fan behaviour and percentage IT load.
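The fan penalty mentioned above is steep because fan power rises roughly with the cube of fan speed (the fan affinity laws). A small illustrative calculation in Python, with assumed numbers:

```python
# Fan affinity law: power scales with the cube of speed.
# The base power and speeds below are illustrative assumptions.

def fan_power_kw(base_power_kw: float, base_speed_pct: float, new_speed_pct: float) -> float:
    return base_power_kw * (new_speed_pct / base_speed_pct) ** 3

# If reduced coil capacity forces fans from 70% to 85% speed,
# fan power rises by (85/70)^3, i.e. about 79%:
print(fan_power_kw(10.0, 70.0, 85.0))  # ~17.9 kW, up from 10 kW
```

This cubic relationship is why raising the chilled water temperature without raising the IT supply air temperature proportionally can erode, or even reverse, the chiller-side savings.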

Other variables influencing savings include:

Cooling systems are typically oversized to meet availability requirements, leaving installed cooling capacity larger than the actual IT load. In addition, data centres typically operate at under 50% load.

Data centres are dynamic environments where the equipment population and layout change over time. The heat load also changes constantly in response to computing traffic while non-uniform rack layouts and rack densities in the IT space lead to non-uniform cooling capacity requirements.

The cooling system efficiency varies with data centre load, outdoor air temperatures, cooling settings, IT room dew point, and control approaches.

A cooling system normally comprises devices from different vendors. Compatibility and coordination between these devices is a big challenge.

Traditional data centre cooling systems are normally designed to handle a constant heat load while monitoring operation parameters such as temperature, humidity and pressure. As a result, cooling devices are normally controlled in a standalone and decentralised mode based on their return air temperature, humidity or chilled water setpoints. There are also several other limitations that make these systems ineffective at managing the complexities of data centre cooling.

Characteristics of effective control systems

An effective control system looks at the cooling system holistically and analyses the dynamics of the system to achieve the lowest possible energy consumption. It also helps data centre operators solve the challenges discussed above, while providing other benefits such as improving thermal management and maximising cooling capacity. The main characteristics of effective control systems include:

Automatic control: The cooling system should automatically shift between different modes to optimise energy savings based on IT load and environmental factors such as outdoor air temperature.

Centralised control based on IT inlet temperature: All indoor cooling devices should work in coordination with each other to prevent demand fighting.

Centralised humidity control: IT space humidity should be centrally controlled by maintaining the dew point temperature at the IT intakes, as shown in the sketch after this list.

Flexible controls: A good control system allows flexibility to change certain settings based on customer requirements.

Simplified maintenance: A good cooling control system makes it easy to enter maintenance mode during maintenance intervals.
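As referenced in the centralised humidity control item above, the Python sketch below illustrates the approach: one room-level decision based on the dew point at the IT intakes, rather than per-unit return-air humidity control. The limits, names and simple averaging are illustrative assumptions, not values from the article:

```python
# Minimal sketch of centralised humidity control: regulate the dew
# point measured at the IT intakes for the whole room, instead of
# letting each unit chase its own return-air humidity.

DEW_POINT_LOW_C = 5.5    # lower dew point limit (illustrative assumption)
DEW_POINT_HIGH_C = 15.0  # upper dew point limit (illustrative assumption)

def humidity_action(it_intake_dew_points_c: list[float]) -> str:
    dew_point = sum(it_intake_dew_points_c) / len(it_intake_dew_points_c)
    if dew_point < DEW_POINT_LOW_C:
        return "humidify"     # one shared humidification command for the room
    if dew_point > DEW_POINT_HIGH_C:
        return "dehumidify"   # one shared dehumidification command
    return "hold"             # no unit acts alone
```

Because a single dew-point decision drives every unit, humidification and dehumidification cannot fight each other.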

Effective cooling controls can maximise cooling capacity, simplify cooling management, eliminate hot spots, ensure that temperature SLAs are met, reduce operating costs, and enhance data centre availability. Specifying the right level(s) of control for a data centre cooling system will provide these benefits.

Schneider Electric offers four cooling control levels: device-level control, group-level control, system-level control and facility-level control, which MechChem Africa hopes to unravel in a follow-up article.

 
