losses will exist. As the IT load shrinks (e.g., from consolidation), these fixed losses become a higher proportion of the total data centre energy use. This means PUE will worsen. It also means that PUE is always better at higher IT loads and worse at lower loads. Figure 3 shows a typical PUE curve illustrating the relationship between efficiency and the IT load.
[Figure 3: Typical data centre infrastructure efficiency curve. PUE (vertical axis, lower is better) plotted against IT load, from 0% (no load) to 100% (full load) of the data centre's power capacity. Efficiency degrades dramatically at low loads due to "fixed losses" inherent in unused power/cooling capacity.]
Operators must therefore adjust their power and cooling infrastructure to take account of a reduced IT load if they are to maintain a satisfactory PUE result.
Consider by way of example a 1MW data centre with 1,000 physical servers, each of which draws 250W of power. If these were virtualised at a conservative consolidation ratio of 10:1, with each remaining physical server operating at a CPU utilisation of 60% (instead of the typical 5-10% for a standalone server), the overall energy savings over the course of a year could be 75%.
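The arithmetic behind this example can be sketched in a few lines. The linear server power model and the PUE figures below are illustrative assumptions, not values from the article:

```python
# Illustrative sketch of the consolidation example: 1,000 physical servers
# in a 1MW data centre, consolidated 10:1 via virtualisation.
# The idle/peak power figures and PUE values are assumptions.

HOURS_PER_YEAR = 8760

def server_power_w(utilisation, p_idle=150.0, p_peak=250.0):
    """Simple linear power model: idle draw plus a load-proportional term."""
    return p_idle + (p_peak - p_idle) * utilisation

# Before: 1,000 standalone servers at ~7.5% average CPU utilisation.
it_before_kw = 1000 * server_power_w(0.075) / 1000.0

# After: 100 hosts (10:1 consolidation) running at 60% utilisation.
it_after_kw = 100 * server_power_w(0.60) / 1000.0

# Facility energy = IT load x PUE. PUE worsens after consolidation
# because fixed infrastructure losses are spread over a smaller IT load.
pue_before, pue_after = 1.8, 2.5   # assumed values
total_before_kwh = it_before_kw * pue_before * HOURS_PER_YEAR
total_after_kwh = it_after_kw * pue_after * HOURS_PER_YEAR

savings = 1.0 - total_after_kwh / total_before_kwh
print(f"Annual facility energy saving: {savings:.0%}")
```

With these assumed figures the saving comes out just over 80%, in the same range as the article's 75% estimate even though PUE itself has worsened; the exact value depends on the power model chosen.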
If the power and cooling infrastructure is left unchanged, PUE will increase after virtualisation. Reducing the numerator of the PUE ratio, the total facility power, is a challenge because there are inherent fixed losses in power and cooling equipment that can't be reduced linearly with load; they are incurred no matter what the load is. Therefore, as the IT load shrinks through virtualised consolidation, these fixed losses account for a higher proportion of the total energy consumption of the data centre.
The variation of PUE with the IT load of a data centre is represented as a curve, with PUE degrading dramatically at lower IT load utilisations. To improve PUE after virtualisation, it's necessary to address the issue of fixed losses. This means that, as far as possible, power and cooling capacity must be scaled down to match the load. This is best achieved in a new data centre by following a standardised and modular approach to designing the infrastructure, making it easier to switch off unneeded cooling units, slow down or turn off fans, and deploy scalable UPS systems. Careful design of the cooling architecture, including containment alongside the use of hot or cold aisles, helps to manage capacity better, resulting in a reduction in losses.
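The fixed-loss behaviour that shapes the PUE curve can be modelled in a few lines. The fixed and proportional loss figures here are illustrative assumptions, not values from the article:

```python
# Minimal model behind the PUE curve: total facility power is the IT load
# plus fixed losses (incurred regardless of load) plus losses proportional
# to load, so PUE = 1 + c + F / IT_load. F, c and the IT capacity below
# are assumed figures for a hypothetical facility.

FIXED_LOSSES_KW = 150.0        # transformers, standby UPS, always-on fans
PROPORTIONAL_OVERHEAD = 0.25   # cooling effort that scales with IT load

def pue(it_load_kw):
    facility_kw = it_load_kw * (1 + PROPORTIONAL_OVERHEAD) + FIXED_LOSSES_KW
    return facility_kw / it_load_kw

IT_CAPACITY_KW = 500.0  # assumed design IT capacity
for pct in (10, 25, 50, 75, 100):
    load = IT_CAPACITY_KW * pct / 100
    print(f"{pct:3d}% load -> PUE {pue(load):.2f}")
```

With these figures the model reproduces the shape of the curve: PUE above 4 at 10% load, falling below 2 near full load, because the fixed term F is divided by an ever-smaller IT load.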
Existing data centres
For existing data centres, such a modular approach may not be possible. However, there are other options to be considered to make the cooling effort more scalable with regard to IT load. Blanking panels and air containment solutions could
be installed to reduce or virtually eliminate hot and cold
air mixing to improve the efficiency of cooling equipment.
Adjustable fans could be installed to better match cooling
effort to IT load. Unused power modules in UPSs should be
removed. With a reduced IT load, it may also be possible
to turn off one or more CRAC/CRAH units to further reduce
infrastructure losses.
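The payoff of such retrofit measures can be estimated with the same kind of fixed-plus-proportional loss model; all figures below are illustrative assumptions:

```python
# Sketch of the retrofit effect at a reduced IT load: cutting fixed losses
# (turning off a CRAC unit, removing unused UPS modules, slowing fans)
# lowers PUE directly. All figures are illustrative assumptions.

def pue(it_load_kw, fixed_kw, proportional=0.25):
    return (it_load_kw * (1 + proportional) + fixed_kw) / it_load_kw

IT_LOAD_KW = 100.0  # post-virtualisation IT load

before = pue(IT_LOAD_KW, fixed_kw=150.0)  # full infrastructure running
after = pue(IT_LOAD_KW, fixed_kw=90.0)    # one CRAC off, UPS modules removed
print(f"PUE before retrofit: {before:.2f}, after: {after:.2f}")
```

Because fixed losses dominate at low load, even a modest cut in the fixed term produces a visible PUE improvement at the reduced IT load.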
It should be reiterated that PUE and overall power consumption are not contradictory concepts; the apparent conflict arises only when PUE is treated as a green metric in itself, which it is not. PUE takes no
account, for example, of how the electricity for a data centre
is generated – or how much is generated. It’s irrelevant
(to PUE) whether power comes from a green renewable
source such as solar or wind power or more traditional
environmentally damaging sources such as oil or coal.
PUE is merely a measure of how efficient power and
cooling systems are for a given load. Its importance is only
increasing, thanks to growing awareness among industry
and the public at large of the need for greater energy
efficiency. For truly environmentally friendly data centres,
which also operate cost effectively and reliably, PUE and
overall consumption have to be considered as separate,
though inextricably related topics, and designers should
optimise their facilities for both.
Schneider Electric – Data Center Science Center White Paper 118
www.networkseuropemagazine.com