CASE STUDY
The upgrade has seen major improvements in energy efficiency
“We couldn’t tell whether the cooling
system was operating
well or poorly,” Hugh told us. “The
ISX’s instrumentation inside the room
monitored the power feeds to the
main pumps, but we had very little
instrumentation outside the room. So,
we didn’t know what was happening in
the chillers, or about coolant flow rates
or water temperatures. The instruments
monitoring these were part of an
entirely separate Building Management
System (BMS) and there was no link
between that and what we could see
with the DCIM.”
Further multi-million pound research
projects requiring ARCCA’s
high-performance computing were in
prospect at the University, including
one attempting to verify Einstein’s
prediction of gravitational waves and
another in genomics. To meet these additional
compute requirements, Beedie knew
that the infrastructure would have to be
improved: “We could see that with the
new power demands we would rapidly
get to a point where we didn’t have
any resilience in our cooling,” he said.
When making the business case for a
second upgrade to the cooling system,
Beedie realised that improved power
efficiency, as evidenced by a better data
centre PUE, could also result in energy
savings that would offset the additional
investment cost over time. However,
essential to proving the business case
would be an improvement to the
monitoring and analysis of all elements
of the infrastructure. “The data centre
had an estimated annual PUE between
1.7 and 1.8 at that time, but they weren’t
precise numbers and they certainly
weren’t being generated in real time.
We were just making calculations based
on performance over selected periods,”
said Beedie. Assuming an annual PUE
of 1.7, which was very much a best-case
scenario, Hugh Beedie calculated that
reducing the PUE to 1.4 would see the
cost of the cooling upgrade pay for itself
easily over the working lifetime of any
new equipment.
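
The arithmetic behind that claim is straightforward. PUE is total facility energy divided by IT equipment energy, so for a fixed IT load the cooling and other overheads consume (PUE − 1) times the IT energy. The sketch below works through the saving from a reduction of 1.7 to 1.4; the IT load, electricity tariff, and upgrade cost are illustrative assumptions, not figures from the case study.

IT_LOAD_KW = 400            # assumed average IT load (not from the article)
TARIFF_GBP_PER_KWH = 0.10   # assumed electricity price
UPGRADE_COST_GBP = 250_000  # assumed cost of the cooling upgrade
HOURS_PER_YEAR = 8760

def annual_overhead_kwh(pue, it_load_kw):
    """Energy used by cooling and other overheads in one year (kWh)."""
    return (pue - 1.0) * it_load_kw * HOURS_PER_YEAR

saving_kwh = annual_overhead_kwh(1.7, IT_LOAD_KW) - annual_overhead_kwh(1.4, IT_LOAD_KW)
saving_gbp = saving_kwh * TARIFF_GBP_PER_KWH
print(f"Annual saving: {saving_kwh:,.0f} kWh (about £{saving_gbp:,.0f})")
print(f"Simple payback: {UPGRADE_COST_GBP / saving_gbp:.1f} years")

On these assumed numbers the upgrade pays back in roughly two and a half years, well inside the working lifetime of the new plant.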
Calculating PUE accurately for a
data centre with such mixed functions
as Cardiff’s presents its own challenges.
Cooling provision for the systems
supporting general IT needs remains
reasonably consistent, whereas the
systems running ARCCA’s
high-performance workloads tend to
operate at peak power whenever they
are in use. “It’s quite a complicated
picture,” said Beedie, “but we could
only ever see the big-number totals. We
couldn’t see down to the rack level so we
had to make it part of the business case
to demand more monitoring, so that
we could fine-tune operations to get a
better PUE rating. This would also give
us a much better understanding of how
everything was performing and that
would inform all our future designs.”
Energy Efficiency Upgrade
As part of the power and cooling
upgrade managed by the University’s
Estates division, Comtec deployed
Schneider Electric’s Data Centre
Operation: Energy Efficiency module
as an additional component to the
previously installed StruxureWare for
Data Centres. Working with data inputs
from extensive instrumentation that
Comtec had previously installed, the
new software module provided a much
more comprehensive picture of power
and cooling consumption throughout
the data centre infrastructure and
presented it on a centralised console
where it could be easily viewed and
analysed by Hugh and his team. It
allowed them to get much deeper, more
granular insights into energy usage,
not just at overall site level, but also at
subsystem level and, critically, it did so
in real time. This enabled Cardiff to, for
example, monitor the effects on energy
consumption of changing fan speeds, or
of CPU utilisation on a server rack, or
of raising the temperature of the chilled
water supply.
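
In essence, the module turns live meter feeds into a single headline ratio. A minimal sketch of that real-time PUE calculation follows; the meter names and readings are hypothetical, since the article does not describe StruxureWare’s data model.

def real_time_pue(meter_kw):
    """PUE = total facility power / IT equipment power, from live meter feeds."""
    it_kw = sum(v for k, v in meter_kw.items() if k.startswith("it/"))
    total_kw = sum(meter_kw.values())
    return total_kw / it_kw

readings = {
    "it/hpc_racks": 320.0,     # ARCCA HPC racks (illustrative values)
    "it/general_racks": 80.0,  # general IT load
    "cooling/chillers": 90.0,
    "cooling/pumps_fans": 25.0,
    "power/ups_losses": 15.0,
}
print(f"Real-time PUE: {real_time_pue(readings):.2f}")

Recomputing this ratio continuously, and per subsystem, is what lets operators watch the PUE respond as fan speeds, rack utilisation, or chilled water temperatures change.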
The new cooling services design
had some specific elements aimed
at improving energy efficiency. For
example, all three existing chillers
were replaced with new high-efficiency
300kW models to provide a symmetrical
system, and a secondary cooling
circuit was introduced at the same
time. High-efficiency Variable Speed
Drive (VSD) pumps were also fitted
individually to each chiller to give better
‘turn-down’ ratios. “Originally we had a
primary circuit that pumped cold water
from the chillers directly into Schneider
Electric’s InRow RC units,” said Beedie.
“With this upgrade, a primary circuit
connected the chillers to a large heat
exchanger and another set of pumps
drove water in the secondary circuit
from there into the room. The new
pipework and pumps allowed the extra
degree of control needed to make the
system more efficient in practice.”
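
The efficiency gain from VSD ‘turn-down’ follows from the standard pump affinity laws: flow scales with pump speed while shaft power scales roughly with its cube, so a modest reduction in flow yields a large power saving. A minimal sketch, using an assumed rated power rather than the installed pumps’ specification:

RATED_POWER_KW = 15.0  # assumed rated pump power at full speed

def pump_power_kw(speed_fraction):
    """Approximate shaft power at a given fraction of rated speed (affinity laws)."""
    return RATED_POWER_KW * speed_fraction ** 3

for speed in (1.0, 0.8, 0.5):
    print(f"{speed:.0%} speed -> about {pump_power_kw(speed):.1f} kW")
# Turning down to 50% flow needs only ~12.5% of rated power.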
Improved Efficiency
The cooling equipment upgrade,
together with the new monitoring
software, has delivered major improvements
in the energy efficiency of the data
centre, despite the additional HPC
servers. Depending on ambient
conditions, the real-time PUE rating
has been as low as 1.2, according to
Beedie. “The additional information
about energy consumption has enabled