Once access became possible, engineers were able to deploy the generators and other disaster recovery equipment.
While Vodafone claims that its data centres weren't hit by the flooding, data centres around the world can be severely affected by flooding and other natural disasters. In fact, a recent survey by Zenium Technology found that half of the world's data centres have been disrupted by them, at considerable cost. Hurricane Sandy is a case in point.
Hurricane Sandy
In October 2012 Data Center
Knowledge reported that at least two
data centres located in New York
were damaged by flooding. Rich Miller's article for the publication, 'Massive Flooding Damages Several NYC Data Centers', said:
‘Flooding from Hurricane Sandy has
hobbled two data centre buildings
in Lower Manhattan, taking out
diesel fuel pumps used to refuel
generators, and a third building at
121 Varick is also reported to be
without power…’ Outages were
also reported by many data centre
tenants at a major data hub at 111
8th Avenue.
Overcoming limitations
One of Bridgeworks' large insurance customers wanted replication between two of their sites for disaster recovery purposes, but latency killed performance, so they still had to rely on a 'man in a van' physically transporting data to ensure it was held at both sites and their required recovery time objective (RTO) was met. They had invested in dark fibre to improve the performance of the data transfer, but their replication continued to fail, and their stated RTO could not be met without resorting to the man in the van. With WANrockIT installed at both sites, the customer returned to their cheaper 10Gb pipes and not only achieved their replication targets but also simultaneous cross-site replication. The CIO was therefore able not only to meet the Service Level Agreement required of him, but to dramatically reduce the Recovery Point Objective, and with it the risk to his business in the event of a disaster.
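The dark fibre detail above is worth unpacking: adding raw bandwidth does not help when latency is the bottleneck, because a single TCP stream can never move more than one window of data per round trip. A minimal sketch of that bandwidth-delay arithmetic (the window size and round-trip time below are illustrative assumptions, not figures from the article):

```python
def max_throughput_gbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on single-stream TCP throughput in Gbit/s.

    One window of data can be in flight per round trip, so throughput
    is capped at window_bytes / RTT regardless of link capacity.
    """
    rtt_s = rtt_ms / 1000.0
    return (window_bytes * 8) / rtt_s / 1e9

# A classic 64 KiB TCP window over a hypothetical 20 ms inter-site round trip:
window = 64 * 1024  # bytes in flight at once
print(max_throughput_gbps(window, 20.0))  # ~0.026 Gbit/s (~26 Mbit/s)
```

On those assumed numbers, even a 10Gb pipe (or dark fibre) delivers only tens of megabits per second to one replication stream, which is consistent with the customer's experience that faster links alone did not fix replication; products in this space typically attack the problem with parallel streams and larger effective windows.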
Top IT service continuity tips
Yet a number of customers were affected: they experienced intermittent issues with Vodafone's voice and data services in the North East of England, and the company suffered some power disruption. The flooding restricted access to the building, which was needed in order to install generators after the back-up batteries had run down.
With data centre downtime potentially costing millions or billions of pounds, it's important to take a step back and consider how to maintain business continuity whenever a natural or man-made disaster occurs. The problem is that most companies leave it too late, much as people do with insurance policies, and by the time disaster strikes there is no plan in place. For this reason it's worth noting the following tips:
Prepare not just for disaster but for IT service and business continuity by ensuring that you have a plan in place that is regularly reviewed, tested and audited. Customers shouldn't be affected, and they shouldn't know that an issue has arisen. If one data centre is affected, then there should be another data centre at the ready to maintain seamless service continuity.