
final thought

PRESSURE TO PERFORM

Adrian Barker of RF Code highlights the need for a reliable data centre.

The amount of data created each day now stands at an astronomical 2.5 quintillion bytes. What is even more astonishing is that 90 per cent of the world's data was created in the last two years. This data is created by our use of social media, streaming films online and Internet banking: almost every daily task involves data.

The increase in data creation in such a short space of time means the data centres powering this growth are under increasing pressure to perform. Data centre downtime has always been costly, but even more so now that daily tasks are so dependent on an uninterrupted service from these facilities. The move to the cloud and the impending boom of connected devices and sensors to power the Internet of Things mean our need for reliable data centre performance is greater than ever.

The damage of downtime

The cost of data centre downtime is more than just the financial implication of fixing failed equipment. The damage to brand value, reputation, business and productivity can be just as substantial. Delta Air Lines' outage in August is reported to have cost $150m, including customer refunds for cancelled and delayed flights. The data centre outage lasted only five hours, but this was enough to disrupt 2,000 flights and damage the company's revenue and reputation. Delta initially stated the issue was due to a power cut, but its CEO later revealed the cause was a lack of backup power for some crucial servers.

With organisations increasingly creating complex hybrid IT environments by combining owner-operated data centres, colocation and cloud technologies, there is unfortunately more room for error. The latest Ponemon Institute survey revealed a 38 per cent increase in downtime costs since 2010, while maximum downtime costs have risen by 81 per cent: as data centres become more business critical, a data centre failure has become even more expensive. This is due to hardware costs, data loss, legal costs and the negative effect on the brand's reputation. And with data creation continuing to grow, the cost of outages will likely continue to increase.

More data, more problems

There are many factors that can affect data centre and service availability. For example, a spike in data centre traffic during a peak shopping period or cultural event can force data centre equipment to work overtime. This can produce large fluctuations in the thermal output of equipment, leaving even previously effective cooling systems drastically overburdened. If efficient cooling is not in place the result is overheating, and if this scenario continues over a prolonged period, equipment will be damaged or will fail. The same outcome can occur when there is a lack of available capacity: demand for the service suddenly increases, but without sufficient headroom the data centre is unable to deliver. Just look at how Pokémon Go developer Niantic's expectations for the game's server load compared with the reality: the company's estimate was out by a factor of 50. This meant a lot of disgruntled players and a lot of downtime. Luckily the game's popularity was so great that people were willing to persevere, but it could easily have caused many players to abandon the app altogether. That said, data centre managers need to be cautious of overprovisioning and purchasing unnecessary services.
Working with a hybrid IT environment can easily lead to overprovisioning if there are no up-to-date records of capacity and equipment across all facilities. It has been known for companies to have boxes of brand