DCN June 2017 - Page 18

Big Data & IoT

The facility provider must also be able to demonstrate how it will bridge the gap between a grid outage and auxiliary power kicking in, on the basis that every workload managed within the facility is maintained. UPS and auxiliary power systems must therefore be capable of supporting all workloads running in the facility at the same time, along with overhead and enough redundancy to deal with any failure within the emergency power supply system itself. As part of this power management, the systems in place must also act as power cleansers, ensuring that the power fed to the IT equipment is kept firmly within defined parameters of voltage and current at all times, with spikes, surges and brown-out power fades all dealt with by the in-line power management systems.

Cooling is also an issue, as HPC requires more targeted approaches. Simple computer room air conditioning (CRAC) or free air cooling systems, such as swamp or adiabatic coolers, are unlikely to have the capabilities required. The data centre facility provider must either provide sufficient cooling capability for all the HPC platforms under its roof, or be able to effectively remove the excess heat extracted from the HPC systems by built-in in-row cooling systems. With the amount of power consumed and heat generated by HPC rack clusters, energy efficiency and a low PUE are also priorities.

Latency

By its very nature, the IoT makes reliable, low latency connectivity an absolute prerequisite. Many connectivity problems come down to physical damage, such as cables being cut during roadworks, so ensuring that connectivity enters the facility via multiple diverse routes is crucial. Such connectivity should also be of the right quality – basic public connectivity solutions will generally not be sufficient for HPC systems.
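The power requirement described above – carry every running workload, plus overhead, plus redundancy against a failure in the emergency supply itself – and the PUE metric can both be sketched numerically. The figures and function names below are illustrative assumptions, not values from the article:

```python
# Illustrative sketch (assumed figures, not from the article): sizing UPS
# capacity for a facility that must sustain all workloads simultaneously,
# with overhead and N+1 module redundancy, plus a simple PUE calculation.

def ups_capacity_kw(it_load_kw, overhead_fraction=0.2,
                    redundant_modules=1, module_size_kw=250):
    """Minimum UPS capacity: full IT load, plus overhead (cooling, controls),
    plus spare modules so the system survives a module failure (N+1)."""
    required = it_load_kw * (1 + overhead_fraction)
    modules_needed = -(-required // module_size_kw)  # ceiling division
    return (modules_needed + redundant_modules) * module_size_kw

def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power.
    1.0 is the theoretical ideal; lower is better."""
    return total_facility_kw / it_load_kw

print(ups_capacity_kw(it_load_kw=800))   # 800 kW of HPC racks -> 1250.0 kW
print(round(pue(1120, 800), 2))          # 1.4
```

The point the sketch makes is that redundancy and overhead are multiplicative on top of the raw IT load – here an 800kW IT load demands over 1.2MW of UPS capacity.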
Look for providers that offer specialised connectivity solutions, such as BT Optical Nodes and Cloud Connect.

Last but not least, the physical location of the data centre will impact directly on rack space costs and power availability. In the case of colocation, there are often considerable differences in rack space rents between regional facilities and those based in or around large metro areas. Perhaps of more concern, the availability and reliability of the power supply will likely vary from region to region. The majority of facilities are not directly connected to the grid and sit several pylon hops from the nearest substation. Some facilities in power-strapped areas are already pushed to supply 4kW per rack.

Fortunately, the ever decreasing cost of high speed fibre is bringing greater freedom to build modern colo facilities much further away from metro areas, giving greater access to power without incurring the latency issues of old. Examples include out of town locations such as the NGD mega data facility in South Wales, where renewable power is in abundant supply (180MW) and directly connected to the national grid, and some of the emerging facilities in the Nordic region, where hydroelectric power is plentiful and low cost.

In summary, the rise and rise of Big Data and the IoT is creating considerable IT challenges. More than ever, those responsible must rigorously evaluate their existing and potential data centre partners to guarantee the resilience and connectivity required in a new world order where everything and everyone are always connected.
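The claim that out-of-town fibre no longer incurs "the latency issues of old" can be sanity-checked with a back-of-envelope propagation calculation. This is a rough sketch using an assumed figure for the speed of light in fibre (roughly two-thirds of c, about 200,000 km/s, i.e. ~5 microseconds per kilometre); the route length is a made-up example:

```python
# Illustrative sketch (assumed figures): one-way propagation delay over fibre.
# Ignores switching and queueing delay, which add more in practice.

SPEED_IN_FIBRE_KM_PER_S = 200_000  # approx. 2/3 of c; varies with fibre type

def one_way_latency_ms(route_km):
    """Propagation delay in milliseconds for a fibre route of route_km."""
    return route_km / SPEED_IN_FIBRE_KM_PER_S * 1000

# A colo facility 150 km outside a metro area:
print(round(one_way_latency_ms(150), 2))  # 0.75 ms one way, ~1.5 ms round trip
```

On these assumptions, a facility 150km from a metro hub adds well under 2ms of round-trip delay – negligible for most Big Data workloads, which is why remote, power-rich sites have become viable.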