
CLOUD COMPUTING


growth of cloud environments today and in five, 10 or even 15 years' time. With this, power and connectivity are absolute priorities and must meet demand.
Delivering power
Historically, the electricity required by IT equipment has trended upwards, but the huge demand for cloud services is accelerating this dramatically, along with the amount of rack space required. There is a common misconception that running low-density racks instead of higher-density ones will be less costly when it comes to power, but the reverse is actually the case. Running fewer high-density racks rather than many low-density ones yields a lower total cost of ownership, because they deliver far superior compute capability while using significantly less data centre resource: switchgear, UPS, power, cooling towers and pumps, chillers, lighting and so on.
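To make the comparison concrete, the short Python sketch below contrasts the facility footprint of hosting the same total IT load on many low-density racks versus a handful of high-density ones. Every figure in it (target load, kW per rack, per-rack overhead cost) is a hypothetical assumption for illustration only, not data from any particular facility.

```python
# Illustrative sketch only: all figures are assumed, not vendor or facility data.
# Compares the data centre footprint of delivering the same IT load with
# many low-density racks versus fewer high-density ones.

TARGET_KW = 600  # total IT load to be hosted (assumed)

# kW per rack and annual facility overhead per rack (switchgear, UPS,
# cooling, lighting etc.) - both figures are assumptions.
low_density = {"kw_per_rack": 5, "overhead_per_rack": 4_000}
high_density = {"kw_per_rack": 30, "overhead_per_rack": 9_000}

def racks_needed(profile):
    # Ceiling division: round up to whole racks.
    return -(-TARGET_KW // profile["kw_per_rack"])

def annual_overhead(profile):
    return racks_needed(profile) * profile["overhead_per_rack"]

for name, profile in (("low-density", low_density), ("high-density", high_density)):
    print(f"{name}: {racks_needed(profile)} racks, "
          f"~£{annual_overhead(profile):,} facility overhead per year")
```

With these assumed numbers the high-density layout needs a sixth of the racks and well under half the facility overhead; the real savings depend entirely on a facility's own cost base, but the direction of the comparison is the point.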
Therefore, it's increasingly important that a data centre provider designs its facility to accommodate high-density racks and can achieve the right balance between rack space and power. Many racks installed in data centres now consume more than 10kW, and some as much as 60kW. Few facilities can supply this level of power per rack today, and the problem is only going to get worse.
Data centres are becoming trapped in a 'perfect storm' of rising demand for rack space and more power required per rack. This is why larger and hyperscale data centres with more abundant power and space are inherently more suited to meeting and future-proofing larger enterprise cloud hosting requirements and those of the major public cloud service providers. Facilities which are less dependent on the electricity distribution network, where the bulk of electricity supply failures occur, or better still connect directly to the National Super Grid, are also likely to benefit from far greater reliability and fewer capacity limitations.
Trans-facility networking
At the same time, as demand for hybrid cloud environments continues to grow, data centres must meet user expectations for application responsiveness and predictability. With considerable amounts of data moving back and forth between public and private cloud environments, and possibly legacy systems, a hybrid approach brings both latency considerations and the cost of connectivity sharply into focus.
Taking Microsoft Azure Stack as a working example: it can run standalone or as part of a hybrid infrastructure alongside Microsoft Azure Public Cloud as a peer system. In the latter case, the latency between the Azure Stack system and the Azure Public Cloud determines how fast and seamless the hybrid cloud is once deployed.
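A practical first step is simply to measure that latency from the private site. The sketch below is a minimal illustration, not Microsoft tooling: it samples TCP connection round-trip times to a public cloud endpoint, and the hostname shown is a placeholder that would need to be replaced with the relevant regional endpoint.

```python
# Minimal sketch: sample round-trip latency to a public cloud endpoint over TCP.
# HOST is a placeholder - substitute the actual endpoint for your Azure region.
import socket
import statistics
import time

HOST, PORT, SAMPLES = "your-azure-endpoint.example.net", 443, 10  # placeholder values

def tcp_rtt_ms(host, port, timeout=2.0):
    """Time a single TCP connection handshake in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only time the handshake
    return (time.perf_counter() - start) * 1000

rtts = sorted(tcp_rtt_ms(HOST, PORT) for _ in range(SAMPLES))
print(f"median {statistics.median(rtts):.1f} ms, "
      f"worst {rtts[-1]:.1f} ms over {SAMPLES} samples")
```

Median figures give a sense of steady-state responsiveness, while the worst-case sample hints at the variability that makes hybrid workloads feel unpredictable to users.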
The reality is that few private data centres will be able to afford to run the dedicated network links necessary to assure consistent performance on an ongoing basis for workloads whose resource needs may vary. For 'standard' interlinks between existing Microsoft environments and Azure Public Cloud, Microsoft offers ExpressRoute as a low-latency dedicated connection, but it's only available as a trunk connection to certain colocation, public cloud and connectivity operators. These can connect directly with ExpressRoute at core data centre speeds, largely eliminating latency issues, improving security and ensuring bandwidth is optimised and predictable.
For those organisations not using private or colocation data centres directly connected to Microsoft ExpressRoute, the only alternative is to set up a fast and predictable connection from their facility to an ExpressRoute partner endpoint. This means two 'hops' to reach Azure rather than one, roughly doubling the cost and adding the possibility of further network problems. This is the case even where connectivity providers offer ExpressRoute to a private or colocation facility, as they are layering their own connectivity from the edge of their network, where ExpressRoute terminates, to the edge of the user network.
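A rough latency budget makes the point. The figures below are assumptions chosen purely for illustration; the takeaway is simply that each additional hop contributes its own latency, cost and failure modes to the path into Azure.

```python
# Back-of-envelope sketch: direct ExpressRoute connection versus a two-hop
# path via a partner endpoint. All one-way latency figures (ms) are assumed.
direct_path = {"colocation facility -> ExpressRoute core": 2.0}
two_hop_path = {
    "own facility -> partner endpoint": 8.0,
    "partner endpoint -> ExpressRoute core": 2.0,
}

for name, hops in (("direct", direct_path), ("two-hop", two_hop_path)):
    print(f"{name}: {sum(hops.values()):.1f} ms one-way across {len(hops)} hop(s)")
```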
In addition, if an organisation plans to use a colocation facility to host some or all of its hybrid cloud environment while keeping legacy workloads operating in its own data centre, the colocation provider must offer a range of diverse connectivity options. Multiple connections running in and out of the facility assure maximum performance and resilience.
The human touch
Last but not least, cloud hosting data centres also demand highly skilled engineering personnel on site. Hybrid clouds are complex and cannot be built, tested and managed successfully without suitable facilities and training. Furthermore, studies repeatedly show that the majority of outages are caused by human error. Engineers must be well trained and, critically, know when to intervene and when to allow the automated systems to do their job.
In summary, the major cloud and data centre providers are working hard to meet growing demand for all flavours of cloud solution. There are many efficiency benefits to be realised in moving to a cloud infrastructure; however, care must be taken to choose the best location for the private cloud, as it becomes increasingly important to the organisation. Delivering seamlessly interconnected and ever-growing public, private and legacy environments necessitates hosting facilities with fit-for-purpose networking and, of course, plenty of power on tap.