Network Communications News (NCN) March 2017 | Page 23

DATA CENTRES
density infrastructure requires more energy and produces more heat. Of course, it seems like a waste to use energy to power servers, and then use even more energy to power cooling systems that ensure the servers don't overheat. One way of tackling this issue is simply moving the data centre to an environment cold enough to cool the servers without using additional power. Facebook's 300 x 100 metre data centre in Luleå, Sweden, for example, cools itself using freezing air. Facebook claims this is the most energy efficient computing facility ever built.
Increasing cable temperature impacts cables' electrical characteristics, affects insertion loss and attenuation, and increases the likelihood of bit errors and physical damage to the cables. Unmanaged cable within a cabinet prevents the supply of cold air from removing heat from the hardware. Heat in the cables and bundles builds up faster, and heat generated within the inner cables of a bundle has no opportunity to dissipate. A clutter-free in-rack environment with high density cabling helps avoid this. Using small diameter patch cords also helps: when these are bundled together carefully, the volume of cable inside the cabinet is reduced, optimising heat removal and reducing the amount of energy required for cooling. Good cable management also contributes to alleviating temperature build-up, as does ensuring there are no obstructions near air vents in racks and equipment.
For longer cabling links, larger conductor cross-sections and/or shielded cables should be used, as these are more resistant to temperature increases. Using modules and plugs with insulation displacement technology is also advisable, as this creates stable connections between cables and connecting contacts that are similar to soldered joints.
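To illustrate why cable temperature matters for copper links, the short Python sketch below applies the per-degree insertion loss de-rating commonly cited in structured cabling standards: roughly 0.2 per cent per °C between 20°C and 40°C, and about 0.4 per cent per °C above 40°C for unscreened cable. The coefficients and the 100 metre baseline are assumptions for illustration, not figures from this article.

```python
# Illustration only: de-rating coefficients (0.2 %/degC for 20-40 degC,
# 0.4 %/degC above 40 degC) are assumed figures for unscreened copper cable.

def insertion_loss_factor(temp_c, coeff_low=0.002, coeff_high=0.004):
    """Multiplier applied to the 20 degC insertion loss at temp_c."""
    if temp_c <= 20:
        return 1.0
    if temp_c <= 40:
        return 1.0 + coeff_low * (temp_c - 20)
    return 1.0 + coeff_low * 20 + coeff_high * (temp_c - 40)

def max_channel_length(temp_c, base_length_m=100.0):
    """De-rated maximum channel length, keeping total channel loss constant."""
    return base_length_m / insertion_loss_factor(temp_c)

for t in (20, 35, 50):
    print(f"{t} degC: loss x{insertion_loss_factor(t):.3f}, "
          f"max length ~{max_channel_length(t):.1f} m")
```

Running the sketch shows the practical effect: a channel that supports 100 metres at 20°C supports only about 92.6 metres at 50°C under these assumed coefficients, which is why hot bundles either need shielded, larger cross-section cable or shorter links.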
New designs
'Adapting and optimising cooling systems has always been one of the main ways of improving data centre energy performance.'

Many data centre designers and operators are spreading their facilities to achieve easier backup and data recovery, redundancy, faster access, better uptime at local offices and so on. Virtualisation makes it possible to converge all disparate locations into one big data centre, with all functions running smoothly for individual users regardless of where the hardware is located. So, in theory, the location of servers isn't always relevant, provided there's enough bandwidth and low latency.
This enables another promising approach: ensuring heat doesn't simply dissipate, but is put to good use. By moving server racks further apart and locating them in different buildings, heat can be applied usefully. Racks can be used to warm workplaces, public buildings and homes. Our partner, Cloud Heat Technologies GmbH of Dresden, Germany, is developing this approach. By distributing networked servers over multiple buildings, computing capacity is spread to where low latency cloud resources are required. At the same time, the buildings in which the hardware is housed, such as MDUs or office buildings, are heated using the heat from the servers, which reduces CO2 emissions. In some places, the installation space available for servers may be limited, but this is compensated for with extremely compact, centralised, fully automated concepts.