model. As such, they frequently white-box technology from
multiple vendors – often before industry standards have
even been agreed.
The fact is, ICPs have experienced rapid growth and need
to be agile to operate in an ultra-fast-paced world. Yet for
data centre infrastructure, all this agility can create problems
of interoperability and downtime. If those challenges weren't
enough, data centres are also having to buckle up for faster
network speeds.
Maximise speed but reduce power
Speeds at data centre interconnects (DCIs) and intra-
connects are already at 100G, and soon 400G will be the
norm. Yet as speeds increase, infrastructure managers will
have to maintain that momentum while living within their
power constraints. A major challenge for data centres is to
reduce power consumption across their infrastructure while
delivering high-speed connectivity and feeding the growing
demand for data. A study last year found that data centres
globally had consumed well over 400 terawatt-hours of
electricity – far higher than the UK's total consumption
– and this could triple in the coming decade. As pressure
mounts to reduce energy consumption, some ICPs have
looked to colder climates such as the Nordics for facilities.
Kolos, a US/Norwegian joint venture, is working on the
world’s largest data centre in the Arctic Circle that could tap
into hydropower and cut energy costs by 60%.
Tested to the limit
As ICPs continue to expand, they’ll build more data centres
to accommodate the rising levels of content and require
seamless DCI to deliver services to users at lightning fast
speeds. Given the pace at which these businesses have
grown, ICPs have had little time to put in place the rigorous
procedures necessary for testing to ensure seamless DCI.
This has been a major challenge for some data centre
managers who have also had to grapple with the rising costs
of cabling infrastructure as well as a plethora of protocols to
interoperate. All these challenges might seem like a tsunami,
but there are steps that data centre managers can take in
the realm of test and measurement – inside the data centre,
within DCI and in network monitoring – to steady the ship.
Within the data centre, automated testing tools can
inspect and certify fibre endfaces for faster network build-outs
and verify the functionality of MPO (multifibre push-on)
connectors. Effective test practices for AOCs (active optical
cables) and DACs (direct attach cables) are essential to ensure optimum
network performance and to address the challenges brought
on by the growth of multifibre connectivity.
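In practice, much of that test workflow reduces to grading measured values against a loss budget. The sketch below illustrates an automated pass/fail check of per-fibre insertion loss on a 12-fibre MPO trunk; the 0.75 dB limit and the measured values are hypothetical figures chosen for illustration, not values from the article or any standard.

```python
# Sketch: automated pass/fail grading of per-fibre insertion loss on a
# 12-fibre MPO trunk. MAX_LOSS_DB is a hypothetical budget for
# illustration only.

MAX_LOSS_DB = 0.75  # hypothetical per-connector insertion-loss budget


def grade_mpo(losses_db):
    """Return (passed, failures), where failures lists (fibre_number,
    loss_db) pairs that exceed the budget."""
    failures = [(i + 1, loss) for i, loss in enumerate(losses_db)
                if loss > MAX_LOSS_DB]
    return (not failures, failures)


# Example: fabricated measurements for one MPO-12 trunk.
measured = [0.31, 0.28, 0.45, 0.81, 0.33, 0.29,
            0.36, 0.30, 0.27, 0.40, 0.95, 0.34]
passed, failures = grade_mpo(measured)
print("PASS" if passed else f"FAIL on fibres {[f for f, _ in failures]}")
```

A real tool would pull these readings from the test set itself, but the grading logic is the same.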
To stay ahead and prepare for increasing DCI speeds,
ICP engineering labs need to test 400G interfaces with a
versatile platform that can handle different applications
and ports. Running simultaneous test modules, then comparing
and evaluating the results and the performance of open APIs and
protocols such as NETCONF/YANG on racks at speeds of 100G,
200G and 400G, can help pinpoint potential issues and head off
infrastructure complications before they arise.
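Comparing NETCONF/YANG results across vendor racks is tractable precisely because the replies are structured XML. The sketch below parses a NETCONF &lt;rpc-reply&gt; carrying ietf-interfaces (RFC 8343) data of the kind a lab harness might collect; the reply payload is a fabricated sample, not real device output, and a live test would fetch it over an SSH NETCONF session instead.

```python
# Sketch: extracting interface state from a NETCONF <rpc-reply>.
# The XML below is a fabricated sample response; the namespaces are the
# standard NETCONF (RFC 6241) and ietf-interfaces (RFC 8343) ones.
import xml.etree.ElementTree as ET

REPLY = """<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
           message-id="101">
  <data>
    <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
      <interface><name>eth0</name><enabled>true</enabled></interface>
      <interface><name>eth1</name><enabled>false</enabled></interface>
    </interfaces>
  </data>
</rpc-reply>"""

IF = "urn:ietf:params:xml:ns:yang:ietf-interfaces"


def enabled_interfaces(reply_xml):
    """Return the names of interfaces the reply reports as enabled."""
    root = ET.fromstring(reply_xml)
    return [i.findtext(f"{{{IF}}}name")
            for i in root.iter(f"{{{IF}}}interface")
            if i.findtext(f"{{{IF}}}enabled") == "true"]


print(enabled_interfaces(REPLY))  # ['eth0']
```

Because every vendor that supports the ietf-interfaces module emits the same structure, the same parser can diff results across racks regardless of whose hardware produced them.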
Network monitoring needs to be au