DATA CENTRES
Reinventing the Network
The Future of the Data Centre
By: Dr Andrew Rickman OBE, Chief Executive, Rockley Photonics
Introduction
Andrew Rickman explores the key concerns surrounding the traditional approach to data centre implementation
In today’s interconnected world, we
send or post approximately 168 million
emails, 11 million instant messages,
98,000 tweets and 695,000 Facebook
updates every 60 seconds. Alongside the
data transfers created by people, IoT
applications will generate a further 3.9
exabytes of data by 2017 as a result of
machine-to-machine communication.
All this Internet activity creates
in excess of 1,820 terabytes of new
data every minute, which has to be
stored, processed and shared between
a burgeoning number of data centres
located across the world. Without the
data centre, there simply is no cloud.
The Internet has grown by about
a factor of 100 over the past 10 years.
To accommodate that growth, we have
had to increase data centre compute
capacity by a much greater amount
– about a factor of 1,000. To meet
future demands on the Internet over the
next 10 years, we will need to increase
capacity by the same amount again.
Currently, nobody really knows how we
will get there.
The Data Explosion
Today, operators are thinking big and
looking to mega data centres to provide
the capacity we need. Mega data centres
will make more use of software to define
the infrastructure and take advantage of
open architectures for both software and
hardware.
However, the industry has serious
concerns about the viability of scaling
up present-day data centre architectures
to provide the capacity that we need.
According to James Hamilton, Vice
President and Distinguished Engineer at
Amazon Web Services, we are on ‘red
alert’ for the future of the data centre.
A Perfect Storm
One can simplify the data centre into
two constituent parts: the server, or the
compute function, which performs data
processing and storage, and the network,
which interconnects the vast number
of servers (typically 100,000+) within a
mega data centre.
Moore’s Law, still alive and
kicking, has enabled microprocessor
manufacturers to double the number
of transistors on their chips every two
years. The benefits of cheaper silicon
propagate up to the server, allowing
higher performance machines and more
storage to be built at less cost – factors
that have been instrumental in driving
the growth of the cloud.
[Image caption: Will it be possible to build data centre networks to the size needed in the future?]
34 NETCOMMS europe Volume V Issue 6 2015
Unfortunately, the benefits derived
from Moore’s Law in relation to the
compute function don’t fully apply
to the networking part of the data
centre. For instance, data throughput,
which is a key metric for the network,
is determined by transistor speed, the
number of physical pins available on a
chip and increasingly by aspects of new
fibre optic transport technologies – none
of which is helped by Moore’s Law.
Consequently, while silicon is getting
cheaper, networking costs are rising,
and this situation is compounded as the
server count grows. In the face of the
anticipated scaling required, data centre
operators are now focusing on the
network component.
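As a rough back-of-the-envelope illustration of this constraint (the figures below are assumptions for illustration, not from the article), a switch chip's port count is bounded by its aggregate I/O bandwidth, which is in turn set by the number of high-speed pins or SerDes lanes on the package and the per-lane signalling rate:

```python
# Sketch of why switch radix is bounded by chip I/O.
# All figures are assumed for illustration only.

lanes = 128            # assumed SerDes lanes available on the package
lane_rate_gbps = 25    # assumed per-lane signalling rate (Gb/s)
port_speed_gbps = 100  # assumed front-panel port speed (e.g. 100GbE)

total_io_gbps = lanes * lane_rate_gbps        # raw I/O bandwidth
max_ports = total_io_gbps // port_speed_gbps  # ports the chip can offer

print(f"{total_io_gbps} Gb/s total I/O -> at most {max_ports} ports")
```

With these assumed numbers, 3,200 Gb/s of raw I/O yields at most 32 ports of 100GbE, which is why faster transistors alone do not raise the radix: the pin count and lane rate set the ceiling.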
A large part of the problem derives
from the small size, or radix, of the basic
network building block – the electrical
CMOS switch chip. A data centre
with over 100,000 servers that must
support any-to-any server
communication requires an immense
network interconnected via a vast array
of CMOS switching chips, each
constrained to just 24 or 32 ports.
Due to the port-count limitations
of these CMOS switch chips, vast
numbers of switch nodes and
interconnections in the data centre
serve solely as intermediaries
between other switching nodes,
forming an intricate, spider-web-like
network fabric. These fabric switching
nodes and their interconnections are
an expensive, power-hungry necessity
dictated by the small radix of the
CMOS switch chips.
The radix of the basic switch
building block directly determines the
total number of switching nodes
required in a data centre of a given
size. It dictates how individual
switches, and the data centre as a
whole, are constructed, and it limits
scalability through cost, power
consumption and complexity.
Connecting vast numbers of servers
now takes a huge network fabric, and,
even worse, the number of intermediary
switching nodes and interconnections
required grows as a multiple of the
server count.
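To see concretely how radix drives the size of the fabric, consider a sketch based on the standard three-tier fat-tree (Clos) topology widely used in data centre networks. This is an illustrative model, not necessarily the exact architecture the article has in mind:

```python
# Sizing a k-ary fat-tree (Clos) fabric built from k-port switches
# (k even). Standard fat-tree construction; illustrative only.

def fat_tree_sizing(k):
    """Return (servers, switches) for a k-ary fat-tree."""
    assert k % 2 == 0
    servers = k ** 3 // 4        # k pods * (k/2 edge) * (k/2 hosts each)
    edge = k * (k // 2)          # k pods, k/2 edge switches per pod
    aggregation = k * (k // 2)   # k pods, k/2 aggregation switches per pod
    core = (k // 2) ** 2         # (k/2)^2 core switches
    return servers, edge + aggregation + core

for k in (24, 32, 64):
    servers, switches = fat_tree_sizing(k)
    print(f"radix {k}: {servers} servers need {switches} switches")
```

A 24-port building block supports only 3,456 servers per fabric, and the switches-per-server overhead scales as 5/k, so doubling the radix halves the number of switch chips needed per server – which is exactly why the small radix of today's CMOS switch chips is the pressure point.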
Spiralling data centre costs and
power consumption are limiting factors
that have been well documented and
are serious issues. However, it is the