talking point
Put to the test
Think you’re ready for Big Data and IoT? Testing times await, according to Areg Alimian at Ixia.
Rapid time-to-market is increasingly important in the rollout of new applications and services, or to put it in simpler terms: everyone wants to be first. So new architectures are planned with virtual environments and hybrid clouds and then implemented, only to find out that customers are complaining about a loss of VoIP quality, and online gamers about long ping times. Waiting for customers and users to complain is one of three basic ways to learn about the performance and resilience of your network, but certainly not the most promising. The second option is waiting for a hacker attack to paralyse your network, and that’s not popular either. The third option is called ‘testing’.
However, not all test methods are suitable for ensuring the availability of services and applications. Trying to validate performance and security without being realistic about application loads and attack techniques quickly leads to a false sense of security. Only tests based on real-world expected load conditions – and beyond what you might expect – will give reliable information about how the network and security infrastructure behaves.
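As a rough illustration of what testing at and beyond expected load can look like, the short Python sketch below ramps concurrent requests against a service and reports the error count and 95th-percentile response time at each step. The target URL, concurrency levels and overload factor are assumptions chosen for the example, not figures from Ixia or from this article.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://test-service.example/health"   # hypothetical endpoint, replace with your own
EXPECTED_PEAK = 50       # expected concurrent users at peak (assumption for the example)
OVERLOAD_FACTOR = 2      # how far beyond the expected load to push
REQUESTS_PER_WORKER = 20

def one_request(url):
    """Time a single request; return -1.0 if it fails or times out."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=5):
            return time.perf_counter() - start
    except Exception:
        return -1.0

def run_step(concurrency):
    """Drive the target with `concurrency` workers and report errors and p95 latency."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(lambda _: one_request(TARGET_URL),
                                range(concurrency * REQUESTS_PER_WORKER)))
    ok = sorted(r for r in results if r >= 0)
    errors = len(results) - len(ok)
    p95 = ok[int(len(ok) * 0.95)] if ok else float("nan")
    print(f"concurrency={concurrency:4d}  errors={errors:4d}  p95={p95:.3f}s")

if __name__ == "__main__":
    # Ramp from half the expected peak, to the expected peak, to well beyond it.
    for level in (EXPECTED_PEAK // 2, EXPECTED_PEAK, EXPECTED_PEAK * OVERLOAD_FACTOR):
        run_step(level)

In a real engagement this would be done with dedicated test tooling rather than a short script, but the principle is the same: measure under the load you expect, then keep pushing until something degrades, so the weak point is found in the lab rather than by customers.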
It’s forecast that by 2020, there will be about 50 billion devices connected to the Internet, 10 times more than there are today. Many of these devices run complex applications that need to communicate with each other around the clock. This not only automatically generates more data, but also places greater demands on the performance and availability of networks. In particular, HD video and social networking, combined with big data and IoT, have a virtually unlimited hunger for bandwidth.
Attacks are also getting bigger. In a report published in January 2016, the European Agency for Network and Information Security (ENISA) stated that the number of DDoS attacks with bandwidths over 100Gb/s had doubled in 2015, and will continue to increase.
Meeting these growing demands on infrastructure requires a massive upgrade to the data centre, ranging from migrating top-of-rack-to-server connectivity from 10 Gigabit Ethernet to 25 and 50 Gigabit Ethernet, to enhancing the core network with 100 Gigabit Ethernet technology. The expected result of this type of upgrade is significantly higher data rates with approximately the same footprint and power consumption, as well as higher server density and reduced cost per unit of bandwidth. But what guarantees do enterprises have that these expectations will be achieved under real-world conditions?
In addition, the unique characteristics of network devices, storage and security systems, coupled with the virtualisation of resources, the integration of cloud computing and SaaS, can significantly slow the introduction and delivery of new services. Ensuring the throughput needed to deliver new services anytime, anywhere requires infrastructure tests that go above and beyond standard performance tests of individual components.
Customers and internal stakeholders do not care how many packets a web application firewall can inspect per second. They only care about the application response time, which depends on a number of factors. These include the individual systems in the network and their interaction, the application-specific protocols and traffic patterns, as well as the location, and ti