“My opponent's reasoning reminds me of the heathen, who, being asked on what the world stood, replied, ‘On a tortoise.’ ‘But on what does the tortoise stand?’ ‘On another tortoise.’ With Mr. Barker, too, there are tortoises all the way down.”
(Vehement and vociferous applause.)
— Second Evening: Remarks of Rev. Dr. Berg
You may have heard of containers and think of them as yet another way to virtualize and decouple applications from hardware. But, more importantly, containers represent another layer of an ever-evolving stack of tools and technologies that most modern cloud native application developers use to speed the delivery of value to customers. Eric Pearson, the CIO of InterContinental Hotels Group, sums it up well:
"The battle is no longer about large organizations outperforming the small; it's about the fast beating the slow."
As anyone who has built cloud native applications knows, moving to the cloud is not about optimizing cost. It is instead about the speed and agility of delivery. Containers are reshaping the way modern applications are built. They represent the tip of the iceberg, and open source is fueling innovation like never before. As an IT professional, you should understand why containers are important, how they relate to virtualization, and what ecosystem is being built on the shoulders of giants.
VMs and Containers
Virtual machines (VMs) have revolutionized the IT industry. Significant offerings include the VMware ESX hypervisor; the Linux KVM hypervisor, which is the foundation of OpenStack; Microsoft Hyper-V; and the Xen hypervisor, which is largely the foundation of the AWS IaaS platform. These have allowed organizations to run an entire distributed operating environment independent of the underlying hardware. But virtualization is nothing new. For decades, mainframes have employed virtualization, with time-sharing systems developed by IBM in the late 1960s and early 1970s. Before then, mainframe computers were single-use systems. But innovative products, such as the IBM System/360 and System/370 and the CP/CMS time-sharing operating system, brought breakthrough technologies to market and marked the shift from single-use computers to multi-user, multi-tasking systems. Then the introduction of x86 processors with MMUs and the commercialization of hypervisor technology by VMware brought mainframe-style virtualization to the masses.
Docker and container technology are now all the rage, but containers are often mistaken for replacements for virtual machines. On the contrary, containers can run happily on any operating system, whether that OS is running in a VM or booted on a bare-metal server. Containers are instead a method for decoupling the application runtime environment and its dependencies from the underlying OS kernel; in doing so, they address the issue of software portability (see Figure 1). With containers, you build once and run anywhere.
At its core, Docker builds on the Linux kernel primitives popularized by Linux Containers (LXC). A Linux container is composed of cgroups, which provide limit controls for a set of resources (memory limits, CPU prioritization, etc.), and namespaces. Together, these controls give a set of processes its own sandbox through constructs like PID isolation, a rooted file system (i.e., a writable snapshot chroot'd into the container), and private user/group ID spaces.
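These kernel constructs can be observed directly: on a Linux host, the kernel exposes each process's namespace memberships as symlinks under /proc/<pid>/ns, and its cgroup assignment in /proc/<pid>/cgroup. A minimal sketch, assuming a Linux system (the paths below do not exist on macOS or Windows):

```python
import os

# Every process belongs to one namespace of each kind. The symlink
# targets encode namespace identity, e.g. "pid:[4026531836]"; two
# processes share a namespace exactly when these identifiers match.
for kind in ("pid", "mnt", "uts", "net", "ipc"):
    print(kind, "->", os.readlink(f"/proc/self/ns/{kind}"))

# /proc/self/cgroup lists the control groups this process is accounted
# to; resource limits (memory, CPU shares, etc.) attach to these groups.
with open("/proc/self/cgroup") as f:
    print(f.read().strip())
```

Running the same script inside a container would print different namespace identifiers and a different cgroup path than on the host, which is precisely the sandboxing the kernel provides.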
On top of these Linux kernel capabilities, Docker adds an application-centric packaging system that allows you to bundle your code and all of its dependencies into a single artifact that is portable across any Docker-enabled machine. Moreover, containers can be layered and re-used, similar to objects in
SPRING 2018 | THE DOPPLER | 59