We’ll explore many of these questions in future articles. For
now, given the impact the technology is expected to have
over the next decade, it’s worth looking more closely at
what containers are and why they’re becoming so popular.
Anybody involved in enterprise IT has at least a passing
knowledge of containers. Like their physical counterparts,
these virtual operating system configurations pack items
away for future use. They contain all the executables an IT
team needs to run everything from a small microservice like
a single HTTP endpoint to a much larger application like a
payroll program. Each one has its own binary code, libraries
and configuration files – but no operating system image.
That makes containers lighter and easier to transport than
applications in traditional hardware or VM environments.
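As a minimal sketch, a container image for a small HTTP service might be defined like this. The base image, file names and port are illustrative assumptions, not from the article; note that the slim base provides a runtime and libraries but no kernel – containers share the host's:

```dockerfile
# Hypothetical container definition for a single HTTP endpoint.
# It packages the binaries, libraries and configuration files
# the app needs, but no full operating system image.
FROM python:3.11-slim

WORKDIR /app

# Copy the application code and its configuration.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py config.yaml ./

# Serve the endpoint on port 8080.
EXPOSE 8080
CMD ["python", "app.py"]
```

Everything the service needs travels inside the image, which is why the same container runs unchanged on a laptop, a data center server or a cloud VM.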
Containers offer a wide variety of benefits. Chief among
them are speed, choice and the ability to optimize based on
the situation.
The Need for Speed
Speed, of course, is critical in today’s IT world. Moving
software quickly through the various stages of development
improves efficiency, increases productivity and allows more
time for testing and quality control. Fast processes enable
firms to get to market faster and update more frequently.
That’s the name of the game.

56 | THE DOPPLER | FALL 2019
Using containers, your teams can speed up delivery in two
ways. First, because VMs contain entire operating systems,
they take longer to boot each time they’re used. Containers
don’t need to boot an operating system; they share the host
kernel, which is already running.
Second, teams using containers can release software in
smaller segments than they can in legacy waterfall
processes. Containers also strip away layers of software
that stand between the application and the hardware that
performs the task at hand. Ideally, you want hardware that
serves the app alone. If, for instance, you have an AI app
and want to use a graphics processing unit (GPU), you want
to use that GPU as effectively as possible. The more
software sitting between the GPU and the orchestration
function, the less effectively it will work. Stripping away
unnecessary software gives you a higher density of
containers per machine, better utilization of that system
and faster processing for that particular use case.
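To make the GPU point concrete: Docker (19.03 and later, with the NVIDIA Container Toolkit installed) can pass host GPUs straight through to a container, so the app talks to the device without an extra virtualization layer in between. This is a command-line sketch assuming Docker and an NVIDIA GPU are present, not something from the article:

```shell
# Run a CUDA-enabled container with direct access to all host GPUs.
# --gpus all passes the devices through; nvidia-smi run inside the
# container reports the host GPU, confirming the thin software path.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

Contrast this with a VM, where the GPU would typically sit behind a hypervisor and a guest OS driver stack before the app could reach it.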