Containers have become an effective tool for rapid application migration, often used to streamline the move from on-premises to cloud infrastructure. Containers speed up migration by decoupling application dependencies from the underlying hardware, operating systems, network connectivity and systems management tools. However, while containers have become a common migration tool, they demand architectural changes to ensure data integrity, application scalability and recoverability.
Many organizations use containers as a lifeboat to accelerate the migration of applications to the cloud, creating a base architecture that can then evolve over time to become increasingly cloud-native. The challenge is that for applications to function properly in containers, they must depart from the architectural principles in use 10, 15 and 20 years ago, when many enterprise applications were first developed.
Let’s take a look at some lessons learned about data handling patterns when moving containerized applications to the cloud:
Lesson 1 - A design choice many organizations make is to “just add a volume” to containers and trick them into acting like persistent data stores. This is an anti-pattern: it treats containers almost as if they were virtual machines, and it runs counter to the container model of being lightweight, easy to move around and replaceable as containers fail, are removed and are upgraded in flight.
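To make the contrast concrete, here is a minimal docker-compose sketch. The service names, images and endpoint are illustrative assumptions, not taken from the article: the first service shows the anti-pattern (a volume turning a container into a durable data store), while the second keeps the container stateless and points it at a managed PaaS data store instead.

```yaml
services:
  # Anti-pattern: the container owns durable state via a mounted volume,
  # so it can no longer be freely replaced, rescheduled or upgraded in flight.
  orders-db:
    image: postgres:10
    volumes:
      - orders-data:/var/lib/postgresql/data

  # Preferred direction: the container stays stateless and delegates
  # persistence to a managed data store (hypothetical endpoint below).
  orders-app:
    image: example/orders-app:latest
    environment:
      DATABASE_URL: postgres://orders.example-managed-db.cloud:5432/orders

volumes:
  orders-data:
```

With the second approach, any replica of `orders-app` can be killed and recreated without data loss, because no state lives inside the container itself.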
[Figure 1: Functionality in a Containerized Application Stack — users reach user logic and business logic running in independently scalable container groups, connected through a PaaS publish/subscribe service and backed by a PaaS data store.]
SPRING 2018 | THE DOPPLER | 49