The Doppler Quarterly Summer 2018 | Page 26

That leaves Container Orchestration and Serverless – two legitimate platforms that are positioned to handle the demands cloud native imposes on IT environments, now and in the future.
Container Orchestration — These platforms – such as Kubernetes, Swarm and Mesos – give developers power they never had on PaaS or other conventional development platforms. They can build portable applications and run them anywhere, without having to reconfigure and redeploy for each environment.
This capability gives developers a tremendous amount of flexibility and control over which exact image versions to deploy, and where. They can essentially oversee the whole infrastructure – giving them the final say over runtimes, reusability of images and the movement of containerized apps to the cloud.
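The version pinning and portability described above can be sketched as a Kubernetes manifest; all of the names and the image tag below are hypothetical, but the same file can be applied unchanged to any conformant cluster, on premises or in any cloud:

```yaml
# Minimal sketch of a Kubernetes Deployment pinning an exact image tag.
# Applied with: kubectl apply -f deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
      - name: orders-api
        image: registry.example.com/orders-api:1.4.2  # exact version, not :latest
        ports:
        - containerPort: 8080
```

Because the manifest names a specific image version rather than a floating tag, the developer, not the platform, decides exactly what runs and where.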
The downside to Container Orchestration? It introduces a new level of complexity to the process. Building a highly available Kubernetes cluster is a complicated task. Running container orchestrators also requires developers to pay attention to things like service discovery, load balancing, capacity management, monitoring, logging, version upgrades and other common services. So, developers will have their work cut out for them, or else they will have to rely on managed container services, such as EKS, AKS, GKE or Fargate, which come with a certain degree of vendor lock-in and versions that lag behind the latest releases.
Serverless — Serverless platforms involve much less hands-on care than container orchestrators. Using tools such as AWS Lambda, Azure Functions or Apache OpenWhisk, development teams can write logic as small pieces of code that respond to specific events. Serverless is essentially a managed service. Developers can focus on applications that respond to triggers, and let the platform take care of all the incidentals – autoscaling, patching, elastic load balancing, etc.
This is great for developers who want to leverage a pay-as-you-go model, which charges only for the time that code is actually running. It works well for event-driven and unpredictable workloads, such as IoT, big data and messaging. Middleware layers can be optimized to improve application performance over time, and serverless also allows for easy integration with third-party APIs and plug-ins.
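To make the event-driven model concrete, here is a minimal sketch of an AWS Lambda-style handler in Python. The event shape mimics an S3 "object created" notification; the bucket keys and the function name are hypothetical. The platform invokes the function only when the trigger fires, so there is no server for the developer to manage:

```python
import json

def handler(event, context):
    """Respond to an S3-style event: collect the object keys that
    triggered this invocation and return a summary response."""
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records if "s3" in r]
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": keys}),
    }

# Local invocation with a fabricated event (context is unused here):
sample_event = {
    "Records": [{"s3": {"object": {"key": "uploads/report.csv"}}}]
}
print(handler(sample_event, None))
```

The whole deliverable is the function itself; scaling, patching and load balancing are the platform's problem, which is exactly the trade described above.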
But there are some downsides. Serverless is a less mature computing model, so its samples, tools, best practices and documentation are less comprehensive and stable. It is harder to debug than other platforms. Due to the on-demand structure, revving up "cold starts" after the system sits idle can trigger problems. Serverless workload runtimes are also capped (five minutes on AWS Lambda at the time of writing); anything longer requires additional orchestration or refactoring into multiple microservices.
Arriving at a Platform Solution
So, which platform do you choose? There is no single right answer – no one-size-fits-all solution. Certain types of workloads fit better in containers, and others are more attuned to serverless environments. You can split workloads up, depending on their characteristics and the organization's needs. The important thing is to prepare for the project and ask the right questions, so you can make the best match.
Bottom Line
Serverless environments tend to be the best fit for greenfield apps, apps being moved to the cloud, event-driven workloads and anything that requires a lot of scaling. IoT apps, data streams and workloads that need file transfers are also good candidates for serverless. In addition, modern-day ops teams are looking at how to leverage serverless technology to automate tasks such as log parsing, auto-tagging, security event remediations, alarms, auto-scaling, etc. On the other hand, legacy workloads that need more controls and have to be moved around, in and out of the cloud, do better in containers. The more granular the requirements, the more inviting containers become.
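The log-parsing automation mentioned above can be sketched as a small function. This assumes a CloudWatch Logs-style subscription event, which arrives as base64-encoded, gzip-compressed JSON; the "ERROR" keyword filter is a hypothetical example of a rule an ops team might apply:

```python
import base64
import gzip
import json

def parse_log_event(event):
    """Decode a CloudWatch Logs-style subscription payload and flag
    any log messages containing "ERROR" (a hypothetical filter)."""
    payload = json.loads(
        gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    )
    return [e["message"] for e in payload["logEvents"]
            if "ERROR" in e["message"]]

# Local test with a fabricated, pre-packed event:
raw = {"logEvents": [{"message": "ERROR disk full"},
                     {"message": "INFO ok"}]}
packed = base64.b64encode(gzip.compress(json.dumps(raw).encode())).decode()
print(parse_log_event({"awslogs": {"data": packed}}))
```

Wired to a log-group trigger, a function like this runs only when new log lines arrive and costs nothing while idle, which is why these housekeeping tasks are natural serverless candidates.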
That is a general blueprint to follow . But situations vary , and organizations need to go through an evaluation process before locking down a platform strategy for cloud-native computing .