The Doppler Quarterly Spring 2018 | Page 66

Service Discovery
You need to manage the high degree of complexity that comes with these highly scalable and decoupled architectures. You no longer have a single, monolithic, straightforward application to reason about. You now have a highly distributed system with complex interactions that must be scheduled efficiently, monitored for failures and brokered to ensure conflicts do not arise between resources, such as ports and IP addresses. You can let Docker, for example, assign random ports for you, but how do other services know which ports to use? Moreover, a distributed system running across hundreds or thousands of individual computers is a highly dynamic environment. At that scale, failure is the norm rather than the exception. As physical nodes come and go, container Pods fail and get reassigned to other nodes in the cluster. Additional problems include avoiding conflicts, discovering services and ensuring efficient scheduling.
Service discovery tools coordinate all these services without requiring tedious manual configuration across the entire cluster. At its core, a discovery service is a persistent key-value store, which itself must be scalable and fault tolerant so that it does not become a bottleneck or single point of failure. Service discovery tools can be built on top of distributed key-value stores, such as ZooKeeper and etcd, where services register themselves so that other services can find their endpoints. Consul by HashiCorp is another option: a batteries-included service discovery system that is horizontally scalable and includes health checks, notifications and a persistent registry.
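The core idea behind these tools, services registering their endpoints in a key-value store that others can query, can be illustrated with a minimal sketch. This is a hypothetical in-memory registry for illustration only, not a client for etcd or Consul; real systems replicate the store across nodes and refresh registrations with heartbeats.

```python
import time


class ServiceRegistry:
    """Toy key-value service registry (illustrative, not production code).

    Entries carry a TTL so that services which stop refreshing their
    registration eventually disappear from lookups, mirroring how
    discovery systems age out dead endpoints.
    """

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        # service name -> (host, port, registration timestamp)
        self.entries = {}

    def register(self, name, host, port):
        # A service announces its endpoint; a real registry would
        # persist this in a replicated, fault-tolerant store.
        self.entries[name] = (host, port, time.time())

    def lookup(self, name):
        # Return the endpoint if it exists and has not expired.
        entry = self.entries.get(name)
        if entry is None:
            return None
        host, port, registered_at = entry
        if time.time() - registered_at > self.ttl:
            del self.entries[name]
            return None
        return (host, port)


registry = ServiceRegistry(ttl_seconds=30)
# A container registers the random port Docker assigned to it.
registry.register("billing", "10.0.0.7", 49153)
print(registry.lookup("billing"))  # ('10.0.0.7', 49153)
```

Other services then resolve "billing" by name rather than hard-coding a host and port, which is what makes randomly assigned ports workable at scale.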
Serverless
Serverless architectures, or what some call Function as a Service ("FaaS"), provide a platform for ephemeral services that run for short durations, triggered by well-defined events. The best-known FaaS is AWS Lambda. Instead of having a set of containers running in a reserved EC2 instance, which you pay for regardless of the activity of the code within that instance, you can deploy your code to Lambda, where it is scheduled and run only when necessary (i.e., triggered by an event queue). With Lambda, you define the event queue you want to attach to your function; when the event fires, Lambda schedules your function, runs it, which may create events in queues that trigger other functions, then terminates the function upon exit.
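A Lambda function in Python is just a handler with the `(event, context)` signature that AWS invokes; the event payload's shape depends on the trigger. The sketch below assumes a queue-style event with a `Records` list (as SQS-style triggers deliver); the field names in the fake event are illustrative, and the handler can be exercised locally without any AWS infrastructure.

```python
import json


def handler(event, context):
    """Hypothetical Lambda handler triggered by a queue event.

    'event' carries the triggering payload; the 'Records' shape here
    mirrors queue-style triggers, but the exact fields depend on the
    event source configured for the function.
    """
    messages = [record.get("body", "") for record in event.get("Records", [])]
    # Do the short-lived unit of work, then return; Lambda tears the
    # execution environment down (or freezes it) after the handler exits.
    return {"statusCode": 200, "body": json.dumps({"processed": len(messages)})}


# Local simulation of an invocation (no AWS needed):
fake_event = {"Records": [{"body": "order-123"}, {"body": "order-456"}]}
print(handler(fake_event, None))
```

Because the function holds no long-lived state, each invocation is independent, which is what lets the platform schedule it only when an event actually fires.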
The AWS Alexa service, combined with custom skills implemented in AWS Lambda, is a great practical example of FaaS. Through the AWS Alexa service, you can register your Alexa device and create a custom skill that is invoked through a series of defined voice commands, allowing you to personalize your device. For example, you can easily create a custom skill where, when prompted with “Alexa, tell me something inspiring,” Alexa replies with an inspiring quotation for the day. Such a personalization could be implemented by defining a skill in the Alexa service comprising “utterances,” or voice fragments (e.g., “tell me something inspiring”), that are associated with an intent; the intent triggers a function that selects a random quotation and ultimately returns a speech directive to Alexa. This
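The quotation skill described above might look something like the following sketch. The intent name (`InspireIntent`) and the quote list are made up for illustration, and the response envelope follows the Alexa Skills Kit JSON format for a plain-text speech directive.

```python
import random

# Illustrative quotation pool; a real skill might pull these from a data store.
QUOTES = [
    "The best way to predict the future is to invent it.",
    "Simplicity is the ultimate sophistication.",
    "Make it work, make it right, make it fast.",
]


def lambda_handler(event, context):
    """Hypothetical Lambda backing an Alexa custom skill.

    Alexa maps the spoken utterance to an intent name; this handler
    picks a random quotation for 'InspireIntent' and wraps it in a
    speech directive that Alexa reads back to the user.
    """
    intent = event.get("request", {}).get("intent", {}).get("name")
    if intent == "InspireIntent":
        text = random.choice(QUOTES)
    else:
        text = "Sorry, I did not understand that."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```

The function runs only for the moment it takes to build the response, so you pay nothing while the skill sits idle between voice commands.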

