
Kubeflow

Google developed Kubeflow, a machine learning stack that grew out of its popular TensorFlow ML framework. It is designed to simplify and scale framework-agnostic modeling, training, serving and management of containerized AI models across Kubernetes-based multicloud ecosystems, so that AI-driven intelligence can be embedded in every edge, hub and cloud service. This has made it easier to set up and productionize ML workloads on Kubernetes. It changes the game by allowing engineers to consistently deploy the entire life cycle of a model with a single framework, from setting up Jupyter Notebooks and training environments to packaging and serving the trained models in production. Kubeflow abstracts the underlying resources, so the same deployment works in any environment.

Figure 1: Machine Learning Using Kubeflow (model development tools, training frameworks such as TensorFlow, PyTorch and MxNet, and serving frameworks running on GPU-backed training and serving infrastructure, fed by data sources including Amazon S3, Azure Blob Storage, RDS and IoT devices)
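
To make the "single framework for the whole life cycle" point concrete, the following is a minimal sketch of a two-step training-and-serving pipeline written with the Kubeflow Pipelines (kfp) Python SDK in its v1 style. The container images, script names and the S3 data path are placeholder assumptions, not taken from the article; a real pipeline would reference your own images and storage.

# Minimal Kubeflow Pipelines sketch (kfp v1-style API); images and paths are placeholders.
import kfp
from kfp import dsl, compiler


def train_op(data_path: str) -> dsl.ContainerOp:
    # Containerized training step; writes the exported model path to a file output.
    return dsl.ContainerOp(
        name="train-model",
        image="example.registry/train:latest",          # hypothetical training image
        command=["python", "train.py"],
        arguments=["--data", data_path, "--model-dir", "/mnt/model"],
        file_outputs={"model": "/mnt/model/export_path.txt"},
    )


def serve_op(model_path: str) -> dsl.ContainerOp:
    # Containerized serving step; deploys the trained model behind an endpoint.
    return dsl.ContainerOp(
        name="serve-model",
        image="example.registry/serve:latest",           # hypothetical serving image
        command=["python", "deploy.py"],
        arguments=["--model", model_path],
    )


@dsl.pipeline(name="train-and-serve", description="Train a model, then serve it.")
def train_and_serve(data_path: str = "s3://example-bucket/training-data"):
    train = train_op(data_path)
    serve_op(train.outputs["model"])  # serving runs after training, consuming its output


if __name__ == "__main__":
    # Compile the pipeline into a package that can be uploaded to a Kubeflow cluster.
    compiler.Compiler().compile(train_and_serve, "train_and_serve.yaml")

The compiled package can be uploaded through the Kubeflow Pipelines UI or submitted programmatically with kfp.Client(), and because each step is a container, the same pipeline runs unchanged on any Kubernetes cluster where Kubeflow is installed, which is the portability described above.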