Building Machine Learning workflows with Kubernetes and Amazon SageMaker
Kubernetes and Kubeflow are becoming common tools for building ML platforms, but configuring these open-source tools for reliability and scalability requires significant investment. SageMaker is a managed service that integrates directly with Kubernetes through the SageMaker Operators for Kubernetes and Kubeflow Pipeline components. We’ll talk about the benefits of, and best practices for, adding SageMaker workloads to your ML environment, while leveraging managed services to add new features and cut costs.
You can use Amazon SageMaker to extend the capacity and capabilities of your Kubernetes cluster for machine learning workloads. If you’re part of a team that trains and deploys machine learning models frequently, you probably have a cluster set up to help orchestrate and manage those workloads.
Learning Objectives:
* Learn how to run ML experiments with Kubeflow Pipelines
* Learn how to take your ML experiments in Kubernetes to production
* Learn how to extend your Kubernetes-based ML platform with SageMaker
***To learn more about the services featured in this talk, please visit: https://aws.amazon.com/sagemaker/
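To make the operator integration concrete, here is a minimal sketch of the TrainingJob custom resource that the SageMaker Operators for Kubernetes reconcile into a managed training job, expressed as a Python dict mirroring the YAML manifest. The field names follow the operator’s manifest shape as I understand it, and the account ID, role ARN, image URI, and S3 paths are all placeholders — check the operator documentation for the exact schema.

```python
# Sketch of a SageMaker TrainingJob custom resource (field names assumed from
# the operator's manifest shape; ARNs, image URIs, and S3 paths are placeholders).
import json

training_job = {
    "apiVersion": "sagemaker.aws.amazon.com/v1",
    "kind": "TrainingJob",
    "metadata": {"name": "xgboost-mnist"},
    "spec": {
        "trainingJobName": "xgboost-mnist",
        "roleArn": "arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
        "region": "us-east-1",
        "algorithmSpecification": {
            "trainingImage": "my-account.dkr.ecr.us-east-1.amazonaws.com/xgboost:1",  # placeholder
            "trainingInputMode": "File",
        },
        "outputDataConfig": {"s3OutputPath": "s3://my-bucket/output"},  # placeholder
        "resourceConfig": {
            "instanceCount": 1,
            "instanceType": "ml.m5.xlarge",
            "volumeSizeInGB": 5,
        },
        "stoppingCondition": {"maxRuntimeInSeconds": 3600},
    },
}

# In practice you would `kubectl apply` this manifest as YAML; serializing it
# here just shows the structure the operator turns into a SageMaker job.
print(json.dumps(training_job, indent=2))
```

Once applied, the job runs on SageMaker-managed instances rather than on your cluster’s nodes, which is the capacity-extension point made above.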
Until recently, data scientists spent much of their time on operational tasks, such as ensuring that frameworks, runtimes, and drivers for CPUs and GPUs work well together. They also needed to design and build end-to-end machine learning (ML) pipelines to orchestrate complex workflows for deploying models in production. With Amazon SageMaker, data scientists can now focus on creating the best possible models while enabling organizations to easily build and automate end-to-end ML pipelines. In this session, we dive deep into Amazon SageMaker and container technologies, and we discuss how easy it is to integrate tasks such as model training and deployment into Kubernetes- and Kubeflow-based ML pipelines.
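The train-then-deploy flow described above can be sketched without any libraries: the stand-in step functions below only build the request payloads that the real SageMaker pipeline components would submit (CreateTrainingJob, CreateModel, CreateEndpoint), chained the way a pipeline DAG wires its steps. All names, images, and S3 paths are illustrative placeholders, not the components’ actual interfaces.

```python
# Library-free sketch of the end-to-end flow a pipeline delegates to SageMaker:
# train -> register model -> deploy endpoint. These functions are stand-ins for
# the real pipeline components; they only assemble illustrative payloads.

def train_step(job_name: str, image: str, output_path: str) -> dict:
    """Payload shaped like a CreateTrainingJob request (placeholders throughout)."""
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {"TrainingImage": image, "TrainingInputMode": "File"},
        "OutputDataConfig": {"S3OutputPath": output_path},
        "ResourceConfig": {"InstanceType": "ml.m5.xlarge", "InstanceCount": 1, "VolumeSizeInGB": 5},
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

def create_model_step(model_name: str, image: str, training: dict) -> dict:
    """Payload shaped like CreateModel, pointing at the training step's artifacts."""
    artifacts = training["OutputDataConfig"]["S3OutputPath"] + "/model.tar.gz"
    return {
        "ModelName": model_name,
        "PrimaryContainer": {"Image": image, "ModelDataUrl": artifacts},
    }

def deploy_step(endpoint_name: str, model: dict) -> dict:
    """Payload shaped like an endpoint config serving the registered model."""
    return {
        "EndpointName": endpoint_name,
        "ProductionVariants": [{
            "ModelName": model["ModelName"],
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
        }],
    }

# Chain the steps the way a pipeline DAG would: each stage consumes the
# previous stage's output.
training = train_step("xgb-train", "my-image:latest", "s3://my-bucket/output")
model = create_model_step("xgb-model", "my-image:latest", training)
endpoint = deploy_step("xgb-endpoint", model)
print(endpoint["ProductionVariants"][0]["ModelName"])  # prints "xgb-model"
```

The point of the sketch is the dependency chain: in a Kubeflow pipeline, each of these stages becomes a component step, and SageMaker runs the heavy lifting behind each one on managed infrastructure.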
#machinelearning #amazonwebservices #artificialintelligence #cloudguru #aws