This video tutorial demonstrates how to deploy your Machine Learning (ML) model to AWS SageMaker with MLflow and the Docker Desktop application. In this course I try to explain everything in detail, without any video cuts or interruptions, including even the smallest steps you must take to finish this tutorial successfully.

To finish this tutorial you will need:
- MLflow (recommended version 1.18.0). You can install it by typing this command in your terminal: pip install mlflow==1.18.0
- Docker Desktop application. You can download it from the official website: https://www.docker.com/products/docker-desktop
- Anaconda software, to create a conda environment with a Python 3.6 kernel. You can download it from the official website: https://www.anaconda.com/products/individual-d

In this tutorial we will use:
- MLflow: an open source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. Check the official website here: https://www.mlflow.org
- AWS Elastic Container Registry (AWS ECR): a fully managed container registry that makes it easy to store, manage, share, and deploy your container images and artifacts anywhere (https://aws.amazon.com/ecr/).
- AWS SageMaker: this service helps data scientists and developers prepare, build, train, and deploy high-quality machine learning models quickly by bringing together a broad set of capabilities purpose-built for machine learning.
- AWS IAM (Identity and Access Management): it enables you to manage access to AWS services and resources securely (https://aws.amazon.com/iam).
- AWS CLI (Command Line Interface): a unified tool to manage your AWS services (https://aws.amazon.com/cli).

The main idea of this stream is to make your ML model readable by the MLflow User Interface (MLflow UI), so that you can track model performance across experiments.
By using MLflow's functionality, you will create two Docker images: the first will live locally, and the second on AWS ECR, where a dedicated repository will be created for our image containing all the information about our ML model. We then use this image on AWS ECR to deploy our ML model to AWS SageMaker. Remember to add the required IAM roles and permissions for SageMaker and for S3, where all model artifacts will be saved. Finally, we will be able to use the model and make new predictions on new data ingested into the model from anywhere, using Python scripts.

The content of the tutorial:
0:00 - Intro
0:43 - P1. Prepare your Python virtual environment
2:26 - P2. Install dependencies in your virtual environment
5:47 - P3. Set up an AWS IAM user and AWS CLI configuration
12:29 - P4. Test that MLflow is working correctly
14:09 - P5. Adapt your ML training code for MLflow
25:24 - P6. Build a Docker image and push it to AWS ECR
33:50 - P7. Deploy the image from AWS ECR to AWS SageMaker
49:18 - P8. Use the deployed model with new data and make predictions
52:46 - Bonus: GitHub repo of this tutorial and Thank you!

At the end of this lesson, you will be able to make predictions with your ML model from anywhere via boto3, using the model inference endpoint you have built on AWS SageMaker.

The fully explained steps, with screenshots, are written up in this repo: https://github.com/vb100/deploy-ml-mlflow-aws/blob/main/README.md

If you need clarification or more detail on any step, let me know.

#mlflow #sagemaker #docker
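To give a flavor of the final step, here is a sketch of querying a deployed SageMaker endpoint with boto3. The endpoint name, region, feature columns, and sample row are all assumptions to be replaced with your own; the pandas-split JSON payload shown is the format MLflow 1.x model servers accept:

```python
# Sketch: send new data to a SageMaker endpoint served by an
# MLflow-built container. Endpoint name/region/columns are assumed.
import json

ENDPOINT_NAME = "my-mlflow-model"  # assumed endpoint name
REGION = "us-east-1"               # assumed AWS region

def build_payload(rows, columns):
    # MLflow scoring servers accept the "pandas-split" JSON layout
    return json.dumps({"columns": columns, "data": rows})

def predict(rows, columns):
    import boto3  # imported here so the sketch loads without AWS set up
    client = boto3.client("sagemaker-runtime", region_name=REGION)
    response = client.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json; format=pandas-split",
        Body=build_payload(rows, columns),
    )
    return json.loads(response["Body"].read().decode("utf-8"))

if __name__ == "__main__":
    # Hypothetical feature names and values for illustration only
    print(predict([[5.1, 3.5, 1.4, 0.2]], ["f1", "f2", "f3", "f4"]))
```

Running `predict` requires valid AWS credentials (configured in P3 with the AWS CLI) and the endpoint created in P7.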