Getting Started with ML Monitoring & AI Observability

WhyLabs · 477 views · 2 years ago

Workshop links:
- WhyLabs free sign-up: https://whylabs.ai/free
- Google Colab: https://bit.ly/ml-monitoring-fundamentals
- whylogs on GitHub (give us a star!): https://github.com/whylabs/whylogs/
- Join The AI Slack group: https://bit.ly/r2ai-slack

Have more questions about WhyLabs? Book some time with us: https://whylabs.ai/contact-us

In this workshop we’ll cover how to get started with ML monitoring, introducing basic concepts and working through hands-on data drift detection exercises.

Deploying machine learning (ML) models is only part of the journey. Monitoring data pipelines and model performance is critical to ensure AI applications are robust and responsible.

This workshop will cover:

- ML monitoring concepts
- How to detect data drift
- How to detect data quality issues

This workshop can be taken on its own or as part of the complete ML Monitoring Fundamentals series. Visit the event page for all upcoming dates.
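To make the drift bullet concrete, here is a minimal, library-free sketch of the kind of check the workshop demonstrates: compare a baseline (training-time) feature distribution against production data with a two-sample Kolmogorov–Smirnov statistic. The hands-on Colab uses whylogs profiles and its own drift algorithms; the names below (`ks_statistic`, `drift_detected`) and the 0.1 threshold are illustrative assumptions, not WhyLabs API.

```python
import bisect
import random

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the two samples' empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_sample, x):
        # Fraction of the sample that is <= x
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

def drift_detected(baseline, current, threshold=0.1):
    """Flag drift when the KS statistic exceeds a chosen threshold.
    (0.1 is an arbitrary demo value; pick thresholds per feature.)"""
    return ks_statistic(baseline, current) > threshold

random.seed(0)
baseline = [random.gauss(0, 1) for _ in range(1000)]    # training-time feature
same_dist = [random.gauss(0, 1) for _ in range(1000)]   # production, no drift
shifted = [random.gauss(1.5, 1) for _ in range(1000)]   # production, mean shifted

print(drift_detected(baseline, same_dist))  # matching distributions: no drift
print(drift_detected(baseline, shifted))    # shifted mean: drift flagged
```

In production you would compute this per feature on a schedule and alert when the statistic crosses your threshold, which is essentially what a profiling tool automates.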

The workshop series will provide participants with an understanding of the importance of ML monitoring and AI observability and equip them with the necessary tools and techniques to monitor and manage ML models and AI systems effectively.

- 101: Getting Started with ML Monitoring & AI Observability
- 102: Monitoring ML Models and Data in Production
- 103: ML Monitoring for Bias, Fairness, and Tracing
- 104: Understand ML Models with ML Explainability & Monitoring

Receive a certificate for each workshop completed!

What you’ll need to follow along:
- A modern web browser
- A Google account (for saving a Google Colab notebook)
- A free WhyLabs account (sign up at https://whylabs.ai/free)

Who should attend:
Anyone interested in AI Observability, Model monitoring, MLOps, and DataOps! This workshop is designed to be approachable for most skill levels. Familiarity with machine learning and Python will be useful, but it's not required.

By the end of this workshop series, you’ll be able to build data and AI observability into your own pipelines (Kafka, Airflow, Flyte, etc.) and ML applications to catch deviations and biases in data or ML model behavior.
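On the data-quality side, the simplest pipeline check is validating incoming records against per-column constraints (required fields, value ranges) before they reach a model. The `check_data_quality` helper and schema format below are hypothetical, kept library-free for illustration; whylogs expresses similar checks through its own constraint system.

```python
def check_data_quality(rows, schema):
    """Validate rows (list of dicts) against simple per-column constraints.
    schema maps column -> {"required": bool, "min": number, "max": number}.
    Returns a list of human-readable issue strings (empty = clean batch)."""
    issues = []
    for i, row in enumerate(rows):
        for col, rules in schema.items():
            value = row.get(col)
            if value is None:
                if rules.get("required"):
                    issues.append(f"row {i}: missing required '{col}'")
                continue  # nothing else to check on a missing value
            if "min" in rules and value < rules["min"]:
                issues.append(f"row {i}: '{col}'={value} below min {rules['min']}")
            if "max" in rules and value > rules["max"]:
                issues.append(f"row {i}: '{col}'={value} above max {rules['max']}")
    return issues

# Demo batch with two deliberate problems: a negative age and a missing income.
records = [
    {"age": 34, "income": 52000},
    {"age": -5, "income": 61000},
    {"age": 29, "income": None},
]
schema = {
    "age": {"required": True, "min": 0, "max": 120},
    "income": {"required": True, "min": 0},
}
for issue in check_data_quality(records, schema):
    print(issue)
```

Running such checks on every batch, and alerting on a rising issue count, is the manual version of what an observability platform monitors continuously.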

About the instructor:
Sage Elliott enjoys breaking down the barrier to AI observability, talking to amazing people in the Robust & Responsible AI community, and teaching workshops on machine learning. Sage has worked in hardware and software engineering roles at various startups for over a decade.

Connect with Sage on LinkedIn: https://www.linkedin.com/in/sageelliott/

About WhyLabs:

WhyLabs.ai is an AI observability platform that prevents data & model performance degradation by allowing you to monitor your data and machine learning models in production.

Do you want to connect with the team, learn about WhyLabs, or get support? Join the WhyLabs + Robust & Responsible AI community Slack: http://join.slack.whylabs.ai/
