Monitoring LLMs in Production with Hugging Face and WhyLabs

WhyLabs 458 1 year ago

Workshop links:
- WhyLabs sign-up: https://whylabs.ai/free
- Code: https://bit.ly/whylabs-hf-ipynb
- LangKit GitHub (give us a star!): https://github.com/whylabs/langkit
- Slack group: http://join.slack.whylabs.ai/

This workshop will cover how to:
- Evaluate user interactions to monitor prompts and responses
- Configure acceptable limits to flag issues such as malicious prompts, toxic responses, hallucinations, and jailbreak attempts
- Set up monitors and alerts to help prevent undesirable behavior

What you'll need:
- A free WhyLabs account (https://whylabs.ai/free)
- A Google account (for saving a Google Colab notebook)

Who should attend:
Anyone interested in building applications with LLMs, AI observability, model monitoring, MLOps, and DataOps! This workshop is designed to be approachable for most skill levels. Familiarity with machine learning and Python will be useful, but it is not required to attend.

By the end of this workshop, you'll be able to apply ML monitoring techniques to your large language models (LLMs) to catch deviations and biases. Bring your curiosity and your questions. You'll leave with a new level of comfort and familiarity with LangKit, ready to take your language model development and monitoring to the next level.
