Keeping up with the fast-paced world of AI requires the ability to monitor and control large language models (LLMs) such as OpenAI's GPT. WhyLabs now offers enhanced tools to guard and redirect LLM-powered applications during transactions, providing a new level of security and oversight.
Join our workshop designed to equip you with the knowledge and skills to use LangKit with LangChain and OpenAI's GPT models. Guided by WhyLabs Senior Data Scientist Bernease Herman, you'll learn how to evaluate, troubleshoot, and safeguard LLMs.
This workshop will cover how to:
- Evaluate LLM behavior by monitoring prompts, responses, and user interactions
- Configure acceptable limits to flag issues such as malicious prompts, toxic responses, hallucinations, and jailbreak attempts
- Set up monitors and alerts to help prevent undesirable behavior
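The monitoring loop described above can be sketched in plain Python. This is a simplified, hypothetical illustration of threshold-based checks, not LangKit's actual API; the metric names and limit values below are invented for illustration, and the workshop covers LangKit's real built-in metrics instead.

```python
# Hypothetical sketch of threshold-based guardrails for LLM interactions.
# Metric names and limit values are invented for illustration; they are
# not LangKit's API.

# Acceptable limits for a few example metrics (all values made up).
LIMITS = {
    "toxicity": 0.2,              # flag responses scoring above this
    "jailbreak_similarity": 0.5,  # flag prompts resembling known jailbreaks
}

def check_interaction(metrics: dict) -> list:
    """Return alert messages for any metric that exceeds its limit."""
    alerts = []
    for name, limit in LIMITS.items():
        value = metrics.get(name, 0.0)
        if value > limit:
            alerts.append(f"{name}={value:.2f} exceeds limit {limit}")
    return alerts

# Example: scores for one prompt/response pair (values made up).
scores = {"toxicity": 0.35, "jailbreak_similarity": 0.1}
alerts = check_interaction(scores)  # one alert: toxicity over its limit
```

In practice, each alert would feed a monitor that notifies the team or blocks the response, which is the pattern the workshop builds with LangKit and WhyLabs.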
What you’ll need:
- A free WhyLabs account (https://whylabs.ai/free)
- A Google account (for saving a Google Colab notebook)
- An OpenAI account (for interacting with GPT)