MENU

Fun & Interesting

LLM safety and red-teaming

Evidently AI 484 lượt xem 4 months ago
Video Not Working? Fix It Now

What can go wrong with LLM apps? In this video, we cover risks in LLM applications (jailbreaks, hallucinations, off-topic use, data leaks), how to protect your app, and how to test for safety with red-teaming.

00:30 Off-topic use
00:59 LLM hallucinations
01:44 LLM jailbreaks
02:15 Data leaks
02:45 Risk assessment
03:00 Product design choices
03:48 Guardrails
04:21 Safety testing and red-teaming

LINKS:
OWASP Top 10 for LLM https://owasp.org/www-project-top-10-for-large-language-model-applications/
Learning from LLM failures https://www.evidentlyai.com/blog/llm-hallucination-examples
AI failures examples: https://www.evidentlyai.com/blog/ai-failures-examples

Sign up for LLM evaluation course: https://www.evidentlyai.com/llm-evaluations-course
Instructor: Elena Samuylova, CEO Evidently AI.
Full playlist: https://www.youtube.com/playlist?list=PL9omX6impEuMgDFCK_NleIB0sMzKs2boI

Comment