MENU

Fun & Interesting

Current state-of-the-art on LLM Prompt Injections and Jailbreaks

WhyLabs 308 10 months ago
Video Not Working? Fix It Now

Keeping up with new developments in AI and LLMs is important for any organization with AI-powered offerings. This is even more true in areas of safety, such as malicious attacks and use. Issues arise when third-party LLMs change and block your legitimate LLM use cases as well as when your users attempt to take actions that are not appropriate. In this workshop, we'll take a deeper look at two recent LLM developments related to prompt injections and jailbreaks that are relevant to industry professionals considering these models for production.

Comment