MENU

Fun & Interesting

Jailbreaking LLMs - Prompt Injection and LLM Security

Mozilla Developer 3,161 lượt xem 1 year ago
Video Not Working? Fix It Now

Building applications on top of Large Language Models brings unique security challenges, some of which we still don't have great solutions for. Simon will be diving deep into prompt injection and jailbreaking, how they work, why they're so hard to fix and their implications for the things we are building on top of LLMs.Simon Willison is the creator of Datasette, an open source tool for exploring and publishing data. He currently works full-time building open source tools for data journalism, built around Datasette and SQLite.

Comment