MENU

Fun & Interesting

Open Approach to Trust & Safety: Llama Guard 3, Prompt Guard & More | Llama for Developers

AI at Meta 2,789 8 months ago
Video Not Working? Fix It Now

Download Llama 3.1 ➡️ https://go.fb.me/kbpn54 Zacharie Delpierre Coudert & Spencer Whitman from the Llama Trust & Safety team at Meta join us for a discussion on what’s new with system-level safety approaches with Llama 3 and how developers can approach building AI applications with safety in mind from the start. ### Timestamps 00:00 Intro 00:37 Evolution of LLMs and Safety Considerations 03:37 Moving from Model-Level to System-Level Safety 06:53 Modularizing Safety Tools 09:20 Purple Llama Suite and Open Safety 11:01 Llama Guard and Content Moderation 13:18 Prompt Guard, Prompt Injections and Jailbreaking 17:04 CodeShield and Secure Code Generation 21:23 Resources for Developers: Responsible Use Guide and GitHub Repos 23:02 CyberSecEval Benchmark 24:23 Conclusion ### Additional Resources • Llama Trust & Safety Tools: https://go.fb.me/nnttf9 • Trust & Safety code recipes: https://go.fb.me/bb4ro8 • PurpleLlama Github: https://go.fb.me/h4g50j • CyberSecEval 3 Research Paper: https://go.fb.me/7we26j • Expanding our open source large language models responsibly: https://go.fb.me/zjurpr - - - Subscribe: https://www.youtube.com/aiatmeta?sub_confirmation=1 Learn more about our work: https://ai.meta.com ### Follow us on social media Follow us on Twitter: https://twitter.com/aiatmeta/ Follow us on LinkedIn: https://www.linkedin.com/showcase/aiatmeta Follow us on Threads: https://threads.net/aiatmeta Follow us on Facebook: https://www.facebook.com/AIatMeta/

Comment