Today, we're joined by Hamel Husain, founder of Parlance Labs, to discuss the ins and outs of building real-world products using large language models (LLMs). We kick things off discussing novel applications of LLMs and how to think about modern AI user experiences. We then dig into the key challenge faced by LLM developers: how to iterate from a snazzy demo or proof of concept to a working LLM-based application. We discuss the pros, cons, and role of fine-tuning LLMs, and when to use this technique. We cover the fine-tuning process, common pitfalls in evaluation (such as relying too heavily on generic tools and missing the nuances of specific use cases), open-source LLM fine-tuning tools like Axolotl, the use of LoRA adapters, and more. Hamel also shares insights on model optimization and inference frameworks and how developers should approach these tools. Finally, we dig into how systematic evaluation techniques can guide the improvement of your LLM application, the importance of data generation and curation, and the parallels to traditional software engineering practices.
🎧 / 🎥 Listen or watch the full episode on our page: https://twimlai.com/go/694.
🔔 Subscribe to our channel for more great content just like this: https://youtube.com/twimlai?sub_confirmation=1
🗣️ CONNECT WITH US!
===============================
Subscribe to the TWIML AI Podcast: https://twimlai.com/podcast/twimlai/
Follow us on Twitter: https://twitter.com/twimlai
Follow us on LinkedIn: https://www.linkedin.com/company/twimlai/
Join our Slack Community: https://twimlai.com/community/
Subscribe to our newsletter: https://twimlai.com/newsletter/
Want to get in touch? Send us a message: https://twimlai.com/contact/
📖 CHAPTERS
===============================
00:00 - Introduction
02:31 - Novel LLM use cases
07:15 - Fine-tuning LLMs
08:45 - When do you want to fine-tune?
13:42 - Fine-tuning trade-offs
19:03 - Fine-tuning vs continued pre-training
22:41 - Repositories
25:33 - Process of fine-tuning LLMs
41:42 - LoRA
44:31 - Inference frameworks
55:00 - Evaluation measurement for LLMs
1:03:50 - Frameworks vs tools in LLM evaluation
1:08:31 - Domain-specific vs general use cases
1:15:55 - Recap and future directions
🔗 LINKS & RESOURCES
===============================
Talks from the Mastering LLMs Conference on Building Applications - https://www.youtube.com/playlist?list=PLgIaq8VgndJtZ_G6gxyuhHGLUy9zXV9JC
Axolotl - https://github.com/axolotl-ai-cloud/axolotl
Exploring the FastAI Tooling Ecosystem with Hamel Husain - 532 - https://twimlai.com/podcast/twimlai/exploring-fastai-tooling-ecosystem-hamel-husain/
📸 Camera: https://amzn.to/3TQ3zsg
🎙️ Microphone: https://amzn.to/3t5zXeV
🚦 Lights: https://amzn.to/3TQlX49
🎛️ Audio Interface: https://amzn.to/3TVFAIq
🎚️ Stream Deck: https://amzn.to/3zzm7F5