The Future of LLMs: Smaller, Faster, Smarter with Lili Mou
Discover the secret to training AI with less data! 🤫
On this episode of Approximately Correct, we talk with Lili Mou about the challenges of training large language models and how his research on Flora reduces the memory footprint of training.
Watch more episodes of Approximately Correct here - https://www.youtube.com/playlist?list=PLKlhhkvvU8-ZWnapBRKKwWJnKhLrvIm1S