Low-rank Adaption of Large Language Models Part 2: Simple Fine-tuning with LoRA

Chris Alexiuk · 25,705 views · 1 year ago

In this video, I go over a simple implementation of LoRA for fine-tuning BLOOM 3b on the SQuADv2 dataset for extractive question answering!

LoRA (Low-Rank Adaptation) slashes the cost of fine-tuning large language models by freezing the pretrained weights and learning only a low-rank decomposition of each weight update. Training just these small low-rank factors, instead of the full weight matrices, cuts the number of trainable parameters and the memory footprint dramatically while retaining strong performance.
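The core idea can be sketched in a few lines of NumPy (this is an illustrative toy, not the Colab's actual PEFT code): the frozen weight W is adapted as W + (alpha / r) * B @ A, where only the small factors A and B are trained.

```python
import numpy as np

# Toy sketch of LoRA's low-rank update. The dimensions, alpha, and rank r
# here are illustrative choices, not the video's actual hyperparameters.
d_in, d_out, r, alpha = 64, 64, 4, 8

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))               # zero-init so the update starts at 0

def lora_forward(x):
    # Frozen base path plus the scaled low-rank correction.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(2, d_in))
# With B = 0, the adapted layer exactly matches the frozen layer.
assert np.allclose(lora_forward(x), x @ W.T)

# Parameter savings: r * (d_in + d_out) trainable values instead of
# d_in * d_out for full fine-tuning.
print(r * (d_in + d_out), "vs", d_in * d_out)  # 512 vs 4096
```

In practice the Colab uses the Hugging Face PEFT library to attach adapters like this to BLOOM's attention layers automatically, but the arithmetic above is all an adapter does at inference time.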

🔗 LoRA Paper: https://arxiv.org/pdf/2106.09685.pdf
🔗 Intrinsic Dimensionality Paper: https://arxiv.org/abs/2012.13255
🔗 Colab: https://colab.research.google.com/drive/1iERDk94Jp0UErsPf7vXyPKeiM4ZJUQ-a?usp=sharing

About me:
Follow me on LinkedIn: https://www.linkedin.com/in/csalexiuk/
Check out what I'm working on: https://getox.ai/
