
Transformers^2 - Self-Adaptive LLMs | SVD Fine-tuning | End of LoRA fine tuning? | (paper explained)


We have come a long way with fine-tuning LLMs. Low-Rank Adaptation (LoRA) has become the go-to method for fine-tuning LLMs.

We also have QLoRA, which has become the established approach for fine-tuning on a limited compute budget.

But none of these methods adapt the LLM weights dynamically at inference time. This paper proposes a self-adaptive approach that outperforms LoRA fine-tuning. They call it Singular Value Fine-tuning (SVF).
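To give a feel for the idea, here is a minimal PyTorch sketch of SVF on a single weight matrix: decompose the frozen weight with SVD and train only a vector z that rescales its singular values. The variable names are illustrative, not the paper's reference implementation, and the paper trains such z-vectors with RL per task rather than the plain gradient setup shown here.

```python
import torch

# A frozen pretrained weight matrix (stand-in for one layer of an LLM).
W = torch.randn(768, 768)
U, S, Vh = torch.linalg.svd(W, full_matrices=False)  # U, S, Vh stay frozen

# SVF trains only z: one scale per singular value,
# far fewer parameters than a LoRA adapter.
z = torch.ones_like(S, requires_grad=True)

def adapted_weight():
    # W' = U @ diag(S * z) @ Vh -- the adaptation touches only
    # the singular values of the original weight.
    return U @ torch.diag(S * z) @ Vh

# Optimize z alone; U, S, Vh receive no gradients.
optimizer = torch.optim.Adam([z], lr=1e-3)
```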

This video explains all the ideas proposed in the transformers^2 paper.

RELATED LINKS
Our Blog post: https://medium.com/@AIBites/self-adaptive-dynamic-llms-are-here-7f766080c107
Transformers^2 paper: https://arxiv.org/abs/2501.06252
Transformers^2 Blog: https://sakana.ai/transformer-squared
LoRA video: https://youtu.be/X4VvO3G6_vw?si=VSos73WUzCqbFDss
Fine-tuning with LoRA: https://youtu.be/_xxGMSVLwU8?si=CPPpZK6Ipbbzfhjy

AI BITES KEY LINKS
Website: http://ai-bites.net
YouTube: https://www.youtube.com/@AIBites
Twitter: https://twitter.com/ai_bites
Patreon: https://www.patreon.com/ai_bites
Github: https://github.com/ai-bites
