A parameter-efficient fine-tuning technique that uses a low-rank adapter to (1) reduce the storage required per task by shrinking the number of trainable parameters added to the network, and (2) add no inference latency, because the adapter's parameters can be merged into the existing weights rather than adding new layers.
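The two claims above can be sketched in a few lines of NumPy. This is a minimal illustration with assumed shapes and a made-up scaling factor (alpha), not the paper's training code: a frozen weight W gets a trainable low-rank update BA, and at deployment BA is folded back into W, so inference runs through a single matrix as before.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 64, 64, 4                 # frozen weight is d x k; adapter rank r << min(d, k)
W = rng.standard_normal((d, k))     # pretrained weight, frozen during fine-tuning

# Trainable low-rank factors. B starts at zero, so the adapter is a no-op
# at the start of fine-tuning (as in the LoRA paper's initialization).
A = rng.standard_normal((r, k)) * 0.01
B = np.zeros((d, r))

x = rng.standard_normal(k)
alpha = 8.0                         # illustrative scaling hyperparameter

# Fine-tuning forward pass: base path plus the low-rank update
h = W @ x + (alpha / r) * (B @ (A @ x))

# At deployment, merge the adapter into W -- no extra inference latency
W_merged = W + (alpha / r) * (B @ A)
assert np.allclose(W_merged @ x, h)

# Storage saving per task: low-rank factors vs. a full copy of W
full_params = d * k                 # 64 * 64 = 4096
lora_params = r * (d + k)           # 4 * 128 = 512
print(f"trainable params: LoRA {lora_params} vs full {full_params}")
```

With rank r = 4, the adapter stores 512 values per task instead of the 4096 a full fine-tune of this layer would require, and the merged model is architecturally identical to the original.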
RESOURCES
[1 📚] Paper that introduces LoRA: https://arxiv.org/pdf/2106.09685
[2 📚] Paper that introduced the additive adapter: https://arxiv.org/abs/2312.12148
[3 📚] Other PEFT techniques: https://arxiv.org/pdf/2312.12148
[4 📚] My video on Parameter Efficient Fine Tuning (PEFT): https://youtu.be/HcVtpLAGMXo
[5 📚] Neural networks tend to be over-parameterized for their tasks, which is why a "low rank" adapter can work without a significant loss in accuracy: https://arxiv.org/pdf/1804.08838
ABOUT ME
⭕ Subscribe: https://www.youtube.com/c/CodeEmporium?sub_confirmation=1
📚 Medium Blog: https://medium.com/@dataemporium
💻 Github: https://github.com/ajhalthor
👔 LinkedIn: https://www.linkedin.com/in/ajay-halthor-477974bb/
PLAYLISTS FROM MY CHANNEL
⭕ Deep Learning 101: https://www.youtube.com/playlist?list=PLTl9hO2Oobd_NwyY_PeSYrYfsvHZnHGPU
⭕ Natural Language Processing 101: https://www.youtube.com/playlist?list=PLTl9hO2Oobd_bzXUpzKMKA3liq2kj6LfE
⭕ Reinforcement Learning 101: https://youtube.com/playlist?list=PLTl9hO2Oobd9kS--NgVz0EPNyEmygV1Ha&si=AuThDZJwG19cgTA8
⭕ Transformers from Scratch: https://youtube.com/playlist?list=PLTl9hO2Oobd_bzXUpzKMKA3liq2kj6LfE
⭕ ChatGPT Playlist: https://youtube.com/playlist?list=PLTl9hO2Oobd9coYT6XsTraTBo4pL1j4HJ
MATH COURSES (7 day free trial)
📕 Mathematics for Machine Learning: https://imp.i384100.net/MathML
📕 Calculus: https://imp.i384100.net/Calculus
📕 Statistics for Data Science: https://imp.i384100.net/AdvancedStatistics
📕 Bayesian Statistics: https://imp.i384100.net/BayesianStatistics
📕 Linear Algebra: https://imp.i384100.net/LinearAlgebra
📕 Probability: https://imp.i384100.net/Probability
OTHER RELATED COURSES (7 day free trial)
📕 ⭐ Deep Learning Specialization: https://imp.i384100.net/Deep-Learning
📕 Python for Everybody: https://imp.i384100.net/python
📕 MLOps Course: https://imp.i384100.net/MLOps
📕 Natural Language Processing (NLP): https://imp.i384100.net/NLP
📕 Machine Learning in Production: https://imp.i384100.net/MLProduction
📕 Data Science Specialization: https://imp.i384100.net/DataScience
📕 Tensorflow: https://imp.i384100.net/Tensorflow
CHAPTERS
0:00 Introduction
1:49 Pass 1: Low Rank Matrices
8:00 Quiz 1
8:52 Pass 2: Adapters
16:38 Quiz 2
17:47 Pass 3: Low Rank Adapters
26:37 Quiz 3
27:54 Summary