Get exclusive access to AI resources and project ideas: https://the-data-entrepreneurs.kit.com/shaw

Here, I discuss 3 ways to do model compression on LLMs (i.e. Quantization, Pruning, and Knowledge Distillation/Model Distillation) with example Python code.

Resources:
📰 Blog: https://medium.com/towards-data-science/compressing-large-language-models-llms-9f406eea5b5e?sk=2e8a9606c78e60eab725c9a5d4170299
🎥 Training the Teacher Model: https://youtu.be/4QHg8Ix8WWQ
💻 GitHub Repo: https://github.com/ShawhinT/YouTube-Blog/tree/main/LLMs/model-compression
👩🏫 Teacher Model: https://huggingface.co/shawhin/bert-phishing-classifier_teacher
🧑🎓 Student Model: https://huggingface.co/shawhin/bert-phishing-classifier_student
👾 4-bit Student Model: https://huggingface.co/shawhin/bert-phishing-classifier_student_4bit
💿 Dataset: https://huggingface.co/datasets/shawhin/phishing-site-classification

References:
[1] https://arxiv.org/abs/2001.08361
[2] https://arxiv.org/abs/1710.09282
[3] https://machinelearning.apple.com/research/model-compression-in-practice
[4] https://arxiv.org/abs/1710.09282
[5] https://arxiv.org/abs/2308.07633
[6] https://arxiv.org/abs/2402.17764
[7] https://arxiv.org/abs/1710.01878
[8] https://arxiv.org/abs/1503.02531
[9] https://crfm.stanford.edu/2023/03/13/alpaca.html
[10] https://arxiv.org/abs/2305.14314
[11] https://www.researchgate.net/publication/372248458_ChatGPT_in_the_Age_of_Generative_AI_and_Large_Language_Models_A_Concise_Survey

--

Homepage: https://www.shawhintalebi.com

Intro - 0:00
"Bigger is Better" - 0:40
The Problem - 1:35
Model Compression - 2:14
1) Quantization - 3:11
2) Pruning - 5:44
3) Knowledge Distillation - 8:04
Example: Compressing a model with KD + Quantization - 11:10
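
For a quick preview of the final example (KD + Quantization), here is a minimal sketch of a knowledge distillation loss that combines soft targets from the teacher with hard labels, in the spirit of Hinton et al. [8]. The function name, temperature, and weighting are illustrative assumptions, not the exact code from the GitHub repo above.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft-target term: KL divergence between temperature-scaled distributions
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T ** 2)
    # Hard-target term: standard cross-entropy against ground-truth labels
    hard_loss = F.cross_entropy(student_logits, labels)
    # Weighted combination of the two terms (alpha is a tunable hyperparameter)
    return alpha * soft_loss + (1 - alpha) * hard_loss

# Toy usage: 4 examples, 2 classes (phishing vs. safe)
student_logits = torch.randn(4, 2, requires_grad=True)
teacher_logits = torch.randn(4, 2)
labels = torch.tensor([0, 1, 1, 0])
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```

And a sketch of loading the distilled student model in 4-bit precision with transformers + bitsandbytes. This assumes a CUDA GPU with the bitsandbytes and accelerate packages installed; the exact quantization settings may differ from the repo.

```python
import torch
from transformers import AutoModelForSequenceClassification, BitsAndBytesConfig

# NF4 4-bit quantization config (settings are illustrative)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the student model with the quantization config applied
model = AutoModelForSequenceClassification.from_pretrained(
    "shawhin/bert-phishing-classifier_student",
    quantization_config=bnb_config,
    device_map="auto",
)
```

See the GitHub repo above for the full training and evaluation code.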