Fine-Tune Mixtral 8x7B (Mistral's Mixture of Experts MoE) Model - Walkthrough Guide

Hi! Harper Carroll from Brev.dev here. In this tutorial video, I walk you through how to fine-tune Mixtral, Mistral's 8x7B Mixture of Experts (MoE) model, which outperforms Llama 2 70B on most tested benchmarks. Thanks, everyone, for 1K subscribers! So glad these are helpful. You rock.

Notebook: https://github.com/brevdev/notebooks/blob/main/mixtral-finetune.ipynb
My explanation of how QLoRA works: https://brev.dev/blog/how-qlora-works
Notebook to fine-tune on your own dataset: https://github.com/brevdev/notebooks/blob/main/mistral-finetune-own-data.ipynb
More AI/ML notebooks: https://github.com/brevdev/notebooks/
Join the Discord: https://discord.gg/NVDyv7TUgJ
Connect with me on 𝕏: https://x.com/HarperSCarroll
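For orientation before opening the notebook, here is a minimal sketch of what a QLoRA-style fine-tuning setup for Mixtral looks like, assuming the Hugging Face transformers, peft, and bitsandbytes libraries. The model id, LoRA hyperparameters, and target modules below are illustrative assumptions, not copied from the video's notebook; see the linked notebook and QLoRA blog post for the full walkthrough.

```python
# Sketch: load Mixtral in 4-bit and attach LoRA adapters (QLoRA-style setup).
# Assumes transformers, peft, and bitsandbytes are installed; values are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mistralai/Mixtral-8x7B-v0.1"  # assumed Hugging Face model id

# Quantize the frozen base model to 4-bit NF4 (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Prepare the quantized model for training and add small trainable LoRA adapters.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are updated during fine-tuning
```

From here, the usual flow is to tokenize your dataset and pass the wrapped model to a standard Trainer; the notebook covers those steps, including formatting your own data.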
