Learn how to fine-tune models with Apple MLX. This tutorial covers the ins and outs of MLX fine-tuning and helps you streamline your workflow.
Fine-tuning models with your own data is hard: everything from choosing the right model, to building a balanced dataset, to keeping the model from forgetting what it learned previously.

In this video, Chris shows you how to train your own models using Apple MLX and the Qwen 2.5 Coder family of models. He shows how to choose a model for training and what effect different model sizes have, how to build a balanced dataset, and how to remove unintended consequences from the training set. He also shows the difference between full fine-tuning and LoRA fine-tuning.
https://github.com/chrishayuk/mlx-finetune-record
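For reference, here is a minimal sketch of loading and prompting a Qwen 2.5 Coder model with mlx-lm's Python API. The model repo name and arguments are assumptions and may differ from what's used in the video; a second sketch of the dataset format follows the chapter list below.

```python
# Minimal prompting sketch with mlx-lm (pip install mlx-lm).
# The model repo name is an assumption; any MLX-converted Qwen 2.5 Coder
# checkpoint (e.g. from the mlx-community Hugging Face org) should work similarly.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen2.5-Coder-7B-Instruct-4bit")

prompt = "Write a Python function that reverses a string."
response = generate(model, tokenizer, prompt=prompt, max_tokens=200)
print(response)
```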
00:00 - Intro
01:16 - Installing MLX
01:30 - Prompting with MLX
02:35 - Qwen 2.5 Coder
05:20 - Building a dataset
09:24 - Fine-tuning a Qwen 2.5 Coder 7B model
16:35 - Comparing a 500M-parameter model
19:40 - Fine-tuning with LoRA
24:21 - Fixing dataset diversity
31:21 - Fixing model forgetfulness using mixers
39:16 - Qwen 2.5 Coder 3B
40:25 - Fine-tuning for chat
43:35 - Fusing models
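And a hedged sketch related to the dataset-building and LoRA chapters: mlx-lm's LoRA trainer reads JSONL files (train.jsonl / valid.jsonl) from a data directory. The example records and paths below are made up for illustration, and supported record formats and CLI flags may vary by mlx-lm version.

```python
# Sketch: writing a tiny train/valid split in one of the JSONL formats
# mlx-lm's LoRA trainer accepts (chat-style "messages" records shown here).
# The example content and paths are illustrative, not from the video.
import json
from pathlib import Path

examples = [
    {"messages": [
        {"role": "user", "content": "What does Apple MLX run on?"},
        {"role": "assistant", "content": "MLX is Apple's array framework for Apple silicon."},
    ]},
    {"messages": [
        {"role": "user", "content": "Reverse a string in Python."},
        {"role": "assistant", "content": "text[::-1] reverses a Python string."},
    ]},
]

data_dir = Path("data")
data_dir.mkdir(exist_ok=True)

# A real dataset needs far more (and more diverse) examples; the video covers
# balancing it and mixing in general data so the model doesn't forget.
with open(data_dir / "train.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")
with open(data_dir / "valid.jsonl", "w") as f:
    f.write(json.dumps(examples[0]) + "\n")

# Training and fusing are then driven from the mlx-lm command line, e.g.
# (flags may differ by version):
#   mlx_lm.lora --model <base-model> --train --data data
#   mlx_lm.fuse --model <base-model> --adapter-path adapters
```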