In this video, I will show you how to create a dataset for fine-tuning Llama-2 using the Code Interpreter within GPT-4. The dataset will teach the model to generate a prompt from a given concept. We will then structure the dataset in the proper format to fine-tune a Llama-2 7B model using the HuggingFace autotrain-advanced package.
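For reference, autotrain-advanced's SFT trainer typically expects a CSV with a single "text" column. Below is a minimal sketch of that structure; the example concept/prompt pairs, the instruction template, and the file name are illustrative assumptions, not the exact ones used in the video:

import pandas as pd

# Hypothetical concept -> prompt pairs; in the video these are generated with GPT-4's Code Interpreter.
pairs = [
    ("a futuristic city", "A sprawling neon-lit metropolis at dusk, ultra-detailed, cinematic lighting"),
    ("a cozy cabin", "A snow-covered wooden cabin in a pine forest, warm light in the windows, 35mm photo"),
]

# Wrap each pair in an instruction-style template and write it to the "text" column.
rows = [{"text": f"### Human: Create a prompt for: {concept} ### Assistant: {prompt}"}
        for concept, prompt in pairs]
pd.DataFrame(rows).to_csv("train.csv", index=False)

Training is then roughly a one-liner such as: autotrain llm --train --model meta-llama/Llama-2-7b-hf --data_path . --use_peft --use_int4 --trainer sft --project_name llama2-prompts (flag names vary between autotrain-advanced versions; see the one-liner video linked below).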
Happy learning :)
#llama2 #finetune #llm
▬▬▬▬▬▬▬▬▬▬▬▬▬▬ CONNECT ▬▬▬▬▬▬▬▬▬▬▬
☕ Buy me a Coffee: https://ko-fi.com/promptengineering
🔴 Support my work on Patreon: Patreon.com/PromptEngineering
🦾 Discord: https://discord.com/invite/t4eYQRUcXB
▶️️ Subscribe: https://www.youtube.com/@engineerprompt?sub_confirmation=1
📧 Business Contact: engineerprompt@gmail.com
💼Consulting: https://calendly.com/engineerprompt/consulting-call
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
LINKS:
One-liner fine-tuning of Llama2: https://youtu.be/LslC2nKEEGU
ChatGPT as Midjourney Prompt Generator: https://youtu.be/_RSX2WKuVbc
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Timestamps:
Intro: [00:00]
Testing Vanilla Llama2: [01:20]
Description of Dataset: [02:14]
Code Interpreter: [03:24]
Structure of the Dataset: [04:56]
Using the Base Model: [06:18]
Fine-tuning Llama2: [07:25]
Logging during training: [10:36]
Inference of the fine-tuned model: [12:44]
Output Examples: [14:36]
Things to Consider: [15:40]
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
All Interesting Videos:
Everything LangChain: https://www.youtube.com/playlist?list=PLVEEucA9MYhOu89CX8H3MBZqayTbcCTMr
Everything LLM: https://youtube.com/playlist?list=PLVEEucA9MYhNF5-zeb4Iw2Nl1OKTH-Txw
Everything Midjourney: https://youtube.com/playlist?list=PLVEEucA9MYhMdrdHZtFeEebl20LPkaSmw
AI Image Generation: https://youtube.com/playlist?list=PLVEEucA9MYhPVgYazU5hx6emMXtargd4z