Installing Ollama to Customize My Own LLM

Decoder · 40,626 views · 1 year ago

Ollama is the easiest tool for getting started running LLMs on your own hardware. In my first video, I explore how to use Ollama to download popular models like Phi and Mistral, chat with them directly in the terminal, query them over the HTTP API, and finally build a custom model based on Phi that's more fun to talk to.
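As a rough sketch of the API step: Ollama serves an HTTP endpoint (by default at http://localhost:11434 once `ollama serve` is running), and `/api/generate` streams newline-delimited JSON objects, each carrying a `"response"` fragment until a final object with `"done": true`. The helper names and the sample prompt below are my own for illustration, not from the video.

```python
import json

def build_generate_payload(model, prompt):
    """Build the JSON body for a POST to Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt})

def join_stream(lines):
    """Reassemble the streamed NDJSON fragments into one full reply string."""
    parts = []
    for line in lines:
        obj = json.loads(line)
        parts.append(obj.get("response", ""))
        if obj.get("done"):
            break
    return "".join(parts)

if __name__ == "__main__":
    # A shortened example of the stream shape Ollama emits.
    sample = [
        '{"response": "Hello", "done": false}',
        '{"response": ", world!", "done": true}',
    ]
    print(build_generate_payload("phi", "Tell me a joke."))
    print(join_stream(sample))  # Hello, world!
```

With a real server you would POST the payload to http://localhost:11434/api/generate and feed the response lines to `join_stream`.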

Watch my other Ollama videos - https://youtube.com/playlist?list=PL4041kTesIWby5zznE5UySIsGPrGuEqdB

Links:
Code from video - https://decoder.sh/videos/installing-ollama-to-customize-my-own-llm.html
Ollama - https://ollama.ai
Phi Model - https://ollama.ai/library/phi
More great LLM content - https://www.youtube.com/@matthew_berman

Timestamps:
00:00 - Intro
00:29 - What is Ollama?
00:41 - Installation
00:53 - Using Ollama CLI
02:06 - Chatting with Phi
02:41 - Ollama API
04:36 - Inspecting Phi's Modelfile
06:27 - Creating our own modelfile
07:34 - Creating the model
08:25 - Running our new model
08:48 - Closing words
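As a sketch of the customization steps in the timestamps above: a Modelfile starts `FROM` an existing model and layers on parameters and a system prompt. The temperature value and system prompt below are illustrative stand-ins, not the exact ones from the video.

```
FROM phi

# Nudge the model toward more varied, playful output (value is illustrative)
PARAMETER temperature 1

SYSTEM """
You are a cheerful assistant who answers every question with enthusiasm
and the occasional pun.
"""
```

You would then build and run it with `ollama create fun-phi -f Modelfile` followed by `ollama run fun-phi` (the name `fun-phi` is hypothetical).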
