Must-Read for Beginners: The Complete Guide to Running DeepSeek-R1 Locally on an AMD Graphics Card

九九吐 · 298 views · 1 week ago

The ultimate guide to running DeepSeek-R1 locally on AMD graphics cards! 💻✨
Can't run large language models because you don't have an NVIDIA graphics card? Don't worry! Today I'll take you step by step through the entire process of running DeepSeek-R1 locally on an AMD graphics card, from environment preparation to the final run, with super-detailed hands-on instructions!

Tutorial Highlights
✅ Zero cost: no need to buy an expensive NVIDIA graphics card
✅ Runs locally: the LLM (Large Language Model) runs entirely on your own computer!
✅ AMD user friendly: optimized for AMD graphics cards, supports multiple models

Tutorial Contents
Environment preparation 🛠️

Download Ollama for AMD
Download ROCmLibs runtime library
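If you prefer to grab the files from a terminal, here is a minimal PowerShell sketch of the download step. The asset file names below are placeholders, not the real release names; pick the actual files matching your GPU's gfx codename from the two release pages listed under "Tutorial links".

```powershell
# Sketch only -- asset names are ASSUMED; check the release pages for
# the real file names matching your GPU's gfx codename.
$dl = "$env:USERPROFILE\Downloads"

# AMD-adapted Ollama installer from the likelovewant fork
Invoke-WebRequest -Uri "https://github.com/likelovewant/ollama-for-amd/releases/latest/download/OllamaSetup.exe" -OutFile "$dl\OllamaSetup.exe"

# ROCm library bundle for your GPU (replace <your-gfx-bundle> with the
# actual asset, e.g. one built for gfx1103 / Radeon 780M)
Invoke-WebRequest -Uri "https://github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU/releases/download/v0.6.1.2/<your-gfx-bundle>.7z" -OutFile "$dl\rocmlibs.7z"
```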
Installation and configuration 🔧

Install Ollama
Configure the ROCm runtime library
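The "configure" step boils down to replacing Ollama's bundled ROCm files with the ones from the ROCmLibs package. Here is a rough sketch, assuming a default per-user install location; the exact layout varies between Ollama versions, so verify the paths on your machine before copying anything:

```powershell
# ASSUMED default install path -- verify the layout of your version first.
$rocm = "$env:LOCALAPPDATA\Programs\Ollama\lib\ollama\rocm"

# Back up the stock rocblas.dll, then drop in the ROCmLibs replacement
Copy-Item "$rocm\rocblas.dll" "$rocm\rocblas.dll.bak"
Copy-Item ".\rocblas.dll" $rocm -Force

# Swap in the kernel library folder built for your gfx codename
Remove-Item "$rocm\rocblas\library" -Recurse -Force
Copy-Item ".\library" "$rocm\rocblas\" -Recurse
```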
Run DeepSeek-R1 💡

Start the model using the command line
Check whether GPU acceleration is being used (see the commands below)
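In practice these two steps are just two commands. The 7b tag is only an example; choose a DeepSeek-R1 distill size that fits your VRAM:

```powershell
# Pull (on first run) and chat with a DeepSeek-R1 distill
ollama run deepseek-r1:7b

# In a second terminal: if ROCm acceleration is working, the
# PROCESSOR column should read "100% GPU" rather than "CPU"
ollama ps
```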
Web UI operation (easier!) 🎮

Recommended tool: Page Assist extension
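Page Assist talks to the local Ollama server over its HTTP API, so before installing the extension you can quickly confirm the server is reachable (11434 is Ollama's default port):

```powershell
# Lists your locally installed models if the Ollama server is running
Invoke-RestMethod http://localhost:11434/api/tags
```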
Tutorial links
Ollama for AMD Download: https://github.com/likelovewant/ollama-for-amd
ROCmLibs Download: https://github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU/releases/tag/v0.6.1.2
DeepSeek-R1 Official Repo: https://github.com/deepseek-ai/DeepSeek-R1
Ollama Official Website: https://ollama.ai/
Special note 🔍
If you are not sure of your graphics card's shader (gfx) codename, you can look it up on TechPowerUp.
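To get the exact GPU model name to search for on TechPowerUp, one quick way on Windows is:

```powershell
# Prints the GPU name, e.g. "AMD Radeon 780M Graphics" (gfx1103)
Get-CimInstance Win32_VideoController | Select-Object -ExpandProperty Name
```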
If Ollama is still running in CPU mode, double-check that you installed the AMD-adapted build of Ollama (linked above) rather than the official release, and that the ROCm libraries were replaced correctly.
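One way to diagnose a CPU fallback is to scan the Ollama server log for GPU/ROCm detection messages (the path below is Ollama's default log location on Windows):

```powershell
# Show recent GPU/ROCm-related lines from the server log
Get-Content "$env:LOCALAPPDATA\Ollama\server.log" -Tail 100 |
  Select-String -Pattern "gpu", "rocm"
```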
End
