How to Install Bolt.new AI Locally: Complete Guide 2024

Bring Me Ai · 9,938 views · 3 months ago

🔧 Complete guide to installing and running Bolt.new AI locally with Ollama models - Including ALL error fixes other tutorials don't cover!

⚡ Timestamps:
00:00 Intro
00:46 Installing Git
01:26 Installing Python
02:00 Installing Node.js
02:21 Installing VS Code
02:29 Installing Ollama
03:28 Installing Ollama's Model
04:30 Setting The Execution Policy
04:52 Cloning The Bolt.New Repository
05:18 Installing npm
05:40 Installing pnpm
05:50 Setting .env.example File
06:42 Creating Modelfile
07:18 Creating Custom Ollama Model
08:06 Running Bolt.New UI
09:15 How To Delete Ollama Model
10:17 Ollama Model Not Visible In Bolt.New UI FIX
10:38 Alternatives
11:06 Final Thoughts
11:41 Bonus Video

🛠️ What You'll Learn:
- Step-by-step Bolt.new AI local installation
- How to set Execution Policy
- How to fix npm and pnpm installation errors
- Proper Ollama model configuration
- Setting the .env file
- Creating a Modelfile with a larger context window
- Correct Ollama localhost address
- Fixing Ollama Models Not Appearing Inside Bolt.New UI

💡 Common Issues Solved:
- npm installation errors
- pnpm setup problems
- Ollama models not appearing in Bolt UI
- Localhost connection issues
- Modelfile configuration errors

🔗 Useful Links:
Git: https://git-scm.com/downloads/win
Python: https://www.python.org/downloads/
Node.js: https://nodejs.org/en
VS Code: https://code.visualstudio.com/
Ollama: https://ollama.com/
Ollama qwen2.5-coder: https://ollama.com/library/qwen2.5-coder
Bolt.new Repository: https://github.com/coleam00/bolt.new-any-llm.git
How to set up your personal ChatGPT:
How to run Ollama models on low VRAM: https://www.youtube.com/watch?v=FQTorLqMyMU&t

🤗 Support me on Patreon: https://bit.ly/3AOmhw7

📝 Commands:
Check Git Version: git --version

Check Python Version: python --version

Check Node.js Version: node --version

Check Ollama models: ollama list

Install qwen2.5-coder: ollama run qwen2.5-coder
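Note: ollama run pulls qwen2.5-coder on first use and then opens an interactive chat; type /bye to exit once the download finishes. If you only want to download the model without chatting, ollama pull qwen2.5-coder works too.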

Set Execution Policy: Set-ExecutionPolicy RemoteSigned
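Run this in PowerShell opened as Administrator (without a -Scope flag it changes the machine-wide policy). You can verify it afterwards with Get-ExecutionPolicy, which should print RemoteSigned.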

Clone Bolt.new Git repo: git clone https://github.com/coleam00/bolt.new-any-llm.git

Navigate to directory: cd bolt.new-any-llm

Install project dependencies: npm install

Install pnpm: npm install -g pnpm
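Note: the -g flag installs pnpm globally. On Windows, this is where the Execution Policy step above matters; with the default Restricted policy, PowerShell may refuse to run the pnpm script and report that running scripts is disabled.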

Ollama address: http://127.0.0.1:11434
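In the copied .env file, point Bolt.new at this local Ollama server with a line like the following (the variable name is taken from the repository's .env.example, so check your copy if it differs):
OLLAMA_API_BASE_URL=http://127.0.0.1:11434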

Modelfile content:
FROM qwen2.5-coder
PARAMETER num_ctx 32768
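The num_ctx parameter raises the model's context window to 32,768 tokens (Ollama's default is much smaller), which Bolt.new needs for the large prompts it sends.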

Create custom model: ollama create qwen2.5-coder-bolt -f Modelfile
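Once it finishes, run ollama list again; qwen2.5-coder-bolt should now appear alongside the base model.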

Run bolt.new: pnpm run dev
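When the dev server starts, it prints a local URL in the terminal (typically http://localhost:5173 for this Vite-based project); open that address in your browser to reach the Bolt.new UI.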

Delete an Ollama model: ollama rm [exact model name here, without the brackets]
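Example: ollama rm qwen2.5-coder-bolt removes the custom model created above.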


🔔 Subscribe for more AI development tutorials and troubleshooting guides!

#BoltAI #aitutorial #ollama
