Build a private, self-hosted LLM server with Proxmox, PCIe passthrough, Ollama & NixOS
Protect your data by running a self-hosted, offline AI locally. This video shows you how to set up a self-hosted AI server with Proxmox, PCIe GPU passthrough, and NixOS.
**No cloud, no subscriptions—just full control over your AI.**
In this video I will show you how to set up and configure a self-hosted AI basecamp VM, and access it from anywhere using Tailscale.
We'll build it atop one of my favorite homelab pillars, Proxmox, and use PCIe passthrough to make our NVIDIA A4000 GPU available to the VM running NixOS.
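If you want a preview of where we land, here's a minimal sketch of what the VM's configuration.nix can look like. This assumes the A4000 has already been passed through from Proxmox, option names (like hardware.nvidia-container-toolkit.enable) can differ slightly between NixOS releases, and the hostname is just a placeholder; the full working config is in the code-snippets repo linked below.

```nix
# configuration.nix (sketch): NVIDIA driver, Docker with GPU access, Tailscale.
# Assumes the A4000 is already passed through from Proxmox, so the VM
# sees it as an ordinary PCIe device.
{ config, pkgs, ... }:

{
  # The proprietary NVIDIA driver is unfree software.
  nixpkgs.config.allowUnfree = true;

  # Load the NVIDIA driver for the passed-through A4000.
  services.xserver.videoDrivers = [ "nvidia" ];
  hardware.graphics.enable = true;   # hardware.opengl.enable on older releases
  hardware.nvidia.open = false;      # proprietary kernel module

  # Docker, plus GPU access for containers (used for the compose stack later).
  virtualisation.docker.enable = true;
  hardware.nvidia-container-toolkit.enable = true;

  # Tailscale, so the VM is reachable from anywhere on your tailnet.
  services.tailscale.enable = true;

  networking.hostName = "ai-basecamp";   # placeholder name
}
```

After a nixos-rebuild switch and a tailscale up, nvidia-smi inside the VM should list the A4000.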
Personal accounts are always free on Tailscale and can include up to 3 users and 100 devices. Get started today at https://tailscale.com/yt.
---------------------
Links:
- https://github.com/tailscale-dev/video-code-snippets/tree/main/2025-02-nix-nvidia-ollama
- https://tailscale.com/kb/1084/sharing
- https://ollama.com/
- KTZ Systems - An Epyc Homelab Monster - https://youtu.be/91dp5l44X8A
- Craft Computing - PCIe passthrough tutorial - https://www.youtube.com/watch?v=_hOBAGKLQkI
---------------------
Chapters:
03:09 - What are we doing today?
05:49 - NixOS prep
09:26 - NixOS Installation
16:00 - PCIe Passthrough
19:35 - docker compose
28:23 - OpenWebUI + Ollama
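
The exact compose file from the docker compose and OpenWebUI + Ollama chapters lives in the code-snippets repo linked above. As an alternative, here's roughly the same two-container stack expressed declaratively with NixOS's oci-containers module; the image tags, ports, volume paths, and the CDI --device flag are my assumptions, not necessarily what's shown on screen:

```nix
# Declarative sketch of the Ollama + Open WebUI stack (not the video's exact file).
{
  virtualisation.oci-containers = {
    backend = "docker";
    containers = {
      ollama = {
        image = "ollama/ollama:latest";
        ports = [ "11434:11434" ];
        volumes = [ "/var/lib/ollama:/root/.ollama" ];
        # Hand the passed-through GPU to the container via CDI
        # (needs hardware.nvidia-container-toolkit.enable from earlier).
        extraOptions = [ "--device=nvidia.com/gpu=all" ];
      };
      open-webui = {
        image = "ghcr.io/open-webui/open-webui:main";
        ports = [ "3000:8080" ];
        environment = {
          # Point the UI at Ollama through the host gateway (assumed layout).
          OLLAMA_BASE_URL = "http://host.docker.internal:11434";
        };
        extraOptions = [ "--add-host=host.docker.internal:host-gateway" ];
        volumes = [ "/var/lib/open-webui:/app/backend/data" ];
      };
    };
  };
}
```

With that applied, Open WebUI answers on port 3000 of the VM, and since the machine is on your tailnet you can open it from any of your devices, or share it with a friend using node sharing (see the sharing KB link above).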
===