
GPU Passthrough Status for All Your Containers

Already Fully Configured — No Changes Needed

CT 120 (nvidia-gpu-cuda) and CT 122 (ollama) already have complete GPU passthrough config. Nothing to do there.
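
A quick way to double-check, assuming the standard Proxmox config paths used below, is to grep both files for the passthrough lines:

# On the host — both files should already contain cgroup2 allow rules and nvidia mount entries
grep -E 'nvidia|cgroup2' /etc/pve/lxc/120.conf /etc/pve/lxc/122.conf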


Containers That Need GPU Added

Based on your workloads, CT 108 (openclaw) is the only AI container that still needs the GPU. CT 130 (video.takeone.bh) could optionally benefit from it for hardware video encoding.


Exact Lines to Add
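
Both snippets below hardcode the device majors 195, 510, and 235. The NVIDIA UVM and caps majors are assigned dynamically at module load and can differ between hosts, so confirm them before copying the allow rules:

# On the host — each line of /proc/devices is '<major> <name>'
# Expect something like: 195 nvidia-frontend, 510 nvidia-uvm, 235 nvidia-caps
grep -i nvidia /proc/devices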

CT 108 — openclaw (AI assistant)

Run on the host:

cat >> /etc/pve/lxc/108.conf << 'EOF'

# GPU Passthrough
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 510:* rwm
lxc.cgroup2.devices.allow: c 235:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidia1 dev/nvidia1 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps dev/nvidia-caps none bind,optional,create=dir
lxc.environment: NVIDIA_VISIBLE_DEVICES=all
lxc.environment: NVIDIA_DRIVER_CAPABILITIES=compute,utility
lxc.environment: CUDA_DEVICE_ORDER=PCI_BUS_ID
EOF

CT 130 — video.takeone.bh (optional, for hardware video encoding)

Run on the host:

cat >> /etc/pve/lxc/130.conf << 'EOF'

# GPU Passthrough (hardware encoding)
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 510:* rwm
lxc.cgroup2.devices.allow: c 235:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidia1 dev/nvidia1 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps dev/nvidia-caps none bind,optional,create=dir
lxc.environment: NVIDIA_VISIBLE_DEVICES=all
lxc.environment: NVIDIA_DRIVER_CAPABILITIES=compute,utility,video,encode,decode
lxc.environment: CUDA_DEVICE_ORDER=PCI_BUS_ID
EOF

Note: CT 130 has unprivileged: 1. Bind mounts of device nodes still work in unprivileged containers, and the optional flag ensures the container will start even if one of the device nodes is missing on the host.
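
To apply the new entries, restart each container from the host (pct is Proxmox's container CLI; a plain stop/start guarantees the config is re-read):

pct stop 108 && pct start 108
pct stop 130 && pct start 130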


What to Do Inside Each Container After Restart

Once the container restarts, install only the matching NVIDIA userspace libraries inside it (no kernel modules; containers share the host's kernel driver):

# Inside CT 108 or 130 (Debian/Ubuntu)
# Add the same CUDA repo from the host, then:
apt install -y libnvidia-compute-580 nvidia-utils-580
# Verify:
nvidia-smi
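
If the container doesn't have the CUDA repo yet, NVIDIA's keyring package sets it up. A minimal sketch assuming Ubuntu 22.04 inside the container — swap the ubuntu2204 segment for your distro, and check that the keyring version hasn't moved on:

# Inside the container — adds NVIDIA's apt repo via the cuda-keyring package (assumes Ubuntu 22.04)
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
dpkg -i cuda-keyring_1.1-1_all.deb
apt update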

Or, if the userspace libraries are already in place (for example, from a CUDA local repo mounted or copied from the host), just verify:

nvidia-smi   # should show both RTX 3060s
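
For just the device list rather than the full table, nvidia-smi -L prints one line per GPU, which makes the two-GPU check explicit:

nvidia-smi -L   # expect two lines, one per RTX 3060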

Summary Table

CT          Name              GPU Needed  Status        Action
120         nvidia-gpu-cuda   Yes         Already done  Nothing
122         ollama            Yes         Already done  Nothing
108         openclaw          Yes         Missing       Add config above
130         video.takeone.bh  Optional    Missing       Add config above if you want HW encoding
All others  web/DB/network    No          N/A           Skip

The cgroup device majors to remember for this host: 195:* (the core NVIDIA nodes: nvidia0/nvidia1, nvidiactl, nvidia-modeset), 510:* (nvidia-uvm, used by CUDA), 235:* (nvidia-caps, used for MIG).