Can you install Proxmox on virt-manager?
Yes, you can install Proxmox VE as a VM under KVM/QEMU using virt-manager, and it works well for labs, learning, and light nested workloads if your CPU supports nested virtualization and you configure it correctly. For production or heavy nested workloads, running Proxmox directly on bare metal is strongly preferred due to performance and complexity trade-offs. [saturnme+2]
How it works in virt‑manager
You install Proxmox like any other OS in KVM: create a new VM in virt-manager, attach the Proxmox ISO, assign CPU/RAM/disk, and complete the graphical installer inside the VM. For reasonable performance of VMs inside Proxmox, the KVM host must expose hardware virtualization extensions (VT-x/AMD-V) to the Proxmox VM (nested virtualization) and usually use a "host-passthrough" CPU type. [getlabsdone+3]
Key setup points (a command sketch follows this list):
- Enable nested virtualization on the host (kvm_intel/kvm_amd kernel module options for Intel/AMD). [pve.proxmox+1]
- In virt-manager, set the CPU model to "host-passthrough" (or similar) so Proxmox sees VT-x/AMD-V. [saturnme+1]
- Allocate enough resources: at least 2-4 cores and 4-8 GB RAM for a comfortable lab, plus disk space for Proxmox and its guests. [proxmox+1]
- Use bridged or VLAN-aware networking if you want the Proxmox guests to appear on your LAN. [ecintelligence+1]
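A minimal sketch of the first two points on an AMD host (use kvm_intel on an Intel host; the VM name "pve-lab", the sizes, and the ISO path are illustrative assumptions, not canonical values):

```bash
# Enable nested virtualization for KVM and make it persist across reboots:
echo "options kvm_amd nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
sudo modprobe -r kvm_amd && sudo modprobe kvm_amd   # fails if VMs are running; reboot instead
cat /sys/module/kvm_amd/parameters/nested            # expect 1 (or Y)

# Create the Proxmox VM with the host CPU model passed through:
virt-install \
  --name pve-lab \
  --memory 8192 \
  --vcpus 4 \
  --cpu host-passthrough \
  --disk size=64 \
  --cdrom ~/Downloads/proxmox-ve.iso \
  --osinfo debian12
```

The same settings map directly onto virt-manager's GUI: the CPU configuration screen for host-passthrough, and the usual memory/storage pages for the rest.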
Pros of Proxmox inside virt‑manager
- Safe lab and learning environment
  - Great for experimenting with Proxmox clustering, storage, backups, HA concepts, and the API without touching bare-metal installs. [pve.proxmox+1]
  - Easy to snapshot/clone the entire Proxmox node from the KVM host for rollback and testing. [getlabsdone+1]
- Convenient for laptop/desktop setups
  - Lets you run Proxmox on a single Linux workstation that already uses KVM/virt-manager, without dedicating a whole box. [servermania+1]
  - Good for short-term PoCs, training, and trying different Proxmox versions or configurations side by side. [proxmox+1]
- Networking and tooling flexibility
  - You can build complex lab topologies using libvirt networks, bridges, and tools like Open vSwitch, then have Proxmox sit inside that virtual lab (see the bridged-network sketch after this list). [ecintelligence+1]
  - Integrates with existing KVM tooling/scripts if you already manage everything with libvirt and virt-manager. [servermania+1]
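For the bridged topology mentioned above, a minimal sketch of exposing an existing host bridge to libvirt; it assumes a bridge named br0 is already configured on the host (the name and file path are illustrative):

```bash
# Define a libvirt network that hands VM NICs straight to the host bridge:
cat > /tmp/br0-net.xml <<'EOF'
<network>
  <name>br0</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
EOF
virsh net-define /tmp/br0-net.xml
virsh net-start br0
virsh net-autostart br0
```

Attach the Proxmox VM's NIC to this "br0" network in virt-manager, and the guests Proxmox runs can then take addresses directly on your LAN.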
Cons and limitations
- Performance overhead and nested virtualization issues
  - Every VM or container you run inside Proxmox becomes "nested" under KVM; without working nested hardware virtualization, KVM guests inside Proxmox can be an order of magnitude slower and nearly unusable. [pve.proxmox]
  - Even with nested extensions enabled, performance is still worse than running Proxmox directly on bare metal, especially for heavy IO or CPU-bound workloads, which is why this setup is not ideal for production. [servermania+1]
- Resource and complexity overhead
  - You incur two layers of scheduling and memory management (KVM host plus Proxmox guest), which increases overhead and tuning complexity. [pve.proxmox+1]
  - Storage stacks can become complex (e.g., LVM/ZFS on the host plus ZFS/Ceph/LVM in Proxmox), and misconfiguring caching or layering can hurt performance or reliability. [proxmox+1]
When it makes sense vs bare metal
- Good use cases: homelab, certification prep, testing Proxmox features, building small virtual labs on a laptop/desktop, and trying configurations before applying them to real servers. [pve.proxmox+1]
- Use bare metal instead for any serious production environment, performance-sensitive workloads, or scenarios where you want the full benefit of Proxmox features like clustering, live migration, and storage performance with minimal overhead. [servermania+2]
If you share your host specs (CPU, RAM, and whether VT-x/AMD-V nesting is enabled), I can suggest more concrete virt-manager settings (core counts, RAM, disk layout, and networking mode).
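If you want to check those specs first, libvirt ships a host validator, and the nested flag is visible in sysfs (kvm_amd shown; use kvm_intel on Intel hosts):

```bash
virt-host-validate qemu                      # reports KVM, IOMMU, and cgroup readiness
grep -m1 -Eo 'vmx|svm' /proc/cpuinfo         # vmx = Intel VT-x, svm = AMD-V
cat /sys/module/kvm_amd/parameters/nested    # 1 (or Y) means nesting is on
```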
Can Proxmox be installed on a USB NVME drive?
Yes, Proxmox VE can be installed onto an NVMe drive attached via a USB enclosure, as long as the system firmware (BIOS/UEFI) can see that USB device as a bootable disk and the Proxmox installer detects it. [proxmox+2]
Basic feasibility
Proxmox is essentially Debian with a custom installer, so from its perspective a USB-attached NVMe is just another block device. It will usually show up as /dev/sdX, because USB enclosures present the drive through the usb-storage/UAS driver rather than the NVMe driver. If the installer's target disk list shows it, you can select it and proceed; if your BIOS/UEFI can boot from that device, Proxmox will run directly from it. [thomas-krenn+3]
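To confirm which block device is the USB-attached NVMe before pointing an installer at it, a quick check from any live Linux environment (the output names are examples, not guarantees):

```bash
lsblk -o NAME,SIZE,MODEL,TRAN          # TRAN column shows "usb" for enclosure-attached disks
ls -l /dev/disk/by-id/ | grep -i usb   # stable identifiers for USB disks
```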
Pros of USB NVMe for Proxmox
- You get NVMe-class performance with the flexibility of an external drive, which can be handy for small home labs or for reusing hardware without opening the chassis. [learnlinux+1]
- It allows installing Proxmox on systems whose internal disks you do not want to wipe (for example, preserving an existing OS on the internal drive). [proxmox+1]
- USB NVMe enclosures usually have far better endurance and speed than cheap USB flash sticks, making them more suitable for an OS that logs heavily. [proxmox]
Cons and caveats
- Boot support is the main limitation: many older boards either cannot boot from USB-attached NVMe or are unreliable doing so; in those cases, a common pattern is putting the bootloader on a small internal disk or SD card and the Proxmox root/storage on the NVMe. [gist.github+1]
- The USB link adds another point of failure: loose cables, power issues, or enclosure glitches can take the entire hypervisor down, which is risky for anything beyond a lab. [proxmox+1]
- Proxmox writes a lot of logs and metadata; NVMe handles that fine, but running entirely over USB still adds latency and reduces overall robustness versus an internal PCIe/NVMe slot. [proxmox]
Recommended usage
Using a USB NVMe as the Proxmox system and VM storage disk is reasonable for homelab, testing, and non-critical workloads where you accept some extra risk and complexity. For production or always-on services, an internal SSD/NVMe with proper power and cabling is strongly preferred, potentially combined with a separate small device (internal SATA/SD) for the bootloader on hardware that cannot boot directly from USB NVMe. [proxmox+3]
When installing Proxmox on a 4TB USB NVMe drive, do the Virtual Machines automatically get GPU passthrough?
GPU passthrough in Proxmox is not automatic for VMs just because Proxmox is installed on a USB NVMe drive. Passthrough requires explicit configuration at the host level and for each VM you want to assign a PCI device to: you must enable IOMMU, identify the GPU's PCI device, and add that device to the VM configuration. It is not an out-of-the-box, one-click feature.
Key points to consider
- GPU passthrough is a host-side feature that needs hardware and firmware support (an IOMMU: Intel VT-d or AMD-Vi) and proper host configuration before any guest can use the GPU. You then assign the GPU to a specific VM via the PCI passthrough settings or the VM's conf file, and install the appropriate drivers inside the guest.
- A USB NVMe boot/storage path does not inherently enable or disable GPU passthrough; the capability depends on the host's IOMMU grouping and virtualization stack, not the boot device.
- If the GPU is not in its own IOMMU group, or the host lacks proper ACS support (or an ACS-override-patched kernel), passthrough may fail or be unstable. Troubleshooting typically means reading device IDs with lspci -nn, inspecting the groups under /sys/kernel/iommu_groups, adjusting kernel boot parameters, and then reconfiguring the VM's passthrough device.
What you’ll need to do next (high level)
- Verify CPU and motherboard support for VT-d/AMD-Vi and ensure IOMMU is enabled in BIOS/UEFI.
- Enable IOMMU on the Proxmox host (kernel boot parameters) and update initramfs as needed.
- Determine the exact PCI device of the GPU (and any associated audio device) with a PCI scan, then configure PCI passthrough for that device in the VM.
- Install or update guest GPU drivers and verify that the VM can initialize the GPU (and that the host preserves IOMMU isolation).
If you want, share your host hardware (CPU/motherboard model, GPU model, Proxmox version) and I can outline a concrete step-by-step plan tailored to your setup.
YES. Outline a concrete step-by-step plan tailored to your setup. I'm now using CachyOS. lspci:
00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge Root Complex
00:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge IOMMU
00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge Dummy Host Bridge
00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge GPP Bridge
00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge Dummy Host Bridge
00:02.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge GPP Bridge
00:02.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge GPP Bridge
00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge Dummy Host Bridge
00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge Dummy Host Bridge
00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge Dummy Host Bridge
00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge Internal GPP Bridge to Bus [C:A]
00:08.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge Internal GPP Bridge to Bus [C:A]
00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 71)
00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge Data Fabric; Function 0
00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge Data Fabric; Function 1
00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge Data Fabric; Function 2
00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge Data Fabric; Function 3
00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge Data Fabric; Function 4
00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge Data Fabric; Function 5
00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge Data Fabric; Function 6
00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge Data Fabric; Function 7
01:00.0 VGA compatible controller: NVIDIA Corporation AD104 [GeForce RTX 4070] (rev a1)
01:00.1 Audio device: NVIDIA Corporation AD104 High Definition Audio Controller (rev a1)
02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 600 Series Chipset PCIe Switch Upstream Port (rev 01)
03:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 600 Series Chipset PCIe Switch Downstream Port (rev 01)
03:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 600 Series Chipset PCIe Switch Downstream Port (rev 01)
03:02.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 600 Series Chipset PCIe Switch Downstream Port (rev 01)
03:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 600 Series Chipset PCIe Switch Downstream Port (rev 01)
03:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 600 Series Chipset PCIe Switch Downstream Port (rev 01)
03:05.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 600 Series Chipset PCIe Switch Downstream Port (rev 01)
03:06.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 600 Series Chipset PCIe Switch Downstream Port (rev 01)
03:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 600 Series Chipset PCIe Switch Downstream Port (rev 01)
03:0c.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 600 Series Chipset PCIe Switch Downstream Port (rev 01)
03:0d.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 600 Series Chipset PCIe Switch Downstream Port (rev 01)
06:00.0 Network controller: Intel Corporation Dual Band Wireless-AC 3168NGW [Stone Peak] (rev 10)
07:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8211/8411 PCI Express Gigabit Ethernet Controller (rev 15)
0b:00.0 Non-Volatile memory controller: MAXIO Technology (Hangzhou) Ltd. NVMe SSD Controller MAP1602 (DRAM-less) (rev 01)
0c:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] 600 Series Chipset USB 3.2 Controller (rev 01)
0d:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] 600 Series Chipset SATA Controller (rev 01)
0e:00.0 Non-Volatile memory controller: INNOGRIT Corporation NVMe SSD Controller IG5220 [RainierQX] (DRAM-less) (rev 01)
0f:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Raphael (rev c5)
0f:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Radeon High Definition Audio Controller
0f:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 19h PSP/CCP
0f:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge USB 3.1 xHCI
0f:00.4 USB controller: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge USB 3.1 xHCI
0f:00.6 Audio device: Advanced Micro Devices, Inc. [AMD] Ryzen HD Audio Controller
10:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] Raphael/Granite Ridge USB 2.0 xHCI
You can set up GPU passthrough in Proxmox, but it will not be automatic just because you're booting Proxmox from a USB NVMe or using CachyOS. Passthrough requires explicit per-VM configuration and a properly prepared host with IOMMU support and well-isolated PCI devices. Below is a concrete, step-by-step plan tailored to your AMD Raphael/Granite Ridge platform and the NVIDIA RTX 4070 visible in your lspci dump.
Step-by-step plan
- Verify hardware IOMMU readiness. Confirm the AMD IOMMU (AMD-Vi) is enabled in BIOS/UEFI, then check that the kernel actually initialized it; on AMD you are looking for AMD-Vi lines in dmesg (DMAR is the Intel equivalent). If nothing shows up, add amd_iommu=on to the kernel boot parameters, update the bootloader, and reboot. A quick check follows.
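A minimal verification sketch, assuming an AMD host:

```bash
# Confirm the kernel initialized the AMD IOMMU (AMD-Vi):
dmesg | grep -iE 'amd-vi|iommu'
# Healthy output includes lines like "AMD-Vi: Interrupt remapping enabled".
```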
- Prepare the Proxmox host for PCI passthrough. Enable IOMMU on the host (amd_iommu=on for AMD CPUs; recent kernels often enable it by default) and ensure the VFIO modules load: vfio, vfio_pci, vfio_iommu_type1. Update the initramfs and reboot; see the sketch below. This is foundational for isolating the GPU to a VM.
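A minimal host-prep sketch for a GRUB-booted Proxmox host (systems booted via proxmox-boot-tool edit /etc/kernel/cmdline and run proxmox-boot-tool refresh instead):

```bash
# 1) In /etc/default/grub, set for example:
#    GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
#    then regenerate the bootloader configuration:
update-grub

# 2) Load the VFIO modules at boot:
cat >> /etc/modules <<'EOF'
vfio
vfio_iommu_type1
vfio_pci
EOF

# 3) Rebuild the initramfs and reboot:
update-initramfs -u -k all
reboot
```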
- Identify IOMMU groups and the GPU PCI devices. Run lspci -nn to locate the RTX 4070 (01:00.0) and its HDMI audio function (01:00.1), and note the [vendor:device] IDs it reports for them. Then check the IOMMU groupings under /sys/kernel/iommu_groups (sketch below); if the GPU shares a group with other devices, you may need an ACS override or a different PCIe slot. This step is critical for stable passthrough.
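A small sketch that prints every IOMMU group with its devices; ideally 01:00.0 and 01:00.1 share a group with nothing else:

```bash
# Walk the sysfs IOMMU topology and describe each device via lspci:
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done
```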
- Configure PCI passthrough for the VM. In Proxmox, create or edit a Windows or Linux VM and add the GPU as a PCI device, together with its audio function. If a host driver claims the card first, bind it to vfio-pci via its [vendor:device] IDs so the host leaves it alone, then install the regular NVIDIA drivers inside the guest (Windows or Linux). A sketch follows.
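A hedged sketch of that step on the Proxmox host; VM ID 100 is illustrative, and 10de:xxxx/10de:yyyy are placeholders you must replace with the [vendor:device] values lspci -nn prints for 01:00.0 and 01:00.1:

```bash
# Pin the GPU to vfio-pci so no host driver claims it, then rebuild initramfs:
echo "options vfio-pci ids=10de:xxxx,10de:yyyy" > /etc/modprobe.d/vfio.conf
update-initramfs -u -k all

# Attach the GPU to the VM. pcie=1 requires a q35 machine type, and OVMF
# (UEFI, which also needs an EFI disk on the VM) is the usual choice for
# modern GPU passthrough:
qm set 100 --machine q35 --bios ovmf
qm set 100 --hostpci0 01:00,pcie=1,x-vga=1
```

Passing "01:00" without a function number maps both the VGA and audio functions to the guest as an all-functions passthrough.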
- Optimize and test. Boot the VM and verify the GPU initializes inside the guest (Device Manager in Windows, lspci in Linux; see the check below). Install the correct NVIDIA driver inside the VM and run a basic GPU test to confirm full access. If the host becomes unresponsive or the IOMMU groups are too coarse, revisit ACS overrides or the PCIe slot layout.
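A quick in-guest verification sketch (Linux guest shown):

```bash
lspci -nnk | grep -iA3 nvidia   # is the GPU present and bound to the NVIDIA driver?
nvidia-smi                      # should list the RTX 4070 once drivers are installed
```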
- Stabilize and maintain. Keep the host's PCIe topology favorable: prefer a dedicated PCIe slot for the GPU, avoid sharing its IOMMU group with critical host IO devices, and ensure robust power and cooling for both host and guest workloads.
Notes and caveats
- GPU passthrough success depends on clean IOMMU groups; if the RTX 4070 shares a group with other devices, passthrough may be unstable or impossible without aggressive ACS overrides. This remains the most common blocker. In your dump, the 4070 (01:00.0/01:00.1) sits directly behind the CPU's GPP bridge at 00:01.1, which on Raphael/Granite Ridge boards usually yields a clean, isolated group.
- Booting the host from a USB NVMe drive neither enables nor prevents GPU passthrough; it is orthogonal to this task. What matters is that the host's IOMMU stack works and the PCI devices are properly isolated and assigned to the VM.
- For AMD, you'll likely rely on amd_iommu=on (plus iommu=pt), and only fall back to pcie_acs_override=downstream,multifunction if grouping is bad; the Proxmox kernel ships the ACS override patch, so that parameter is available there. Exact parameters depend on the motherboard and firmware, so start with the standard AMD IOMMU guidance and tune by testing.
If you share the exact Proxmox version, motherboard model, and whether the RTX 4070 and its audio device land in their own IOMMU group (and whether the host reports any blocking ACS issues), I can turn this into a precise, command-by-command plan.
Can you make a downloadable pdf of this conversation?
Yes. You can save this conversation as a PDF using a few simple methods.
How to create a PDF of this chat
- Browser print-to-PDF
  - Open this chat in your browser.
  - Use Print (Ctrl/Cmd + P), then choose Destination: Save as PDF. Adjust layout (portrait/landscape) and margins as needed, then Save.
- Copy and export
  - Select the entire chat, copy it, and paste it into a word processor (e.g., Word, LibreOffice) or a notes app.
  - Export or Save As PDF from that application.
Tips for a clean PDF
- Keep speaker labels and timestamps: make sure the view you export still shows who said what (and when, if available), so the PDF preserves the conversation structure.
- For long conversations, save in chunks if the browser struggles, and enable "background graphics" in the print dialog so code blocks and shaded panels keep their styling.
Notes
- The ability to export or print as PDF is typically built into most browsers and chat interfaces; if your platform provides a dedicated “Export as PDF” option, you can use that instead for a more streamlined result. If you’d like, I can guide you step-by-step for your specific browser (Chrome, Firefox, Edge) or platform.
Citations
- Not applicable here since this is a self-contained, client-side export method and no external sources informed this guidance. If you want sources about specific export tools or extensions, I can look them up and cite them.