
VMware ESXi PCI Passthrough via esxcli: Complete CLI Guide (GPU, NIC, NVMe)

PCI passthrough (DirectPath I/O) lets a VM own a physical PCIe device — GPU, NIC, or NVMe — bypassing the hypervisor for near-native performance. The key command is esxcli hardware pci pcipassthru list, which shows every device and its passthrough state directly from the ESXi shell.

Table of Contents

  1. Prerequisites
  2. Step 1 — List All PCI Devices
  3. Step 2 — Check Passthrough Status
  4. Step 3 — Enable Passthrough
  5. Step 4 — Verify and Reboot
  6. Step 5 — Assign Device to a VM
  7. Auto-Enable Script (Fixes Reboot Deactivation Bug)
  8. Advanced VM Configuration
  9. AMD vs Intel IOMMU Notes
  10. Common Errors and Fixes

Prerequisites

Before enabling PCI passthrough on ESXi you need:

  • ESXi 6.5 or later (6.7 and 7.x recommended)
  • VT-d / AMD-Vi enabled in BIOS/UEFI
  • IOMMU enabled in the ESXi boot configuration
  • SSH access to the ESXi host or access to the DCUI shell
  • The target VM must be powered off when adding a passthrough device
  • Memory reservation set to 100 % of VM RAM (hard requirement)

Check that VT-d is active before proceeding:

esxcli system settings kernel list -o iovDisableIR

If iovDisableIR is FALSE, interrupt remapping is enabled, which is the correct default on most platforms. Some AMD systems need it disabled as a workaround; see the AMD notes below.


Step 1 — List All PCI Devices

Start by listing every PCIe device visible to ESXi:

esxcli hardware pci list

The output contains the slot address (e.g., 0000:03:00.0), vendor ID, device ID, device name, and class. Identify the slot address of the device you want to pass through — you will need it for all subsequent commands.

Useful filtering examples:

# Find NVIDIA GPUs
esxcli hardware pci list | grep -i nvidia

# Find NVMe controllers
esxcli hardware pci list | grep -i nvme

# Find Intel NICs
esxcli hardware pci list | grep -i "intel.*ethernet"

Note the full slot address in the format SSSS:BB:DD.F (segment:bus:device.function), for example 0000:03:00.0.
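Slot addresses are easy to mistype. As a quick sanity check, here is a small POSIX-shell sketch (is_pci_address is a hypothetical helper, not an esxcli command) that validates the format before you use an address in later steps:

```shell
#!/bin/sh
# Hypothetical helper: verify a string matches the SSSS:BB:DD.F
# slot-address format (hex segment/bus/device, function 0-7).
is_pci_address() {
    echo "$1" | grep -Eq '^[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}\.[0-7]$'
}

is_pci_address "0000:03:00.0" && echo "valid"
is_pci_address "03:00.0"      || echo "missing segment prefix"
```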


Step 2 — Check Passthrough Status with esxcli hardware pci pcipassthru list

Use esxcli hardware pci pcipassthru list to see which devices are passthrough-capable and which are already enabled:

esxcli hardware pci pcipassthru list

Sample output:

Address       Active  ConfiguredPassthru  PassthruCapable  VendorName        DeviceName
------------  ------  ------------------  ---------------  ----------------  --------------------------
0000:03:00.0  false   false               true             NVIDIA            GA102 [GeForce RTX 3090]
0000:03:00.1  false   false               true             NVIDIA            GA102 High Definition Audio
0000:04:00.0  false   false               true             Samsung           NVMe SSD Controller
0000:05:00.0  false   false               false            Intel             I350 Gigabit NIC

Key columns:

  • Active: true if the device is currently passed through to a running VM
  • ConfiguredPassthru: true if passthrough is enabled and will persist after reboot
  • PassthruCapable: true if the device hardware supports passthrough

If PassthruCapable is false, the device cannot be passed through at all. A device showing PassthruCapable = true but ConfiguredPassthru = false is a candidate for enablement.
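The same column test can be scripted. The awk field numbers below assume the column order shown in the sample output (Address, Active, ConfiguredPassthru, PassthruCapable); candidates is a hypothetical helper:

```shell
#!/bin/sh
# Print addresses that are PassthruCapable but not yet configured.
# On a live host, pipe in: esxcli hardware pci pcipassthru list
candidates() {
    awk '$2 == "false" && $3 == "false" && $4 == "true" {print $1}'
}

# Demo against captured sample output:
candidates <<'EOF'
0000:03:00.0  false  false  true   NVIDIA   GA102
0000:05:00.0  false  false  false  Intel    I350
EOF
# prints 0000:03:00.0
```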


Step 3 — Enable Passthrough

Once you have the slot address from esxcli hardware pci pcipassthru list, enable passthrough with:

esxcli hardware pci pcipassthru set --device-id=0000:03:00.0 --enable=true --apply-now

For a GPU that has an associated audio controller (function .1), enable both:

esxcli hardware pci pcipassthru set --device-id=0000:03:00.0 --enable=true --apply-now
esxcli hardware pci pcipassthru set --device-id=0000:03:00.1 --enable=true --apply-now

The --apply-now flag asks ESXi to activate the setting immediately; on ESXi 7.0 and later this can take effect without a reboot for devices that are not currently claimed by a driver. If the device is in use, a reboot is still required before assigning it to a VM.

Verify the change took effect:

esxcli hardware pci pcipassthru list | grep 0000:03:00

ConfiguredPassthru should now be true.
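Enable-then-verify can be wrapped in one function. enable_and_verify is a hypothetical helper; the awk field number assumes the column layout shown in Step 2:

```shell
#!/bin/sh
# Hypothetical wrapper: enable passthrough for one device and confirm
# the ConfiguredPassthru column (field 3) flipped to true.
enable_and_verify() {
    dev=$1
    esxcli hardware pci pcipassthru set --device-id="$dev" --enable=true --apply-now || return 1
    state=$(esxcli hardware pci pcipassthru list | awk -v d="$dev" '$1 == d {print $3}')
    [ "$state" = "true" ]
}

# On the host:
#   enable_and_verify 0000:03:00.0 && echo "configured"
```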


Step 4 — Verify and Reboot

After enabling, double-check the configuration before rebooting:

esxcli hardware pci pcipassthru list

Look for your device and confirm:

  • ConfiguredPassthru = true
  • PassthruCapable = true

Then reboot the host:

esxcli system maintenanceMode set --enable=true
reboot

After the host comes back up, run esxcli hardware pci pcipassthru list again. If ConfiguredPassthru reverted to false, see the Auto-Enable Script section below — this is a known ESXi bug with certain hardware.


Step 5 — Assign Device to a VM

Option A: vSphere UI

  1. Power off the target VM.
  2. Edit VM settings → Add other device → PCI Device.
  3. Select the passthrough device from the dropdown.
  4. Set the memory reservation to 100 % of the VM's RAM.
  5. Power on the VM.

Option B: vim-cmd (CLI)

Get the VM's vmid:

vim-cmd vmsvc/getallvms

Edit the .vmx file directly (replace /vmfs/volumes/... with the actual path):

# Get the vmx path
vim-cmd vmsvc/get.config <vmid> | grep vmPathName

Add the following lines to the .vmx file, replacing the PCI ID:

pciPassthru0.present = "TRUE"
pciPassthru0.id = "0000:03:00.0"
pciPassthru0.deviceId = "0x2204"
pciPassthru0.vendorId = "0x10de"
pciPassthru0.systemId = "<system-uuid>"

Also add the memory reservation parameter to avoid boot failures:

sched.mem.pin = "TRUE"

After editing, reload the VM configuration:

vim-cmd vmsvc/reload <vmid>
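Hand-editing goes wrong quickly if some entries already exist. Here is a sketch of an idempotent append (add_key is a hypothetical helper; the file path and values are the examples from above):

```shell
#!/bin/sh
# Append vmx keys only when missing, so repeated runs never duplicate.
# /tmp/demo.vmx and the IDs below are illustrative values from this guide.
VMX_FILE="${1:-/tmp/demo.vmx}"

add_key() {
    key=$(echo "$1" | cut -d' ' -f1)            # e.g. pciPassthru0.present
    grep -q "^${key}" "$VMX_FILE" 2>/dev/null || echo "$1" >> "$VMX_FILE"
}

add_key 'pciPassthru0.present = "TRUE"'
add_key 'pciPassthru0.id = "0000:03:00.0"'
add_key 'sched.mem.pin = "TRUE"'
```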

Auto-Enable Script

A well-known ESXi bug causes ConfiguredPassthru to revert to false after a host reboot on certain hardware combinations (frequently seen with AMD Radeon, some NVIDIA cards, and NVMe controllers). This script re-enables passthrough automatically at each boot by hooking into /etc/rc.local.d/local.sh.

The Script

#!/bin/sh

# /etc/rc.local.d/local.sh
# Auto-enable PCI passthrough for devices that reset on reboot
# Add the PCI slot addresses you want to keep enabled below.

PASSTHRU_DEVICES="0000:03:00.0 0000:03:00.1 0000:04:00.0"

for DEV in $PASSTHRU_DEVICES; do
    CURRENT=$(esxcli hardware pci pcipassthru list | grep "^${DEV}" | awk '{print $3}')
    if [ "$CURRENT" != "true" ]; then
        echo "Re-enabling passthrough for $DEV"
        esxcli hardware pci pcipassthru set --device-id="${DEV}" --enable=true --apply-now
    fi
done

exit 0

Installation

# Open the existing local.sh for editing
vi /etc/rc.local.d/local.sh

# Paste the script content before the final 'exit 0' line
# Save with :wq

# Make executable
chmod +x /etc/rc.local.d/local.sh

# Test without rebooting
/etc/rc.local.d/local.sh

# Persist across ESXi image updates
/sbin/auto-backup.sh

The auto-backup.sh call is essential: ESXi keeps /etc on a RAM disk and only persists it during its periodic configuration backups, so without an immediate backup your changes to /etc/rc.local.d/ can be lost on the next reboot.

Alternative: grep approach to patch vmx files

If you need to ensure the vmx always has the right passthrough settings, append the missing line after each reboot:

VMX_FILE="/vmfs/volumes/datastore1/myvm/myvm.vmx"

# Ensure sched.mem.pin is set; append it only if it is missing
grep -q 'sched.mem.pin' "$VMX_FILE" || \
    echo 'sched.mem.pin = "TRUE"' >> "$VMX_FILE"

Advanced VM Configuration

For GPU passthrough (especially high-VRAM cards like RTX 3090, A100, or AMD RX 7900 XTX), the default 32-bit MMIO window is too small. Add the following parameters to the VM's .vmx file:

# Enable 64-bit MMIO for GPUs with more than 4 GB VRAM
pciPassthru.use64bitMMIO = "TRUE"

# Set the 64-bit MMIO size in GB — use VRAM size * 2 as a rule of thumb
# For a 24 GB GPU use 64, for a 16 GB GPU use 32
pciPassthru.64bitMMIOSizeGB = "64"

# Move the PCI hole to avoid overlap with RAM
# Required when VM RAM + MMIO window exceeds 4 GB
pciHole.start = "2048"
pciHole.end = "4096"

These three settings prevent the VM from failing to boot with errors like Failed to initialize VMX or Out of memory in VMkernel when the GPU's BAR registers cannot be mapped.
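The sizing rule above (VRAM × 2, rounded up to a power of two) can be sketched as a tiny helper; mmio_size_gb is hypothetical, not a VMware tool:

```shell
#!/bin/sh
# Hypothetical helper: compute pciPassthru.64bitMMIOSizeGB from VRAM
# in GB, rounding 2 * VRAM up to the next power of two.
mmio_size_gb() {
    want=$(( $1 * 2 ))
    size=2
    while [ "$size" -lt "$want" ]; do
        size=$(( size * 2 ))
    done
    echo "$size"
}

mmio_size_gb 24   # 24 GB card -> prints 64
mmio_size_gb 16   # 16 GB card -> prints 32
```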

For NVMe passthrough, no extra MMIO configuration is typically needed, but you must ensure the datastore on that NVMe device is unmounted from the ESXi host before passing it through:

esxcli storage filesystem unmount -l <datastore-label>

AMD vs Intel IOMMU Notes

Intel VT-d

  • Enable VT-d and VT-x in BIOS.
  • In ESXi, interrupt remapping is usually enabled by default.
  • Most consumer and server Intel platforms work reliably with PassthruCapable = true.
  • Verify in BIOS that Above 4G Decoding and Resizable BAR settings are compatible with your ESXi version.

AMD AMD-Vi (IOMMU)

AMD platforms require additional attention:

# Check if AMD-Vi is visible to ESXi
esxcli system settings kernel list | grep iommu

Common AMD-specific issues:

  1. ACS (Access Control Services) may not be present on consumer AM4/AM5 boards, causing all devices to share a single IOMMU group. This makes it impossible to pass through one device without also passing through everything else in the group.

  2. Interrupt Remapping — some older AMD platforms (pre-Zen 3) require setting iovDisableIR=TRUE in the ESXi boot parameters to work around interrupt remapping issues:

esxcli system settings kernel set -s iovDisableIR -v TRUE

Note: disabling interrupt remapping reduces security. Only do this if passthrough fails with the default settings.

  3. AGESA firmware — AMD IOMMU behavior can change significantly between AGESA versions. If passthrough stops working after a BIOS update, check the AMD AGESA changelog.

  4. IOMMU grouping — on EPYC/Threadripper platforms with multiple PCIe root complexes, IOMMU groups are typically well-isolated, making passthrough much more straightforward than on desktop AM4/AM5.


Common Errors and Fixes

Gray / Grayed-Out Devices in vSphere UI

Symptom: The device appears in the passthrough list but is grayed out and cannot be selected.

Causes and fixes:

  • The device is still in use by an ESXi driver. Identify the claiming module:

    esxcli system module list | grep -i vmw_ahci
    # or for NICs:
    esxcli network nic list

    Devices used by the vmkernel (e.g., management NIC, boot disk) cannot be passed through.

  • ConfiguredPassthru is true but Active is still false — a reboot is needed.

  • The device is in the same IOMMU group as an active device. Check grouping with:

    esxcli hardware pci list | grep -E "(Address|IOMMU)"

AMD IOMMU Incompatibility

Symptom: VM fails to power on with AMD-Vi: IOMMU page fault or the passthrough device is listed as not capable.

Fix:

  1. Update BIOS to the latest AGESA version.
  2. Try setting iovDisableIR=TRUE as described above.
  3. Check if your board supports ACS — without it, you may need to pass through the entire IOMMU group.
  4. On HEDT/EPYC platforms, ensure SR-IOV is enabled in BIOS alongside AMD-Vi.

Memory Reservation Requirement

Symptom: VM with a passthrough device powers on but immediately crashes, or vSphere refuses to power on the VM with an error about memory.

Fix: PCI passthrough requires that the VM's entire memory allocation is reserved (pinned). Without this, the hypervisor cannot guarantee the physical memory addresses needed for DMA.

In the UI: VM Settings → Memory → Reserve all guest memory (All locked).

In the .vmx file:

sched.mem.pin = "TRUE"

For large VMs (64 GB+ RAM), ensure the ESXi host has sufficient free physical RAM. The entire VM RAM will be locked on power-on.

Device Not Visible Inside the VM (After Successful Passthrough)

Symptom: The device shows as active in esxcli hardware pci pcipassthru list but does not appear in the guest OS.

Checks:

  • Confirm the correct driver is installed in the guest (e.g., NVIDIA drivers for a GPU).
  • Check the guest OS device manager for unknown/error devices.
  • For Windows guests, ensure Secure Boot is disabled if using unsigned drivers.
  • Verify that pciPassthru.use64bitMMIO is set correctly for high-VRAM GPUs.

esxcli hardware pci pcipassthru set Fails with Permission Error

Symptom: esxcli hardware pci pcipassthru set returns Permission denied or Not supported.

Fix:

  • Ensure you are running commands as root on the ESXi shell, not as a read-only user.
  • If the host is managed by vCenter, some CLI operations may be restricted. Run commands directly on the ESXi host via SSH, not through vCenter's built-in terminal.