
How to Migrate from VMware ESXi to Proxmox VE: Complete SMB Guide

Broadcom's acquisition changed the economics for every SMB on ESXi. Here is exactly how to migrate, including what broke for us and how to avoid it.

By William Bradshaw | March 23, 2026 | 12 min read

When Broadcom acquired VMware, the free ESXi hypervisor that tens of thousands of SMBs had been running without licensing cost was discontinued. Then came mandatory bundled subscriptions, per-core pricing, and the elimination of perpetual licenses. For organizations running a handful of ESXi hosts, the licensing math shifted from zero to tens of thousands of dollars annually overnight.

Proxmox VE is the most credible replacement for on-premises workloads. It is open-source, actively maintained, enterprise-ready, and runs on the same hardware you already have. This guide covers the technical process of migrating VMs from ESXi to Proxmox step by step. Not the strategic case for why, but the actual how: installing Proxmox, converting VMDK files, handling Windows VMs, and not getting surprised by the gotchas we hit in production.

If you want the strategic framing (production readiness concerns, phased migration planning, operational gaps to prepare for), read our companion article: Migrating from VMware to Proxmox: What Production Environments Actually Need. This guide assumes you have made the decision and want to execute.

Why Proxmox and Not the Alternatives

The common alternatives to ESXi for SMB on-premises workloads are Proxmox VE, Microsoft Hyper-V, and cloud migration (moving VMs to AWS/Azure/GCP). Here is why most organizations doing this migration end up on Proxmox:

vs. Hyper-V

Hyper-V is viable but ties you to Windows Server licensing for the host. Proxmox runs on Debian with no host OS licensing cost, supports ZFS natively, and has a more capable web UI without requiring System Center.

vs. Cloud Migration

Cloud works for some workloads. For file servers, local databases, and latency-sensitive applications serving on-site users, a cloud VM at $200-800/month per instance adds up fast compared to amortizing local hardware over 5 years.

vs. Staying on ESXi

ESXi 7.x continues to function without support, but you are accumulating security debt with no vendor patches and building on a platform with an uncertain roadmap for SMBs. Every year you stay makes the eventual migration harder.

What You Need Before You Start

Hardware Requirements for the Proxmox Host

If you are reusing ESXi hardware, Proxmox will run on it. Proxmox VE 8.x requires a 64-bit CPU with hardware virtualization (Intel VT-x or AMD-V) enabled in BIOS. Minimum 8 GB RAM for a host running multiple VMs; 32-64 GB is more practical. For storage, a dedicated OS drive (separate from VM storage) is strongly recommended: a small NVMe or SSD for the Proxmox system, with additional drives for VM storage.

The one hardware consideration that differs from ESXi: if you plan to use ZFS mirroring for your VM storage (recommended), you need at least two drives of equal or larger size. ZFS mirrors are managed at the OS level, not through a hardware RAID controller. In fact, ZFS works best without a hardware RAID controller in the data path.

Network Planning

Proxmox uses Linux bridge networking rather than VMware's virtual switches. Before installation, decide on the management IP for the Proxmox host, which physical NICs will carry VM traffic, and whether you need VLAN tagging. If you have multiple VLANs in your ESXi environment, Proxmox handles this with VLAN-aware bridges. The configuration is explicit per bridge rather than managed through a vCenter-style interface.
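As a concrete example, a minimal /etc/network/interfaces for a single-NIC host with a VLAN-aware bridge might look like the following (the NIC name eno1 and the addresses are placeholders; substitute your own):

```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.5/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
```

With a VLAN-aware bridge, the VLAN tag is set per virtual NIC in each VM's hardware settings rather than by creating a separate bridge per VLAN.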

Backup Everything First

This is not a formality. Before any VM moves: ensure you have a full backup of each VM in a format you can restore from independently of ESXi. ESXi snapshots are not backups. Exporting VMs as OVA files to a separate storage location gives you an independent restore point. Veeam, Nakivo, or any VADP-based backup that exports VMDK files works.

The specific risk: during a VMDK-to-qcow2 conversion, if something goes wrong mid-conversion, you want the original VMDK intact on a separate drive or NAS, not shared with the conversion target.

Tools You Will Need

  • Proxmox VE 8.x ISO: from proxmox.com/downloads
  • qemu-img: for VMDK-to-qcow2 conversion (included in Proxmox; or run on any Linux system)
  • VirtIO driver ISO: for Windows VMs; download from Fedora's VirtIO repo
  • virt-v2v: optional, for automated Windows VMDK conversion including driver injection
  • Proxmox Backup Server (PBS) ISO: optional but recommended; standalone VM backup target

Step 1: Install Proxmox VE

Download the Proxmox VE ISO, flash it to a USB drive with Balena Etcher or dd, and boot the target hardware from it.

The installer will ask for: target disk (install OS here, using the dedicated OS drive, not your VM storage), hostname, IP address and gateway for the management interface, and root password. Changing the hostname or IP later is painful, particularly once the host joins a cluster, so choose them carefully.

Storage Decision: ZFS or LVM?

The installer prompts for a filesystem. For a new Proxmox deployment, ZFS is the better default for most SMBs:

ZFS Mirror (RAID-1)

Two drives, mirrored. Data integrity checksumming, transparent compression, instant snapshots. If one drive fails, the system keeps running from the other. Recommended for VM storage. Requires two drives of equal or larger size.

LVM-thin (single disk)

Simpler, lower overhead, no redundancy. Fine for a single-drive lab or a Proxmox host where all VM data is on a separate NAS or SAN. Not recommended if this host's local disk is your only VM storage.

After installation, the Proxmox web UI is accessible at https://<host-ip>:8006. Log in as root with the password you set during installation.

Post-Install: Remove the Subscription Nag

The community edition shows a "no valid subscription" popup on login. It is cosmetic only and does not restrict functionality. To remove it, run the following on the Proxmox host (updates to the proxmox-widget-toolkit package restore the original file, so you may need to re-run this after upgrades):

sed -i.bak "s/data.status !== 'Active'/false/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
systemctl restart pveproxy

Also configure the pve-no-subscription APT repository (not enterprise) so you receive community updates:

# Disable enterprise repo (requires subscription)
echo "# deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise" > /etc/apt/sources.list.d/pve-enterprise.list

# Add community repo
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
apt update && apt dist-upgrade -y

Step 2: Migrating Linux VMs from ESXi

Linux VMs are the straightforward case. The process is: export the VMDK from ESXi, convert it to qcow2, create a VM shell in Proxmox, attach the converted disk, and boot.

Export the VMDK from ESXi

From the ESXi web UI: power off the VM, select it, then Actions → Export. This creates an OVA (or separate OVF + VMDK files). The VMDK is what you need. Alternatively, if you have SSH access to the ESXi host, you can copy the VMDK directly:

scp "root@esxi-host:/vmfs/volumes/datastore1/myvm/myvm*.vmdk" /tmp/

Note: copy both files: the small descriptor .vmdk and the large -flat.vmdk that holds the actual disk data. qemu-img reads the descriptor, which references the flat extent. If you only have the -flat.vmdk, convert it with -f raw instead of -f vmdk.

Convert VMDK to qcow2

On the Proxmox host (or any Linux system with qemu-img):

qemu-img convert -f vmdk -O qcow2 myvm.vmdk myvm.qcow2

Conversion time scales with disk size, roughly 15-30 minutes per 500 GB on a modern SSD. For large disks, run this in a tmux or screen session.
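As a sanity check before you start a large conversion, the worst-case figure above can be turned into a quick estimate (a rough sketch; the 30 minutes per 500 GB rate is from our hardware and yours will differ):

```shell
# Worst-case conversion-time estimate, using ~30 minutes per 500 GB
est_minutes() {
  echo $(( $1 * 30 / 500 ))  # $1 = virtual disk size in GB
}
est_minutes 2000   # a 2 TB disk: roughly 120 minutes
```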

Create the VM and Import the Disk

In the Proxmox web UI: Create VM, give it a VM ID and name, set OS type (Linux/Windows), skip the disk step (we will import), set CPU and RAM to match or exceed the original, skip the network config for now. Then on the Proxmox host CLI:

# Import the disk to the VM (replace 100 with your VM ID and local-zfs with your storage pool)
qm importdisk 100 /tmp/myvm.qcow2 local-zfs
# CLI alternative to the Hardware-tab steps below: attach the volume (name shown in the importdisk output) and set boot order
qm set 100 --scsi0 local-zfs:vm-100-disk-0 --boot order=scsi0

After import, go to the VM's Hardware tab in the UI. You will see an "Unused Disk" entry. Double-click it, set the bus/controller to VirtIO Block (or SCSI with VirtIO-SCSI), and add it. Then go to Boot Order under Options and set the imported disk as the primary boot device.

First Boot Checklist (Linux)

  • Install QEMU Guest Agent: apt install qemu-guest-agent && systemctl enable --now qemu-guest-agent. This replaces VMware Tools and is required for clean backup quiescing
  • Verify network interface names. Linux may rename the interface from eth0 to ens18 or similar. Update /etc/network/interfaces or Netplan config accordingly
  • Remove any VMware-specific packages: open-vm-tools, vmware-tools
  • If you changed the disk controller to VirtIO and the VM won't boot, change back to SCSI (virtio-scsi) or IDE temporarily until the VirtIO storage drivers are installed

Step 3: Migrating Windows VMs from ESXi

Windows VMs are harder than Linux for one reason: drivers. VMware installs its own storage and network drivers (VMware Tools) and removes the generic drivers Windows would otherwise use. When you move a Windows VM to KVM/Proxmox, the storage controller changes from VMware's PVSCSI to VirtIO, and Windows will not boot without the right driver.

There are two approaches. The clean path is to install VirtIO drivers before the migration while the VM is still on ESXi. The pragmatic path is to handle it at first boot on Proxmox using a VirtIO driver ISO.

Option A: Install VirtIO Drivers Before Migration (Preferred)

While the VM is still running on ESXi, download the VirtIO driver ISO and mount it in the VM. Install the vioscsi and NetKVM drivers from the ISO. These drivers will sit dormant in Windows but activate automatically when you move the VM to a VirtIO-backed environment. After driver installation, do the VMDK export and conversion as described in the Linux section.

Download VirtIO ISO
https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso

Option B: Driver ISO at First Boot (When You Can't Pre-Install)

If you cannot pre-install drivers (because the VM is already powered off or you do not have access to it), use this approach:

  1. Create the Proxmox VM and attach the imported disk as an IDE device (not VirtIO). Windows can boot from IDE without additional drivers.
  2. Attach the VirtIO ISO as a CD-ROM drive and boot the VM.
  3. Once Windows boots, install the VirtIO storage and network drivers from the ISO (Device Manager or the installer in the ISO root). If the storage driver will not install because no VirtIO device is present, attach a small temporary VirtIO disk first so Windows has hardware to bind the driver to.
  4. Shut down, change the disk bus in Proxmox to VirtIO Block or VirtIO SCSI, and reboot. Windows will now boot from the VirtIO disk using the newly installed driver.

Windows Activation After Migration

The hardware change (from ESXi's virtual hardware to KVM/VirtIO) triggers Windows to request reactivation. For volume licenses (KMS or MAK), reactivation is automatic once the machine connects to your KMS server or the internet. For retail licenses, you may need to reactivate via the Windows activation troubleshooter. Volume licenses on Windows Server 2019/2022 almost never require manual intervention.

Using virt-v2v for Automated Windows Conversion

virt-v2v is a conversion tool that handles both VMDK conversion and VirtIO driver injection in one step. It is available on Fedora/RHEL and can be installed on Proxmox's Debian base:

apt install virt-v2v

# Convert from VMware (OVA) to Proxmox KVM format
virt-v2v -i ova /path/to/myvm.ova -o local -of qcow2 -os /var/lib/vz/images/100/

virt-v2v injects the VirtIO drivers automatically for supported Windows versions. It works reliably for Windows Server 2012 R2 through 2022. The conversion is slower than qemu-img convert alone (adds driver injection overhead) but eliminates the manual driver installation step.

What We Got Wrong (And How to Avoid It)

These are the specific problems we ran into during client ESXi migrations. None of them are exotic; they are all recoverable. But they are easier to avoid than to fix under pressure.

Mistake #1: Converting thin-provisioned VMDKs without checking actual vs. reported size

ESXi VMDKs can be thin-provisioned: a disk reporting 500 GB may hold only 80 GB of actual data. That thinness is easy to lose in transit. Copying a -flat.vmdk with scp expands it to the full provisioned size on the destination, and qemu-img convert only skips blocks that are actually zero, so deleted-but-never-zeroed data inside the guest is carried into the qcow2. A "500 GB" thin disk can consume far more space on the conversion target than its live data suggests.

Fix: Check actual disk usage before export. du -sh /vmfs/volumes/datastore1/myvm/ on the ESXi host shows actual space, and qemu-img info myvm.vmdk reports both the virtual size and the file's actual size on disk. Use the -c flag with qemu-img to enable compression: qemu-img convert -c -f vmdk -O qcow2 myvm.vmdk myvm.qcow2.

Mistake #2: Not testing PBS restores before cutting over production VMs

We set up Proxmox Backup Server, ran the first backup successfully, and then migrated production VMs before testing a restore. The backup was valid. But we had not confirmed the restore process, the restore performance, or that the PBS server had enough storage for weekly retention of all VMs.

Fix: Run a full test restore of at least one VM to a temporary datastore and boot it. Confirm it takes the expected time. Calculate retention storage needs (number of VMs × average compressed backup size × retention count) and provision accordingly before production VMs move.
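The retention math above is simple enough to script (illustrative numbers; PBS deduplication across backups will usually bring the real figure well below this upper bound):

```shell
# Upper-bound PBS storage estimate: VM count x avg compressed backup GB x backups retained
pbs_gb_upper_bound() {
  echo $(( $1 * $2 * $3 ))
}
pbs_gb_upper_bound 10 50 14   # 10 VMs, 50 GB each, 7 daily + 4 weekly + 3 monthly = 7000 GB
```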

Mistake #3: SCSI controller mismatch causing boot failure

After importing a disk, we set the controller to VirtIO SCSI for a Windows VM that did not have VirtIO drivers. The VM would not boot: just a blinking cursor. It took 20 minutes to diagnose because the failure was completely silent.

Fix: Default to IDE for the first boot of any Windows VM without confirmed VirtIO drivers. Then install drivers, then switch controller. Never assume VirtIO drivers are present on a Windows VM migrated from ESXi without pre-installing them.

Mistake #4: Snapshot behavior is different from VMware

Proxmox ZFS snapshots are instantaneous and storage-efficient. They do not pre-allocate space for delta files the way VMware snapshots do. But they work at the block device level, not the guest-application level. Taking a snapshot of a running database VM without quiescing the guest first does not give you an application-consistent backup.

Fix: Enable the QEMU Guest Agent on all VMs. PBS uses the guest agent to quiesce the filesystem before backup, equivalent to VMware's quiesced snapshots via VADP. Without the guest agent running, PBS backups of active databases are only crash-consistent, not application-consistent.

Running Proxmox in Production

Backup Strategy with Proxmox Backup Server

Proxmox Backup Server (PBS) is the native backup solution: incremental, deduplicated, compressed backups at the block level. Deploy PBS on dedicated hardware or as a VM on a separate Proxmox host (not the same host you are backing up). Configure daily backup jobs from the Proxmox datacenter backup schedule, set retention (e.g., keep last 7 daily, 4 weekly, 3 monthly), and verify that backup storage is monitored for capacity.

Replacing a Failed ZFS Drive

If you are running ZFS mirrored storage, eventually a drive will fail. The pool will report DEGRADED. Beyond zpool replace, the complete process requires syncing the EFI partition to the replacement drive and updating boot entries so the system boots from either drive.

We have published the script we use with clients for this. zfs_replace_drive.sh automates the full workflow: device identification, partition layout verification, resilver monitoring, EFI sync, and boot entry management.

curl -O https://raw.githubusercontent.com/bullium/bc-pub-scripts/main/pve/zfs_replace_drive.sh
chmod +x zfs_replace_drive.sh
sudo ./zfs_replace_drive.sh rpool ata-WDC_WD40EZAZ-00SF3B0_WD-WX12D97Y1234

Cluster Considerations for SMBs

A single Proxmox node is a valid production configuration for small environments. It lacks HA (no failover on host failure) but is simpler to operate and maintain. For HA, the minimum is three nodes: two compute nodes and one quorum node (which can be a lightweight VM or mini PC). Proxmox HA uses Corosync for cluster communication and a fencing mechanism (IPMI/iDRAC or hardware watchdog) to safely restart VMs when a node fails.

If HA is a requirement, factor the fencing mechanism into your hardware planning before purchasing. An SMB cluster without working fencing is not actually HA. It is just a cluster that might fence itself incorrectly during a node failure.

When to Call in Help

The technical process above is well-documented and manageable for an experienced IT admin. The migration gets complicated when:

  • Your environment has complex networking: multiple VLANs, NIC bonding, policy-based routing, or non-standard routing that needs to be replicated across Proxmox nodes
  • You have legacy applications with Windows Server 2003, 2008, or applications that are brittle around driver or hardware changes
  • You need HA from day one: setting up a Corosync cluster with proper fencing the first time is not difficult, but the failure mode of getting it wrong is data loss
  • Your team has not run Proxmox before: the platform is not complex, but having a guide through the first production deployment significantly reduces the risk window

Planning a VMware Migration?

We have done this for clients across the Mid-Atlantic, from small environments running 3-5 VMs to multi-site deployments with 30+ virtual machines and complex HA requirements. Assessment, architecture design, or full managed migration engagements available.

Book a 30-Minute Migration Assessment →

Ready to Move Off ESXi?

Whether you are migrating 3 VMs or 30, we have done this before. From pre-migration architecture decisions through phased cutover and post-migration operational hardening, we provide the expertise to make it clean.