💻 How to Migrate a VM from VMware to KVM — Key Tips and Pitfalls


Two virtual machines, identical in configuration and OS, are migrated from VMware to KVM using different tools: one completes in 22 minutes with full network functionality; the other fails after 45 minutes with a kernel panic. Same destination hypervisor. Same source vCenter. Same guest OS. The difference? Whether virt-v2v was used — or avoided. If you need to migrate a VM from VMware to KVM, this tool isn't optional. It's the only method that consistently produces bootable, production-ready KVM guests.

📑 Table of Contents

- 🚀 Prerequisites — What You Need Before Running virt-v2v
- 🔐 Access to VMware
- 💾 Destination Options
- 🧩 Supported Guest OSes
- 🔌 Connection — How virt-v2v Talks to VMware
- 💽 Conversion — What Happens During the Transformation
- 🔧 Windows-Specific Changes
- 📦 Output Formats
- 🚫 Common Conversion Failures
- 📤 Deployment — Getting the VM to KVM Efficiently
- 🔗 Network Configuration
- 🔁 Post-Migration Checks
- 🟩 Final Thoughts
- ❓ Frequently Asked Questions
  - Can I convert VMs without powering them off?
  - Does virt-v2v support encrypted VMware VMs?
  - Can I automate migration of multiple VMs?
- 📚 References & Further Reading

🚀 Prerequisites — What You Need Before Running virt-v2v

virt-v2v is not a standalone binary. It's a pipeline built on libvirt, QEMU, and libguestfs. You must run it from a Linux conversion host capable of connecting to both VMware (via vCenter or ESXi) and the destination KVM environment. The host requires:

- libvirt with the QEMU/KVM driver
- virt-v2v (part of the virt-v2v package on most distributions)
- qemu-img for intermediate disk handling
- Network access to vCenter/ESXi and the destination KVM host
- Sufficient scratch space — at least 1.5× the size of the largest VM being converted

On Red Hat–based systems (RHEL, Rocky Linux, AlmaLinux):

```
$ sudo dnf install virt-v2v libguestfs-tools-c qemu-img
```

Expected output:

```
Installed:
  virt-v2v-1.4.6-1.el9.x86_64
  libguestfs-1:1.48.20-1.el9.x86_64
  qemu-img-6.2.0-30.el9_3.1.x86_64

Complete!
```

Under the hood, virt-v2v uses libguestfs to launch a minimal appliance via guestfsd. This mounts the source VM's filesystem to perform targeted modifications: removing VMware-specific drivers like vmxnet3, injecting KVM equivalents (virtio_net, virtio_blk), and rewriting the bootloader configuration. This is not a blind disk copy — it's a guest-aware transformation.

🔐 Access to VMware

virt-v2v uses libvirt URIs to connect to VMware:

- vpx:// — for vCenter-managed clusters
- esx:// — for standalone ESXi hosts

You'll need:

- The vCenter or ESXi hostname/IP
- A username with read-only VM privileges
- The password (or keyring integration)
- The source VM name or inventory path
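Before launching a long-running conversion, it's worth sanity-checking the URI and credentials on their own. A minimal sketch, assuming libvirt's ESX driver is available on the conversion host; the inventory path and the esxi01.example.com host name are placeholders to adjust for your environment:

```
# List the VMs visible through vCenter before converting anything.
# libvirt expects the path as datacenter/cluster/host; adjust to match
# your inventory. ?no_verify=1 skips TLS certificate checks, which is
# acceptable in a lab but not in production.
$ virsh -c 'vpx://administrator@vcenter.example.com/Datacenter/Cluster/esxi01.example.com?no_verify=1' list --all
```

If the source VM shows up in the listing, virt-v2v should be able to reach it with the same connection details.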
💾 Destination Options

Output formats include:

- A local libvirt storage pool (-o libvirt)
- A remote KVM host via SSH (-o libvirt -oc qemu+ssh://…)
- Raw file output to a local directory (-o local -os /path/to/output)

The most common production setup uses qemu+ssh to stream the VM directly to a remote KVM host.

🧩 Supported Guest OSes

virt-v2v officially supports:

- RHEL/CentOS 6–9
- Debian 10–12
- Ubuntu 18.04–22.04
- Windows Server 2008–2022 (requires virtio-win drivers)

Unsupported or legacy distributions may boot, but often fail at initramfs or driver loading without manual fixes.

🔌 Connection — How virt-v2v Talks to VMware

virt-v2v connects directly to the VMware vSphere API over HTTPS. No manual OVA export is required.

Example command:

```
$ virt-v2v -ic vpx://vcenter.example.com/Datacenter/host/Cluster \
    -it vddk -ip esx_password \
    'Windows-VM'
```

Breakdown:

- -ic: the input connection URI
- -it vddk: enables VMware's Virtual Disk Development Kit (VDDK) transport
- -ip esx_password: supplies the password from the local file esx_password (avoids putting it on the command line in plaintext)
- 'Windows-VM': the VM name as registered in vCenter

VDDK enables hot disk reading via VMware's VixDiskLib, allowing direct access to .vmdk files on ESXi datastores — even while the VM is running. Without VDDK, virt-v2v falls back to HTTPS transport, which is 3–5× slower and requires the VM to be powered off.

Expected output snippet:

```
[   0.0] Opening the source -i libvirt -ic vpx://...
[   2.1] Creating an overlay to protect the source from being modified
[   3.5] Opening the overlay
[  10.2] Inspecting the overlay
[  15.0] Checking for sufficient free disk space in the overlay
[  15.1] Converting Windows-VM to run on KVM
[  16.0] Creating output metadata
```

VDDK requires the VDDK library on the conversion host. Download it from VMware and extract it:

```
$ tar -xzf VMware-vix-disklib-*.tar.gz -C /opt
$ virt-v2v ... -io vddk-libdir=/opt/vmware-vix-disklib-distrib
```

The vddk-libdir path must point to the extracted VDDK distribution — the directory containing lib64/libvixDiskLib.so. Recent virt-v2v releases also expect the server's SSL thumbprint via -io vddk-thumbprint. For production migrations, VDDK is non-negotiable — skipping it increases transfer time and requires downtime.

💽 Conversion — What Happens During the Transformation

virt-v2v performs a deep guest reconfiguration, not a simple format swap. The process includes:

1. Disk download via VDDK → temporary qcow2 overlay
2. Guest inspection: reads /etc/os-release, the bootloader, and partitioning
3. Driver substitution: replaces vmxnet3 with virtio_net, and pvscsi with virtio_scsi
4. Bootloader update: the GRUB config is rewritten for virtio block devices
5. Initramfs rebuild: dracut or update-initramfs regenerates with virtio modules
6. Disk export: the final image is pushed to target storage

For a Linux VM named webserver-01:

```
$ virt-v2v -ic vpx://vcenter.example.com/Datacenter/host/Cluster \
    -it vddk -io vddk-libdir=/opt/vmware-vix-disklib-distrib \
    -o libvirt -os default \
    'webserver-01'
```

Output:

```
[  50.2] Creating local storage path for the converted disk
[  51.0] Creating qcow2 disk (for libvirt) with size 21.5G
[  60.3] Setting a random seed for the new guest
[  65.0] Installing virtio drivers (Linux)
[  68.2] Rewriting GRUB configuration
[  70.1] Updating initramfs
[  75.4] Building the libvirt XML
[  76.0] Creating libvirt domain...
Domain created successfully.
```

The initramfs rebuild is critical. If virtio_blk is absent during early boot, the kernel cannot detect the root device and will panic with:

```
ALERT! /dev/sda1 does not exist. Dropping to a shell.
```

virt-v2v avoids this by chroot-ing into the guest disk and running the distribution's initramfs tool:

```
# RHEL/CentOS
dracut --force --add-drivers "virtio_pci virtio_blk virtio_net"

# Debian/Ubuntu
update-initramfs -u
```

This ensures the initramfs contains the drivers needed before the real root mounts. virt-v2v doesn't just move a VM — it replatforms it, ensuring the kernel, bootloader, and drivers align with KVM's virtual hardware.

🔧 Windows-Specific Changes

For Windows VMs, virt-v2v injects virtio-win drivers into the offline registry, editing the hives via libguestfs and hivex. This adds:

- viostor (virtio block)
- vioscsi (virtio SCSI)
- viorng (entropy)
- qemu-ga (optional)

It also sets the boot-critical storage services (e.g., viostor) to a Start value of 0 (load at boot time) under:

```
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\
```

The virtio-win drivers must be accessible, for example by pointing the VIRTIO_WIN environment variable at the ISO:

```
$ VIRTIO_WIN=/home/user/virtio-win.iso virt-v2v ... 'WinServer-2019'
```

Without this, Windows fails to detect the boot disk and blue-screens.
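You can spot-check the injected services without booting the guest by reading the offline registry back. A minimal sketch using virt-win-reg from libguestfs-tools; the guest must be shut down, and note that CurrentControlSet is not materialized in an offline hive, so ControlSet001 is queried instead:

```
# Print the Start value of the viostor service from the offline registry
# of the converted guest. 0x0 means "load at boot", which is what
# virt-v2v should have set for the boot disk driver.
$ virt-win-reg 'WinServer-2019' 'HKLM\SYSTEM\ControlSet001\Services\viostor' Start
```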
📦 Output Formats

Default output is qcow2 with sparse allocation. To use raw:

```
$ virt-v2v ... -of raw
```

Raw is preferred for LVM, iSCSI, or direct device mapping. qcow2 supports snapshots and compression, but adds minor I/O overhead.

🚫 Common Conversion Failures

- "No OS found": the guest OS is not in the supported list, or /etc/os-release is missing or corrupted
- dracut-initqueue timeout: virtio_blk missing from the initramfs (often due to a chroot failure in scratch space)
- No network post-boot: the vmxnet3 driver was not replaced, or 70-persistent-net.rules pins the old MAC address

Validate OS compatibility against the official list before starting.

📤 Deployment — Getting the VM to KVM Efficiently

After conversion, deploy the VM to KVM. The default -o libvirt registers it locally. For remote deployment:

```
$ virt-v2v -ic vpx://vcenter.example.com/... \
    -o libvirt -oc qemu+ssh://kvmhost.example.com/system \
    -os default \
    'webserver-01'
```

This configuration:

- -o libvirt: produces a libvirt guest as output
- -oc qemu+ssh://…: connects to the remote libvirtd over SSH
- -os default: writes the disk into the remote host's default storage pool

virt-v2v streams the converted disk over the SSH connection and defines the domain on the remote host (via virDomainDefineXML()). This avoids double-transfer of large disks — a key efficiency when migrating dozens of VMs.

On the target KVM host:

```
$ virsh list --all
 Id   Name           State
 ---------------------------
 3    webserver-01   running
```

Check the disk format:

```
$ qemu-img info /var/lib/libvirt/images/webserver-01-sda
image: webserver-01-sda
file format: qcow2
virtual size: 50 GiB
disk size: 14.2 GiB
backing file: (none)
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
```

The disk size is much smaller than the virtual size due to sparse allocation — the file only consumes space for written blocks.

🔗 Network Configuration

virt-v2v preserves the NIC count and MAC addresses, but changes the interface type from vmxnet3 to virtio. Ensure the KVM bridge (e.g., br0) is active and bridged to a physical NIC. If no IP is assigned:

- Verify the bridge: ip link show br0
- Check the libvirt network: virsh net-list
- Confirm the firewall allows traffic on the bridge interface

🔁 Post-Migration Checks

After boot:

- ip a — confirm the interface (e.g., ens3) has link and the correct IP
- dmesg | grep -i virtio — verify virtio_net and virtio_blk loaded
- lsmod | grep -E "(vmxnet3|vmmouse)" — ensure VMware drivers are absent
- Test SSH, service uptime, and baseline performance

At this point, the migration from VMware to KVM is complete — with a fully operational guest. A scripted version of these spot checks appears below.
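A minimal sketch that bundles the checks above into one script, to run inside the migrated guest. The interface name ens3 is an assumption; adjust it for your guest:

```
#!/usr/bin/env bash
# Post-migration smoke test for a converted KVM guest.
# ens3 is a placeholder interface name; distro-built kernels may also
# compile virtio drivers in, in which case lsmod shows nothing.

echo "== Link and IP =="
ip addr show ens3

echo "== virtio modules loaded =="
lsmod | grep -E '^virtio' || echo "NOTE: no virtio modules listed (may be built-in)"

echo "== Stale VMware modules (expect no output) =="
lsmod | grep -E 'vmxnet3|vmw_pvscsi|vmmouse' && echo "WARNING: VMware drivers still present"

echo "== Root filesystem device =="
findmnt -n -o SOURCE /
```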
🟩 Final Thoughts

Migrating VMs from VMware to KVM is more than a cost play — it's about adopting open, auditable infrastructure. virt-v2v enables this transition not through brute-force copying, but by integrating deeply with libvirt, QEMU, and libguestfs to transform the guest configuration at the kernel level.

The tool doesn't abstract complexity — it applies it correctly. You're not relocating a VM; you're converting its hardware identity from VMware to KVM. That involves device drivers, the initramfs, bootloader logic, and registry entries on Windows. Skipping this (e.g., using qemu-img convert alone) results in boot failures, undetected disks, or degraded I/O. When you migrate a VM from VMware to KVM using virt-v2v, the result isn't a ported VM — it's a native one, indistinguishable from a guest installed directly on KVM.

❓ Frequently Asked Questions

Can I convert VMs without powering them off?

Yes, with VDDK. The VMware Virtual Disk Development Kit allows hot reading of .vmdk files, so the source VM can stay powered on during migration. However, only data present at the start of the transfer is captured unless application-consistent snapshots are used.

Does virt-v2v support encrypted VMware VMs?

No. VMware VM encryption is not supported by VDDK in offline mode. The VM must be decrypted in vCenter before conversion.

Can I automate migration of multiple VMs?

Yes. Use the vSphere API or vim-cmd to enumerate VMs, then script virt-v2v calls in a loop (see the sketch after the references below). Pair this with SSH key authentication and shared storage (e.g., NFS) for efficient, scalable migrations.

📚 References & Further Reading

- VMware VDDK documentation — API and deployment guide for high-speed disk access: docs.vmware.com
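Bonus: a minimal batch sketch for the automation question above. The vms.txt list, the password file, and the URIs are placeholders, and it assumes the VDDK setup described earlier; treat it as a starting point, not a turnkey tool:

```
#!/usr/bin/env bash
# Batch-convert a list of VMs from vCenter to a remote KVM host.
# vms.txt contains one vCenter VM name per line. The vpx:// URI,
# esx_password file, and kvmhost.example.com are placeholders.
set -euo pipefail

URI='vpx://vcenter.example.com/Datacenter/host/Cluster'

while read -r vm; do
    echo "=== Converting ${vm} ==="
    virt-v2v -ic "$URI" \
        -it vddk -io vddk-libdir=/opt/vmware-vix-disklib-distrib \
        -ip esx_password \
        -o libvirt -oc qemu+ssh://kvmhost.example.com/system \
        -os default \
        "$vm"
done < vms.txt
```

Conversions run serially here; with enough scratch space and network headroom, the loop can be parallelized per conversion host.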