Nested virtualized VMware LAB on current creator workstation vs. older physical servers

This is a comparison of whether 10-year-old server hardware can keep up with today's desktop computers. The nested virtualized lab came out ahead on the desktop.

As I mentioned in my previous blog posts, I am keen to find the best solution to design and build a sophisticated VMware LAB (as a source system) to test Cloud Migration tools.

My plan is to have a low-cost VMware-based LAB up and running as a source system for Cloud Migration tests by October 2021 at the latest. My blog might save you time when experimenting with or learning similar things.

Our journey starts with building an on-premises LAB, which requires hardware (emulating production source datacenters, even multi-tenant ones, for the demo migration projects).

You should attempt something similar, unless you were born into the cloud generation* and have an unlimited budget, or friends who give you access to such environments for learning (self-education) purposes. At some point later you might need to perform cross-cloud migration tests, which will not require on-prem hardware – but as of today, the source of a migration is in most cases a local datacenter (including hardware).

* Please note that a minimal 3-node Azure VMware Solution deployment will cost you roughly 24K USD/month; that is for production or serious POCs, not for self-education. You can try VMware-hosted shared hands-on labs as well – I will do the same soon.

I like to build LABs myself from scratch to get the full admin experience. I like to be in control of "My Cloud", or "Your Cloud" as we used to say at VMware a decade ago.

I compared a ten-year-old server running vSphere 6.7 natively with a one-year-old workstation running vSphere 7 in a nested virtualized setup. As of today, the workstation looks like the better approach for learning/LAB purposes. Nested virtualization has gotten a lot better over the last few years.
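If you want to try the same, nested ESXi under Workstation needs hardware virtualization exposed to the guest. Below is a minimal sketch, assuming a hypothetical .vmx path; the vhv.enable flag is the same setting as the "Virtualize Intel VT-x/EPT or AMD-V/RVI" checkbox in the VM settings, so you can also just tick that box instead.

```python
# Sketch only: make sure a Workstation guest .vmx exposes hardware
# virtualization so ESXi can run nested. The path is a made-up example.
from pathlib import Path

def enable_nested_virt(vmx_path: str) -> None:
    path = Path(vmx_path)
    lines = path.read_text().splitlines()
    # Drop any existing vhv.enable entry, then append it set to TRUE.
    lines = [l for l in lines if not l.strip().lower().startswith("vhv.enable")]
    lines.append('vhv.enable = "TRUE"')
    path.write_text("\n".join(lines) + "\n")

if __name__ == "__main__":
    enable_nested_virt(r"D:\VMs\nested-esxi-01\nested-esxi-01.vmx")  # example path
```

Edit the .vmx only while the VM is powered off; otherwise Workstation may overwrite the change.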

Please, do not buy any hardware before I finish building up my lab and confirm that this concept works as expected.

Below are my first impressions after one week of testing.

| Attila is comparing | Server / x5660 / LSI RAID 12G@7K / native vSphere 6.7 boot | Workstation / 3900XT / NVMe PCIe / nested vSphere 7 on WS16 & Win10 |
|---|---|---|
| Performance | 5 – OK with the local 12 Gbps-connected Ent NL-SAS RAID10 | 7 – surprisingly good speed on NVMe Gen4; OS install, boot and day-to-day work are all quick, no wasted time |
| Noise level | 1 – high; not recommended to work next to the server, a big issue at home | 7 – low; not as quiet as a laptop, but absolutely OK next to you |
| Power consumption | 5 – 400-500 W depending on load, quite expensive to run 24/7 (2×6 cores, 18×8 GB DDR3) | 5 – similar (12 cores, 4×32 GB DDR4) |
| Startup time | 3 – boots like a server, i.e. slowly | 10 – startup to full operation in under ~30 seconds |
| Hibernate option | 1 – nope | 10 – this is huge: suspend the VMs or just hibernate Windows 10 with all VMs running and continue anytime later (see the sketch below) |
| Cost of one node | 5 – ~3,200 USD (it was ~10K new) | 5 – ~3,200 USD (it's ~one year old) |
| Can run DaVinci Resolve 17 and Forza Horizon 4 | no, it's dedicated to vSphere | yes, you can edit video or play if your memory is not yet full of VMs |
| Summary | Total points: ~20 | Total points: ~40 |

Ratings are from 1 to 10; higher is better. The workstation scores roughly twice as well as the server. Yes, I am comparing apples with giraffes.
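About that hibernate/suspend row: here is a minimal sketch, not my exact workflow, of suspending every running Workstation VM via the vmrun CLI before hibernating Windows. The vmrun install path below is an assumption; adjust it for your machine.

```python
# Sketch: suspend all running VMware Workstation VMs using the vmrun CLI.
# The install path is the typical one on 64-bit Windows; verify yours.
import subprocess

VMRUN = r"C:\Program Files (x86)\VMware\VMware Workstation\vmrun.exe"

def running_vmx_paths() -> list[str]:
    # "vmrun list" prints a count line followed by one .vmx path per line.
    out = subprocess.run([VMRUN, "-T", "ws", "list"],
                         capture_output=True, text=True, check=True).stdout
    return [line.strip() for line in out.splitlines()[1:] if line.strip()]

def suspend_all() -> None:
    for vmx in running_vmx_paths():
        print(f"Suspending {vmx} ...")
        subprocess.run([VMRUN, "-T", "ws", "suspend", vmx, "soft"], check=True)

if __name__ == "__main__":
    suspend_all()
```

In practice you can also just hibernate Windows 10 with the VMs still running, as noted in the table; suspending them first is the more conservative option.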

This blog is not sponsored by anybody; it is based on devices I have already owned for some time. You might have similar gadgets at home. Do not let them go to waste – use them for self-education.

I spent days on the FC SAN config: a Brocade-like legacy configuration portal (or CLI), building fabric "A" and fabric "B" with single-initiator zoning as I learned from friends at EMC – so much fun after ~10 years of not doing it. However, a single NVMe Gen4 stick outperformed my legacy SAN (12×7K SATA disks in a RAID10 array installed in an HP MSA 1500 CS, configured via a special serial cable, connected over 2×2 Gbps FC through a redundant 4 Gbps SAN switch). Of course, the NVMe has no redundancy, so I can lose all data (the whole LAB) lightning fast! I am considering backup (Veeam).
I have to admit, I also added an Intel SAS HBA (RS3UC080) and ST1000NM0045 disks. Really nice in RAID10 with VMFS6.
I also installed 2-port 10G Intel NICs (X710) for a 2-node vSAN with an external quorum – maybe later.
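About the NVMe-vs-SAN comparison above: if you want a quick ballpark from inside a guest, a crude sequential-write timing like the sketch below is enough to see the order-of-magnitude difference. fio or HCIBench are the proper tools; the file path and sizes here are just placeholders.

```python
# Crude sequential-write check: writes ~2 GB in 16 MB blocks, fsyncs, and
# reports MB/s. Point target_file at a disk backed by the datastore to test.
import os
import time

def rough_write_mbps(target_file: str, total_mb: int = 2048, block_mb: int = 16) -> float:
    block = os.urandom(block_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(target_file, "wb") as f:
        for _ in range(total_mb // block_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(target_file)
    return total_mb / elapsed

if __name__ == "__main__":
    print(f"{rough_write_mbps('testfile.bin'):.0f} MB/s sequential write")
```

Run it once in a VM that lives on the SAN-backed datastore and once in a VM on the NVMe datastore, then compare the two numbers.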

You will need a creator PC at some point to run Windows 10 (oops, sorry, Windows 11 is for creators!). You might end up with a similar experience using VMware Workstation 16. I use the creator PC for other purposes as well, such as video editing in DaVinci Resolve, which leverages the GPUs.

My multi-purpose creator PC (on the left, with the fancy colorful RTX 2070 Super). I use macOS on various Apple hardware (the computer on the right); however, I prefer Workstation 16 over Fusion. Sorry, VMware folks, I am experimenting with Parallels Desktop 17 on the Mac, but it has nothing to do with my Cloud Migration LAB.
On the left, the physical server is at 76%; on the right, the nested workstation has already completed the Windows Server 2022 installation.
After the reboot. I still need to work on the infrastructure: DNS, DHCP, VLANs, GW, etc.

We will cover the basic networking environment (AD/DNS/DHCP/GW) for such a LAB in the next blog post. Please be patient; I know that localhost.localdomain looks as bad as self-signed certificates.

