Nested virtualized VMware LAB on current creator workstation vs. older physical servers

This is a comparison of whether 10-year-old server hardware can keep up with today’s desktop computers. For me, the nested virtualized lab came out ahead on the desktop.

As I mentioned in my previous blog posts, I am keen to find the best solution to design and build a sophisticated VMware LAB (as a source system) to test Cloud Migration tools.

My plan is to have a VMware-based LAB up and running at low cost, as a source system for Cloud Migration tests, by October 2021 at the latest. This blog might save you time when experimenting with or learning similar things.

Our journey starts with building an on-premises LAB, which requires hardware (emulating production source datacenters – even multi-tenant ones – for the demo migration projects).

You should make similar attempts, unless you were born into the cloud generation*, have an unlimited budget, or have friends who give you access to such environments for learning (self-education) purposes. Later on, you might need to perform cross-cloud migration tests that do not require on-prem hardware – but as of today, the source of a migration is in most cases a local datacenter (hardware included).

* Please note that a roughly minimal 3-node Azure VMware Solution deployment will cost you about 24K USD per month; that is for production or serious POCs, not for self-education. You can also try VMware’s hosted, shared hands-on labs – I will do the same soon.

I like to build LABs myself from scratch to get the full admin experience. I like to be in control of “My Cloud” – or “Your Cloud”, as we used to say at VMware a decade ago.

I compared a 10-year-old server running vSphere 6.7 natively with a one-year-old workstation running vSphere 7 in a nested virtualized setup. As of today, the workstation looks like the better approach for learning/LAB purposes. Nested virtualization has improved a lot over the last few years.
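Nested ESXi only works well because Workstation can pass the hardware virtualization extensions (Intel VT-x/EPT or AMD-V/RVI) through to the guest. You tick this in the VM’s processor settings, which maps to the vhv.enable flag in the .vmx file. Below is a minimal Python sketch that flips that flag on a powered-off VM; the .vmx path is just a placeholder for my lab.

```python
# Minimal sketch: enable nested virtualization (VT-x/EPT or AMD-V/RVI pass-through)
# for a powered-off VMware Workstation VM by setting vhv.enable in its .vmx file.
# The path below is a placeholder for my lab; adjust it to your own VM.
from pathlib import Path

vmx_path = Path(r"D:\VMs\nested-esxi7\nested-esxi7.vmx")  # placeholder path

lines = vmx_path.read_text().splitlines()
# Drop any existing vhv.enable entry, then append the desired value.
lines = [line for line in lines if not line.lower().startswith("vhv.enable")]
lines.append('vhv.enable = "TRUE"')
vmx_path.write_text("\n".join(lines) + "\n")
print(f"Nested virtualization enabled in {vmx_path.name}")
```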

Please do not buy any hardware before I finish building my lab and confirm that this concept works as expected.

Below is my first impression after one week of testing.

| Attila is comparing | Server / x5660 / LSI RAID 12G@7K / native vSphere 6.7 boot | Workstation / 3900XT / NVMe PCIe / nested vSphere 7 on WS16 & Win10 |
|---|---|---|
| Performance | 5 – OK, with locally attached 12 Gbps enterprise NL-SAS RAID10 | 7 – surprisingly good speed on NVMe Gen4; OS install, boot, and everyday work are all quick, no wasted time |
| Noise level | 1 – high; not recommended to work next to the server, which is a big issue at home | 7 – low; not as quiet as a laptop, but absolutely fine next to you |
| Power consumption | 5 – 400-500 W depending on load (2×6 cores, 18×8 GB DDR3); quite expensive to run 24/7 (see the running-cost estimate below the table) | 5 – similar (12 cores, 4×32 GB DDR4) |
| Startup time | 3 – boots like a server, i.e. slowly | 10 – startup to full operation in under ~30 seconds |
| Hibernate option | 1 – nope | 10 – this is huge: suspend the VMs, or simply hibernate Windows 10 with all VMs running and continue any time later |
| Cost of one node | 5 – ~3,200 USD (was ~10K new) | 5 – ~3,200 USD (it is ~one year old) |
| Can run DaVinci Resolve 17 and Forza Horizon 4 | no, it is dedicated to vSphere | yes, you can edit video or play as long as your memory is not yet full of VMs |
| Summary | Total points: ~20 | Total points: ~40 |

Ratings are from 1 to 10; higher is better. The workstation scores roughly twice as high as the server. Yes, I know, I am comparing apples with giraffes.
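To put the power consumption row into perspective, here is a quick back-of-the-envelope estimate of the 24/7 running cost. The ~450 W average draw and the 0.25 USD/kWh electricity price are my assumptions; plug in your own numbers.

```python
# Back-of-the-envelope 24/7 running cost for a lab node.
# Both the ~450 W average draw and the 0.25 USD/kWh price are assumptions.
PRICE_PER_KWH_USD = 0.25
HOURS_PER_MONTH = 24 * 30

def monthly_cost_usd(avg_watts: float) -> float:
    """Average power draw in watts -> estimated monthly electricity cost in USD."""
    kwh = avg_watts / 1000 * HOURS_PER_MONTH
    return kwh * PRICE_PER_KWH_USD

for name, watts in [("Server (2x X5660, ~450 W avg)", 450),
                    ("Workstation (3900XT, ~450 W avg)", 450)]:
    print(f"{name}: ~{monthly_cost_usd(watts):.0f} USD/month")
```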

This blog is not sponsored by anybody; it is based on devices I have owned for some time. You might have similar gadgets at home. Do not let them go to waste – use them for self-education.

I spent days on the FC SAN configuration: a Brocade-like legacy configuration portal (or CLI), building fabrics “A” and “B” with single-initiator zoning as I learned from friends at EMC – so much fun after ~10 years of not doing it. However, a single NVMe Gen4 stick outperformed my legacy SAN (12×7K SATA disks in a RAID 10 array inside an HP MSA 1500 CS, configured via a special serial cable and connected over 2×2 Gbps FC through redundant 4 Gbps SAN switches). Of course, the single NVMe stick has no redundancy, so I can lose all my data (LAB) lightning fast! I am considering a backup solution (Veeam).
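If you want to reproduce the NVMe vs. legacy SAN gap yourself, even a crude sequential-write test run inside a VM (one virtual disk per datastore) makes it obvious. This is only a sketch with placeholder mount points; for serious numbers use a purpose-built tool such as fio.

```python
# Crude sequential-write micro-benchmark: time a large synced write on two
# mount points, one backed by the NVMe Gen4 datastore, one by the FC SAN LUN.
# Paths are placeholders; this is illustrative only, not a proper storage test.
import os
import time

BLOCK = b"\0" * (4 * 1024 * 1024)   # 4 MiB write blocks
TOTAL_MB = 2048                     # write 2 GiB per target

def seq_write_mb_per_s(path: str) -> float:
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(TOTAL_MB // 4):
            f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())        # make sure the data actually hit the disk
    elapsed = time.perf_counter() - start
    os.remove(path)
    return TOTAL_MB / elapsed

for label, target in [("NVMe Gen4 datastore", "/mnt/nvme/testfile"),
                      ("FC SAN RAID10 LUN", "/mnt/san/testfile")]:
    print(f"{label}: ~{seq_write_mb_per_s(target):.0f} MB/s sequential write")
```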
I have to admit, I also added an Intel SAS HBA (RS3UC080) and ST1000NM0045 disks to the server. They perform really nicely in RAID 10 with VMFS6.
I installed dual-port 10G Intel NICs (X710) as well, for a 2-node vSAN with an external quorum (witness) – maybe a project for later.
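Once the X710 ports are in, it is worth double-checking that ESXi actually sees them at 10 Gbps before putting vSAN on top. A small pyVmomi sketch like the one below lists every physical NIC and its negotiated link speed; the vCenter address and credentials are placeholders for my lab.

```python
# Sketch: list physical NICs and link speeds per ESXi host via pyVmomi
# (pip install pyvmomi). Hostname and credentials below are lab placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only: self-signed certificates
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for pnic in host.config.network.pnic:
            speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else 0
            print(f"{host.name} {pnic.device}: {speed} Mb/s")
    view.Destroy()
finally:
    Disconnect(si)
```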

You will need a creator PC at some point to run Windows 10 (oops, sorry – Windows 11 is for creators!). You might end up with a similar experience using VMware Workstation 16. I use the creator PC for other purposes as well, such as video editing in DaVinci Resolve, leveraging the GPU.
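One of the workstation’s biggest wins in the table above is the suspend/hibernate workflow. Workstation ships with the vmrun command-line tool, so the whole nested lab can be parked with a small script before hibernating Windows. A sketch, assuming the default Workstation 16 install path on Windows:

```python
# Sketch: suspend every running Workstation VM via the bundled vmrun CLI,
# so the nested lab can be parked before hibernating Windows.
# The vmrun path is the Workstation 16 default on Windows; adjust if needed.
import subprocess

VMRUN = r"C:\Program Files (x86)\VMware\VMware Workstation\vmrun.exe"

# "vmrun list" prints the number of running VMs followed by one .vmx path per line.
out = subprocess.run([VMRUN, "list"], capture_output=True, text=True, check=True).stdout
vmx_paths = [line for line in out.splitlines() if line.lower().endswith(".vmx")]

for vmx in vmx_paths:
    print(f"Suspending {vmx} ...")
    subprocess.run([VMRUN, "-T", "ws", "suspend", vmx, "soft"], check=True)
```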

My multi-purpose creator PC (on the left, with the fancy colorful RTX 2070 Super). I use macOS on various Apple hardware (the computer on the right), yet I prefer Workstation 16 over Fusion. Sorry, VMware folks – I am experimenting with Parallels Desktop 17 on the Mac, but that has nothing to do with my Cloud Migration LAB.
On the left, the physical server is at 76% of its Windows Server 2022 installation; on the right, the nested setup on the workstation has already completed the install.
After the reboot. I still need to work on the infrastructure: DNS, DHCP, VLANs, gateway, etc.

We will cover the basic networking environment (AD/DNS/DHCP/GW) for such a LAB in the next blog post. Please be patient – I know that localhost.localdomain looks as bad as self-signed certificates.
