The perfect nested virtualization-based demo LAB: adding Template VMs (and vSAN)

Recently, I installed several template virtual machines in my lab. My primary objective is to automate large-scale migrations to the AWS Cloud using the AWS CloudEndure Migration Factory Solution. The secondary objective is to test VMware HCX-based migrations to the cloud. This article gives you an idea of how to prepare a vSAN-based nested LAB to achieve similar goals.

List of template VMs I deployed so far.

It was an incredible experience to deploy operating systems released over the last two decades. Amazon Linux 2 and VMware Photon OS were new to me. I was surprised by how similar all the Linux installations are (Oracle Enterprise Linux, Fedora, CentOS), and the SUSE installation wizard was nice. I used FreeBSD a lot in the past: the ports collection, compiling, and it is the same story in 2022. On the Windows side, I deployed multiple server versions. The oldest, Windows Server 2008 R2, was tricky with VMware Tools, while the newest, Windows 11, actually requires you to add a TPM, so I kept it at the Workstation level (no nested virtualization).

I will check at some point whether a TPM is possible with nested virtualization (via a virtual TPM), so that Windows 11 can run in nested vSphere, but this is not important for me right now. VDI pools on Windows 10 are fine for testing VMware Horizon in hybrid deployments (with the cloud) or for understanding how Horizon Universal adds value.

I am going to use Terraform to automate the deployment of lab migration workloads from the recently (manually) deployed templates; a rough sketch of what that can look like follows the link below. I will then add NSX-T and HCX to this lab and connect it with the cloud, starting with AWS. As mentioned above, my primary objective is to gain knowledge of mass migration tools such as the AWS CloudEndure Migration Factory.

Coordinate and automate large scale migrations to the AWS Cloud using the AWS CloudEndure Migration Factory Solution – AWS CloudEndure Migration Factory Solution (amazon.com)
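To give an idea of what that automation looks like, here is a minimal Terraform sketch that clones migration source VMs from one of the templates. It assumes the Terraform vSphere provider, and every inventory name in it (Lab-DC, Lab-Cluster, vsanDatastore, tpl-centos7, the VM names, the vCenter address) is a placeholder for my lab rather than anything fixed.

```hcl
# Minimal sketch: clone migration source VMs from an existing template.
# All inventory names below are placeholders for this lab.
provider "vsphere" {
  user                 = "administrator@vsphere.local"
  password             = "REPLACE-ME"          # use a variable or environment variable in practice
  vsphere_server       = "vcenter.lab.local"
  allow_unverified_ssl = true                  # lab-only, self-signed vCenter certificate
}

data "vsphere_datacenter" "dc" {
  name = "Lab-DC"
}

data "vsphere_datastore" "vsan" {
  name          = "vsanDatastore"
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_resource_pool" "pool" {
  name          = "Lab-Cluster/Resources"
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_network" "net" {
  name          = "VM Network"
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_virtual_machine" "template" {
  name          = "tpl-centos7"
  datacenter_id = data.vsphere_datacenter.dc.id
}

resource "vsphere_virtual_machine" "migration_source" {
  count            = 3                         # a few identical source VMs for migration waves
  name             = "centos7-app0${count.index + 1}"
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.vsan.id
  num_cpus         = 2
  memory           = 2048
  guest_id         = data.vsphere_virtual_machine.template.guest_id
  scsi_type        = data.vsphere_virtual_machine.template.scsi_type

  network_interface {
    network_id = data.vsphere_network.net.id
  }

  disk {
    label            = "disk0"
    size             = data.vsphere_virtual_machine.template.disks.0.size
    thin_provisioned = data.vsphere_virtual_machine.template.disks.0.thin_provisioned
  }

  clone {
    template_uuid = data.vsphere_virtual_machine.template.id
  }
}
```

One terraform apply then gives a repeatable set of source VMs on the vSAN datastore, which is exactly the kind of inventory the migration factory tooling needs to pick up.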

This is my ESXi host. Since the Workstation host has 128 GB of RAM, each nested server runs with 32 GB. The host has 24 cores, and 8 are assigned to each guest. I think there will be enough capacity for NSX-T and HCX testing (accounting for the management overhead, the template VMs, and basic production-like workloads for migration testing).

vSAN works perfectly in a nested-virtualized environment. I used virtual NVMe disks for the cache tier and virtual SCSI disks for the capacity tier (both backed by physical NVMe). Each vSphere host uses a different physical NVMe disk to keep vSAN performance consistent. In fact, judging by OS installation times, vSAN is as fast as a local disk (no IOPS or other sophisticated performance testing was done).
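If I later want this vSAN layout captured in code as well, the disk groups can be declared on the cluster resource. This is a rough sketch only: it assumes the vsphere_compute_cluster resource's vsan_enabled and vsan_disk_group arguments, reuses the vsphere_datacenter data source from the sketch above, and uses placeholder host and device names (look up the real canonical names with esxcli storage core device list on each nested host). It also assumes the virtual disks show up with the same device names on every nested host, which is the case in an identical nested setup like this one.

```hcl
# Rough sketch: enable vSAN on the nested cluster and claim one disk group
# per host (virtual NVMe as cache, virtual SCSI as capacity). Host and
# device names are placeholders for this lab.
data "vsphere_host" "esxi" {
  count         = 3
  name          = "esxi-0${count.index + 1}.lab.local"
  datacenter_id = data.vsphere_datacenter.dc.id
}

resource "vsphere_compute_cluster" "lab" {
  name            = "Lab-Cluster"
  datacenter_id   = data.vsphere_datacenter.dc.id
  host_system_ids = data.vsphere_host.esxi[*].id

  vsan_enabled = true

  vsan_disk_group {
    cache   = "t10.NVMe____VMware_Virtual_NVMe_Disk________00000001"  # cache tier (virtual NVMe)
    storage = ["mpx.vmhba0:C0:T1:L0"]                                 # capacity tier (virtual SCSI)
  }
}
```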

There is no distributed switch or NSX-T deployment yet, just good old standard switches, and they work very nicely in the nested VMware Workstation 16 environment. I will introduce more complexity by adding gateways and VLANs later; for now, standard host-only networks with DHCP and defined subnets work very well for vMotion and vSAN traffic.
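The VMkernel side of that traffic can also go into the same Terraform configuration later. Here is a rough sketch for one host, assuming the provider's vsphere_host_virtual_switch, vsphere_host_port_group, and vsphere_vnic resources and reusing the host data source from the previous sketch; vmnic1 and the port group name are placeholders for whichever host-only network carries the traffic.

```hcl
# Rough sketch: a standard switch with a dedicated port group and a DHCP
# VMkernel adapter for vMotion on the first nested host. The vSAN VMkernel
# adapter follows the same pattern with services = ["vsan"].
resource "vsphere_host_virtual_switch" "vss" {
  name             = "vSwitch1"
  host_system_id   = data.vsphere_host.esxi[0].id
  network_adapters = ["vmnic1"]   # uplink sitting on a Workstation host-only network
  active_nics      = ["vmnic1"]
  standby_nics     = []
}

resource "vsphere_host_port_group" "vmotion" {
  name                = "pg-vmotion"
  host_system_id      = data.vsphere_host.esxi[0].id
  virtual_switch_name = vsphere_host_virtual_switch.vss.name
}

resource "vsphere_vnic" "vmotion" {
  host      = data.vsphere_host.esxi[0].id
  portgroup = vsphere_host_port_group.vmotion.name
  services  = ["vmotion"]

  ipv4 {
    dhcp = true                   # address and subnet come from the host-only network's DHCP
  }
}
```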

That’s it for today. I encourage everyone to build a nested virtualized LAB. The best part is this: it is fast, it is silent (a workstation computer), and there is one button to hibernate the entire datacenter when I go to sleep. In practice, I hibernate the Windows 10 machine running Workstation 16, together with the vCenter Server, the vSphere nodes, and everything running inside them, even vSAN. I hope this continues to work as the “datacenter” gets more complex (i.e. after adding NSX-T).

I am also planning to add a VMware Kubernetes engine (Tanzu) to see what the management of cloud-native workloads looks like in vSphere and how migration tools can pick up workloads from there.

I will not forget about VMware Site Recovery Manager and Horizon View, and I am also planning to test vRealize Automation. Large enterprises often use such VMware products (not only vSphere), and we need answers about how they work in a hybrid cloud deployment and how they integrate with cloud-native workloads.

Stay tuned, the best is yet to come!

Related posts

Comparison of VMware relocation options in public cloud

I keep researching this topic from several perspectives: regional availability, provided architecture, most popular use cases, VMware software versions, provided hardware configuration, and finally the price of a 3-node vSphere cluster in the Cloud.

AWS MiGratioN, GCP Migrate4Cloud, and Azure Migrate pros and cons

I have been testing and comparing first-party migration tools for more than 5 years. I have seen these tools getting better over the years, with major improvements driven by acquisitions, end-of-life products, and continuous change, and the improvements are not just to the tools but to the methodology around them: well-architected frameworks, the CAF, the concept of the landing zone, the 5 Rs becoming 7 Rs. In this article, I am sharing my experiences with the most commonly used cloud migration tools.

Oracle Database service for Azure – connecting Azure VM and Power App

I have connected a Database Admin Azure VM running Oracle’s SQL Developer (Windows version), and a Microsoft Power Platform application displaying Oracle’s HR demo schema (via an on-premises data gateway on the Azure VM and Power Platform’s Oracle Premium Connector), to the same Oracle Database hosted on OCI.

Oracle Database service for Azure – linking subscriptions

As part of my multi-cloud research, I wanted to test Oracle Database Service for Azure. In this article, you will see how to sign up for the new service and how to link the Oracle and Azure accounts. I used the Frankfurt datacenters, an Azure MSDN subscription, and a paid OCI account (the Free Tier does not work), with my private Azure Active Directory.

Why multi-cloud is the way to go? VMware and Oracle perspective.

While cloud migration is still a popular topic during customer discussions, I have noticed that more and more customers are considering an exit plan from one cloud to another (to avoid vendor lock-in), which means there is growing demand for multi-cloud migration. VMware, Oracle, and SAP are the major workloads in on-premises data centers today. Based on my research, both VMware and Oracle are very vocal about the importance of having a multi-cloud strategy.

AWS Site-to-Site VPN using MikroTik RouterOS

There are two ways of approaching this challenge: (#1) running the MikroTik virtual appliance (CHR) in AWS, or (#2) using a Virtual Private Gateway, the “cloud-native” networking solution provided by AWS. Each solution has its own benefits.