Adding a 10G-Connected Synology NAS into My Home Lab: My Passion for Hybrid Cloud Labs Continues

This blog explores building a powerful VMware lab from a budget-friendly 24-core workstation, a Synology NAS with SSD storage, and a 10GbE networking setup, including a planned shift from Mikrotik to Fortinet gateways. A hands-on guide to cost-effective, enterprise-ready cloud migration and virtualization.

Back in 2021, I was really excited about running a nested virtualized VMware LAB on a creator workstation with 128GB of RAM and 24 logical cores. In terms of computing power (CPU and RAM), I am super impressed with the AMD Ryzen™ Threadripper™ PRO 7995WX, boasting 96 cores. However, I’d probably have a much longer conversation with my wife if I accidentally ordered this CPU from our shared bank account, so for the sake of family peace, I decided against it. I can still make do with my 24-core, 3-year-old architecture to keep my hands-on experience with the latest technologies up to date, especially in a hybrid environment, using my lab as a source for migration and modernization demos.

Maybe one day, when prices drop into a more affordable range – like with some of my retro computer builds based on flagship processors from the 90s and 00s – I’ll consider an AMD Ryzen™ Threadripper™ processor.

Nested virtualized VMware LAB on current creator workstation vs. older physical servers – Cloud Migration Blog by Attila Macskasy

One challenge with local disks under VMware Workstation or Hyper-V (even when shared) is that they aren’t real storage solutions. Personally, I prefer shared volumes such as iSCSI, NFS, or CIFS/SMB to back VMware vSphere, Hyper-V, or any kind of container/Kubernetes lab.

So, the question was: how can I provide real storage to these nested virtualized nodes? The solution I came up with was to invest in an SSD-based Synology system with 64GB of RAM and 10GbE NICs.

This setup allows Synology to run a few VMs that need to be always on (such as network gateway appliances, domain controllers, and databases), while also providing iSCSI, NFS, or CIFS/SMB over a 10GbE connection to the single server running all the nested clusters. It might seem strange to have just one storage system and one workstation, but trust me – this can function like a large production data center in many scenarios.
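To sanity-check that the NAS is actually reachable on the shared-storage protocols before pointing any hypervisor at it, a quick script like the one below helps. This is just a minimal sketch: the NAS address is a placeholder for whatever IP the 10GbE interface gets, and the ports are simply the well-known defaults for NFS (2049), iSCSI (3260), and SMB (445).

```python
# check_nas_services.py - minimal sketch to verify the NAS answers on the
# shared-storage ports before pointing vSphere/Hyper-V at it.
# The IP below is a placeholder for the NAS's 10GbE interface address.
import socket

NAS_IP = "192.168.10.2"          # placeholder: adjust to your 10GbE link
SERVICES = {
    "NFS": 2049,                 # NFS datastores for vSphere
    "iSCSI": 3260,               # block storage (LUNs)
    "SMB/CIFS": 445,             # file shares for Hyper-V / general use
}

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, port in SERVICES.items():
        status = "reachable" if port_open(NAS_IP, port) else "NOT reachable"
        print(f"{name:9s} (tcp/{port}): {status}")
```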

I’m not entirely sure why gamers need a CAT8 S/FTP patch cable (Vention, rated for 40Gbps at 2000MHz), but it works great for the direct link between my workstation and the NAS.
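A direct Cat8 link should have no trouble carrying 10GbE, but it’s worth a quick sanity check before blaming storage for slow VMs. Below is a rough sketch of a raw TCP throughput test: run the server part on one end of the link and the client on the other. It is far less accurate than a dedicated tool like iperf3, and a single Python stream usually won’t saturate 10Gbps, but it will immediately show whether the link is stuck at 1GbE speeds.

```python
# tcp_throughput.py - rough point-to-point throughput check over the 10GbE link.
# Usage:  python tcp_throughput.py server          (on one end of the link)
#         python tcp_throughput.py client <host>   (on the other end)
# This is only a sanity check; use iperf3 for accurate numbers.
import socket
import sys
import time

PORT = 5201                      # arbitrary test port (same default as iperf3)
CHUNK = 1024 * 1024              # 1 MiB send buffer
DURATION = 5.0                   # seconds to transmit

def server() -> None:
    with socket.create_server(("", PORT)) as srv:
        print(f"Listening on tcp/{PORT} ...")
        conn, addr = srv.accept()
        print(f"Client connected from {addr[0]}")
        total = 0
        with conn:
            while data := conn.recv(CHUNK):
                total += len(data)
        print(f"Received {total / 1e9:.2f} GB")

def client(host: str) -> None:
    payload = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as sock:
        sent, start = 0, time.monotonic()
        while time.monotonic() - start < DURATION:
            sock.sendall(payload)
            sent += len(payload)
    elapsed = time.monotonic() - start
    print(f"Sent {sent / 1e9:.2f} GB in {elapsed:.1f}s "
          f"= {sent * 8 / elapsed / 1e9:.2f} Gbit/s (approx.)")

if __name__ == "__main__":
    if sys.argv[1:2] == ["server"]:
        server()
    elif len(sys.argv) == 3 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        print("usage: tcp_throughput.py server | client <host>")
```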

This is my three-year-old, budget-friendly (marriage-approved) workstation choice.

I went with an 8-bay setup, ordering the DiskStation® DS1821+ from Synology and adding the E10G18-T1 NIC for a direct crosslink to the workstation that runs the entire nested virtualized data center.

When it came to RAM, I apologize to Synology for not purchasing memory modules from their store. I found larger 64GB modules at a much better price, so I went with OWC, which worked perfectly fine.

As for the disks, I decided on a two-tier layout: Kingston enterprise-grade SSDs for the VM volumes, and a RED array for archiving.

The SSDs were more than sufficient for the VMs, and for now I don’t need an SSD cache in front of the archive array.

Unboxing all this was so much fun!

As you can see, the system comes with four 1GbE LAN ports by default.

So, I installed everything.

This isn’t my first Synology setup, and it definitely won’t be my last. I’m quite familiar with how to install the disks.

Adding RAM and the 10G NIC was pretty straightforward.

The final result looks like this:

I went with RAID 5 for the pools, with one volume on each. Over time, I’ll probably tweak this setup, especially once I introduce iSCSI over the 10GbE link to the workstation PC.
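For capacity planning, RAID 5 spreads one disk’s worth of parity across the array, so usable space is roughly (number of disks − 1) × the smallest disk. A tiny sketch of that arithmetic – the disk sizes below are placeholders, not my exact layout:

```python
# raid5_capacity.py - rough usable-capacity estimate for a RAID 5 pool.
# RAID 5 stripes data across all disks and spends one disk's worth of
# capacity on distributed parity, so usable space = (n - 1) * smallest disk.
# The disk sizes below are placeholders, not my actual layout.

def raid5_usable_tb(disk_sizes_tb: list[float]) -> float:
    """Usable capacity of a RAID 5 array, limited by the smallest member."""
    if len(disk_sizes_tb) < 3:
        raise ValueError("RAID 5 needs at least 3 disks")
    return (len(disk_sizes_tb) - 1) * min(disk_sizes_tb)

if __name__ == "__main__":
    ssd_pool = [1.92, 1.92, 1.92, 1.92]   # e.g. four enterprise SSDs (TB)
    hdd_pool = [8.0, 8.0, 8.0, 8.0]       # e.g. four archive HDDs (TB)
    print(f"SSD pool usable: ~{raid5_usable_tb(ssd_pool):.2f} TB")
    print(f"HDD pool usable: ~{raid5_usable_tb(hdd_pool):.2f} TB")
```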

That’s it for now! In my next blog article, I’ll dive into building the best lab for incredible performance at an affordable price.

Most recently, I used this lab to test VMware alternatives such as Windows Server 2025 with System Center Virtual Machine Manager 2025, Windows Admin Center, Azure Arc, and Azure Stack HCI, and I’m eagerly awaiting the newly announced Azure Local solution.

I’m planning on setting up Kubernetes clusters and am especially excited about deploying Red Hat OpenShift.

I plan to deploy Fortinet gateways, which are more widely used and enterprise-ready than my current Mikrotik-based VPN setup (connecting my lab to Azure, AWS, and GCP).

I’ll approach this lab in a repeatable, automated way using DevOps practices, likely leveraging Terraform providers, plus Ansible where in-guest provisioning is necessary.
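Even a thin wrapper around the Terraform CLI goes a long way toward making lab rebuilds repeatable. Here is a minimal sketch of that idea; the directory layout and the single-plan workflow are my assumptions rather than a finished pipeline, and Ansible would be layered on afterwards for in-guest configuration.

```python
# rebuild_lab.py - minimal sketch: drive the Terraform CLI from Python so the
# lab can be torn down and rebuilt with one command. The directory layout is
# an assumption; Ansible would handle in-guest configuration afterwards.
import subprocess
import sys
from pathlib import Path

LAB_DIR = Path("terraform/lab")   # placeholder path to the lab's .tf files

def tf(*args: str) -> None:
    """Run a Terraform subcommand in the lab directory and fail loudly."""
    cmd = ["terraform", f"-chdir={LAB_DIR}", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    action = sys.argv[1] if len(sys.argv) > 1 else "apply"
    tf("init", "-input=false")
    if action == "destroy":
        tf("destroy", "-auto-approve")
    else:
        tf("plan", "-out=lab.tfplan")
        tf("apply", "lab.tfplan")   # applying a saved plan skips the approval prompt
```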

Great things ahead for the holidays! But of course, family comes first. I’ll do my best, so stay tuned!

Happy Holidays!

Attila

