AWS MiGratioN, GCP Migrate4Cloud, and Azure Migrate pros and cons

I have been testing and comparing first-party migration tools for more than five years now. I have seen these tools get better over the years, with major improvements driven by acquisitions, end-of-life products, and continuous change, and improvements not just to the tools but to the methodology around them: Well-Architected, CAF, the concept of the landing zone, the 5 Rs becoming 7 Rs. In this article, I am sharing my experiences with the most commonly used cloud migration tools.

AWS MGN – Amazon Web Services – Application Migration Service

Let’s start with the agent-based or agentless dilemma. I was able to test both agent-based and agentless approaches thanks to my great VMware source lab. Some people told me that the taco-lover-hipster-style cloud-native people will never use silly-unpredictable-bottleneck-heavy agentless solutions πŸ™‚ These guys designed Netflix and similar immutable systems, so I have to admit that they are right, and I was wrong at some point about favoring agentless migrations (possible proxy/transfer gateway bottleneck design issues). Still, choosing between agent-based and agentless VM disk replication is not a religious question; each option has pros and cons. I think large enterprises still hate to install agents on production on-premises (vSphere or Hyper-V virtualized) systems unless it is a must.

It is super easy to install a replication agent, and IAM roles are pretty straightforward.
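Once an agent registers, I like to verify it from the API side too. Below is a minimal sketch of my own (not an official snippet) that uses the boto3 mgn client to list the registered source servers and their data replication state; the region is a placeholder.

```python
# Minimal sketch: list the registered MGN source servers and their replication state.
# Assumes AWS credentials are already configured; the region is a placeholder.
import boto3

mgn = boto3.client("mgn", region_name="eu-central-1")

response = mgn.describe_source_servers(filters={})
for server in response.get("items", []):
    hints = server.get("sourceProperties", {}).get("identificationHints", {})
    repl = server.get("dataReplicationInfo", {})
    print(
        server["sourceServerID"],
        hints.get("hostname", "n/a"),
        repl.get("dataReplicationState", "UNKNOWN"),
    )
```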

The interface is pretty nice and clean

If you make mistakes in the networking configuration, you will see a “Stalled” replication status and agent authentication issues.

When it comes to networking, the agent needs ports TCP/1500 (which might be blocked) and TCP/443.
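Before blaming the agent, a quick connectivity pre-flight check from the source machine saves time. Here is a tiny sketch I use for this kind of sanity test; the endpoint hostname and the replication server IP are placeholders, so substitute the actual MGN service endpoint and your staging subnet address.

```python
# Quick pre-flight check: can this source machine reach the required ports?
# Hostname/IP below are placeholders - use your real MGN endpoint and replication server.
import socket

targets = [
    ("mgn.eu-central-1.amazonaws.com", 443),  # service/API endpoint (TCP/443)
    ("10.0.0.10", 1500),                      # replication server in the staging subnet (TCP/1500)
]

for host, port in targets:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"OK      {host}:{port}")
    except OSError as exc:
        print(f"BLOCKED {host}:{port} -> {exc}")
```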

MGN is based on the CloudEndure acquisition.
After fixing the networking issues, the agent installation log looks much greener.

Agent installation on Windows is similar to Linux – quite easy

I really liked the concept of using EC2 Launch Templates in MGN.

Editing the VM startup configuration gives you lots of flexibility to reconfigure the VM (right-sizing, etc.) when it starts in the cloud.
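For example, right-sizing before cutover is just a new launch template version. A rough sketch of what that could look like with boto3; the launch template ID and the instance type are placeholders (MGN shows the template it created for each source server).

```python
# Sketch: add a new launch template version with a different instance type
# and make it the default, so the next test/cutover instance uses it.
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")  # placeholder region
template_id = "lt-0123456789abcdef0"                    # placeholder: the template MGN created

new_version = ec2.create_launch_template_version(
    LaunchTemplateId=template_id,
    SourceVersion="$Latest",
    LaunchTemplateData={"InstanceType": "t3.large"},    # the "right size" for this workload
)["LaunchTemplateVersion"]["VersionNumber"]

ec2.modify_launch_template(
    LaunchTemplateId=template_id,
    DefaultVersion=str(new_version),
)
print(f"Default launch template version is now {new_version}")
```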
While replication is ongoing, AWS MGN is heavily using port TCP/1500, as expected πŸ™‚

BTW, did you know that AWS supports MikroTik? Version 6.44.3 is officially supported as a site-to-site VPN device.
Wohoooo, replication is completed. Ready for testing.
AWS seems to have slightly more sophisticated steps for migration. But if you look at it again, it is the same as Azure Migrate: Ready for testing (replication), Test, and Cutover (migrate).
Test
Steps to test
It was a bit faster than Azure Migrate, but still, GCP is my favorite choice (speed-wise)
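Both the test and the cutover steps can also be driven from the API, which matters once you go beyond a handful of VMs. A hedged sketch, assuming the boto3 mgn client and placeholder source server IDs:

```python
# Sketch: launch test instances for a batch of source servers, and later cut over.
# Source server IDs are placeholders - take them from describe_source_servers().
import boto3

mgn = boto3.client("mgn", region_name="eu-central-1")
batch = ["s-1111111111example1", "s-2222222222example2"]  # placeholder IDs

# Launch test instances from the latest replicated snapshots.
mgn.start_test(sourceServerIDs=batch)

# ...validate the test instances, then later:
# mgn.start_cutover(sourceServerIDs=batch)
```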

Without further ado, my quick assessment:

Pros

  • combined with AWS Migration Factory and AWS Control Tower, this can possibly handle very large migrations
  • agent installation is very easy and friendly to high-scale automation
  • this is a mature solution with a long history, and not only for Linux: AWS is keen to get Windows workloads into its cloud, and it offers Database Migration, Schema Conversion, and App Migration tools as well.

Cons

  • for me, as someone who favors agentless solutions with proxy-based replication, it was a bit surprising to see that each source VM gets a target replication VM (a small individual Linux instance) to receive disk replication data. It feels like a waste of resources in the cloud, but there is no additional cost, and I understand the scalability and durability reasons (less/no bottleneck).
  • each AWS region has its own MGN; this is not a global service. If you plan to migrate into multiple regions, you need multiple MGN configurations or a custom orchestration layer on top (see the sketch after this list)
  • testing VMs is as slow as in Azure Migrate; come on, GCP made it fast. I think the admin user experience matters: no one likes to wait at the cloud console πŸ™‚
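Since MGN is regional, even a basic inventory across regions means looping over per-region clients. A minimal sketch of that orchestration idea (the region list is just an example):

```python
# Sketch: count MGN source servers per region - a starting point for the kind of
# multi-region orchestration layer you would need to build yourself.
import boto3
from botocore.exceptions import ClientError

regions = ["eu-central-1", "eu-west-1", "us-east-1"]  # example regions only

for region in regions:
    mgn = boto3.client("mgn", region_name=region)
    try:
        servers = mgn.describe_source_servers(filters={}).get("items", [])
        print(f"{region}: {len(servers)} source server(s)")
    except ClientError as exc:
        # e.g. MGN has not been initialized in this region yet
        print(f"{region}: skipped ({exc.response['Error']['Code']})")
```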

Kudos (this is how Microsoft people say thanks) to Ivan Trbovic | LinkedIn, my partner in MGN crime(s). Special thanks to Mattia Lepri Berluti | LinkedIn, who is probably the most experienced expert on this topic. Thank you, guys!

GCP M4C – Google Cloud Platform – Migrate for Compute Engine

Changing gears, let’s see what GCP can do for us. I used Aga’s blog to study how she tested M4C a little earlier (as a Google employee). At that time, I was working at Microsoft and was impressed with Azure Migrate πŸ™‚

Migrating VMs from VMware on-prem (or GCVE) to Google Compute Engine – SOFTWARE DEFINED BLOG

It starts again with pimping my VMware source system: creating users and roles for M4C to connect to vSphere.

I had a little fun with the GCP CLI and learned about projects, IAM, etc.

Well, I have to admit that I failed to generate a usable SSH key using Windows and PuTTY. I have no idea how to do it that way. But I do not care, because I found the right solution.

I was a bit upset about losing an hour deploying the VMware template and connecting to it.

Let’s follow the documentation, carefully, step by step…
After 3 attempts, all failed: with the ssh-rsa prefix, without it, with the rsa-key suffix, without it, etc. I was very angry about not knowing how to put the SSH public key into the VMware OVF deployment wizard πŸ™
I got this silly error message over and over again… Grrr…

Then I decided to use my Ubuntu VM to generate an SSH key

Please take a look at this important screenshot; this is how to nail it.
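For the record, the OVF wizard appears to want a plain OpenSSH-format public key (the single “ssh-rsa AAAA… comment” line), which is exactly what ssh-keygen on Linux produces. If you want to stay scriptable, here is a hedged sketch doing the same thing with Python’s cryptography package; the file names and the key comment are my own choices.

```python
# Sketch: generate an RSA key pair and write the public key in the
# OpenSSH one-line format ("ssh-rsa AAAA... comment") that the wizard expects.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

private_pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.OpenSSH,
    encryption_algorithm=serialization.NoEncryption(),
)
public_line = key.public_key().public_bytes(
    encoding=serialization.Encoding.OpenSSH,
    format=serialization.PublicFormat.OpenSSH,
)

with open("m4c_appliance_key", "wb") as f:      # private key, keep it safe
    f.write(private_pem)
with open("m4c_appliance_key.pub", "wb") as f:  # paste this single line into the OVF wizard
    f.write(public_line + b" admin@m4c-appliance\n")
```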

Finally.
Ta-daaaa! Connected. That’s it. I nailed it. πŸ™‚

The appliance command-line configuration was kind of easy, no drama; I just followed the documentation.

Discovery completed; I see a list of VMs from vCenter. Let’s do the migration.
It took me some time to understand the purpose of sources/migrations/groups/targets. But I think this is pretty handy. Just a little different.
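Under the hood, this hierarchy (sources with migrating VMs underneath, plus groups and target projects) is exposed through the VM Migration API. A rough sketch, assuming the google-cloud-vm-migration client library and placeholder project/region names; method names may differ slightly between library versions.

```python
# Rough sketch: walk the M4C hierarchy - sources and the migrating VMs under them.
# Project, region, and source names are placeholders.
from google.cloud import vmmigration_v1

client = vmmigration_v1.VmMigrationClient()
parent = "projects/my-project/locations/europe-west1"  # placeholder

for source in client.list_sources(parent=parent):
    print(f"Source: {source.name}")
    for vm in client.list_migrating_vms(parent=source.name):
        print(f"  MigratingVm: {vm.name}  state={vm.state.name}")
```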
Replication was completed for my VM, and replication started for Sandor’s VM.
Replication is heavy on TCP/443
Replication is ready.
So you need to start Synchronization… πŸ™‚
This is what mapping resources look like. It is called Target details. You need to set the VM parameters in GCP.
The network needs to be created before the test run.
Create Test-Clone. This is my favorite stuff. Very very fast.
Starting test VM
Done. Works. Wow.

I got a public IP for the test VM.

It seems cleanup is manual πŸ˜‰ You need to delete the clone VM yourself. No problem, it took no time.

Pros

  • Speed. Mind-blowingly fast. Finally, a test migration in less than a minute. This is how it should be. Well done, Google.
  • A modern interface and excellent APIs and automation opportunities.
  • Easy to set up, especially if you generate the appliance SSH key on Linux πŸ™‚

Cons

  • Still a bit immature, i.e. new. I am not sure about large-scale migration capacity.
  • Semi-automated test cleanup
  • Lack of/limited ecosystem, especially compared to Microsoft Azure Migrate

Kudos to SΓ‘ndor Tibor PolΓ‘k | LinkedIn, who wanted to check out this tool with me as an afternoon GCP hands-on challenge for an Azure and AWS expert.

MS AZMIG – Microsoft Azure Migrate in 2019

I remember that back in 2019, Cloudamize was the only 3rd party assessment tool available as part of the Azure Migrate ecosystem.

At that time, I deployed the VMware vSphere and the Microsoft Hyper-V-based appliances side by side to see what the differences were.

I remember that besides the connection credentials for Hyper-V or vCenter, the VMware version of the appliance required you to install the VMware virtual disk tools (the VDDK). Just like today in 2022.

Version 6.7 in 2019

In 2019, working at Microsoft, I was happily using Hyper-V next to my VMware clusters. However, I never managed to connect Azure Migrate to a Hyper-V cluster, only to a single-node standalone Hyper-V host. Maybe the product was in preview (buggy), or I made a mistake in the Hyper-V cluster credentials configuration. At that time, I was a happy System Center user, up to Windows Azure Pack, deploying something called Cloud OS Network to service providers in Central Eastern Europe together with Microsoft delivery partners. Lots of fun. Early days of Azure.

I can’t have a happy life without a cool VMware source lab, so back in 2019 my VMware lab looked like this:

Please note, this is kind of multi-tenant (no vCloud Director this time), with multiple FC storage LUNs, a distributed switch (no NSX-T at that time), etc., and various templates. I cloned typical Windows and Linux-based workloads from those templates. Still, so much fun.

For the Linux source system, I used ISPConfig, which has a sense of multi-tenancy (a LAMP hosting control panel), instead of a boring single-admin WordPress. Oops, apologies AWS, the migration workshop demo is still very cool (WordPress-based). But I am a service provider geek with a hosting mindset, and I like to challenge myself with advanced source workloads.

The insider preview portal in 2019 was quite similar to the 2022 version of Azure Migrate. Below you see a 4-node basic Windows workload assessment. Obviously, everything is super green. Next time I will add some Gentoo Linux with a custom-compiled kernel/modules πŸ™‚

It was really nice to map each disk to the target; an example is an MS SQL server with multiple disks.

Replication…
Test Migration…

I do not think that MS had Azure Bastion in 2019 πŸ™‚ I used a traditional point-to-site or site-to-site VPN and tested with RDP to the private IP. Or sometimes I put a temporary jump box VM into the target network, actually networks: one for test and one for production.

If you forget to enable RDP or open the firewall on the source VM, you will have a fun time in the cloud with the serial console CLI trying to fix it – if you can. It is probably less difficult to fix it on-prem, wait for sync, and re-test.

MS AZMIG – Microsoft Azure Migrate in 2022

Let’s fast forward in time and see how Azure Migrate looks today.

First things first, this is not only for servers anymore. It can handle databases and web apps, because there is the Database Migration Service. I still remember www.movemetothecloud.net from 2014 (Introducing Azure Websites Migration Assistant | Azure Blog and Updates | Microsoft Azure), which is now part of the Azure Migration and Modernization Center | Microsoft Azure – see the Web App section.

For the Azure Migrate VMware appliance, you still need to download VDDK.

Version 6.7 in 2022 πŸ™‚ Haha. Good things do not need improvement.

The appliance UI got better…
…and you have some new authentication methods that allow the appliance to discover inside the VMs (for example, an SQL Server).

I ran a training on Azure Migrate in May 2022 for another partner. We spent so much time on understanding how to connect to vCenter with the minimum permissions needed for discovery and for the actual replication and migration.

I remember that the Microsoft documentation was outdated and the VMware vSphere 7 roles were named slightly differently, but it was still obvious how to create a custom role in vSphere and assign it on the VM/storage/network folders to follow the least-privilege concept.

Improvement in mapping on-prem and Azure resources.

We still have a similar UI to 2019 for the replication, testing, and migration.
I used text files and SQL database record inserts to check whether replication had completed as expected.
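Nothing fancy: a marker file plus a timestamped row written on the source right before cutover, then checked on the migrated VM. A hedged sketch of the SQL part, assuming pyodbc and a throwaway database/table name of my own:

```python
# Sketch: write a timestamped marker row on the SOURCE VM, then run the same SELECT
# on the migrated VM to confirm the last replicated data made it across.
# Connection string, database, and table names are placeholders.
import datetime
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=MigrationCheck;Trusted_Connection=yes;"
)
cur = conn.cursor()
cur.execute(
    "IF OBJECT_ID('dbo.ReplMarker') IS NULL "
    "CREATE TABLE dbo.ReplMarker (MarkedAt DATETIME2 NOT NULL, Note NVARCHAR(100))"
)
cur.execute(
    "INSERT INTO dbo.ReplMarker (MarkedAt, Note) VALUES (?, ?)",
    datetime.datetime.utcnow(), "pre-cutover marker",
)
conn.commit()
print("Latest marker:", cur.execute("SELECT MAX(MarkedAt) FROM dbo.ReplMarker").fetchval())
```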
It is also smart to monitor replication in vCenter by checking the appliance network traffic. Below is a VM syncing to Azure. My VMware lab is now version 7 and will be upgraded to 8 soon πŸ™‚ I miss the Windows UI, sorry, old-school mindset.

If you need more real-time and accurate information, I highly recommend monitoring the Azure Migrate appliance on your network firewall.

I have a MikroTik, and I checked the traffic on the interfaces and in the firewall, filtered to the appliance as the source IP. It only uses TCP/443, as per the documentation.
Assessment tools that integrate with Azure Migrate – rich ecosystem in 2022
Migration tools that integrate with Azure Migrate – rich ecosystem in 2022

That’s it. No drama. Azure Migrate does its job.

Pros

  • quite easy to use
  • it works for smaller workloads, not only Windows, and not only MS SQL. Microsoft loves Linux.
  • nice 3rd party ecosystem of assessment and discovery tools – that’s huge, well done

Cons

  • quite slow orchestration, especially compared to GCP M4C πŸ™‚ Starting up a single small demo VM in Azure (test migrate) from already replicated disks takes ages.
  • I like agentless. However, the SCOM/Log Analytics/dependency-mapping agent (or whatever agent it is) is still not a replication agent in the sense of what AWS MGN does and how it does it.
  • scalability: I can group VMs into assessments, and it works nicely in the range of 5–50 VMs, but I am not sure about 1000+ VMs. There is a lack of large-scale orchestration and automation without using the 3rd party ecosystem.
