Bulk-import based Azure and AWS price comparison

In recent months, I’ve assisted customers in rapidly generating Azure and AWS pricing estimates, a crucial step for securing vendor funding in lift-and-shift migration projects. When we nominate an opportunity, cloud vendors need a clear understanding of the potential workload, as larger workloads often attract more sponsorship. Both the Azure Migration and Modernization Program (AMMP) and AWS Migration Acceleration Program (MAP) operate on the same principle: accurate cloud consumption estimates, backed by detailed workload lists, are essential to demonstrate the project’s scope and secure the necessary support.

AWS Migration Evaluator (formerly TSO Logic)

This article was written before I had a chance to test AWS Migration Evaluator (Features – Build A Business Case For AWS – Amazon Web Services), which offers a very nice template for data import (Migration Evaluator Resources – Build A Business Case For AWS – Amazon Web Services).

I can’t wait to test it, learn more, and compare the results with Azure Migrate. I have high expectations, as AWS offers quite mature tools for the discovery/assessment phase.

AWS Migration Evaluator data format

The AWS template has the following columns, shown here with example values:

  Server Name: Apache01
  CPU Cores: 4
  Memory (MB): 4096
  Provisioned Storage (GB): 500
  Operating System: Windows Server 2012 R2
  Is Virtual?: TRUE
  Hypervisor Name: Host-1
  Cpu String: Intel Xeon E7-8893 v4 @ 3.2GHz
  Environment: Production
  SQL Edition: SQL Server 2012 Enterprise
  Application: Service Now
  Cpu Utilization Peak (%): 60.00%
  Memory Utilization Peak (%): 95.00%
  Time In-Use (%): 100.00%
  Annual Cost (USD): 3400
  Storage Type: HDD

Migration Evaluator is deployed on-premises and leverages read-only access to VMware, Hyper-V, Windows, Linux, Active Directory and SQL Server infrastructure.

Source: Cloud Business Case & Migration Plan – Amazon Migration Evaluator – AWS

Azure Migrate – RVTools (VMware vCenter Server VM list) or bulk import CSV (any source, manual)

Azure Pricing Calculator is an online tool designed for calculating costs by adding resources individually. While it does not support bulk import, Azure Migrate – Cloud Migration Tool | Microsoft Azure does. Azure Migrate offers a seamless way to discover, assess, and migrate workloads, and it supports two discovery methods: online discovery, where a migration appliance connects to Hyper-V or VMware vCenter sources, and offline discovery through bulk import of custom/manual template-based CSV files or RVTools exports. Online discovery is obviously the better way, because it collects performance data as well.

Source: Azure Migrate appliance – Azure Migrate | Microsoft Learn
Azure Migrate’s key capabilities include:
  1. Discovery and Assessment: Azure Migrate can discover on-premises servers and assess their readiness for migration to Azure. It evaluates VM compatibility, performance metrics, and cost estimates.
  2. Migration Guidance: The tool offers step-by-step guidance on migrating workloads, including application dependencies and infrastructure mapping.
  3. Integrated Tools: Azure Migrate integrates with other Azure services like Azure Site Recovery and Database Migration Service, offering a holistic migration solution.
This is how to download a CSV template from the Azure Portal / Azure Migrate service.

RVTools data format for Azure Migrate

For VMware vSphere sources, I highly recommend exporting RVTools data and importing it into Azure Migrate to create an offline estimate. RVTools – Download (robware.net) is straightforward to download and install, and it quickly connects to your vCenter Server to extract all the information Azure Migrate needs to create an assessment.
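RVTools also has a command-line batch mode, which is convenient if you want to script the export. The sketch below is written from memory, so treat the switch names as assumptions and verify them against the RVTools documentation; the vCenter name, account, and paths are placeholders:

RVTools.exe -s vcenter01.contoso.local -u svc_rvtools -p "MyPassword" -c ExportAll2xlsx -d C:\Exports -f vcenter01.xlsx

The resulting .xlsx can then be uploaded to Azure Migrate as an RVTools import.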

Azure Migrate generic CSV template data format

If the vCenter/RVTools option is not available, you can alternatively provide the necessary data in the required CSV format. Below is a snapshot of the AzureMigrateimporttemplate.csv, highlighting the four most important columns needed for a quick estimate (excluding historical performance data like disk IOPS or network throughput).

The required data includes Server name, Cores, Memory (In MB), and OS name. This is quite similar to the AWS Migration Evaluator template.

This is the Azure Migrate CSV template for bulk import.
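For illustration, minimal rows in this CSV might look like the following (the server names and sizes are made up, the column headers are approximated from the template, and the real file contains many more optional columns):

*Server name,Cores,Memory (In MB),OS name
apache01.contoso.local,4,4096,Windows Server 2012 R2
sql01.contoso.local,8,16384,Windows Server 2016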

AWS Calculator – bulk import of Excel template

AWS Pricing Calculator allows users to input specific details about their current on-premises infrastructure or planned AWS usage, either through the portal or via bulk Excel-based import. Key features include:

  1. Detailed Cost Estimates: Users can receive detailed estimates for a wide range of AWS services, from computing and storage to networking and databases.
  2. Custom Scenarios: The calculator supports the creation of custom pricing scenarios to match unique business needs.
  3. Export and Share: Results can be exported and shared with stakeholders for collaborative decision-making.

AWS calculator bulk-import data format

Below is a snapshot of the Amazon_EC2_Instances_BulkUpload_Template_Commercial.xlsx, which is quite different from the Azure Migrate and AWS Migration Evaluator templates.

The AWS Excel template for bulk import.

The idea of grouping by Instance Type saves time; however, you need to multiply rows by Operating System (Linux/Windows). Below is my mapping between the AWS Calculator and Azure Migrate templates.

Description – I used this to map to Azure Migrate’s *Server name (FQDN), because there is no other field that identifies a specific machine. Problems start if one line represents multiple instances (see Number of Instances).

Operating System – in Azure Migrate, we have a more sophisticated (detailed) *OS Name.

Number of Instances – in the Azure Migrate import, each line represents one VM; here, you can group servers and multiply based on Instance Type.

Instance Type – described below.
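To make the mapping concrete, a single row grouping four identical Linux web servers might look like this (only the four mapped columns are shown; the server names and instance type are made-up examples):

  Description: web01-web04 (frontend farm)
  Operating System: Linux
  Number of Instances: 4
  Instance Type: m5.xlarge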

Instance Type is your most important column, and unfortunately it is quite different from Azure Migrate’s Cores and Memory. You can try getting help from large language models like OpenAI’s to suggest the best AWS instance type based on Cores and Memory alone; however, this is not the most accurate way of getting the job done. A better way is to use the AWS instance type selector (Compute – Amazon EC2 Instance Types – AWS), but it is very time consuming (one by one). I prefer using the AWS CLI with the describe-instance-types command, as in my example below. Even better is to use Migration Evaluator, because its bulk import template uses Cores and Memory, similar to the Azure Migrate import CSV, so you don’t need to research AWS instance types at all.

aws ec2 describe-instance-types --filters "Name=vcpu-info.default-cores,Values=<min-cores>,<max-cores>" "Name=memory-info.size-in-mib,Values=<min-memory>,<max-memory>"

Replace <min-cores>, <max-cores>, <min-memory>, and <max-memory> with your specific requirements; memory values are in MiB (1 GiB = 1024 MiB). Note that EC2 filters match the listed values exactly (joined with OR) rather than as a numeric range, so list every acceptable value.
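For illustration, to look for instance types with exactly 4 default cores and 16 GiB (16384 MiB) of memory (example values of my choosing), you could run the command below; the --query expression trims the output to the instance type names:

aws ec2 describe-instance-types --filters "Name=vcpu-info.default-cores,Values=4" "Name=memory-info.size-in-mib,Values=16384" --query "InstanceTypes[].InstanceType" --output text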

There are several ways to automate executing the command above and loading the resulting instance types into the AWS Calculator template, starting from data in the Azure Migrate format; one such sketch is shown below.
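As a minimal sketch of such automation (my own illustration, assuming a simplified servers.csv with name, cores, and memory columns rather than the full Azure Migrate template), a shell loop can print candidate instance types per server:

# servers.csv is assumed to hold rows like: apache01,4,4096 (name, cores, memory in MB).
# Azure Migrate's MB column is treated as MiB here, which lines up for the usual power-of-two sizes.
while IFS=, read -r name cores mem_mb; do
  echo "== ${name} =="
  aws ec2 describe-instance-types \
    --filters "Name=vcpu-info.default-cores,Values=${cores}" \
              "Name=memory-info.size-in-mib,Values=${mem_mb}" \
    --query "InstanceTypes[].InstanceType" --output text
done < servers.csv

From the returned candidates, you still have to pick one type per row for the Instance Type column in the Calculator template.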

Keep in mind that there are many configuration options beyond just cores and memory. As an architect, it’s your responsibility to ensure the target cloud service meets the source system’s requirements while offering the best possible service.
