WordPress Migration Challenge from SiteGround to My Own Server

For years, my WordPress site ran smoothly on SiteGround: fast, stable, and easy to manage. But as the hosting plan renewal approached, I realized the cost had quietly grown too high for a personal tech blog. So I decided to take on a challenge: migrate the entire site (files, database, and SSL) to my own self-hosted LAMP server running on Proxmox, with automation powered by AI-assisted Bash scripts.

Step 1: Building the Local LAMP Stack

The foundation was an Ubuntu 24.04 VM on my Proxmox host.
We automated the entire setup with a shell script that:

  • Installed Apache, PHP-FPM 8.2, and MariaDB
  • Created a self-signed certificate for early testing
  • Configured the cloudmigration.blog virtual host
  • Tuned PHP settings to match SiteGround’s environment (using phpinfo() as a reference)

A verification script ensured everything — from modules to sockets and firewall — was correctly configured before proceeding.
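The bootstrap portion of that script looked roughly like this. It is a sketch, not the exact script: it assumes the ondrej/php PPA for PHP 8.2 (Ubuntu 24.04 ships a newer PHP by default), and the package list is illustrative.

```shell
#!/usr/bin/env bash
# Sketch of the LAMP bootstrap for Ubuntu 24.04.
# Assumption: the ondrej/php PPA provides the php8.2-* packages.
set -euo pipefail

install_lamp() {
  sudo add-apt-repository -y ppa:ondrej/php
  sudo apt-get update
  sudo apt-get install -y apache2 mariadb-server \
    php8.2-fpm php8.2-mysql php8.2-gd php8.2-curl php8.2-xml php8.2-mbstring

  # Wire Apache to PHP-FPM and enable TLS for the self-signed phase
  sudo a2enmod proxy_fcgi setenvif ssl rewrite
  sudo a2enconf php8.2-fpm
  sudo systemctl reload apache2
}

# install_lamp   # run on the target VM
```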

Step 2: Migrating Files

Initially, I used FTP to pull down all files from /public_html on SiteGround.
That worked — but it was painfully slow.
So we switched gears: using SSH with key-based authentication, we compressed the entire directory remotely and then transferred it securely via SCP.
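The compress-then-copy step can be sketched as below; the SSH endpoint and paths are placeholders, and key-based authentication is assumed to be set up already.

```shell
#!/usr/bin/env bash
# Sketch of the SSH-based file pull. Host, user, and paths are placeholders.
set -euo pipefail

REMOTE="u1234@ssh.siteground.example"   # hypothetical SSH endpoint
REMOTE_DIR="public_html"
ARCHIVE="site-$(date +%Y%m%d).tar.gz"

pull_site() {
  # Compress remotely once, then transfer one file instead of thousands
  ssh "$REMOTE" "tar -czf /tmp/$ARCHIVE -C ~ $REMOTE_DIR"
  scp "$REMOTE:/tmp/$ARCHIVE" .
  ssh "$REMOTE" "rm /tmp/$ARCHIVE"   # clean up the remote temp file
  tar -xzf "$ARCHIVE"                # unpack locally into ./public_html
}

# pull_site
```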

To ensure integrity, we compared:

  • File counts
  • Directory trees
  • Byte sizes

When both the FTP and SSH copies matched perfectly, we knew the content was identical.
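The three checks can be folded into one small function; this is a minimal local sketch that compares file counts, relative directory trees, and total byte size between two copies.

```shell
#!/usr/bin/env bash
# Minimal integrity check between two copies of a site tree.
set -euo pipefail

compare_trees() {
  local a="$1" b="$2"
  local count_a count_b size_a size_b
  count_a=$(find "$a" -type f | wc -l)
  count_b=$(find "$b" -type f | wc -l)
  size_a=$(find "$a" -type f -printf '%s\n' | awk '{s+=$1} END {print s+0}')
  size_b=$(find "$b" -type f -printf '%s\n' | awk '{s+=$1} END {print s+0}')
  # Compare relative paths, not absolute ones
  if [ "$count_a" = "$count_b" ] && [ "$size_a" = "$size_b" ] \
     && diff <(cd "$a" && find . | sort) <(cd "$b" && find . | sort) >/dev/null; then
    echo "MATCH"
  else
    echo "MISMATCH"
  fi
}
```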

Step 3: Exporting and Importing the Database

WordPress data lives in MySQL, so the next step was exporting the remote database.
Using SSH, we:

  1. Parsed wp-config.php to extract DB credentials.
  2. Ran a remote mysqldump with gzip compression.
  3. Downloaded and imported it locally into MariaDB.
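The three steps can be sketched like this. The wp-config.php parser assumes the conventional one-line, single-quoted define() style, and the SSH endpoint is a placeholder.

```shell
#!/usr/bin/env bash
# Sketch of the database migration: parse credentials, dump remotely, import.
set -euo pipefail

REMOTE="u1234@ssh.siteground.example"   # hypothetical SSH endpoint

wp_config_get() {
  # Extract a define()'d value (e.g. DB_NAME) from a wp-config.php file.
  local key="$1" file="$2"
  sed -n "s/.*define( *'$key', *'\([^']*\)'.*/\1/p" "$file"
}

migrate_db() {
  local cfg="wp-config.php"   # already pulled down with the site files
  local name user pass
  name=$(wp_config_get DB_NAME "$cfg")
  user=$(wp_config_get DB_USER "$cfg")
  pass=$(wp_config_get DB_PASSWORD "$cfg")
  # Dump remotely with gzip, streamed straight over SSH
  ssh "$REMOTE" "mysqldump -u'$user' -p'$pass' '$name' | gzip" > dump.sql.gz
  # Import into the local MariaDB
  gunzip -c dump.sql.gz | mysql wordpress
}
```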

We didn’t stop there — a comparison script listed every table and its row count in both the remote and local databases.
When all 74 tables matched, we could confidently say:
the database migration was perfect.

Step 4: Replacing the SiteGround SSL Certificate

After DNS was moved from SiteGround to GoDaddy, I pointed cloudmigration.blog to my public IP (46.139.14.94) via my MikroTik router’s NAT.

Once HTTP and HTTPS ports were open, a script verified DNS propagation across multiple resolvers (Google, Cloudflare, Quad9).
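A propagation check along those lines can be sketched with dig, querying each resolver directly and comparing the answer to the expected A record:

```shell
#!/usr/bin/env bash
# Sketch of the DNS propagation check across Google, Cloudflare, and Quad9.
set -euo pipefail

check_dns() {
  local domain="$1" expected="$2" ok=0 answer
  for resolver in 8.8.8.8 1.1.1.1 9.9.9.9; do
    answer=$(dig +short "@$resolver" "$domain" A | head -n1)
    if [ "$answer" = "$expected" ]; then
      echo "$resolver: OK ($answer)"
    else
      echo "$resolver: not yet ($answer)"
      ok=1
    fi
  done
  return $ok
}

# check_dns cloudmigration.blog 46.139.14.94
```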

Then we used Certbot to automatically:

  • Request a Let’s Encrypt certificate
  • Replace the old self-signed cert
  • Update Apache’s virtual host configuration
  • Enable HTTPS redirection
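Certbot's Apache plugin handles all four items in one run, since it rewrites the virtual host and installs the redirect itself. A sketch, with the contact email as a placeholder:

```shell
#!/usr/bin/env bash
# Sketch of the Certbot run; -n makes it non-interactive, --redirect
# enables the HTTP -> HTTPS rewrite in the Apache vhost.
set -euo pipefail

issue_cert() {
  sudo apt-get install -y certbot python3-certbot-apache
  sudo certbot --apache -d cloudmigration.blog -d www.cloudmigration.blog \
    --redirect -m admin@example.com --agree-tos -n
  sudo certbot renew --dry-run   # confirm auto-renewal works
}

# issue_cert
```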

Now the blog loads securely from my own VM with a trusted certificate.

Step 5: Cleanup and Fine-Tuning

A final script handled:

  • Removing SiteGround-specific WordPress plugins
  • Fixing folder permissions (especially wp-content/uploads)
  • Setting recommended Apache and PHP security options
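The permissions fix follows the usual WordPress baseline of 755 for directories and 644 for files; a sketch, with the web root path as a placeholder:

```shell
#!/usr/bin/env bash
# Sketch of the permissions cleanup (755 dirs / 644 files).
set -euo pipefail

fix_perms() {
  local root="$1"
  find "$root" -type d -exec chmod 755 {} +
  find "$root" -type f -exec chmod 644 {} +
}

# fix_perms /var/www/cloudmigration.blog
# sudo chown -R www-data:www-data /var/www/cloudmigration.blog/wp-content/uploads
```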

With that, the migration was complete — fully automated, reproducible, and clean.

Step 6: Backup and Recovery

Disaster recovery is often overlooked — but not here.
I built a dedicated backup script that:

  • Exports the WordPress database
  • Archives the entire site directory
  • Stores both with timestamps and checksums under /home/attila/backups
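A sketch of that backup routine: the database dump needs local MariaDB credentials, so it is left commented out here, while the archive-plus-checksum part is generic.

```shell
#!/usr/bin/env bash
# Sketch of the timestamped backup with checksums.
set -euo pipefail

BACKUP_DIR="${BACKUP_DIR:-/home/attila/backups}"

backup_site() {
  local src="$1" stamp archive
  stamp=$(date +%Y%m%d-%H%M%S)
  mkdir -p "$BACKUP_DIR"
  # mysqldump wordpress | gzip > "$BACKUP_DIR/db-$stamp.sql.gz"
  archive="$BACKUP_DIR/site-$stamp.tar.gz"
  tar -czf "$archive" -C "$(dirname "$src")" "$(basename "$src")"
  # Record the checksum with a relative filename so `sha256sum -c` works in place
  ( cd "$BACKUP_DIR" && sha256sum "$(basename "$archive")" > "$archive.sha256" )
}

# backup_site /var/www/cloudmigration.blog
```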

Using these files, I can restore the entire site — from OS-level disaster to a working WordPress — in minutes.

Step 7: The Final Test

To confirm the migration was truly live:

  1. I created a new post — TEST POST FROM VM.
  2. Flushed local DNS and viewed the site from a mobile network (outside my LAN).
  3. The post appeared instantly, served over HTTPS from my home IP.

That was the moment I knew:
cloudmigration.blog was officially running on my own infrastructure.

Lessons Learned

  • Automation saves time and ensures reproducibility.
    Each script was modular — from setup to verification.
  • DNS propagation takes patience.
    It can take hours for global resolvers to catch up.
  • Self-hosting is empowering.
    With a modest VM, I now host a production-ready WordPress site with full control and zero recurring fees.

What’s Next

I plan to publish these scripts as an open-source toolkit — a fully automated “WordPress Migration Lab” anyone can use to move from shared hosting to their own infrastructure.
Future plans include containerizing the setup with Docker Compose and exploring multi-site replication across clouds.
