Earlier this year we made the call to consolidate three WordPress sites — Rover Engineering, Rover Planet, and USYTech — off three separate AWS Lightsail instances and onto a single, scalable platform on IONOS Cloud. Here’s the story.
The original prompt
The trigger wasn’t cost. We had three Lightsail bundles humming away at modest monthly rates, and that wasn’t painful. The prompt was the shape of what we’re trying to build:
- Business-user friendly hosting — team members and clients should be able to manage day-to-day site administration (content, plugins, email, SSL) through a sane GUI without needing a developer in the loop.
- Architecture that can scale — we want to add client sites onto the same platform without re-architecting each time.
- API-managed infrastructure where it makes sense — provisioning, DNS, SSL issuance, cutover.
Cost savings would be a nice-to-have. The point was a better foundation, not a cheaper one.
The architecture we ended up with
After comparing IONOS’s VPS, Compute Engine vCPU servers, and Cloud Cubes — including a side-by-side scaling and I/O analysis — we landed on:
- One IONOS Cube S in London (2 vCPU, 4 GB RAM, 120 GB direct-attached NVMe). Cubes give us cheaper per-vCPU pricing than vCPU servers and noticeably better disk I/O for read-heavy WordPress workloads. The trade-off is template immutability — we replace the instance to scale — and at this volume that’s an acceptable maintenance pattern.
- Plesk Web Pro as the control panel. Each site lives in its own subscription with its own database and database user (multi-DB, multi-install pattern). WordPress Toolkit handles staging, cloning, plugin/theme management. Customer accounts give us isolated logins for clients later, scoped to just their site.
- IONOS Object Storage for automated daily incremental + weekly full backups via Plesk’s S3-compatible backup integration.
- Let’s Encrypt for SSL on every domain and subdomain, issued and auto-renewed through Plesk.
The result is a single small server that hosts three production WordPress sites, scales by snapshot-and-recreate, and exposes a per-tenant management UI we can hand to clients.
Setting up the Plesk platform
Before any of the three migrations could happen, the Cube needed turning into a hosting platform. The short version of what we configured:
- Hostname & panel SSL. The Plesk control panel lives at
plesk.roverengineering.co.uk(a dedicated A record pointing at the Cube’s static IP). Root SSH was disabled after provisioning; admin access is key-only. We issued a Let’s Encrypt cert for the Plesk panel itself, and Plesk handles per-site Let’s Encrypt certs (issued and auto-renewed) for every domain and subdomain we add. - DNS pattern, per site. Three DNS records per domain: apex
Apointing at the Cube IP (set during cutover),migrate.<site>for the staging subdomain, and the sharedplesk.host for the panel. TTL is dropped to 600 seconds an hour before any cutover so we can roll back fast if anything misbehaves. Email deliverability records (SPF,DKIM,DMARC) sit alongside, configured once per domain so WordPress’s outbound mail (Mail SMTP plugin) passes Gmail/Workspace authentication. - Backups to IONOS Object Storage. Plesk’s built-in remote-backup module talks to any S3-compatible endpoint, so we pointed it at an IONOS Object Storage bucket (
wordpress-company-websites-backup). Schedule: incremental daily, full weekly, last two full backups retained. Failures email the platform admin. Cube-level instance snapshots are still manual via the IONOS DCD UI — there’s no API for that — so we use the Plesk backups as the day-to-day disaster-recovery story and reserve snapshots for the “scale to a bigger Cube” flow. - Per-tenant admins. Each site gets its own Plesk subscription with its own database, database user, and customer account scoped to that one subscription. Two operator admins (Mahmoud and Sneha) sit above all subscriptions; future client accounts will get a customer-level login that only sees their own site — no visibility into the others sharing the Cube.
- PHP per domain. Plesk lets you pick a PHP version per subscription, so we matched each migrated site to the version it was running on its old Lightsail Bitnami image (rather than upgrading PHP and the WordPress migration in the same step). Two settings needed bumping above Plesk’s defaults on a couple of sites:
memory_limitraised from 128 MB to 256 MB after Rover Planet’s block-metadata registry blew the default during a Plesk maintenance run, andmax_execution_timeraised for Rover Engineering to let a 116 MB SQL import finish under PHP-FPM.
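For concreteness, the per-domain record set looks roughly like this in zone-file form. The IP, TTLs, and DMARC policy shown are placeholders, and the DKIM record’s value comes from whichever provider signs the outbound mail:

```
; illustrative records per migrated domain (values are placeholders)
@        600   IN  A    203.0.113.10    ; apex -> Cube static IP (TTL pre-dropped for cutover)
migrate  600   IN  A    203.0.113.10    ; staging subdomain
@        3600  IN  TXT  "v=spf1 include:_spf.google.com ~all"
_dmarc   3600  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
; plus the provider-issued DKIM selector TXT, and (on the primary domain only)
; the shared plesk. A record for the panel
```

And the two PHP bumps, as additional directives on the affected subscriptions; the `max_execution_time` value shown is an assumption, not the exact number we settled on:

```ini
; per-subscription overrides, set through Plesk's PHP Settings
memory_limit = 256M        ; Rover Planet: block-metadata registry blew the 128M default
max_execution_time = 300   ; Rover Engineering: headroom for the 116 MB SQL import
```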
None of these are exotic; the point is that the platform setup is small, knowable, and codified. When we add the next client site, the steps are: provision the subscription, add the DNS records, issue Let’s Encrypt, pick the PHP version, attach the backup schedule, hand the client their customer login. Half an hour, repeatable.
The migration itself
We did the three sites in sequence: Rover Planet first as the pilot, then Rover Engineering, then USYTech. Each one taught us something.
Rover Planet — pilot
Rover Planet went smoothly using the Duplicator plugin: build a package on the source, upload to the destination via Plesk’s File Manager, run the installer in a browser. Duplicator handles the database import, URL search-and-replace, and (importantly) preserves the original WordPress salts. End-to-end this took a couple of hours including smoke testing.
Rover Engineering — the hard way
For Rover Engineering we tried to be clever. Instead of clicking through the UI we built a fully programmatic migrator: AWS CLI for the source audit, SSH and mysqldump for the database, tar for wp-content, GoDaddy DNS API for the cutover, and a custom PHP installer to do the import server-side. It worked, but several things bit us along the way:
- WordPress salts. A fresh `wp-config.php` generated on the destination meant any plugin that encrypted secrets with the original salts — chatbots, mailers, security plugins — silently couldn’t decrypt them. We had to copy the eight original `AUTH_KEY`/`NONCE_SALT`/etc. lines across (reproduced after this list). Duplicator does this automatically; manual migrations have to do it by hand.
- The `.htaccess` file. WordPress core downloads don’t include one. Without the standard rewrite rules (shown after this list) every pretty permalink 404s while the homepage looks fine.
- PHP-FPM timeouts. A 116 MB SQL import killed the FPM worker partway through. We bumped the per-domain PHP timeout via Plesk and re-ran with foreign-key checks disabled (the shell equivalent is below).
- Plesk REST API limits. The REST surface doesn’t expose a file upload endpoint or the WordPress Toolkit migration tool. A few steps required Plesk’s File Manager UI or the scheduled-task CLI.
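For reference, the two file-level trip-hazards look like this. The salts are eight `define` lines in `wp-config.php` that must come across verbatim (values below are placeholders):

```php
// Copied byte-for-byte from the source site's wp-config.php; never
// regenerate these mid-migration or secrets encrypted under them are lost.
define( 'AUTH_KEY',         'original value from the source site' );
define( 'SECURE_AUTH_KEY',  'original value from the source site' );
define( 'LOGGED_IN_KEY',    'original value from the source site' );
define( 'NONCE_KEY',        'original value from the source site' );
define( 'AUTH_SALT',        'original value from the source site' );
define( 'SECURE_AUTH_SALT', 'original value from the source site' );
define( 'LOGGED_IN_SALT',   'original value from the source site' );
define( 'NONCE_SALT',       'original value from the source site' );
```

The `.htaccess` rewrite block is the stock one WordPress writes for itself when it can, and the one a bare manual migration has to supply by hand:

```apacheconf
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
```

And a shell-level equivalent of the re-run import, with credentials and filenames as placeholders:

```bash
# prepend the toggle so mid-dump constraint ordering can't abort the import
( echo "SET FOREIGN_KEY_CHECKS=0;"; cat dump.sql ) | mysql -u site_user -p site_db
```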
The site went live, but it was a reminder that the boring path is often the right one.
USYTech — back to the boring path
For USYTech we went back to Duplicator + Plesk File Manager + browser installer. It took 30 minutes end to end. Salts came across automatically, permalinks worked first try, and the smoke tests were green.
Tools that earned their keep
- Plesk WordPress Toolkit — for cloning the staging subdomain (`migrate.example.com`) onto the production document root after testing, without re-running any imports.
- Playwright crawler — we wrote a small Playwright script that walks every page of the staging site, captures HTTP statuses, broken images, console errors, and chatbot presence, and produces a punch list. It catches anything humans miss in a manual smoke test. A minimal sketch follows this list.
- GoDaddy and IONOS Cloud APIs — for DNS A-record cutovers and Cube provisioning. Doing the actual cutover in code (rather than clicking) makes rollback a one-line change.
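Here is a sketch of the crawler’s shape, using Playwright’s sync Python API. The base URL and the broken-image heuristic are illustrative rather than a transcript of our script, and the chatbot-presence check is omitted as site-specific:

```python
# Hypothetical staging-site smoke crawler: breadth-first walk from the
# homepage, recording status, console errors, and broken images per page.
from urllib.parse import urlparse
from playwright.sync_api import sync_playwright

def crawl(base_url: str) -> list[dict]:
    host = urlparse(base_url).netloc
    seen, queue, punch_list = set(), [base_url], []
    with sync_playwright() as pw:
        page = pw.chromium.launch().new_page()
        console_errors: list[str] = []
        page.on("console",
                lambda m: console_errors.append(m.text) if m.type == "error" else None)
        while queue:
            url = queue.pop()
            if url in seen:
                continue
            seen.add(url)
            console_errors.clear()
            resp = page.goto(url)
            # an <img> that actually loaded has a non-zero natural width
            broken_imgs = page.eval_on_selector_all(
                "img", "els => els.filter(e => !e.naturalWidth).map(e => e.src)")
            punch_list.append({
                "url": url,
                "status": resp.status if resp else None,
                "console_errors": list(console_errors),
                "broken_images": broken_imgs,
            })
            # stay on the staging host; external links aren't ours to fix
            queue += [h for h in page.eval_on_selector_all(
                          "a[href]", "els => els.map(e => e.href)")
                      if urlparse(h).netloc == host]
    return punch_list

if __name__ == "__main__":
    for entry in crawl("https://migrate.example.com"):
        print(entry)
```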
Lessons we’re carrying forward
- Use the platform’s native migration tool when one exists. Plesk + Duplicator + WordPress Toolkit handles ten edge cases for free. Reinventing the import flow in custom code earned us new bugs without a meaningful upside.
- Stage to a `migrate.` subdomain first, always. It costs nothing and lets you do real end-to-end testing before flipping DNS.
- Salts and `.htaccess` are migration trip-hazards. Duplicator hides them; manual migrations need to handle them explicitly.
- Drive DNS cutovers via API. We wrote the apex A-record flip as a one-line API call; rollback is the same call with the previous IP. No human typing the wrong octet under pressure. A sketch follows this list.
- Backups before destruction. Plesk’s automated backups to IONOS Object Storage cover ongoing recovery, but we still pulled a final pre-decommission backup of each Lightsail to local + cold storage before deleting the AWS resources.
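That one-line flip, sketched against GoDaddy’s v1 DNS API; the credential environment variables and the IPs are assumptions:

```python
# Replaces the apex A record; rollback is the same call with the old IP.
import os
import requests

def set_apex_a(domain: str, ip: str, ttl: int = 600) -> None:
    key, secret = os.environ["GODADDY_KEY"], os.environ["GODADDY_SECRET"]
    resp = requests.put(
        f"https://api.godaddy.com/v1/domains/{domain}/records/A/@",
        headers={"Authorization": f"sso-key {key}:{secret}"},
        json=[{"data": ip, "ttl": ttl}],
    )
    resp.raise_for_status()

# cutover:  set_apex_a("roverengineering.co.uk", "203.0.113.10")  # Cube IP
# rollback: set_apex_a("roverengineering.co.uk", "198.51.100.7")  # old Lightsail IP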
What’s next
With the three internal sites stable on IONOS, we’re using the same Plesk subscription model to host the first client sites — isolated WordPress installs with their own admin logins, scoped resource limits, and per-site SSL. Two operational pieces are still on the punch list: enabling end-user SFTP for clients, and a shared post-cutover URL-fix script for sites whose page builder hard-codes absolute URLs.
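That URL-fix script will most likely wrap WP-CLI’s built-in `search-replace`; a sketch of the core call, with hostnames as placeholders:

```bash
# rewrite hard-coded staging URLs across every table; skipping the guid
# column follows WordPress convention (post GUIDs should never change)
wp search-replace 'https://migrate.example.com' 'https://example.com' \
  --all-tables --skip-columns=guid
```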
If you’re weighing a similar move, the short version is: IONOS Cubes plus Plesk hits a useful sweet spot for small-to-medium WordPress fleets — cheap NVMe-backed compute, a sensible per-tenant management UI, predictable scaling. And the boring tools beat the clever ones.