I’m formulating plans to switch from AWS to Hetzner. Amazon gets you by charging high prices (sometimes 20x more than competitors) and forcing you to make long-term commitments in order to get the prices to somewhere more reasonable. Then they make it exorbitantly expensive to migrate your data anywhere else. It’s a very customer-hostile approach that I’m tired of at this point.
Amazon might think that they’re locking people in with the egress fees. But they’re also locking people out. As soon as you switch one part to a competitor, the high egress forces you to switch over everything.
It’s going to be complicated to switch, but it’s made easier by the fact that I didn’t fall into the trap of building my platform on Amazon-specific services.
I just want to point out that this guide uses many of the same steps I use when migrating websites between servers while minimizing downtime.
- reduce dns ttl (if not doing an ip swap)
- rsync website files
- rsync /etc/letsencrypt/ ssl certificates
- copy over database (if writes don't happen often and database is small enough, this can be done without replica, just go read_only during migration)
- test new server by putting new ip in local /etc/hosts
- turn off cron on old server
- convert old server nginx to reverse proxy to new server
- change dns (or ip swap between old and new server)
- turn on cron on new server
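The checklist above translates roughly into the shell sketch below. This is a dry-run illustration, not a script from the article: the host, paths, and the MySQL freeze commands are placeholder assumptions, and every command is only printed (via the `run` wrapper) until you adapt it and flip `DRY_RUN` off.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the cutover steps above; hosts/paths are placeholders.
set -eu

DRY_RUN=1
run() { if [ "${DRY_RUN}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

NEW=root@203.0.113.10   # new server (placeholder IP)
WEBROOT=/var/www

# 1. sync website files and Let's Encrypt certs (repeat rsync until the
#    delta is small, then do a final pass during the cutover window)
run rsync -az --delete "$WEBROOT/" "$NEW:$WEBROOT/"
run rsync -az /etc/letsencrypt/ "$NEW:/etc/letsencrypt/"

# 2. freeze writes, then copy the database (small DB, infrequent writes)
run mysql -e "SET GLOBAL read_only = ON"
run sh -c "mysqldump --single-transaction --all-databases | ssh $NEW mysql"

# 3. cron off on the old box; after the DNS change, cron on on the new box
run crontab -r
run ssh "$NEW" "crontab /root/crontab.backup"
```

The `rsync` of `/etc/letsencrypt/` keeps the certbot renewal state intact, so renewals keep working on the new server without re-issuing anything.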
Every time I see this kind of article, no one really bothers about db/server redundancy, load balancers, etc. Are we OK with just one big server that may fail and bring several services down?
You saved a lot of money but you'll spend a lot of time in maintenance and future headaches.
This is something we've[0] done a number of times for customers coming from various cloud providers. In our case we move customers onto a multi-server (sometimes multi-AZ) deployment in Hetzner, using Kubernetes to distribute workloads across servers and provide HA. Kubernetes is likely a lot for a single node deployment such as the OP, but it makes a lot more sense as soon as multiple nodes are involved.
For backups we use both Velero and application-level backup for critical workloads (i.e. Postgres WAL backups for PITR). We also ensure all state is on at least two nodes for HA.
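For the WAL-based PITR mentioned above, one common way to wire it up is continuous archiving from `postgresql.conf`. The excerpt below is an illustrative sketch: wal-g is my example tool here, not necessarily what the parent uses, and the data directory path is a placeholder.

```ini
# postgresql.conf excerpt -- continuous WAL archiving for PITR
# (wal-g and the paths are illustrative choices)
wal_level = replica
archive_mode = on
archive_command = 'wal-g wal-push %p'

# base backups are then taken periodically, e.g. from cron:
#   wal-g backup-push /var/lib/postgresql/16/main
```

With the WAL stream plus periodic base backups in object storage, you can restore to any point in time between backups, which is the part plain nightly dumps can't give you.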
We also find bare metal to be a lot more performant in general. Compared to AWS we typically see service response times halve. It's not that virtualisation inherently has that much overhead; rather, it's everything else. E.g., bare metal offers:
- Reduced disk latency (NVMe vs network block storage)
- Reduced network latency (we run dedicated fibre, so inter-az is about 1/10th the latency)
- Less cache contention, etc [1]
Anyway, if you want to chat about this sometime just ping me an email: adam@ company domain.
[1] I wrote more on this 6 months ago: https://news.ycombinator.com/item?id=45615867
This article is hard to read because it reads like a report Claude wrote after doing the migration for you. If an LLM helped you migrate and save this much money, kudos. But if you decide to write about it, at least proofread it and remove the redundant parts and the LLM storytelling.
Yeah, well, be careful with Hetzner. I used to love them, but I just migrated away. They shut off all of our VMs over a $36 billing dispute (~30 VMs we were using for our CI/CD pipeline). We provided them evidence, with records from our bank of the payment in its totality; they refused to look at it or discuss the dispute, even when we were communicating urgently, and ultimately just shut off all our access. We're on Scaleway now.
A few months ago, I looked into AWS alternatives for my small SaaS side project. My main motivations were to save money and maybe support some EU cloud providers. At first, I planned to go with Hetzner and accepted that I would need to do a lot of things myself.
However, the dealbreaker for me was that Hetzner IPs have a bad reputation. At work, I learned that one of the managed AWS firewall rules blocks many (maybe all) of their IPs. I can’t even open a website hosted on a Hetzner IP from my work laptop because it’s blocked by some IT policy (maybe this is not an issue for you if you are using CloudFlare or similar).
I've read online that the DDoS protection is very bad as well.
So in the end, I picked DO App Platform in one of the EU regions. Having the option to use a managed DB was a big plus as well.
The migration sharing is admirable and useful teaching, thank you!
I see the DigitalOcean vs Hetzner comparison as a tradeoff that we make in different domains all day long, similar to opening your DoorDash or UberEats instead of making your own dinner (and the cost ratio is similar too).
I work in all 3 major clouds, on-prem, the works. I still head to the DigitalOcean console for bits and pieces type work or proof of concept testing. Sometimes you just want to click a button and the server or bucket or whatever is ready and here's the access info and it has sane defaults and if I need backups or whatnot it's just a checkbox. Your time is worth money too.
What are you doing for DB backups? Do you have a replica/standby? Or is it just hourly or something like that?
Because with a single-server setup like this, I'd imagine that hardware (e.g. SSD) failure brings down your app, and in the case of SSD failure, you then have hours or days downtime while you set everything up again.
AWS only requires a card from me. I tried registering at Hetzner and they wanted a picture of my passport.
I saved about $1200 a year by moving from AWS to Hetzner. Can’t recommend it enough. AWS has kind of become a scam.
Hey I made the meme in the header https://wasp.sh/blog/2025/04/02/an-introduction-to-database-...
Nice to see it used _twice_ :D
Really interesting sharing, thanks! Why lower the TTL to 300 instead of something like 60 or 30, to make the switch even faster? The nameservers were DO's, so they should've been more than able to handle the increased load.
BTW, I've been a client of Hetzner (Cloud, Object Storage, and Storage Box) for a few years now, very happy with them!
In the big corporate world, this would be a $600m budget, creating multiple VPs, thousands of positions, multi-cloud and multi-dc kubernetes, tons of highly paid consultants, the migration would take 9 - 12 years, create so many success stories, lessons learnt, promotions, etc etc.
If you’re migrating a large MySQL database and you’re not using mydumper/myloader, you’re doing it the hard way.
If you aren't using xtrabackup you are doing it wrong. I recently migrated a database with 2TB of data from 5.7 to 8.4 with about 15 seconds of downtime. It wouldn't have been possible without xtrabackup. mysqldump requires a global write lock; I wouldn't call blocking writes for hours a "zero downtime migration".

I know they've been bought out by Akamai or whatever, but I've been using Linode for over 10 years and I still go to them if I need a VPS. I don't have extreme needs, but they seem to be always improving or adding features comparable to other providers, and the UI is consistent, so I don't see a reason to change. Any time there has been an issue they've migrated me to a new host automatically without me needing to do anything. I combine it with Dokploy now and just deploy most of my projects via Docker Compose and private GitHub repos.
Might give this a whirl, not move business infrastructure here, but see how it works for my personal VPN server.
I've had excellent experiences with Percona xtrabackup for MySQL migration and backups in general. It runs live with almost no performance penalty on the source. It works so well that I always wait for them to release a new matching version before upgrade to a new MySQL version.
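For reference, the basic XtraBackup flow is a hot backup, a prepare (redo-log apply), then shipping the prepared files to the new host. The sketch below only prints the steps; the host names and paths are placeholders to adapt before running anything for real.

```shell
# Print the XtraBackup copy flow (hot backup -> prepare -> ship -> start).
# Illustrative sketch; run the commands by hand after adapting.
steps=$(cat <<'EOF'
# 1. hot backup on the source while it keeps serving traffic
xtrabackup --backup --target-dir=/backup/base

# 2. apply the redo log so the data files are consistent
xtrabackup --prepare --target-dir=/backup/base

# 3. ship to the new server and restore into a stopped MySQL datadir
rsync -a /backup/base/ newhost:/var/lib/mysql/
ssh newhost 'chown -R mysql:mysql /var/lib/mysql && systemctl start mysql'
EOF
)
echo "$steps"
```

Because the backup runs while MySQL serves traffic, the only real downtime is the final cutover, which is how sub-minute migrations of multi-TB databases are possible.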
I set up a VM on Hetzner a few weeks ago. I've been quite impressed so far, and was able to orchestrate everything with Terraform without a problem.
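For anyone curious, a minimal Terraform definition for a Hetzner Cloud VM looks roughly like the fragment below; the server type, image, and location are arbitrary example choices, not a recommendation.

```hcl
terraform {
  required_providers {
    hcloud = {
      source = "hetznercloud/hcloud"
    }
  }
}

variable "hcloud_token" {
  sensitive = true # set via TF_VAR_hcloud_token
}

provider "hcloud" {
  token = var.hcloud_token
}

resource "hcloud_server" "app" {
  name        = "app-1"
  server_type = "cx22"         # shared-vCPU plan, example choice
  image       = "ubuntu-24.04"
  location    = "fsn1"         # Falkenstein
}
```

`terraform apply` then creates the VM and prints its public IP from the provider state.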
> The key: proxy_ssl_verify off — the new server’s SSL cert is valid for the domain, not for the IP address. Disabling verification here is fine because we control both ends.
Not really, a MITM could do anything here. It's not very likely to happen here, but I think this comment shows a misunderstanding of what certificates and verification does.
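For what it's worth, nginx can keep verification on even when proxying to a bare IP by telling it which hostname to verify against. The directives below are standard nginx; the upstream IP, domain, and CA bundle path are placeholders.

```nginx
location / {
    proxy_pass https://203.0.113.10;           # new server, by IP

    # verify the upstream cert against the real hostname, not the IP
    proxy_ssl_verify              on;
    proxy_ssl_name                example.com;
    proxy_ssl_server_name         on;          # send SNI so the right cert is served
    proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
}
```

This closes the MITM window without any extra certificates, since the new server's existing cert for the domain validates normally.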
>Skyrocketing inflation and a dramatically weakening Turkish Lira against the US dollar
This reasoning does not add up. They could simply say they needed to move somewhere cheaper, like Hetzner. Inflation is still high but getting lower, and the weakened-Lira part is not correct, because the dollar has been artificially suppressed for a very long time.
What's the HA plan?
Sounds like from the requirement to live migrate you can't really afford planned downtime, so why are you risking unplanned downtime?
They're great but I wish Hetzner had a US (or CA) east coast presence, the latency of going across the ocean is really troublesome. They have some presence for their cloud offering, so they at least have some experience with the idea.
Hetzner oversells hardware which means your neighbors are a drag on your performance. If your server is mostly idle, this might be a good move. If not, it probably won't be worth it.
I had my fair share of Hyperscaler -> $something_else migrations during the past year. I agree, especially with rented hardware the price-difference is kind of ridiculous.
The issue, though, is that you lose the managed part of the whole cloud promise. For ephemeral services this is not a big deal, but for persistent stuff like databases, where you would like to have your data safe, it is an issue, because it shifts additional effort (and therefore cost) onto your operations team.
For smaller setups (attention, shameless self-promotion incoming) I am currently working on https://pellepelster.github.io/solidblocks/cloud/index.html, which lets you deploy managed services to the Hetzner Cloud from a Docker-Compose-like definition, e.g. a PostgreSQL database with automatic backup and disaster recovery.
I wish we had something like Hetzner dedicated near us-east-1.
They do offer VPS in the US and the value is great. I was seriously looking at moving our academic lab over from AWS but server availability was bad enough to scare me off. They didn't have the instances we needed reliably. Really hoping that calms down.
> Old server nginx converted to reverse proxy
>
> We wrote a Python script that parsed every server {} block across all 34 Nginx site configs, backed up the originals, and replaced them with proxy configurations pointing to the new server. This meant that during DNS propagation, any request still hitting the old IP was silently forwarded. No user would see a disruption.
What was the config on the receiving side to support this? Did you whitelist the old server IP to trust the forwarding headers? Otherwise you’d get the old server IP in your app logs. Not a huge deal for an hour but if something went wrong it can get confusing.
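On the receiving side, the usual fix is nginx's realip module: the old server forwards the client address in X-Forwarded-For, and the new server trusts that header only from the old server's IP. The IPs below are placeholders.

```nginx
# old server (proxy side): pass the original client address along
#   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

# new server: recover the real client IP, trusting only the old server
set_real_ip_from  198.51.100.7;      # old server's IP
real_ip_header    X-Forwarded-For;
real_ip_recursive on;
```

Restricting `set_real_ip_from` to the old server's address matters: otherwise any client could spoof X-Forwarded-For and pollute the logs.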
Congrats on doing this successfully, but your setup is amateur. This would have been infinitely easier if you were using IaC (Terraform/Ansible), containerized applications (that you're not already doing that is madness), and had a high-availability cluster setup in place already. It sounds like avoiding downtime is important to you, yet there's no redundancy in the existing stack at all, and everything is done by hand.
This isn't something others should use as an example.
When you find gold, why tell everyone where it is? Silent happiness keeps the benefits :)
Yeah, we did the same; however, we also run an identical backup server in a different data center so we can switch over in a matter of minutes if needed.
We need more competition across the board. These savings are insane and DO should be sweating, right?
I did the same this year. I really liked DigitalOcean though, compared to more complex cloud offerings like AWS. AWS feels like spending more for the same complexity. At least DO feels like it saves time and mental bandwidth. Still, the performance of cloud VPSes is abysmal for the price. I'm now on Hetzner + K3s + Flux CD, with Cloudflare for file storage (R2) and caching. I run Postgres on the same machine with frequent dump backups. If I ever need realtime read replicas, I'll likely just migrate the DB to Neon or something and keep Hetzner with snapshots for running app containers.
Love Hetzner. Cheapest prices in all the land (aside from hosting your own server) from what I've gathered online. I host:
My foray into multiplayer games.
I’ve had Proxmox on one of their AX42 servers for a year now. All of it is backed up to PBS, backed by Cloudflare R2 storage.
None of it is mission critical - but it’s certainly something I’d use in production with a few more instances.
Networking over Tailscale works flawlessly with my Proxmox nodes at home.
Does anyone else wonder about these companies issuing VPSes/online space with no hardening and no warning?
You can basically go on Hetzner and spin up a Linux VPS that is exposed to the open internet, with open ports and default user security, and within a few hours it's been hacked. There is no warning pop-up that says "if you do this your server will be pwned".
I especially wonder what will happen with all the AI-provisioned VPSes and Postgres DBs.
Given the premise that zero day exploits are going to be frequent going forward, I feel like there is a new standard for secure deployment.
Namely, all remote access (including serving HTTP) must be managed by a major player big enough to be part of private disclosure (e.g. Project Glasswing).
That doesn't mean we have to use AWS et al for everything, but some sort of zero trust solution actively maintained by one of them seems like the right path. For example, I've started running on Hetzner with Cloudflare Tunnels.
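The Cloudflare Tunnel side of that setup is a small cloudflared config on the Hetzner box; the tunnel UUID, paths, and hostname below are placeholders. The nice property is that nothing listens on a public port: cloudflared dials out to Cloudflare.

```yaml
# ~/.cloudflared/config.yml -- illustrative sketch
tunnel: <tunnel-uuid>
credentials-file: /root/.cloudflared/<tunnel-uuid>.json

ingress:
  - hostname: app.example.com
    service: http://localhost:8080   # app bound to loopback only
  - service: http_status:404         # required catch-all rule
```

After `cloudflared tunnel create` and a DNS route for the hostname, `cloudflared tunnel run` serves the app with the origin firewall fully closed to inbound traffic.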
Anyone else doing something similar?
If I remember correctly (it has been a while since I looked), although Hetzner is a lot cheaper on the price sheet, they're European-region by default, and if you want US-region servers from Hetzner, the pricing is a lot higher and similar to DigitalOcean. Is that still the case?
For OP, though, who is a Turkey-based company and wants European-region servers anyway, it might make sense.
I assume a VM on DO is HA-protected, and the storage might live on a cluster. Did you consider a second dedi, or do you just accept the risk of a longer failover time and data loss (RPO) from recovering onto a newly provisioned server? Would love to know your thoughts on this, especially as the migration was well designed and executed.
Did this about a year ago, went smoother than expected tbh. the main gotcha for us was DO's managed postgres — had to dump/restore manually since there's no direct migration path to Hetzner's managed DBs. ended up just self-hosting postgres on a separate box which has been fine, maybe even better.
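The dump/restore step can be as simple as the sketch below. The connection-string variables are placeholders, and the script only prints the commands; you'd run them by hand during a write freeze.

```shell
# Print a basic DO-managed-Postgres -> self-hosted copy flow.
# Illustrative sketch; adapt the URLs, then run during a write freeze.
steps=$(cat <<'EOF'
pg_dump "$DO_URL" -Fc -f app.dump                                  # custom-format dump
pg_restore -d "$HETZNER_URL" --no-owner --clean --if-exists app.dump
EOF
)
echo "$steps"
```

`--no-owner` matters when moving off a managed service, since DO's internal roles won't exist on the self-hosted box.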
I started with DO in 2013 when they offered 20GB SSD and 512MB RAM for $5/mo. For some reason I paid no VAT then, but I do now. Their $4/mo option now is still 512MB, still 1 vCPU, but 10GB SSD. So it's as if the last decade of technological progress in RAM, CPU, and storage, which should have led to price cuts or spec bumps, never happened. And yeah, DO got expensive before AI bought up all the memory.
I considered going to hetzner at one point but I read a lot of stories around hetzner that didn't inspire confidence. Primarily that they're not really that much cheaper than going to other companies offering something similar.
If some people can chime in with their positive experiences I might switch.
A zero-downtime migration to a single database server? Power fails, disks fail, even CPU fans sometimes fail and bring a single server to a halt. Somehow I would have expected at least a highly available database cluster with multiple machines for applications "serving hundreds of thousands of users".
We are currently moving from heroku to Hetzner. Same story, thousands saved / month.
Am I missing something? I'm genuinely surprised it was not deployed from the start on a dedicated server. Don't you make a cost analysis before deploy? And if the cost analysis was ok at initial deploy, why wait to have such a difference in cost before migrating? How much money goes wasted in such situations?
Migrated from OVH to Hetzner last winter too: zero downtime since, rolling backups running fine, and a lower bill too.
> The key: proxy_ssl_verify off — the new server’s SSL cert is valid for the domain, not for the IP address. Disabling verification here is fine because we control both ends.
Yeah - no, it's not. They made the MitM attack possible with this change. The exposure was limited to those 5 minutes, but it should have been a known risk.
I'm also not certain how they could test the apps on the new server against the read-only database while it was still a replica.
Still, nice to hear it succeeded, the reasons sound very familiar.
I'm currently paying $800ish a month for digital ocean servers that I know would fit on a single hetzner machine :/
> Cloud providers are expensive for steady-state workloads.
Asking the obvious question: why not your own server in a colo?
It's a pity that Hetzner does not have a monitoring agent like DO's. In DO you can set alerts and view all metrics. It's the one thing that keeps me from migrating, because I don't want to install custom monitoring solutions.
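FWIW, the standard self-hosted substitute is node_exporter on each box plus a small Prometheus scrape config; alerting then comes from Prometheus/Alertmanager or Grafana. The targets below are placeholder IPs.

```yaml
# prometheus.yml excerpt -- scrape node_exporter on each Hetzner box
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['10.0.0.2:9100', '10.0.0.3:9100']
```

It is one more thing to run, but it's two packages and a dozen lines of config rather than a full custom monitoring build-out.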
I moved two servers, one from Linode and the other from DO, to Hetzner a few months ago, with similar savings. The best part was that the two servers had tens of different sites running, implemented in different languages, with obsolete libraries and MySQL and Redis instances. A total mess. Well: Claude Code migrated it all, sometimes rewriting parts when the libraries were no longer available. Today complex migrations are much simpler to perform, which, I believe, will increase mobility across providers a lot.