I first heard about Cloudflare in 2011, on Google Reader.
I was 18 or 19, working at a small shop where most of what we shipped was PHP on the Zend Framework with a lot of jQuery on top. I read the Cloudflare announcement on a slow afternoon and was immediately convinced. A globally distributed cache layer in front of any site, with a free tier? That sounded like the future of how the web was going to be served. I sent an email to our internal tech-discussion list saying as much.
The replies were not kind. “Useless.” “Why would you ever offload your traffic to another company you don’t control?” “Terrible idea, that’s a single point of failure for your entire site.” A bunch of takes that didn’t age well.
I let it go.
Fourteen-ish years later, here’s what’s running my entire personal infrastructure today: Cloudflare for DNS, Cloudflare for the public-internet edge, Cloudflare Tunnel for cluster ingress, Cloudflare Pages for every static site I run, Cloudflare D1 for the small databases that need to live near the edge, and Cloudflare Pages Functions for the rare server-side handler. All of it. Free tier. Forty-something zones managed under one account.
This is the story of how I got there. Like most of these stories, it started with a much smaller problem.
The cert problem
I run a k3s cluster on a desk in my home office. It hosts the services I use day to day: a self-hosted Gitea (git.ri.gd), AdGuard, Sonarr, a personal wiki MCP, half a dozen other small things. For a long while, the way I exposed those services was the path of least resistance. Spin up a service, give it a hostname, slap on a self-signed cert, click through the warning every time the browser complained.
Self-signed certs are technically fine. They are also a constant low-grade annoyance. Every browser tab I opened to a *.ri.gd service was an angry red lock. I’d click through, and click through, and after enough clicks-through I started to wonder how I was supposed to notice when something was actually wrong. The cure for cert-warning fatigue is not getting better at clicking the warning.
I looked at a couple of solutions. The one I kept seeing was cert-manager running on the cluster, configured with a Cloudflare DNS-01 solver. The flow is straightforward. cert-manager wants a Let's Encrypt cert for <service>.ri.gd. Let's Encrypt issues a DNS-01 challenge. cert-manager calls the Cloudflare API to create an _acme-challenge TXT record on the zone. Let's Encrypt verifies it. cert-manager retrieves the cert. Renewals are automatic. Real, browser-trusted certs on every internal hostname.
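To make the mechanics concrete, here's a minimal sketch of the one call at the center of that flow: publishing the challenge token as a TXT record through the Cloudflare API. This is what the solver automates (along with cleanup and propagation checks); the zone ID, token, and hostname below are placeholders.

```ts
// Sketch: the DNS-01 step cert-manager's Cloudflare solver automates,
// publishing a challenge token as a TXT record on the zone.
// ZONE_ID, API_TOKEN, and the hostname are placeholders.
const ZONE_ID = "your-zone-id";
const API_TOKEN = "a-token-scoped-to-dns-edit-on-this-zone";

async function publishChallenge(host: string, token: string): Promise<void> {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        type: "TXT",
        name: `_acme-challenge.${host}`, // e.g. _acme-challenge.git.ri.gd
        content: token, // the value Let's Encrypt expects to find
        ttl: 120,
      }),
    },
  );
  if (!res.ok) throw new Error(`Cloudflare API returned ${res.status}`);
}
```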
The catch was that cert-manager’s Cloudflare solver wants the zone to actually be on Cloudflare. So I went to my registrar, pointed the ri.gd nameservers at Cloudflare, and sat back to see what would break.
Nothing broke. I was honestly shocked at how cleanly it all came up. Within an hour, every internal service had a real cert. The red lock was gone. I could finally pay attention to the warnings I was supposed to notice.
Internal versus public
ri.gd was always meant to be internal. Every *.ri.gd hostname resolves to a tailnet IP, so the domain is effectively private even though the DNS records themselves are public. That distinction is what makes the whole setup work. I get real Let’s Encrypt certs (DNS-01 verifies through public DNS) for hostnames that are only reachable from devices on my Tailscale tailnet. Public DNS, private routing.
But not everything I ran was supposed to stay internal. A handful of services needed to be exposed to the open internet. A small portfolio site, a tracker endpoint, a tunnel target. For those I bought a second domain dedicated to public-facing endpoints and kept it entirely separate from the ri.gd zone.
Two things made the public side land much more cleanly than I expected.
The first was Cloudflare Rules. When I needed to expose just one endpoint of an otherwise-private service (say a public signup form on a service whose admin UI should stay tailnet-only) I could do it with a rule, instead of standing up a separate reverse proxy. The convention I ended up with is <service>.ri.gd for the admin/internal side, a separate public alias on the public-facing zone, and Cloudflare deciding what gets through.
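Rules come in a few flavors, and I won't claim this is the exact shape of mine, but as one hypothetical instance of the pattern: a WAF custom rule that blocks everything on the public alias except the one path you mean to expose. Here's a sketch via the rulesets API, with placeholder IDs, hostname, and path. Note that this PUT replaces the phase's existing custom rules, so a real script would merge rather than overwrite.

```ts
// Sketch: a WAF custom rule that lets only one endpoint through on a
// public alias. Hostname, path, and IDs are hypothetical placeholders.
const ZONE_ID = "your-zone-id";
const API_TOKEN = "your-api-token";

async function lockDownPublicAlias(): Promise<void> {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/rulesets/phases/http_request_firewall_custom/entrypoint`,
    {
      method: "PUT", // replaces this phase's existing custom rules
      headers: {
        Authorization: `Bearer ${API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        rules: [
          {
            action: "block",
            description: "public alias: only the signup endpoint",
            expression:
              'http.host eq "signup.example.com" and not starts_with(http.request.uri.path, "/signup")',
          },
        ],
      }),
    },
  );
  if (!res.ok) throw new Error(`Cloudflare API returned ${res.status}`);
}
```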
The second was Cloudflare Tunnel. Before that, every public-facing service had to live on a node with a public IP, which meant keeping VPSs in the cluster specifically to be the ingress for that traffic. Hardening those public IPs, paying for the VPSs every month, scheduling the public-facing workloads onto them. With Cloudflare Tunnel, that whole layer goes away. The cluster runs one cloudflared pod that maintains an outbound connection to Cloudflare. That tunnel is the only ingress path. No public IP on any of my nodes. Adding a new public hostname is a dashboard click (or a single API call) that maps <hostname> to <service>.<namespace>.svc.cluster.local:<port>.
I went from “spin up another VPS just so this one service can be public” to “click two things in a dashboard.” That’s a real improvement.
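For the curious, the "single API call" version looks roughly like this: a PUT against the tunnel's remotely-managed configuration. Account ID, tunnel ID, hostname, and service are placeholders, and since the PUT replaces the whole ingress list, a real script would fetch the current config and merge first.

```ts
// Sketch: map a public hostname to an in-cluster service through an
// existing Cloudflare Tunnel. All IDs and names below are placeholders.
const ACCOUNT_ID = "your-account-id";
const TUNNEL_ID = "your-tunnel-id";
const API_TOKEN = "your-api-token";

async function addPublicHostname(): Promise<void> {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/cfd_tunnel/${TUNNEL_ID}/configurations`,
    {
      method: "PUT", // replaces the tunnel's ingress config wholesale
      headers: {
        Authorization: `Bearer ${API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        config: {
          ingress: [
            {
              hostname: "app.example.com",
              service: "http://app.default.svc.cluster.local:8080",
            },
            { service: "http_status:404" }, // required catch-all rule
          ],
        },
      }),
    },
  );
  if (!res.ok) throw new Error(`Cloudflare API returned ${res.status}`);
}
```

The hostname also needs a proxied CNAME pointing at `<tunnel-id>.cfargotunnel.com`; the dashboard flow creates that for you, which is the part the two clicks hide.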
The static-site problem
While the cluster side was getting tidied up, the other half of my infrastructure was on a completely different stack. The static sites lived on Fastmail Files for years.
The reason was simple. My email is on Fastmail, Fastmail Files came free with the plan, and the WebDAV upload story worked. For one or two small sites, it’s perfectly fine. You rclone sync a directory of HTML to a WebDAV path, you point a DNS record at the right Fastmail hostname, you have a site.
The problem is that “perfectly fine for one or two sites” does not extend to dozens. Once I had real automation pushing real changes to a real number of sites, the seams showed:
- Gitea Actions deploys would routinely take 10+ minutes for the bigger sites. Every file enumerated, every file uploaded, WebDAV serializing the lot.
- Random WebDAV errors. Half-uploaded directories. rclone would retry, sometimes succeed, sometimes leave the site in a half-published state I'd only notice the next time I visited it.
- Every so often an automation run would delete and recreate the directory that was mapped as a published website. Fastmail Files did not handle that gracefully. The directory mapping would silently break and the site would go offline until I logged into the Fastmail UI and manually re-linked the folder to its domain.
It was the kind of stack where the pager goes off because of the storage layer, not the app. That is the wrong way around.
I tried Cloudflare Pages on one site as an experiment, expecting roughly the same level of friction. The first deploy took a couple of seconds. The API was clean. Custom-domain attachment was a single API call that resolved automatically as soon as DNS pointed at the project. Agents could push deploys without breaking the link between the project and the domain. There was nothing to manually re-link.
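For reference, that single call is roughly the sketch below, assuming the Pages project already exists; the account ID, project name, and domain are placeholders.

```ts
// Sketch: attach a custom domain to an existing Pages project.
// ACCOUNT_ID, API_TOKEN, and both arguments are placeholders.
const ACCOUNT_ID = "your-account-id";
const API_TOKEN = "your-api-token";

async function attachDomain(project: string, domain: string): Promise<void> {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/pages/projects/${project}/domains`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ name: domain }),
    },
  );
  if (!res.ok) throw new Error(`Cloudflare API returned ${res.status}`);
}

// e.g. attachDomain("blog", "blog.example.com");
```

Once the zone's DNS points at the project, validation settles on its own; there is no folder-to-domain mapping to break.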
Within a few weeks I’d moved every static site I run onto Cloudflare Pages, including this blog.
Small wins
Things that aren’t headline features kept reinforcing the choice. The first time I added a new domain to Google Search Console, the verification step had a direct integration with Cloudflare. Instead of copying a TXT record into the DNS panel, hitting save, and waiting a few minutes for propagation before Google would verify, Search Console talked to Cloudflare’s API. The record was created and verified in one click. It was instant. That kind of small thing kept happening, and after a few of them I stopped expecting the usual TXT-record dance for new integrations.
Where everything ended up
If I draw a map of my personal infrastructure today, almost every box has a Cloudflare label on it:
- DNS for ~forty zones. Domains I’ve collected over the years from Iwantmyname, Namecheap, registro.br, and GoDaddy. They all point their nameservers at Cloudflare now.
- Cloudflare Pages for every public static site. Hugo blogs, hand-written HTML pages, the small portfolio sites, this blog.
- Cloudflare Tunnel for cluster ingress. One cloudflared pod, one tunnel, all public hostnames routed through it. No public IP on any cluster node.
- cert-manager with the Cloudflare DNS-01 solver on the cluster, issuing real Let's Encrypt certs for every *.ri.gd service.
- Cloudflare Pages Functions for the rare server-side handler a static site needs.
- Cloudflare D1 for SQLite-shaped persistence at the edge. Paired with Pages Functions, it's the lowest-friction "I just need to capture this form submission and read it later" stack I've used; a sketch of that pairing follows this list.
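To make that last pairing concrete, here's a minimal sketch of a Pages Function that captures a form POST into D1. The route, the table, and the DB binding name are assumptions; the binding is whatever you configure on the Pages project, and the table has to exist already.

```ts
// functions/api/submit.ts: hypothetical route on a Pages project.
// Assumes a D1 binding named DB and an existing `submissions` table.
// PagesFunction and D1Database are ambient types provided by
// @cloudflare/workers-types.
interface Env {
  DB: D1Database;
}

export const onRequestPost: PagesFunction<Env> = async (context) => {
  const form = await context.request.formData();
  const email = form.get("email");
  if (typeof email !== "string" || email.length === 0) {
    return new Response("missing email", { status: 400 });
  }
  await context.env.DB
    .prepare("INSERT INTO submissions (email, created_at) VALUES (?1, ?2)")
    .bind(email, new Date().toISOString())
    .run();
  return new Response("ok", { status: 201 });
};
```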
Fastmail still holds my email, and only my email. Files, the static sites, and the WebDAV deploy paths are all gone. The Fastmail bill became a pure email bill. The Cloudflare bill is still zero, because every product in that list sits on the free tier and I haven't bumped into a quota.
Final thoughts
I did not arrive at this stack by sitting down and choosing it. Each piece showed up as the answer to a specific problem. Cert-warning fatigue. VPSs kept around just to be ingress nodes. Static sites that kept going offline by themselves. The fact that Cloudflare happened to have a clean answer for each of those problems, and that the answers composed with each other, is what made it the platform of everything by accident.
If I could send a note back to that 2011 email thread, I’d say two things. First, the part where Cloudflare becomes a single point of failure for a chunk of my infrastructure is real. When Cloudflare has a bad afternoon, I have a bad afternoon. Second, the part where that’s a deal-breaker isn’t. The amount of operational complexity the platform absorbs in exchange for that dependency is enormous, and the muscle for working around the occasional outage is the muscle you’d be building for any provider anyway.
I doubt my reply on the original thread was particularly articulate. I was 18. But fourteen years later, the system itself is the reply.