I've been using fly.io a lot recently.

I've hosted on just about every hosting provider you can imagine, from the big ones like AWS (read my post on lambda-forge) to the most obscure ones that only accept Litecoin as payment.


So far, the one I've enjoyed the most is fly.io. It's fast, easy to set up, and works well for the 90% of web applications that don't have very specific computing requirements.

Since hosting on fly, I've been exploring simpler infrastructure setups that don't require complex cloud provider integrations. Recently, I had the chance to set up a DIY CDN for publite.me using fly.io and Varnish, and figured I'd share this approach.

The Goal

The goal for this project was to create a simple two-container setup:

1. A main application container (in my case, an existing fly.io application)
2. A Varnish CDN container that sits in front of the application and handles caching

Client Request → Varnish CDN → Application
                     ↑             ↓
                     └─────Cache───┘

This DIY approach gives complete control over the caching strategy without dependencies on third-party services (besides fly).

The DIY Part

Let's start with the fly.io setup. Since the main application already exists on fly, I'm just going to focus on the CDN container:

# Create the CDN application
fly apps create fly-cdn
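
The fly deploy step later expects a fly.toml next to the Dockerfile. Here's a minimal sketch of what that could look like, assuming the current [http_service] style of config; the app name matches what we just created, and internal_port matches the port Varnish will listen on:

# fly.toml (minimal sketch)
app = "fly-cdn"
primary_region = "iad"

[http_service]
  internal_port = 80   # Varnish listens on :80 inside the container
  force_https = true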


The important part is setting up the Varnish container. Here's a simple Dockerfile for that:

FROM varnish:6.6
COPY default.vcl /etc/varnish/default.vcl
EXPOSE 80

# -F runs varnishd in the foreground (required in a container),
# -f points at our VCL, -a listens on port 80,
# default_ttl=3600 caches objects for an hour unless the VCL says otherwise,
# and -s malloc,256m gives the cache 256 MB of memory.
CMD ["varnishd", "-F", "-f", "/etc/varnish/default.vcl", "-a", ":80", "-p", "default_ttl=3600", "-s", "malloc,256m"]



Now comes the crucial part - the Varnish configuration file (default.vcl):

vcl 4.1;

backend default {
    # This is the host of the application you want to cache
    .host = "my-awesome-blog.fly.dev";
    .port = "80";
    .connect_timeout = 5s;
    .first_byte_timeout = 90s;
    .between_bytes_timeout = 10s;
    # Note: .ssl/.ssl_sni are Varnish Enterprise features; open-source
    # Varnish can't originate TLS, so we talk plain HTTP to the backend.
}

sub vcl_recv {
    # Only cache GET and HEAD requests
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);
    }

    # Pass through admin pages
    if (req.url ~ "^/admin") {
        return (pass);
    }

    return (hash);
}

sub vcl_backend_response {
    # Set cache time for different types of content
    if (bereq.url ~ "\.(jpg|jpeg|png|gif|ico)$") {
        set beresp.ttl = 1d; # Cache static images for a day
    } else {
        set beresp.ttl = 3600s; # Cache everything else for 1 hour
    }

    # Strip cookies from static content
    if (bereq.url ~ "\.(jpg|jpeg|png|gif|ico|css|js)$") {
        unset beresp.http.Set-Cookie;
    }

    return (deliver);
}

sub vcl_deliver {
    # Add a header to show cache status (hit or miss)
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }

    return (deliver);
}
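
Before deploying, it's worth a quick local sanity check with plain Docker. One caveat: Varnish resolves the backend hostname when it compiles the VCL, so this only starts cleanly if the host in default.vcl actually resolves from your machine:

docker build -t fly-cdn .
docker run --rm -p 8080:80 fly-cdn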


Lastly, we deploy the CDN app from the directory containing the Dockerfile and fly.toml:

fly deploy
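
Once it's deployed, the easiest way to confirm caching is working is to request the same URL twice and look at the X-Cache header we set in vcl_deliver (fly-cdn.fly.dev here is the hostname fly derives from the app name):

curl -sI https://fly-cdn.fly.dev/ | grep -i x-cache  # first request:  X-Cache: MISS
curl -sI https://fly-cdn.fly.dev/ | grep -i x-cache  # second request: X-Cache: HIT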



Fly containers at the edge

One interesting aspect of this setup is how fly.io handles regions and networking. By default, Fly deploys your application to a single region (usually the one closest to you), but you can run instances in multiple regions for better global coverage. The primary region is set in fly.toml:

# In the fly.toml file for your CDN
primary_region = "iad"
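
Additional regions are managed through flyctl rather than fly.toml. As a sketch (the exact flags have shifted between flyctl versions), this spreads three machines across the US, UK, and Australia:

fly scale count 3 --region iad,lhr,syd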

The great thing about fly.io is that it automatically routes each request to the nearest running instance, essentially giving you a globally distributed CDN out of the box.



When This Makes Sense (And When It Doesn't)

This setup makes sense for budget-constrained, small-to-medium-sized projects where you want precise control over all of your caching behaviours. If you're not getting a lot of traffic but want your server-rendered app to be as performant as possible, this is not a bad way to go about it.
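
As one concrete example of that control: cache purging is a few extra lines of VCL. This is a sketch; in practice the ACL would need to cover whatever network your app uses to send PURGE requests (for example fly's private network):

# Next to the backend definition in default.vcl:
acl purgers {
    "localhost";
}

# Inside vcl_recv, before the caching logic:
if (req.method == "PURGE") {
    if (!client.ip ~ purgers) {
        return (synth(405, "Not allowed"));
    }
    return (purge);
}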

For anything getting serious traffic, you're better off going with one of the more established CDN providers like AWS CloudFront, Cloudflare, or Fastly. Some of those services include WAF and DDoS protection as well.