Why I Prefer Running nginx on My Docker Host Instead of in a Container
This is specific to using Docker Compose on a single-server deploy. Here’s how I arrived at this choice after years of experimenting.
Quick Jump: Gotchas When Running nginx in a Container | Decoupling Your App from Its Reverse Proxy | But What about Testing nginx in Development? | How Can nginx Connect to Your Container? | Conclusion
One of the main perks of using Docker is being able to run a single command such as `docker-compose up` and have everything you need running to serve your application, so there must be a good reason to break away from that.
Let’s say you have a typical web application with your favorite web framework, a database, cache and perhaps a background worker all sitting in a `docker-compose.yml` file.
That gets you going for development but in production you’ll likely want to throw nginx in front of your app server as a reverse proxy.
That’ll give you all sorts of goodies like being able to efficiently serve static files, terminate SSL, use gzip, get country code headers, set headers, control deny / allow lists, redirect HTTP to HTTPS, redirect www to your apex domain (or vice versa) and more.
That’s with the free and open source version of nginx too. The reason I still use nginx when other similar-ish and more Docker-friendly solutions exist is because it does all of those things very well and it has an impeccable track record.
Needless to say nginx is amazing, but when you run it in a container you can easily run into a few gotchas that may trip you up if you’re deploying everything onto 1 server.
Gotchas When Running nginx in a Container
Having built and deployed a bunch of sites over the years, here’s what I came across:
Upgrading Docker Itself
On a long-running server, which is often the case with single-server deploys, you’ll likely want to upgrade Docker from time to time.
If nginx is running in a container then your site is going to be 100% dead to the world while Docker isn’t running. Users will get a connection error.
When nginx is installed directly on your host you can serve a 503 maintenance page that doesn’t depend on Docker or any containers running. It can be served directly by nginx and it’s super easy to toggle on and off. It only requires a self-contained HTML file to exist in a specific location on your file system.
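Here’s a minimal sketch of that pattern (the domain and paths below are placeholders, not my exact setup):

```nginx
server {
    listen 80;
    server_name example.com;

    # If the flag file exists, answer every request with a 503
    # maintenance page served straight off disk -- no containers needed.
    if (-f /var/www/maintenance/maintenance.html) {
        return 503;
    }

    error_page 503 @maintenance;
    location @maintenance {
        root /var/www/maintenance;
        rewrite ^(.*)$ /maintenance.html break;
    }

    # ... your normal reverse proxy config would continue here ...
}
```

Toggling maintenance mode on and off is then just a matter of moving that file in or out of the directory.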
Sometimes a little bit of downtime is needed on a single-server deploy, unless you go the route of spinning up a brand new server, updating your DNS records, waiting for the records to propagate and then destroying your old server. Alternatively, managing your own load balancer instead of relying on DNS works too.
That’s totally reasonable and I have nothing against that but that also opens up multiple cans of worms like using a third server or managed service to host your database so you don’t end up losing data or having it become out of sync during the switch over period.
There are also tons of things you may or may not be doing to keep your web servers stateless, which makes server migrations tricky to do without downtime or data inconsistencies.
Long story short, the maintenance page served by nginx is a straightforward solution that works no matter what your app’s architecture is and how far down the https://12factor.net rabbit hole you’ve gone.
Running Multiple Independent Apps on 1 Server
If you have a single `docker-compose.yml` file that has everything related to your app (including nginx), what happens if you have 2 apps, each with their own nginx service?
You can’t publish ports 80 / 443 from 2 different copies of nginx; you’ll end up with a port conflict. You also can’t realistically use custom ports because then end users would need to go to example.com:81 instead of example.com.
When nginx is running directly on your host this is a non-issue. You can have your Dockerized web apps all running on their own ports and have a separate nginx config for each app. Since nginx is only running once you never encounter a port conflict on 80 / 443.
From each app’s `docker-compose.yml` POV, all it needs to do is publish `127.0.0.1:8000:8000` and now nginx can access it but the public internet won’t be able to. Perfect!
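For example (the ports and paths here are made up for illustration), each app publishes its own localhost-only port from its own file:

```yaml
# app1/docker-compose.yml
services:
  web:
    ports:
      - "127.0.0.1:8000:8000"

# app2/docker-compose.yml
services:
  web:
    ports:
      - "127.0.0.1:8001:8001"
```

Each app then gets its own nginx config on the host that proxies its domain to its port.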
Yes, technically you can accomplish the same thing if you run nginx in a Docker container that’s separate from each app’s `docker-compose.yml` file, but now you’ve still lost everything being up’able from 1 file and you need to ensure everything is on the same Docker network.
I also know there’s the https://github.com/nginx-proxy/nginx-proxy repo, but in practice I never ended up sticking with it. By the time you finish setting your custom headers, redirects and all that jazz you end up with a pretty much fully custom setup anyways.
At that point, with a custom jwilder nginx image, custom configs and every optional flag set, it becomes more complex to run than rolling your own solution.
Of course you can always run all of your projects on different servers to make this a non-issue, but if you’re someone who likes shipping a bunch of little side projects, running more than 1 thing on 1 server can save you a decent amount in monthly costs.
Speaking of which, running nginx outside of Docker is also nice for combining static web sites and Dockerized apps on the same server. You can have your non-Dockerized nginx serve your static blog while also acting as a reverse proxy to your main Dockerized web app(s).
That works nicely in practice because a static site is just a bunch of files sitting on disk. There’s no programming run-time to deal with, so it makes sense not to require Docker to serve that content. This blog and all of my static sites are hosted directly with nginx running on 1 server btw.
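As a rough sketch of what that can look like (the domains and paths are placeholders), one host nginx handles both jobs with two server blocks:

```nginx
# Static blog: just files on disk, no containers involved.
server {
    listen 80;
    server_name blog.example.com;
    root /var/www/blog;
}

# Dockerized app: reverse proxy to a localhost published container port.
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://localhost:8000;
    }
}
```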
SSL Certificate Management
Technically this isn’t much different in either case, but personally I use Certbot to issue Let’s Encrypt SSL certs so it’s not a big deal to get SSL going directly on my host. That includes DNS-based validation too.
I’m mentioning it here because when nginx isn’t running in Docker it’s really straightforward to piece everything together. Certbot runs on your host, certificates renew automatically with a cron job and you can reference the certificate paths in your nginx config. Done.
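As a sketch (the domain is a placeholder, and newer Certbot packages install their own renewal timer, in which case the cron entry is unnecessary):

```sh
# Issue a certificate on the host using Certbot's nginx plugin:
sudo certbot certonly --nginx -d example.com -d www.example.com

# Crontab entry to renew and reload nginx (renew is a no-op until a
# cert is close to expiry):
# 0 3 * * * certbot renew --quiet --post-hook "systemctl reload nginx"
```

Your nginx config then points at the standard Let’s Encrypt paths, such as `/etc/letsencrypt/live/example.com/fullchain.pem` and its matching `privkey.pem`.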
It’s a bit more complicated if you have everything containerized. Now, it’s still doable of course but I remember once looking at the jwilder letsencrypt proxy companion and was like “whhhhhhhhhhhaaaaaaaat?” after trying to digest the charts and diagrams that explain the architecture involved to pull everything off.
That and there’s been an issue open since 2018 to get DNS challenges working so you can issue wildcard certificates. Even without wildcard certs, using DNS-based validation is nice because it means nginx doesn’t need to respond to challenge requests. Certbot handles this out of the box.
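For example, with one of Certbot’s DNS plugins (Cloudflare here purely as an illustration, and the credentials path is a placeholder), issuing a wildcard cert looks roughly like this:

```sh
sudo certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
  -d example.com -d "*.example.com"
```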
I appreciate that projects like jwilder’s exist and clearly a lot of folks use them, but it just wasn’t for me. I also haven’t found any alternative solutions that make the process more straightforward than handling certificate management outside of Docker.
Decoupling Your App from Its Reverse Proxy
With the above said, from an architecture perspective I also think decoupling your reverse proxy (nginx in this case) from your application is a reasonable idea.
Think about it like this:
- If you decided to deploy your app on Heroku, you don’t need to think about a reverse proxy. It’s no longer a concern that your app has to deal with.
- If you decided to run your app in a Kubernetes cluster you’ll use 1 of many different ingress controllers, but as for nginx itself it’s debatable whether or not you’d use it in the exact same way as you would on a self managed server.
- If you decided to deploy your app to your own self managed server then you’d very likely be using nginx. You could even use it when deploying to multiple servers because it supports remote upstreams and load balancing too.
Given all of that, from your app’s perspective I think it’s fair to say that nginx isn’t a primary / critical concern in the same way that your database is. How nginx fits in very much depends on how you decide to run your app in production.
But What about Testing nginx in Development?
One amazing benefit of Docker is knowing that if something runs on your dev box, as long as you use the same image somewhere else with the same or similar environment variables you can be very confident it’ll work.
But nginx is kind of different in this regard. If you really want to test that your nginx config is working with production-like settings, at the very least you’re going to need to run a proper DNS server like dnsmasq on your dev box or have a router that supports full-blown DNS.
By default you won’t be able to fully test any nginx configuration that depends on being able to resolve a fully qualified domain name, such as redirecting www.example.com to example.com. That’s because you only have access to localhost.
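For example, a single dnsmasq rule (the `.test` domain here is made up for dev purposes) can point a domain and all of its subdomains at your dev box:

```
# /etc/dnsmasq.d/dev.conf
address=/example.test/127.0.0.1
```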
Plus, I don’t know about you but running my app in production mode is quite different than in development. For example, I don’t generate compiled and MD5-tagged static files in development but nginx will expect them to exist.
Also some of the paths to where things are located will be different. For example let’s say your app supports uploading files to disk. If you use a container, you’ll need to have that directory volume mounted back to disk.
In development that might be in a relative path to your project to keep things together but in production it might be saved in a block storage device.
There’s going to be a bunch of aspects of your nginx config that will be different in dev and prod, and your app itself is likely going to be running differently too.
That led me to not caring too much about nginx not running in a container in development. I can still test nginx configs for syntax errors in CI because nginx has a built-in tool for that (`nginx -t`), and you can easily use it by volume mounting your config into the official nginx Docker image.
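Here’s a sketch of what that CI step could look like (the image tag and config file name are placeholders):

```sh
# Mount your config into the official image and run the syntax check:
docker run --rm \
  -v "$PWD/myconfig.conf:/etc/nginx/nginx.conf:ro" \
  nginx:stable nginx -t
```

Keep in mind `nginx -t` also verifies that files referenced by the config (like SSL certs) exist, so those paths may need to be mounted or stubbed out too.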
But for really testing everything in production mode, nothing beats a test or staging server that’s running in the same environment as your production server but on a different sub-domain. This server could be temporary too; it’s up to you.
On the bright side, once you have your nginx config working chances are you won’t be modifying it that often. It’s one of those things where you really only need to test it when it changes and that might not happen for months or even years after it’s been set up.
How Can nginx Connect to Your Container?
If you like the idea of keeping nginx outside of Docker, you can have it connect to your web container by using Docker’s port publishing feature.
For example, if you’re using Docker Compose and you have a `web` service defined that’s running on port 8000, you could add this port publish property to that service:
```yaml
ports:
  - "127.0.0.1:8000:8000"
```
Then when you configure nginx’s `proxy_pass` setting you can set it to `http://localhost:8000`. This works nicely because the above port publish is limited to localhost, so nginx can connect to your container from your Docker host but the public internet won’t be able to connect to your container.
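Putting it all together, the host side of the nginx config could look something like this (the domain is a placeholder and the headers are common ones you’d likely want to forward):

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        # Talk to the container through the localhost-only published port.
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```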
I went into a lot more detail about this concept and more in my DockerCon talk from 2021.
Conclusion
So that’s why I run nginx directly on my Docker host while keeping the rest of my application Dockerized. I’m really happy with the result, and if you happen to use Ansible to provision your server, it’s a piece of cake to automate the whole setup once you learn how Ansible works.
By the way if you’re curious, I have been working on a web app deployment course for over a year that uses Docker, Ansible, nginx and more. If you want to get notified when it’s released you can sign up below.
How do you run nginx with your Dockerized web apps? Let me know in the comments.