Heads up! Long post and lots of head bashing against the wall.
Context:
I have written a Python app (Django). I have dockerized the deployment, and the compose file has three containers: app, nginx, and postgres. I'm currently trying to deploy a demo of it on a VPS running Debian 11. Information below has been redacted (IPs, domain name, etc.).
Problem:
I keep running into 502 errors. Locally everything works well, even with nginx (though running on port 80). For the deployment I'm configuring nginx as best I can, redirecting HTTP traffic to HTTPS with SSL certs. The nginx logs simply say: "connect() failed (111: Connection refused) while connecting to upstream, client: 1.2.3.4, server: demo.example.com, request: "GET / HTTP/1.1", upstream: "http://192.168.0.2:8020/", host: "demo.example.com"". I have tried just about everything.
What I’ve tried:
How you can help:
Please take a look at the nginx config below and see if you guys can spot a problem, PLEASE! This is my current /etc/nginx/nginx.conf:
```
user www-data;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    upstream djangoapp {
        server app:8020;
    }

    server {
        listen 80;
        listen [::]:80;
        server_name demo.example.com;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name demo.example.com;

        ssl_certificate     /etc/letsencrypt/live/demo.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/demo.example.com/privkey.pem;
        #ssl_protocols TLSv1.2 TLSv1.3;
        #ssl_prefer_server_ciphers on;

        location / {
            proxy_pass http://djangoapp;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            #proxy_set_header Upgrade $http_upgrade;
            #proxy_set_header Connection keep-alive;
            proxy_redirect off;
        }

        location /static/ {
            autoindex on;
            alias /static/;
        }
    }
}
```
EDIT: I have also confirmed that both containers are connected to the same docker network (`docker network inspect frontend`).
EDIT 2: Solved my problem. See my comments to @chaospatterns. TL;DR: there was an uncaught exception in the app, but it didn't crash the container. Had to dig deep into the logs to find it.
First, the basics: "Connection refused" means that nothing is accepting connections on http://192.168.0.2:8020/.
`0.0.0.0/8082->8082`
What steps did you take to confirm that this is actually running?
`gunicorn app.wsgi:application --user www-data --bind 0.0.0.0:8020 --workers 3`
SOLVED… ALMOST THERE??? There were no signs (docker logs app) of an issue until I scrolled all the way to the very top (way past all the successful migrations, tasks run on boot, and success messages). There was an uncaught exception when booting the gunicorn workers, caused by a middleware package I had removed from my dependencies a few days ago. I searched through my code, removed any calls and settings for that middleware package, redeployed the app, and now I can hit the public page.
What now? Now that it looks like everything is working, what is the best practice for the nginx config? Leave it all in /etc/nginx/nginx.conf (with user as root)? Restore the out-of-the-box nginx.conf and /etc/nginx/conf.d/default.conf and just override default.conf? Or add a secondary config like /etc/nginx/conf.d/app.conf and leave default.conf as it ships? What is the best practice around this?
That’s odd that it didn’t cause the Docker container to immediately exit.
My suggestion would be to create `/etc/nginx/conf.d/mycooldjangoapp.conf`. Compared to `conf.d/default.conf`, this is more intuitive if you start hosting multiple apps. Keep it out of `nginx.conf`, because apt-get and other package managers will usually patch that file when new versions ship, and again it gets confusing if you have multiple apps.
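For illustration, here is a minimal sketch of what that per-app file could contain, assuming the upstream and the two server blocks from the nginx.conf posted above are simply moved into it, and that the stock nginx.conf keeps its usual `include /etc/nginx/conf.d/*.conf;` line inside the `http` block (the `mycooldjangoapp.conf` name just follows the suggestion):

```
# /etc/nginx/conf.d/mycooldjangoapp.conf -- sketch only.
# Everything below is lifted from the http{} block of the original nginx.conf.

upstream djangoapp {
    server app:8020;
}

# Redirect plain HTTP to HTTPS.
server {
    listen 80;
    listen [::]:80;
    server_name demo.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name demo.example.com;

    ssl_certificate     /etc/letsencrypt/live/demo.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/demo.example.com/privkey.pem;

    location / {
        proxy_pass http://djangoapp;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
    }

    location /static/ {
        autoindex on;
        alias /static/;
    }
}
```

The global settings (user, worker_processes, logging, etc.) then stay in nginx.conf, which the package manager can update without touching the app-specific file.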
Yeah, not sure why it didn't just crash instead of hiding behind all kinds of success messages.
Fair enough! If I create a secondary config as you're suggesting, wouldn't it conflict with the server blocks in default.conf? If I remember correctly, default.conf has a `server` block listening on 80 and pointing at localhost (which in my case wouldn't be the correct path, since the app is in another container), so wouldn't nginx get confused because it doesn't know which block to follow?

Or maybe I saw that block in default.conf but it was all commented out out of the box. Idk, I had to step away for a sec. As you can imagine I've been bashing my head for hours, and it turned out to be some BS I would have caught if I'd read the entire log stream, so I'm pretty angry/decompressing at the moment.
No, you can have multiple `server` blocks with the same `listen` directive. They just need to differ by their `server_name`, and only one `server` block can contain `default_server` (see the nginx documentation).

NGINX will use the `server_name` directives to differentiate the different backend services. This is a classic virtual-host configuration model.
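A rough sketch of that virtual-host model, reusing the existing demo.example.com site plus a hypothetical second app (the `othercoolapp` name and its upstream are made up for illustration):

```
# Both server blocks listen on the same port; nginx picks the block whose
# server_name matches the request's Host header.
server {
    listen 80;
    server_name demo.example.com;               # the existing Django app
    location / { proxy_pass http://djangoapp; }
}

server {
    listen 80 default_server;                   # only one block per listen socket may be default_server
    server_name othercoolapp.example.com;       # hypothetical second app
    location / { proxy_pass http://othercoolapp:8030; }
}
```

Requests whose Host header matches neither `server_name` fall through to whichever block is marked `default_server` (or, if none is, the first block defined for that port).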
Alright I’ll give it a try and see what happens. Thanks for your help!