
Do NOT self-host email! In the long run you’ll forget a security patch, someone will breach your server and blast out spam, and both your domain and your server will end up on every blacklist imaginable.

Buy a domain, DON’T use GoDaddy, they are bastards. I’d suggest OVH for European domains or Cloudflare for international ones.

After you have your domain, register with “Microsoft 365” or “Google Workspace” (I’d avoid Google, they don’t have a stable offering) or any other email provider that allows custom domains.

Follow their instructions on how to connect your domain to their service (a few MX and TXT records usually suffice) and you’re done.

After that, you can spin up a VPS, try out new stuff and connect it to your domain as well (A and CNAME records).
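
For illustration, the records might look roughly like this in zone-file notation; the host names, SPF string and IP are placeholders, the real values come from your provider:

```
; placeholders for example.com – copy the real values from your provider
example.com.      IN MX    10 mail.protection.your-provider.com.
example.com.      IN TXT   "v=spf1 include:spf.your-provider.com -all"
vps.example.com.  IN A     203.0.113.10
app.example.com.  IN CNAME vps.example.com.
```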


A self-hosted RSS reader? Probably the ability to read your stuff from anywhere without installing anything. Like on your work PC… ;)


Why not rent a few machines and do the virtualization yourself? Can you install ESXi on a Hetzner server?


I learned a lot from the tutorials of https://ibracorp.io/

You’ll find rather advanced things there, but they are easy to follow and well explained.


Could you elaborate on this please? Isn’t cloudflared a tunnel INTO the machine running a service? Can you use the same tunnel for outbound traffic as well?? Where does the traffic end up? How does this work?



Yes, your uni might intercept communication on port 53 and reroute it to their DNS servers. It’s possible.


I used this one, with some modifications, like command line parameters so it can be reused for different backup jobs.

I’ve packaged it into a little Docker container that runs crond and executes the script every day for a few backup pairs.
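
Roughly like this (a sketch; backup.sh, the schedule and the paths here are placeholders, not my exact setup):

```
# Dockerfile – minimal crond wrapper around the backup script
FROM alpine:3.19
RUN apk add --no-cache rsync openssh-client
COPY backup.sh /usr/local/bin/backup.sh
COPY crontab.txt /etc/crontabs/root
RUN chmod +x /usr/local/bin/backup.sh
CMD ["crond", "-f", "-l", "2"]

# crontab.txt – one line per backup pair, run nightly
#  0 2 * * * /usr/local/bin/backup.sh /data/photos    user@nas:/backups/photos
# 30 2 * * * /usr/local/bin/backup.sh /data/documents user@nas:/backups/documents
```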


I had UrBackup running for 6+ months. It wasn’t reliably backing things up, configuring it to be accessible via the Internet is almost impossible, adding clients is a hassle and the config isn’t very user friendly.

Furthermore, I got the impression that its backups aren’t reliable; restoring files without UrBackup might be impossible.

That’s why I’m now back to an incremental rsync backup script. It’s reliable, you can restore things simply by copying them back via ssh, and it uses a lot less space (!!!) than the UrBackup backups.
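
For anyone curious, a minimal sketch of such an incremental backup using hard links via --link-dest (host and paths are placeholders, not my exact script):

```sh
#!/bin/sh
# Each run creates a dated snapshot; unchanged files are hard-linked
# against the previous snapshot, so only changes cost space.
SRC="/data"
DEST="user@backuphost:/backups/data"
STAMP="$(date +%Y-%m-%d)"

rsync -a --delete --link-dest="../latest" "$SRC/" "$DEST/$STAMP/" \
  && ssh user@backuphost "ln -sfn $STAMP /backups/data/latest"
```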


Oh man! I’m using FreeIPA and I’m in way over my head. lldap looks like a great replacement! Question: do you know if/how I can migrate my (little) directory without recreating every user and group (AND resetting their passwords)?


Why are you using a crappy uni DNS? Why not 1.1.1.1 or OpenDNS or even Google’s 8.8.8.8?


I see. Sure, that’s a valid way to manage networking. I personally don’t like to do this manually anymore, just like I don’t drive stick shift anymore.

If you want to expose a service to the WWW, I’d recommend using a reverse proxy. E.g. I use Traefik 2; it picks up the config it needs automatically from 5-6 labels per container and I don’t need to bother with IPs, certificates, NAT and what have you. It just creates the virtual host, procures a Let’s Encrypt certificate and directs the traffic to the target container completely on its own.
Spinning up a container and immediately trying it out on its own subdomain with a valid SSL certificate has never been easier. (I have a wildcard “*” DNS entry pointing to my Traefik server.)
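
For reference, the handful of labels look roughly like this (a sketch; “websecure” and “letsencrypt” are assumptions and must match the entrypoint/certresolver names in your Traefik static config):

```yaml
services:
  whoami:
    image: traefik/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      - "traefik.http.routers.whoami.entrypoints=websecure"
      - "traefik.http.routers.whoami.tls.certresolver=letsencrypt"
      - "traefik.http.services.whoami.loadbalancer.server.port=80"
```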

You also could try installing cloudflared and create a Cloudflare tunnel. This way you don’t even have to forward any ports in your router.
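
The named-tunnel setup is roughly this (tunnel name and hostname are just examples):

```sh
cloudflared tunnel login
cloudflared tunnel create homelab
cloudflared tunnel route dns homelab app.example.com

# ~/.cloudflared/config.yml:
#   tunnel: homelab
#   credentials-file: /home/user/.cloudflared/<tunnel-id>.json
#   ingress:
#     - hostname: app.example.com
#       service: http://localhost:8080
#     - service: http_status:404

cloudflared tunnel run homelab
```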

Just some tips, if you want to explore new things :)


I have never cared about the IP addresses of my docker containers and never will.

Why do you? There is a Docker-internal DNS; you can simply resolve containers by service name/container_name.
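
A minimal compose sketch to see it in action (names are arbitrary; both services share the default compose network):

```yaml
services:
  app:
    image: traefik/whoami                      # answers HTTP on port 80
  client:
    image: busybox
    command: ["wget", "-qO-", "http://app"]    # "app" resolves via Docker's built-in DNS
    depends_on:
      - app
```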


Usually you can just hand out a second DNS server by separating the two IPs with a comma.

That said, I’m running two PiHoles for the exact reason OP noted. The two PiHoles’ settings are kept in sync with Gravity Sync.

If I update one PiHole or it goes down for any reason, the second one is there to pick up the slack.

Regarding DHCP: I’d probably turn off the stupid FritzBox DHCP, because you really can’t set two DNS servers in it (WTF!), and use the PiHole(s) for DHCP instead.
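
On the PiHole that does DHCP that’s basically one extra line of dnsmasq config (sketch; the file name and IPs stand in for your two PiHoles):

```
# e.g. /etc/dnsmasq.d/99-two-dns.conf – advertise both PiHoles to DHCP clients
dhcp-option=option:dns-server,192.168.178.2,192.168.178.3
```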


Make sure the Docker containers are using the same network. If you didn’t specify anything, this should be the case for all three containers in the compose file.

Alright, now give every container a name of your choosing using the container_name field.

Lastly, change the nginx config to refer to the app container by name, but I think you already did that: upstream djangoapp { server container-name:port }

No need to expose any ports except the 80 or 443 of the nginx container.

If you have issues, spin up a temporary Alpine container with a command like “tail -f /dev/null”, then enter it with “docker exec -it temp /bin/sh” and install debugging tools (nc/netcat, curl, …) to debug the connection.
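
A compose sketch of that layout (service names, images and the port 8000 are examples, not taken from your setup):

```yaml
services:
  djangoapp:
    container_name: djangoapp
    build: .                     # app listens on 8000 inside the compose network
  db:
    container_name: db
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
  nginx:
    container_name: nginx
    image: nginx:alpine
    ports:
      - "80:80"                  # the only published port
    depends_on:
      - djangoapp
```

The upstream in the nginx config then simply reads server djangoapp:8000; with whatever name and port you chose above.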


What about Microsoft 365? The tenant itself is free and you pay about $5-10 per account per month (depending on subscription level).

With that you get the full MS Exchange experience, 1 TB of OneDrive space and the whole shebang.


Ahhhh… alright, I misunderstood. So either depends_on is your friend, or you could implement a rather dirty solution: write a little script for the NPM healthcheck that also checks whether searxng is online. Then use autoheal.

But that would be my last resort, and only if searxng depends very closely on the NPM container.
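
A sketch of that dirty variant (it assumes curl exists in the NPM image, NPM’s admin UI on port 81 and searxng reachable as http://searxng:8080 on the shared network):

```yaml
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    labels:
      - autoheal=true            # so autoheal restarts NPM when the check fails
    healthcheck:
      test: ["CMD-SHELL", "curl -fsS http://localhost:81 && curl -fsS http://searxng:8080/ || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3
```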


This will do exactly what you want: you have to configure a healthcheck for searxng that detects when it’s down. Maybe something with curl or whatever.

As soon as it’s down, autoheal will restart the container. It doesn’t matter why it’s down (update, dependency not running, …); autoheal will just restart it.
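
How that could look in compose (a sketch; searxng’s internal port 8080 and the wget check are assumptions, and “autoheal=true” is the label the autoheal container watches by default):

```yaml
services:
  searxng:
    image: searxng/searxng
    labels:
      - autoheal=true
    healthcheck:
      test: ["CMD-SHELL", "wget -q --spider http://localhost:8080/ || exit 1"]
      interval: 30s
      retries: 3

  autoheal:
    image: willfarrell/autoheal
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # needed to restart containers
```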


Either use depends_on or think of a health check and use Will Farrell’s simple Docker Autoheal container that restarts containers when they become unhealthy. https://github.com/willfarrell/docker-autoheal

