• 3 Posts
  • 28 Comments
Joined 1Y ago
Cake day: Jun 21, 2023


Interesting solution! Thanks for the info. Seems like Nginx Proxy Manager doesn’t support the PROXY protocol. Lmao, the world just keeps pushing me towards Traefik 🤣


I see. And the rest of your services are all exposed on localhost? Hmm, darn, it really looks like there’s no way to use user-defined networks.


I am guessing you’re not running Caddy itself in a container? Otherwise you’ll run into the same real IP issue.


I see! So I am assuming you had to configure Nginx specifically to support this? Problem is I love using Nginx Proxy Manager and I am not sure how to change that to use socket activation. Thanks for the info though!

Man, I often wonder whether I should ditch docker-compose. Problem is there are just so many compose files out there, and it’s super convenient to use those instead of converting them into systemd unit files every time.


Pasta is the default, so I am already using it. It seems like for bridge networks, rootlesskit is always used alongside pasta and that’s the source of the problem.


How do you guys handle reverse proxies in rootless containers?
I've been trying to migrate my services over to rootless Podman containers for a while now, and I keep running into weird issues that always make me go back to rootful. This past weekend I almost had it all working until I realized that my reverse proxy (Nginx Proxy Manager) wasn't passing the real source IP of client requests down to my other containers. This meant that all my containers were seeing requests coming solely from the IP address of the reverse proxy container, which breaks things like Nextcloud's brute-force protection. It's apparently due to this Podman bug: https://github.com/containers/podman/issues/8193

This is the last step before I can finally switch to rootless, so it makes me wonder what all you self-hosters out there are doing with your rootless setups. I can't be the only one running into this issue, right?

If anyone's curious, my setup consists of several docker-compose files, each handling a different service. Each service has its own dedicated Podman network, but only the proxy container connects to all of them to serve outside requests. This way each service is isolated from the others, and the only ingress from the outside is via the proxy container. I can also easily run duplicate instances of the same service without having to worry about port collisions. Not being able to see the real client IP really sucks in this situation.
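For anyone curious what that layout looks like on paper, here's a stripped-down sketch (service names, images, and network names are placeholders, not my exact stack):

```yaml
# service stack (e.g. nextcloud/docker-compose.yml) — no published ports,
# the container is only reachable over its dedicated network
services:
  nextcloud:
    image: docker.io/library/nextcloud:latest
    networks:
      - nextcloud-net

networks:
  nextcloud-net:
    name: nextcloud-net

# --- proxy stack (its own compose file) — the only ingress point ---
# services:
#   proxy:
#     image: docker.io/jc21/nginx-proxy-manager:latest
#     ports:
#       - "80:80"
#       - "443:443"
#     networks:
#       - nextcloud-net   # one entry per service network
# networks:
#   nextcloud-net:
#     external: true
```

The proxy joins each service's network as an external network, so services never see each other, only the proxy.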

I use podman with the podman-docker compatibility layer and native docker-compose. Podman + podman-docker is a drop-in replacement for actual docker. You can run all the regular docker commands and it will work. If you run it as rootful, it behaves in exactly the same way. Docker-compose will work right on top of it.

I prefer this over native Docker because I get the best of both worlds. All the tutorials and guides for Docker work just fine, but at the same time I can explore Podman’s rootless containers. Plus I enjoy its integration with Cockpit.


I started with Docker and then migrated to Podman for the integrated Cockpit dashboard support. All my docker-compose files work transparently on top of rootful Podman so the migration was relatively easy. Things get finicky when you try to go rootless though.

I say try both. Rootful podman is gonna be closest to the Docker experience.






What are some KVM-over-IP or equivalent solutions you guys would recommend for guaranteed remote access and remote power cycle?
Currently, I have SSH, VNC, and Cockpit set up on my home NAS, but I have run into situations where I lose remote access because I did something stupid to the network connection or some update broke the boot process, leaving it stuck in the BIOS or bootloader.

I am looking for a separate device that will not only let me access the NAS as if I had another keyboard, mouse, and monitor attached, but also let me power cycle it in extreme situations (hard freeze, etc.). Some googling has turned up the term KVM-over-IP, but I was wondering if any of you guys have trustworthy recommendations.


OP might not be looking to make a full paid service considering he’s just doing this for friends and relatives.

I get the sentiment though. I run a Jellyfin server that I share with a few friends and some of them have flat out told me that I should start charging for it. I refused because getting paid for it just sets up an expectation that it will be reliable and have all the stuff that they want. Personally, I don’t want that kind of pressure. I want to be able to tweak the server and install new things / updates without worrying about uptime.


I am totally out of the loop. Why is Texas’s power grid that bad right now?


[SOLVED] If I am using the SWAG proxy in front of a Nextcloud instance, is it safe to ignore some of the warnings in the admin page?
I am using [one of the official Nextcloud docker-compose files](https://github.com/nextcloud/docker/tree/master/.examples/docker-compose/insecure/mariadb/apache) to set up an instance behind a SWAG reverse proxy. SWAG is handling SSL and forwarding requests to Nextcloud on port 80 over a Docker network.

Whenever I go to the Overview tab in the Admin settings, I see this security warning:

```
The "X-Robots-Tag" HTTP header is not set to "noindex, nofollow". This is a potential security or privacy risk, as it is recommended to adjust this setting accordingly.
```

I have X-Robots-Tag set in SWAG. Is it safe to ignore this warning? I am assuming that Nextcloud is complaining because it still thinks it's communicating over an unsecured port 80 and isn't aware that it's only talking via SWAG. Maybe I am wrong though. I wanted to double-check and see if there was anything else I needed to do to secure my instance.

**SOLVED:** Turns out Nextcloud is just picky about what's in X-Robots-Tag. I had set it to SWAG's recommended value of `noindex, nofollow, nosnippet, noarchive`, but Nextcloud expects exactly `noindex, nofollow`.
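For anyone hitting the same warning with SWAG, the fix in my case was just a one-line change to the header value in SWAG's nginx config (the exact file may vary by SWAG version; adjust to wherever your setup sets X-Robots-Tag):

```nginx
# Nextcloud's admin check wants exactly "noindex, nofollow" —
# the extra nosnippet/noarchive tokens are what trip the warning
add_header X-Robots-Tag "noindex, nofollow" always;
```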


You should really use the Nextcloud docker-compose files to set up Nextcloud. They make it stupidly easy to deploy. Pair that with SWAG as a reverse proxy and you get a pretty secure Nextcloud deployment complete with SSL certs.

Come to think of it, why not also run Pi-hole in a Docker container instead of a full VM?


Nice that share feature looks pretty slick. I might check this out.

Yeah I frankly don’t get why syncthing doesn’t implement it either. It’s like the only feature that really holds me back from using it, otherwise it’s pretty damn slick and has much faster sync than Nextcloud.


Eh, RAID 5 and 6 are still viable for home deployments. Not a lot of people want to be running massive drive arrays or expensive disks at home just to get decent storage. I ran a 4x 4TB RAID 5 for close to a decade and it’s survived 4 drive rebuilds. The Intel chip on the QNAP machine I was using to maintain that array died before the array itself did. Now I have an NVMe SSD-based array, so drive rebuilds are even less of a concern.

The other reason why I brought it up is that the article you linked doesn’t even mention BTRFS RAID 5 and 6 issues until all the way down at the bottom of the article in a small paragraph, when really it should be in bright red letters at the beginning.


Does FileBrowser support creating public links for sharing? I use Nextcloud as a way to deliver large amounts of photos and videos to my clients.

My issue with Syncthing is that doing partial sync is sort of a pain in the ass. My Nextcloud currently has 290GB of data that I’d rather not completely sync to all of my devices, and AFAICT with Syncthing you still need to fiddle around with config files to do that, and even then it’s clunky and doesn’t work sometimes.

Yeah I get that Nextcloud is a bit slow but it’s definitely more capable as a drop-in cloud storage replacement than other software I’ve seen.


Yes, many possible configurations, and snapshots.

Except RAID 5 and 6! Those are still broken on BTRFS and not recommended for use by the devs. It’s unfortunate because I just set up a DIY NAS and had to go with ZFS because of this.


Does BTRFS include RAID support? I don’t have much experience with it. The most I’ve done is recover a snapshot once.

It does have RAID support, but its RAID 5 and 6 are BROKEN! The devs themselves do not recommend using them. If you need RAID 5 or 6 and you absolutely want to use BTRFS, you’ll have to go with mdraid and put BTRFS on top, but then you lose a lot of BTRFS’s self-healing capabilities. Personally, for RAID 5 and 6 I still recommend ZFS’s RAIDz. It’s quite easy to set up. I have a DIY NAS with an OS drive running BTRFS and a storage pool consisting of 4x 4TB SSDs running in RAIDz1.
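For reference, creating a RAIDz1 pool really is just a couple of commands (device paths below are placeholders; use your own /dev/disk/by-id names so the pool survives device reordering):

```shell
# create a raidz1 pool named "tank" from four drives (placeholder paths)
zpool create tank raidz1 \
  /dev/disk/by-id/nvme-drive0 /dev/disk/by-id/nvme-drive1 \
  /dev/disk/by-id/nvme-drive2 /dev/disk/by-id/nvme-drive3

zpool status tank        # confirm the pool is ONLINE and healthy
zfs create tank/storage  # create a dataset to actually put files in
```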


One word of advice: it can be smart to have the domain name with one provider and the hosting with a different one.

If OP is thinking of DDNS, he might be looking at hosting from home. If you’re using a VPS, the IP generally doesn’t change so DDNS isn’t really required.

I agree with your Namecheap recommendation though. I use it to access my Docker containers that are running on a NAS box at home. My router runs the DDNS client and periodically notifies Namecheap whenever my home IP changes.
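If your router ever loses that feature, ddclient can do the same job from the NAS itself. A sketch using ddclient's Namecheap protocol (domain, host record, and password here are placeholders; note Namecheap issues a separate dynamic DNS password, not your account password):

```
# /etc/ddclient.conf
protocol=namecheap
server=dynamicdns.park-your-domain.com
login=example.com              # the domain the host record lives under
password='your-ddns-password'  # Namecheap's dedicated DDNS password
home                           # host record to update (home.example.com)
```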


SSDs are coming down drastically in price, so depending on when you create your NAS, you might want to consider NVMe SSDs instead of HDDs for the performance and power savings. I just bought 4x 4TB MSI Spatiums and put them into a self-built NAS with ZFS RAIDz1, and I couldn’t be happier. It takes only 2 hrs to scrub 8TB worth of data.

HDDs are starting to become obsolete and I am honestly here for it. I think in the future SSDs will start to become much more economical. Currently they’re still 2x the price of an equivalent NAS grade HDD, but that’s better than the 4x just two years ago.


If those disks are the big plastic WD externals, they can be easily shucked and used in a NAS—much cheaper than buying the bare drives without the casing for reasons known only to WD.

They’re cheaper because WD externals are usually bottom of the barrel drives that failed to pass muster for their other offerings. I would exercise caution when relying on them. Source: friend who works at WD doing drive validation.


Mmm, not quite. I am not familiar with how picoshare works exactly, but according to the picoshare docker README, it uses the data volume to store its application sqlite database. My original suggestion is that the Docker application and its application data (configs, cache, local databases, and other files critical to the functioning of the application) should be on the same machine, but the images and other files that picoshare shares can be remote.

Basically, my rule is to never assume that anything hosted on another machine will be guaranteed to be available. If you think picoshare can still work properly when its sqlite database gets ripped out without warning, then by all means go for it. However, I don’t think this is the case here. You’ll risk the sqlite database getting corrupted or the application itself erroring out if there’s ever a network outage.

For example, with the Jellyfin docker image, I would say that the cache and config volumes have to be local, while media can be on a remote NAS. My reasoning is that Jellyfin is built to handle media files being changed, added, or removed. It is, however, not built to gracefully handle its config files and caches disappearing in the middle of operation.


I wouldn’t recommend running container volumes over network shares, mainly because network instability between the NAS and the server can cause some really weird issues. Imagine an application having its files ripped out from underneath it while it’s running.

I would suggest containers + volumes together on the server, and stuff that’s just pure data on the NAS. So for example, if you were to run a Jellyfin media server, the docker container and its volumes will be on the server, but the video and audio files will be stored on the NAS and accessed via a network share mount.
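As a sketch (paths, hostnames, and share names are made up), that split looks something like this, with the NAS share mounted on the host first:

```yaml
# mount the NAS share on the host beforehand, e.g.:
#   mount -t nfs nas.local:/export/media /mnt/nas/media
services:
  jellyfin:
    image: docker.io/jellyfin/jellyfin:latest
    volumes:
      - ./config:/config          # app data stays on the server's local disk
      - ./cache:/cache
      - /mnt/nas/media:/media:ro  # bulk media lives on the NAS, read-only
```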


I had been self-hosting stuff on my QNAP NAS for years before it died due to the infamous Intel clock drift issue and now I am in the process of making a DIY NAS (last few parts are coming in this weekend). I don’t have answers to all your questions but I’ll try my best with the experience that I have.

  1. It is absolutely possible to mix your usecases on one machine, with the caveat that if you’re running on less powerful hardware (like an off-the-shelf NAS), some of your services might be competing with each other for resources. CPU usage and disk access times (especially with a RAID 5 HDD array) can all impact performance. My QNAP NAS did start to bog down a few times with both Jellyfin and Nextcloud running at full tilt, but it was generally pretty usable.

  2. Most NAS products support docker images so I wouldn’t worry too much about NAS vs PC in this case. Also, docker-compose is your friend. Write your yaml file once and it will make for easy setup and upgrading.

  3. Dude, I am with you on dead-end products. The death of my QNAP NAS caused me lots of headache, and I basically swore off products that I can’t upgrade and fix myself. The problem is price. The cheapest x86 PC that I personally think will handle multiple usecases (media server, Nextcloud, SAMBA, maybe a Valheim server or a VM when I need it) costs roughly $650-$750 depending on your build. You can probably find a Synology or QNAP NAS for about $500-$550. Granted, they most likely aren’t going to be anywhere near as powerful as a DIY x86 PC, so I think it’s worth going the DIY route. Those prices do NOT include the drives either, so be sure to factor that into your calculation. If you’re curious, here’s one of the cheaper builds I was considering: https://pcpartpicker.com/list/rtqDbK. Ultimately I decided to go for a crazier build because I did not want slow HDDs anymore: https://pcpartpicker.com/list/Lm92Kp

  4. You mean running a media server on your laptop, but pointing the media libraries to a Samba share on a NAS? I did that for years with my QNAP NAS and a little Intel NUC running Plex. The only issue is that you won’t get incremental media library updates whenever you add new files into the Samba folder. Usually, Plex (and Jellyfin) can detect file changes if the media library is local and automatically process only those files instead of rescanning the entire media library. Over Samba, there’s no such automatic detection so whenever you add a file, you have to manually trigger a full rescan in order for it to pop up in your media library.

  5. I believe Unraid does this. I have not tried it myself and I plan on going with ZFS for my DIY NAS.

  6. I don’t have any resource recommendations, but personally I’ve taken the docker-compose approach, which helps quite a bit with isolation. For media servers, you only need to give read-only access to the volumes hosting your media storage. It is also recommended to put media servers like Jellyfin behind an Nginx reverse proxy, because Nginx has been battle-tested in terms of security and Jellyfin’s web server has not. You can use docker-compose to easily spin up an Nginx proxy alongside your media server and have them contained in their own isolated network.

Do not open any more ports than necessary to host your services. This means even remote administration should not be available via your public IP. Learn how to set up WireGuard so that when you’re away from home, you can quickly VPN into your network and do remote administration. If you’re using SSH, make sure you disable password authentication and rely only on SSH keys. I am sure other people can add more; this is just the basics.
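For the SSH part, the relevant sshd_config lines are just these (a sketch; verify key login works in a second session before closing your current one, or you can lock yourself out):

```
# /etc/ssh/sshd_config
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
PermitRootLogin prohibit-password
```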

Hope this helps!


Gave me a fixed IP in my home network

You don’t need to have a fixed IP for your client machines.

What does ipconfig /all list as your DNS servers? Also, double check your browser’s DNS Over HTTPS setting. Depending on what it is set to, you might be accidentally bypassing your configured DNS server.

To verify which DNS you’re actually contacting, you can go to ipleak.net to check.