I see! So I am assuming you had to configure Nginx specifically to support this? Problem is I love using Nginx Proxy Manager and I am not sure how to change that to use socket activation. Thanks for the info though!
Man, I often wonder whether I should ditch docker-compose. Problem is there are just so many compose files out there, and it's super convenient to use those instead of converting them into systemd unit files every time.
I use podman with the podman-docker compatibility layer and native docker-compose. Podman + podman-docker is a drop-in replacement for actual docker. You can run all the regular docker commands and it will work. If you run it as rootful, it behaves in exactly the same way. Docker-compose will work right on top of it.
I prefer this over native Docker because I get the best of both worlds. All the tutorials and guides for Docker work just fine, but at the same time I can explore Podman's rootless containers. Plus I enjoy its integration with Cockpit.
I started with Docker and then migrated to Podman for the integrated Cockpit dashboard support. All my docker-compose files work transparently on top of rootful Podman so the migration was relatively easy. Things get finicky when you try to go rootless though.
I say try both. Rootful podman is gonna be closest to the Docker experience.
OP might not be looking to make a full paid service considering he’s just doing this for friends and relatives.
I get the sentiment though. I run a Jellyfin server that I share with a few friends and some of them have flat out told me that I should start charging for it. I refused because getting paid for it just sets up an expectation that it will be reliable and have all the stuff that they want. Personally, I don’t want that kind of pressure. I want to be able to tweak the server and install new things / updates without worrying about uptime.
You should really use the Nextcloud docker-compose files to set up Nextcloud. They make it stupidly easy to deploy. Pair that with SWAG as a reverse proxy and you get a pretty secure Nextcloud deployment complete with SSL certs.
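For reference, a minimal sketch of that pairing. This is not the official compose file: the domain and email are placeholders, and a real deployment would also add a database service per the Nextcloud compose examples.

```yaml
version: "3"
services:
  nextcloud:
    image: nextcloud
    volumes:
      - ./nextcloud:/var/www/html   # app data on local disk
    restart: unless-stopped

  swag:
    image: lscr.io/linuxserver/swag
    cap_add:
      - NET_ADMIN
    environment:
      - URL=example.com             # placeholder: your domain
      - VALIDATION=http             # or dns, depending on your setup
      - EMAIL=you@example.com       # placeholder: for cert notices
    volumes:
      - ./swag:/config
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped
```

SWAG handles the Let's Encrypt cert issuance and renewal for you; you just drop in (or enable) a proxy config for Nextcloud.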
Come to think of it, why not also run pihole in a docker instead of a full VM?
Eh, RAID 5 and 6 are still viable for home deployments. Not a lot of people want to be running massive drive arrays or expensive disks at home just to get decent storage. I ran a 4x 4TB RAID 5 for close to a decade and it’s survived 4 drive rebuilds. The Intel chip on the QNAP machine I was using to maintain that array died before the array itself did. Now I have an NVMe SSD-based array, so drive rebuilds are even less of a concern.
The other reason why I brought it up is that the article you linked doesn’t even mention BTRFS RAID 5 and 6 issues until all the way down at the bottom of the article in a small paragraph, when really it should be in bright red letters at the beginning.
Does FileBrowser support creating public links for sharing? I use Nextcloud as a way to deliver large amounts of photos and videos to my clients.
My issue with Syncthing is that doing partial sync is sort of a pain in the ass. My Nextcloud currently has 290GB of data that I'd rather not completely sync to all of my devices, and AFAICT with Syncthing you still need to fiddle around with config files to do that, and even then it's clunky and doesn't work sometimes.
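For the curious, the config-file fiddling looks roughly like this: Syncthing reads ignore patterns from a .stignore file in the folder root, evaluated top to bottom with first match winning. A sketch that keeps only one subfolder on a device (the path is a placeholder):

```
// .stignore -- keep only this subfolder on this device
!/Photos/2023
// ignore everything else
*
```

Files that are already present locally aren't removed just because they become ignored, and flipping patterns later means rescans, which is part of why it can feel clunky.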
Yeah I get that Nextcloud is a bit slow but it’s definitely more capable as a drop-in cloud storage replacement than other software I’ve seen.
Does BTRFS include RAID support? I don't have much experience with it. The most I did once was recover a snapshot.
It does have RAID support but its RAID 5 and 6 are BROKEN! The devs themselves do not recommend using these. If you need RAID 5 and 6 and you absolutely want to use BTRFS, you'll have to go with mdraid and then put BTRFS on top, but then you lose a lot of the BTRFS self-healing capabilities. Personally for RAID 5 and 6, I still recommend ZFS's RAIDz. It's quite easy to set up. I have a DIY NAS with an OS drive running BTRFS and a storage pool consisting of 4x 4TB SSDs running in RAIDz1.
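For anyone wondering what "easy to set up" means in practice, a sketch of the commands, assuming four disks; the pool/dataset names and device paths are placeholders (use /dev/disk/by-id paths so the pool survives device reordering):

```shell
# Create a raidz1 pool named "tank" from four disks
zpool create tank raidz1 \
  /dev/disk/by-id/nvme-DISK1 \
  /dev/disk/by-id/nvme-DISK2 \
  /dev/disk/by-id/nvme-DISK3 \
  /dev/disk/by-id/nvme-DISK4

zfs create tank/media    # use datasets instead of plain directories
zpool status tank        # verify the layout
zpool scrub tank         # run periodically to verify data integrity
```

That's basically it; the scrub is the part you want on a schedule (cron or a systemd timer).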
One word of advice: it can be smart to have the domain name with one provider, and the hosting with a different one.
If OP is thinking of DDNS, he might be looking at hosting from home. If you’re using a VPS, the IP generally doesn’t change so DDNS isn’t really required.
I agree with your Namecheap recommendation though. I use it to access my docker containers that are running on a NAS box at home. My router runs the DDNS client and periodically notifies Namecheap whenever my home IP changes.
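If your router can't run a DDNS client, something like ddclient on any always-on box does the same job. A sketch of its config using ddclient's Namecheap protocol support; the domain, password, and host record are placeholders:

```
# /etc/ddclient.conf -- sketch for Namecheap Dynamic DNS
protocol=namecheap
use=web, web=dynamicdns.park-your-domain.com/getip
server=dynamicdns.park-your-domain.com
login=example.com               # the domain registered at Namecheap
password='your-ddns-password'   # the Dynamic DNS password from the Namecheap dashboard
home                            # the host record to update, e.g. home.example.com
```

Note the password here is the per-domain Dynamic DNS password from the Namecheap dashboard, not your account password.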
SSDs are coming down drastically in price so depending on when you create your NAS, you might want to consider NVMe SSDs instead of HDDs for the performance and power savings. I just bought 4x 4TB MSI Spatium’s and put them into a self-built NAS with ZFS raidz1 and I couldn’t be happier. It takes only 2 hrs to scrub 8TB worth of data.
HDDs are starting to become obsolete and I am honestly here for it. I think in the future SSDs will start to become much more economical. Currently they’re still 2x the price of an equivalent NAS grade HDD, but that’s better than the 4x just two years ago.
If those disks are the big plastic WD externals, they can be easily shucked and used in a NAS—much cheaper than buying the bare drives without the casing for reasons known only to WD.
They’re cheaper because WD externals are usually bottom of the barrel drives that failed to pass muster for their other offerings. I would exercise caution when relying on them. Source: friend who works at WD doing drive validation.
Mmm, not quite. I am not familiar with how picoshare works exactly, but according to the picoshare docker README, it uses the data volume to store its application sqlite database. My original suggestion is that the Docker application and its application data (configs, cache, local databases, and other files critical to the functioning of the application) should be on the same machine, but the images and other files that picoshare shares can be remote.
Basically, my rule is to never assume that anything hosted on another machine will be guaranteed to be available. If you think picoshare can still work properly when its sqlite database gets ripped out without warning, then by all means go for it. However, I don’t think this is the case here. You’ll risk the sqlite database getting corrupted or the application itself erroring out if there’s ever a network outage.
For example, with the Jellyfin docker image, I would say that the cache and config volumes have to be local, while media can be on a remote NAS. My reasoning is that Jellyfin is built to handle media files changing / adding / disappearing. It is, however, not built to gracefully handle its config files and caches disappearing in the middle of operation.
I wouldn't recommend running container volumes over network shares, mainly because network instability between the NAS and the server can cause some really weird issues. Imagine an application having its files ripped out from underneath it while it's running.
I would suggest containers + volumes together on the server, and stuff that’s just pure data on the NAS. So for example, if you were to run a Jellyfin media server, the docker container and its volumes will be on the server, but the video and audio files will be stored on the NAS and accessed via a network share mount.
I had been self-hosting stuff on my QNAP NAS for years before it died due to the infamous Intel clock drift issue and now I am in the process of making a DIY NAS (last few parts are coming in this weekend). I don’t have answers to all your questions but I’ll try my best with the experience that I have.
It is absolutely possible to mix your usecases on one machine, with the caveat that if you're running on less-powerful hardware (like an off-the-shelf NAS), some of your services might be competing with each other for resources. CPU usage and disk access times (especially with a RAID 5 HDD array) can all impact performance. My QNAP NAS did start to bog down a few times with both Jellyfin and Nextcloud running at full tilt, but it was generally pretty usable.
Most NAS products support docker images so I wouldn't worry too much about NAS vs PC in this case. Also, docker-compose is your friend. Write your yaml file once and it will make for easy setup and upgrading.
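A minimal sketch of what that yaml looks like; the service name, image, and ports are placeholders:

```yaml
# docker-compose.yml -- write once, reuse forever
version: "3"
services:
  app:
    image: vendor/app:latest    # placeholder image
    ports:
      - 8080:8080
    volumes:
      - ./config:/config        # keep app state next to the compose file
    restart: unless-stopped
```

Then `docker-compose up -d` brings it up, and upgrading is just `docker-compose pull && docker-compose up -d`.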
Dude, I am with you on dead-end products. The death of my QNAP NAS caused me lots of headaches and I basically swore off products that I can't upgrade and fix myself. The problem is price. The cheapest x86 PC that I personally think will handle multiple usecases (media server, Nextcloud, SAMBA, maybe a Valheim server or a VM when I need it) costs roughly $650-$750 depending on your build. You can probably find a Synology or QNAP NAS for about $500-$550. Granted, they most likely aren't going to be anywhere near as powerful as a DIY x86 PC, so I think it's worth going the DIY route. Those prices do NOT include the drives either, so be sure to factor that into your calculation. If you're curious, here's one of the cheaper builds I was considering building: https://pcpartpicker.com/list/rtqDbK. Ultimately I decided to go for a crazier build because I did not want slow HDDs anymore: https://pcpartpicker.com/list/Lm92Kp
You mean running a media server on your laptop, but pointing the media libraries to a Samba share on a NAS? I did that for years with my QNAP NAS and a little Intel NUC running Plex. The only issue is that you won’t get incremental media library updates whenever you add new files into the Samba folder. Usually, Plex (and Jellyfin) can detect file changes if the media library is local and automatically process only those files instead of rescanning the entire media library. Over Samba, there’s no such automatic detection so whenever you add a file, you have to manually trigger a full rescan in order for it to pop up in your media library.
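As a sketch, the NAS share gets mounted on the machine running the media server via fstab; the hostname, paths, and uid here are placeholders:

```
# /etc/fstab -- mount the NAS media share read-only over SMB
//nas.local/media  /mnt/media  cifs  credentials=/etc/samba/creds,ro,uid=1000,_netdev  0  0
```

Then you point the Plex/Jellyfin library at /mnt/media and schedule periodic library scans, since file-change notifications don't propagate over SMB.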
I believe Unraid does this. I have not tried it myself and I plan on going with ZFS for my DIY NAS.
I don't have any resource recommendations, but personally I've taken the docker-compose approach, which helps quite a bit with isolation. For media servers, you only need to give read-only access to the volumes hosting your media storage. It is also recommended to put media servers like Jellyfin behind an Nginx reverse proxy, because Nginx has been battle-tested in terms of security and Jellyfin's web server has not. You can use docker-compose to easily spin up an Nginx proxy alongside your media server and have them contained in their own isolated network.
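A sketch of that layout, assuming Jellyfin as the media server; the config paths and mount points are placeholders, and a real setup also needs TLS certs wired into the Nginx config:

```yaml
version: "3"
services:
  nginx:
    image: nginx
    ports:
      - 443:443                 # only the proxy is exposed to the outside
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    networks: [proxy_net]

  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - ./config:/config
      - ./cache:/cache
      - /mnt/media:/media:ro    # read-only access to the media volume
    networks: [proxy_net]       # no published ports; reachable only via nginx

networks:
  proxy_net:                    # isolated network shared by the two containers
```

Since Jellyfin publishes no ports of its own, the only way in from outside is through the proxy.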
Do not open any more ports than is necessary to host your services. This means even remote administration should not be available via your public IP. Learn how to set up WireGuard so that if you're away from home, you can quickly VPN into your network and do remote administration. If you're using SSH, make sure you disable password authentication and only rely on SSH keys. I am sure other people can add more; this is just the basics.
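As a sketch, the client side of that WireGuard setup looks roughly like this; the keys, addresses, and endpoint are all placeholders:

```
# /etc/wireguard/wg0.conf on the laptop
[Interface]
PrivateKey = <laptop-private-key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820   # the one UDP port you forward at home
AllowedIPs = 192.168.1.0/24        # route only the home LAN through the tunnel
PersistentKeepalive = 25
```

The home side just needs the matching peer entry and that single forwarded UDP port. For the SSH part, that means PasswordAuthentication no in /etc/ssh/sshd_config.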
Hope this helps!
Gave me a fixed IP in my home network
You don’t need to have a fixed IP for your client machines.
What does ipconfig /all list as your DNS servers? Also, double check your browser's DNS over HTTPS setting. Depending on what it is set to, you might be accidentally bypassing your configured DNS server.
To verify which DNS you’re actually contacting, you can go to ipleak.net to check.
Interesting solution! Thanks for the info. Seems like Nginx Proxy Manager doesn't support Proxy Protocol. Lmao, the world seems to be constantly pushing me towards Traefik 🤣