
I went with iDrive e2 (https://www.idrive.com/s3-storage-e2/). 5 TB is $150/year (50% off the first year) for S3-compatible storage. My favorite part is that there are no per-request, ingress or egress costs. That cost is all there is.
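
For anyone wondering what “S3-compatible” means in practice: any S3 client can talk to it if you override the endpoint. A rough sketch with the standard AWS CLI (the bucket name and endpoint URL are placeholders; e2 gives you an account-specific endpoint):

```
# credentials from the e2 dashboard (placeholders)
export AWS_ACCESS_KEY_ID="YOUR_KEY"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET"

# point the regular AWS CLI at the S3-compatible endpoint
aws s3 ls s3://my-backups --endpoint-url "https://YOUR-REGION-ENDPOINT"
aws s3 cp backup.tar.zst s3://my-backups/ --endpoint-url "https://YOUR-REGION-ENDPOINT"
```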


This isn’t specific to just netdata, but I frequently find projects that have some feature provided via their cloud offering and then say “but you can also do it locally” and gesture vaguely at some half-written docs that don’t really help.

It makes sense for them, since one of those is how they make money and the other is how they lose cloud customers, but it’s still annoying.

Shoutout to healthchecks.io, which seems to provide both a nice cloud offering and a fully-fledged self-hostable server with good documentation.


I’ve not found a good solution for actual constant monitoring and I’ll be following this thread, but I have a similar/related item: I use healthchecks.io (specifically a self-hosted instance) to verify all my cron jobs (backups, syncs, …) are working correctly. Even more involved monitoring solutions often don’t cover that area (and it can be quite terrible if it goes wrong), so I think it’d be a good addition to most of these setups.
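
For reference, the pattern is just a ping after the job succeeds; a minimal crontab sketch (script path, instance URL and check UUID are made up):

```
# run the backup at 03:00 and ping the (self-hosted) healthchecks instance
# only if it exited successfully; a missing ping triggers an alert
0 3 * * * /usr/local/bin/backup.sh && curl -fsS -m 10 --retry 3 -o /dev/null https://hc.example.org/ping/YOUR-CHECK-UUID
```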


I’ve not tried it myself, but AFAIK VLC can be remote-controlled in various ways, and since the API for that is open, multiple clients for it exist: https://wiki.videolan.org/Control_VLC_from_an_Android_Phone
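
One of the simpler options is VLC’s built-in HTTP interface, which, as far as I know, is what most of those remote apps talk to. A rough sketch (password and file name are examples, and I haven’t double-checked the exact flags recently):

```
# start VLC with the HTTP interface enabled (listens on port 8080 by default)
vlc --extraintf http --http-password mypass video.mkv

# from another shell/device: the web interface uses an empty username
curl -u ":mypass" "http://localhost:8080/requests/status.xml?command=pl_pause"
```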

There’s also Clementine which offers a remote-control Android app.


I personally prefer podman, due to its rootless mode being “more default” than in docker (rootless docker works, but it’s basically an afterthought).

That being said: there are just so many tutorials, tools and other resources that assume docker by default that starting with docker is definitely the less cumbersome approach. It’s not that podman is significantly harder or has many big differences, but all the tutorials are basically written with docker as the first target in mind.

In my homelab the progression was docker -> rootless docker -> podman and the last step isn’t fully done yet, so I’m currently running a mix of rootless docker and podman.
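
To illustrate what “rootless by default” means in practice: as a plain unprivileged user, without root or a docker group, something like this works out of the box (a sketch; container name and ports are examples):

```
# run a container entirely as the current user; no daemon, no root
podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine
podman ps

# optional: have a systemd user service manage the container
podman generate systemd --new --name web > ~/.config/systemd/user/container-web.service
systemctl --user daemon-reload
systemctl --user enable --now container-web.service
```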


Can confirm the statistics: I recently consolidated about a dozen old hard disks of various ages and quite a few of them had a couple of bad blocks and 2 actually failed. One disk was especially noteworthy in that it was still fast, error-free and without complaints. That one was a Seagate ST3000DM001. A model so notoriously bad that it’s got its own Wikipedia entry: https://en.wikipedia.org/wiki/ST3000DM001
Other “better” HDDs were entirely unresponsive.

Statistics only really matter if you have many, many samples. Most people (even enthusiasts with a homelab) won’t be buying hundreds of HDDs in their life.


Do you have any devices on your local network where the firmware hasn’t been updated in the last 12 months? The answer to that is surprisingly frequently yes, because “smart device” companies are laughably bad at device security. My intercom runs some ancient Linux kernel, my frigging washing machine could be connected to WiFi, and the box that controls my roller shutters hasn’t gotten an update since 2018.

Not everyone has devices like that, and one could isolate them in VLANs and use other measures, but in this day and age “my local home network is 100% secure” is far from a safe assumption.

Heck, even your router might be vulnerable…

Adding HTTPS is just another layer in your defense in depth. How many layers you are willing to put up with is up to you, but it’s definitely not overkill.


Sidenote about the Pi filesystem self-clobbering: Are you running off of an SD card? Running off an external SSD is way more reliable in my experience. Even a decent USB stick tends to be better than micro-SD in the long run, but even the cheapest external SSD blows both of them out of the water. Since I switched my Pis over to that, they’ve never had any disk-related issues.
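
If anyone wants to try the switch: on recent Raspberry Pi OS the boot order lives in the bootloader EEPROM and can be checked/changed from the shell. A rough sketch (menu names may differ between releases):

```
lsblk                      # confirm the USB SSD actually shows up (e.g. as sda)
sudo rpi-eeprom-update     # check whether newer bootloader firmware is available
sudo rpi-eeprom-config     # show the current config, including BOOT_ORDER
sudo raspi-config          # Advanced Options -> Boot Order -> prefer USB/SSD
```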


IMO set up a good incremental backup system with deduplication and then back up everything at least once a day as a baseline. Anything that’s especially valuable can be backed up more frequently, but the price/effort of backing up at least once a day should become trivial if everything is set up correctly.

If you feel like hourly snapshots would be worth it, but too resource-intensive, then maybe replacing them with local snapshots of the file system (which are basically free, if your OS/filesystem supports them) might be reasonable. Those obviously don’t protect against hardware failure, but help against accidental deletion.
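
As a concrete example of “basically free” local snapshots, assuming a btrfs filesystem (paths are placeholders; ZFS has an equivalent `zfs snapshot`):

```
# create a read-only, copy-on-write snapshot; it takes a fraction of a second
# and initially uses (almost) no extra space
sudo btrfs subvolume snapshot -r /data "/data/.snapshots/$(date +%Y-%m-%d_%H%M)"

# crontab entry to run a wrapper script hourly (pruning handled separately):
# 0 * * * * /usr/local/sbin/snapshot-data.sh
```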


What you describe is true for many file formats, but for most lossy compression systems the “standard” basically only strictly explains how to decode the data and any encoder that produces output that successfully decodes that way is fine.

And the standard defines a collection of “tools” that encoders can use; how exactly to use, combine and tweak those tools is up to the encoder.

And over time, new/better combinations of these tools are found for specific scenarios. That’s how different encoders for the same codec can produce very different output.

As a simple example, almost all video codecs by default describe each frame relative to the previous one (i.e. they describe which parts moved and what new content appeared). There is of course also the option to send a completely new frame, which usually takes up more space. But when one scene cuts to another, sending a new frame can be much better. A “bad” encoder might not have scene-change detection and still try to “explain the difference” relative to the previous scene, which can easily take up more space than just sending the entire new frame.
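
You can see this effect with ffmpeg and libx264, where scene-cut detection can be switched off; at the same quality setting, the version without scene-cut keyframes usually handles hard cuts noticeably worse (a sketch, file names are examples):

```
# default: the encoder may insert a keyframe at detected scene changes
ffmpeg -i input.mp4 -c:v libx264 -crf 23 out_with_scenecut.mp4

# -sc_threshold 0 disables scene-cut detection, so a hard cut has to be
# encoded as a (large) diff against a completely different previous frame
ffmpeg -i input.mp4 -c:v libx264 -crf 23 -sc_threshold 0 out_without_scenecut.mp4
```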


This feels like an XY problem. To be able to provide a useful answer, we’d need to know what exactly you’re trying to achieve. What goal are you trying to achieve with the VPN, and what goal are you trying to achieve by using the client IP?


You don’t need a dedicated git server if you just want a simple place to store git repositories. Simply place a bare git repository on your server and use ssh://yourserver/path/to/repo as the remote URL and you can push/pull.
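
A minimal sketch of that (paths and branch name are examples):

```
# on the server: create an empty "bare" repository (no working copy)
ssh yourserver 'git init --bare /path/to/repo.git'

# on your machine: point a remote at it and push
git remote add origin ssh://yourserver/path/to/repo.git
git push -u origin main
```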

If you want more than that (e.g. a nice web UI, user management, issue tracking, …) then Gitea is a common solution, but you can even run GitLab itself locally.


“Use vim in SSH” is not a great answer to asking for a convenient way to edit a single file, because it requires understanding multiple somewhat-complex pieces of technology that OP might not be familiar with and that have a reasonably steep learning curve.

But I’d still like to explain why it pops up so much. And the short version is very simple: versatility.

Once you’ve learned how to SSH into your server you can do a lot more than just edit a file. You can download files with curl directly to your server, you can move around files, copy them, install new software, set up an entire new docker container, update the system, reboot the system and many more things.

So while there’s definitely easier-to-use solutions to the one singular task of editing a specific file on the server, the “learn to SSH and use a shell” approach opens up a lot more options in the future.

So if in 5 weeks you need to reboot the machine, but your web-based-file-editing tool doesn’t support that option, you’ll have to search for a new solution. But if you had learned how to use the shell then a simple “how do I reboot linux from the shell” search will be all that you need.

Also: while many people like using vim, for a beginner in text based remote management I’d recommend something simpler like nano.
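
For completeness, the whole “convenient editing” workflow then boils down to this (host, user and file path are placeholders):

```
ssh user@yourserver            # log in to the server
nano /path/to/the/config.yml   # edit; Ctrl+O writes the file, Ctrl+X exits
sudo reboot                    # ...and all the "bonus" tasks use the same session
```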


I just use Ansible to prepare the OS, set up a dedicated user, install/set up rootless Docker, and then sync all the docker compose files from the same repo to the appropriate server and launch/update as necessary. I also use it to centrally administer any cron jobs, e.g. for backups.

Basically, if I didn’t forget anything (which is always possible), I should be able to take a brand new RPi with an SSD and replace one of mine with a single command.
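
The “single command” is essentially a standard playbook run; a sketch with hypothetical inventory/playbook/host names:

```
# apply the whole configuration (user, rootless docker, compose files, cron)
# to just the replacement host; idempotent, so re-running is safe
ansible-playbook -i inventory.yml site.yml --limit new-pi
```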

It also allows me to keep my entire setup “documented” and configured in a single git repository.


Yeah, there’s a bunch of lessons that tend to only be learned the hard way, despite most guides mentioning them.

Similar to how RAID should not be treated as a backup.


I’ve got a similar setup, but use Kopia for backup which does all that you describe but also handles deduplication of data very well.

For example, I’ve now added older, less structured backups to my “good” backup, and since there is a lot of duplication between a 4-year-old backup and a 5-year-old backup, it barely increased the storage usage.
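
For anyone who hasn’t used it, the basic Kopia flow looks roughly like this (repository path and source directory are placeholders); deduplication happens automatically across everything stored in the repository:

```
# one-time: create (or later, connect to) a repository
kopia repository create filesystem --path /mnt/backup/kopia

# back up a directory; identical chunks across snapshots are stored only once
kopia snapshot create /home/me/data

# list snapshots to see how little each incremental one actually adds
kopia snapshot list /home/me/data
```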


There’s lots of very good approaches in the comments.

But I’d like to play the devil’s advocate: how many of you have actually recovered from a disaster that way? Ideally as a test, of course.

A backup system that has never done a restore operation must be assumed to be broken. Similar logic should be applied to disaster recovery.

And no, I haven’t either: I use a combined Ansible/Docker approach that I’m reasonably sure could recover most things quite easily, but I’ve not yet done a full rebuild from just that.
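
A low-effort way to at least partially test this, sketched here with Kopia from the comment above (snapshot ID and paths are placeholders):

```
# restore a snapshot somewhere disposable...
kopia snapshot list /home/me/data
kopia restore SNAPSHOT_ID /tmp/restore-test

# ...and compare it against the live data
diff -r /home/me/data /tmp/restore-test | head
```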


There’s plenty of reasons why you would not want to have a Jellyfin server be publicly available (even behind authentication). It’s simply not a well-secured system at this point (and may not get there for a long time, because it’s not a focus).

I strongly suggest keeping access to it behind a VPN.

But note that VPN access is not necessarily any slower than serving it publicly over HTTPS directly, at least not by much.

If you don’t already use WireGuard as the protocol, then maybe consider running a WireGuard VPN instead; it tends to be quicker than classic OpenVPN.
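
Setting one up is less work than it sounds; a rough sketch of the server side (interface and file names are the usual defaults, and the wg0.conf itself still has to be written):

```
# generate a key pair for the server
wg genkey | tee server.key | wg pubkey > server.pub

# bring up the tunnel defined in /etc/wireguard/wg0.conf and check handshakes
sudo wg-quick up wg0
sudo wg show
```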

And last but not least: a major limiting factor for the performance of media servers accessed from afar is the upload speed of your ISP connection, which is very often much lower than your download speed (100 Mbit down / 10 Mbit up is common here, for example, so uploads run at only 10% of the download speed).


Realistically the best bang-for-the-buck is maybe to sell it to some collector and get a new one ;-)

Mostly tongue-in-cheek, though. I don’t know if anyone is actually willing to pay for it, but I know some people are quite happy when they find their old Pi 1.


I used OpenHAB a few years ago and remember it being way more fiddly, with very variable integration quality. It didn’t help that it was based on OSGi packages (the complex mess that the Eclipse IDE is also built on), which I don’t much care for.

I only recently started with HA (Home Assistant) and found it much easier to use and tweak.

But I also saw some stubbornness from the devs. In my case it was related to OAuth/third-party authentication, which they claimed was “enterprise interests trying to corrupt a community project” (I’m paraphrasing) rather than the good security practice of centralising authentication in a homelab.


I don’t have a simple guide, but it’s probably a good idea to reduce the number of moving parts if you’re trying to keep stuff simple. So pick something that has all the features in one (user management, authentication, authorization, …). It might not be the best at every single thing (it almost certainly won’t be), but doing it all in one place usually means it’s easier to configure and you don’t need to wire multiple things together.

I’ve recently moved from Authelia to Authentik due to some features that I was missing/wishing for, but between those two I’d definitely say Authelia is easier to get running initially (and you don’t need an external LDAP server for it, as others have mentioned).

You’ll probably still need a proxy that can do proxy auth, because not all services can do OIDC/OAuth2. I’m using Traefik, but I’ve heard that Caddy is easier to set up initially (can’t compare them myself).
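
As a rough idea of what “proxy auth” looks like with Traefik’s docker provider and Authentik’s forward-auth outpost (all names, hostnames and the outpost URL are examples from memory of the docs, not something to copy verbatim):

```
# protect a demo container behind forward auth; assumes Traefik is already
# running with the docker provider enabled
docker run -d --name whoami \
  -l 'traefik.http.routers.whoami.rule=Host(`whoami.example.org`)' \
  -l 'traefik.http.routers.whoami.middlewares=authentik@docker' \
  -l 'traefik.http.middlewares.authentik.forwardauth.address=http://authentik-outpost:9000/outpost.goauthentik.io/auth/traefik' \
  -l 'traefik.http.middlewares.authentik.forwardauth.trustForwardHeader=true' \
  traefik/whoami
```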