• 3 Posts
  • 12 Comments
Joined 1Y ago
Cake day: Jul 01, 2023


Logcheck. It took ages to make sure all the innocuous log lines are ignored, but now I get an email as soon as anything non-routine happens on my servers: logs from every update, every login, and so on. This has given me the most confidence that nothing unexpected is happening on my servers. Of course, you still need to make sure the firewall is configured well, that you use SSH keys, etc., but logcheck is how I know I'm doing enough.
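For example, most of what I ended up writing is just extended regexes, one per line, in files under /etc/logcheck/ignore.d.server/. Something along these lines (hostname and user are placeholders) silences my own successful SSH logins:

```
^\w{3} [ :[:digit:]]{11} myhost sshd\[[0-9]+\]: Accepted publickey for myuser from .*$
```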


As others have said, with an incremental, filesystem-level mechanism, the backup process won't be too taxing for the CPU. I run ZFS, which makes this easy: I take hourly snapshots with sanoid, which also get sent to another mirrored pair of connected drives using syncoid. Then, once a day, I upload encrypted daily snapshots to a bucket in the cloud using restic. It sounds complicated, but sanoid/syncoid and restic do all the heavy lifting; all I did was automate their schedules with systemd timers and some scripts that back up the right directories.
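The systemd side is tiny. Roughly something like this for the daily restic upload (unit names, paths and the env file are made up for illustration):

```
# /etc/systemd/system/restic-backup.service (hypothetical name)
[Unit]
Description=Daily encrypted offsite backup with restic

[Service]
Type=oneshot
# the env file holds RESTIC_REPOSITORY, RESTIC_PASSWORD_FILE and the S3 credentials
EnvironmentFile=/etc/restic/env
ExecStart=/usr/bin/restic backup /tank/data

# /etc/systemd/system/restic-backup.timer
[Unit]
Description=Run the restic backup once a day

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```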


Very interesting project! However, I can't shake the feeling that while you pitch it as a platform for sharing DRM-free games, it will get used for sharing games against the licenses and wishes of publishers. I don't really care about the publishers, but don't you think there is a real risk that once your app gets enough attention, it will draw their ire and they will force you to shut down? Perhaps not directly, but e.g. by getting you removed from the Windows Store.


My configuration and deployment are managed entirely via an Ansible playbook repository; in case of absolute disaster, I just have to redeploy the playbook. I also run everything on top of mirrored drives, so a single disk failure isn't disastrous as long as I replace the drive quickly enough.

For when that's not enough, the data itself is backed up hourly (via ZFS snapshots) to a spare pair of drives and nightly to S3 buckets in the cloud (via restic), all automated with systemd timers and some scripts. The configuration for these backups is part of the playbooks, of course. I test the backups every six months by trying to reproduce all the services in a test VM. This has caught issues with my restoration procedure (mostly potential UID mismatches).
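The restore test itself is little more than this, run inside the test VM (bucket and paths are placeholders):

```
# pull the latest snapshot into a scratch directory and verify the repo
restic -r s3:s3.amazonaws.com/my-backup-bucket restore latest --target /srv/restore
restic -r s3:s3.amazonaws.com/my-backup-bucket check
```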

And yes, I have once been forced to reinstall from scratch, and I managed to do it rather quickly through a combination of playbooks and well-tested backups.


Thanks for this useful reply! I think I'll just need to closely examine my setup and figure out if I really need the ability to up/down interfaces like I described, or whether the more persistent approach of networkd is actually more suitable for me. Sometimes I just want to reproduce behaviour that I've used before but may not actually need.


Thanks for your reply! One thing I'm struggling with in networkd is hysteresis: toggling the interface down and then back up does not do what I expect. Setting the interface down does not clear the configuration, and setting the interface up does not reconfigure the interface; I have to run reconfigure for that. I was hoping the declarative approach of networkd would make it easy to predict interface state and configuration.

This does make sense because configuration is not the same as operational state. However, what would the equivalent of ifdown (set interface down and remove configuration) and ifup (set interface up and reconfigure) be using networkd and networkctl? This kind of feature would be useful for me to test config changes, debug networking issues, disconnect part of the network while I’m making some changes, etc.
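For reference, this is roughly what I do today instead of ifdown/ifup (eth0 as a placeholder); it works, but it doesn't feel like a first-class equivalent:

```
networkctl down eth0          # link goes down, but addresses/config aren't flushed
networkctl up eth0            # link comes back up, but isn't reconfigured...
networkctl reconfigure eth0   # ...so I re-apply the .network file by hand
```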


Using systemd-networkd vs ifupdown on Debian
Does anybody have experience with both systems, enough to compare them? I'm currently using ifupdown on my Debian server as that's the default, but it seems the modern way of managing the local network is via systemd-networkd, so I'm contemplating putting in the effort to migrate. Would those of you who have experience with it recommend it? In my short investigation, I have made the following observations:

* using networkd means you can use networkctl to manually control the interfaces, which is quite convenient
* networkd aims to be fully declarative
* networkd separates the creation of virtual interfaces (netdev files) from their configuration (network files); see the sketch after this list
* networkd doesn't support all networking features (e.g. namespaces)
* networkd is systemd, but surprisingly I can't find information on how to create other unit files that depend on individual interfaces going up or down, other than networkd-dispatcher. I don't like dispatcher because, just like ifupdown, it triggers all the scripts and you need if-tests to exclude all the interfaces you don't want affected. I'd like to write unit files that can be targeted to activate and deactivate when a particular interface goes up or down.
* networkd, other than via dispatcher, does not seem to support adding arbitrary commands to run, like ifupdown does via e.g. pre-down, post-up, etc.
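To illustrate the netdev/network split I mean, here is a rough sketch for a bridge (names and addresses are made up):

```
# /etc/systemd/network/br0.netdev: creates the virtual interface
[NetDev]
Name=br0
Kind=bridge

# /etc/systemd/network/br0.network: configures it
[Match]
Name=br0

[Network]
Address=192.168.1.10/24
Gateway=192.168.1.1
```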

Thanks a lot for these tips! Especially about using the upstream deb.


Trying to understand the different selfhosted monitoring solutions
Note: it seems my original post from last week didn't get posted on lemmy.world from kbin (I can't seem to find it), so I'm reposting it. Apologies to those who may have already seen this.

I'm looking to deploy some form of monitoring across my selfhosted servers and I'm a bit confused about the different options. I have a small network of three machines that I would like to monitor; I am not looking for a solution that lets me monitor tens, hundreds, or thousands of nodes. Furthermore, I am more interested in being able to observe metrics for each node individually rather than in aggregate. Each of these machines performs a different task, so aggregate metrics from them are not particularly meaningful. However, collecting all the metrics centrally, so that I have a single dashboard to view them all in one convenient place, is definitely something I would like.

With that said, I have been trying to understand the different (popular) options that are available, and I would like to hear what the community's experience is with them and whether anybody has advice on any of these in light of my requirements above.

Prometheus seems like the default go-to for monitoring. This would require deploying a node_exporter on each node, a prometheus service (sketch of the scrape config below), and a grafana dashboard. That's all fine, I can do that. However, from all that I'm reading, it doesn't seem like Prometheus is optimised for my use case of monitoring each node individually. I'm sure it's possible, but I'm concerned that because this is not what it's meant for, it would take me ages to set it up in a way I'm happy with.

Netdata seems like a comprehensive single-device monitoring solution, and it also appears to be possible to run your own registry to help with distributed monitoring. Not gonna lie, the netdata dashboard looks slick. An important additional advantage is that it comes packaged on Debian (all my machines run Debian). However, it looks like it does not store the metrics for very long. To solve that, I could also set up InfluxDB and Grafana for long-term metrics. I could use Prometheus instead of InfluxDB in this arrangement, but I'm more likely to deploy a bunch of IoT devices than more servers needing monitoring, which makes InfluxDB a bit more future-proof for me as it could be reused for IoT data.

Cockpit is another single-device solution, which additionally provides direct control of the system. The direct control is probably not much of a plus, since I would never let Cockpit be accessible from outside my home network, whereas I wouldn't mind that so much for dashboards with read-only data (still behind some authentication, of course). It's also probably not built for monitoring specifically, but I included it in the list in case somebody has something interesting to say about it.

What's everybody's experience with the above solutions, and does anybody have advice specific to my situation? I'm currently leaning towards netdata with my own registry at first, later adding InfluxDB and Grafana for long-term metrics.
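For reference, I assume the Prometheus scrape side would be no more than something like this (hostnames are placeholders); it's the per-node dashboarding that I'm less sure about:

```
# prometheus.yml (sketch)
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: node
    static_configs:
      - targets:
          - server1.lan:9100
          - server2.lan:9100
          - server3.lan:9100
```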

I already posted that I recommend fastmail elsewhere in this thread, but you raised so many good points that it reminded me of some extra points :)

Fastmail offers granular, per-app passwords. I have a single password with read-only access to IMAP, which I use to back up all the data on a timer. This feature is missing from many (many) other email providers; if they even offer app passwords, it's usually a single password with full access (Mailfence, for example).

Since this community is about selfhosting, I think it's worth pointing out that this is AMAZING for selfhosting. I have all my selfhosted services sending e-mail via Fastmail's SMTP. With per-app passwords I don't need to store my normal e-mail password, and the apps can be limited to SMTP only (so no read access). In case of compromise, you can revoke permissions with per-app granularity.
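As an example of how simple it is: on each box, a service only needs an SMTP client config along these lines (msmtp is just what I happen to reach for; addresses and the password are placeholders, and check Fastmail's docs for the current host/port):

```
# ~/.msmtprc (sketch)
defaults
auth           on
tls            on
tls_starttls   on

account        fastmail
host           smtp.fastmail.com
port           587
from           alerts@example.com
user           me@example.com
password       app-specific-smtp-only-password

account default : fastmail
```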

Fastmail offers full CardDAV (contacts) and CalDAV (calendar) access, which makes plugging it into any other app that supports these very easy; their DNS wizard helps you set up the service records. I use DavX5 on my Android to sync all contacts and calendars outside of the Fastmail app (which is a self-contained app on Android and not too bad).

Fastmail has become my contacts app now; it's really great to have all your e-mail and contacts in the same place. The contacts don't even need to have an e-mail address: I have a lot of contacts stored for whom I only have a phone number. I sync to Android using the same DavX5 app and then immediately have these contacts in WhatsApp and Signal.


I recommend fastmail.com, though they do have some shortcomings that you need to consider, such as the fact that they're based in Australia (a Five Eyes country) and have servers in the USA. Their advantages are a slick interface, a fantastic app based on JMAP, and just generally being super convenient. They allow catch-all addresses, masked emails, custom domains, etc.



If it were just storage/RAM scraping, then that could indeed be solved with SSL pass-through. That way the reverse proxy would not decrypt the traffic and would forward the encrypted stream on to the home server; I was actually setting that up a few hours ago. However, since the VPS provider owns the IP address of the VPS, they can simply obtain their own certificate for the domain: Let's Encrypt verifies control of the domain via challenges served from wherever the DNS entries point, so whoever controls that IP can pass them. Therefore, even if the certificates weren't on the VPS, the fact that I am redirecting traffic via their IP address makes me vulnerable to a malicious provider.

The “hobby exercise” was just to indicate that this is not for work and that I’m interested in an answer beyond “you need to trust your provider” which I do :) I agree, these are important questions! And they’re also interesting!


Is it possible to completely hide all reverse proxy traffic from a VPS provider?
I run a self-hosted server at home on which I run a bunch of personal stuff (like Nextcloud etc.). To avoid pointing DNS records at my home router, I run a reverse proxy on a VPS that I rent (from Scaleway FWIW). Today I was trying to figure out to what extent that exposes my data to my VPS provider and whether I can do something about it.

Disclaimer: this is just a hobby exercise. I'm not paranoid, I just want to learn for myself how to improve the security of my setup.

My reverse proxy terminates the SSL connection and then proxies the connection over a wireguard tunnel to my home server. This means that (a) data is decrypted in the RAM of the VPS and (b) the certificates live unencrypted in the storage of the VPS. So the VPS provider, if they want to, can read all the traffic to and from my home server unencrypted.

I was thinking that I could solve both problems by using Nginx's SSL pass-through feature. This would allow me to not terminate SSL on the VPS, solving (a), and to move the certificates to my home server, solving (b). But just as I was playing around with it, I realised that SSL pass-through would not solve the problem of protecting my data from the VPS provider: as long as my DNS records point at the VPS provider's servers, the provider can always get their own certificates for my domains and do a MitM attack. Therefore, I might as well keep the certificates on the VPS, since I still have to trust them not to make their own behind my back.

In the end I concluded that as long as I use a VPS provider to route my traffic to my home server, there is no fool-proof way to secure my data from them. Intuitively it makes sense: the data crosses their hardware physically, and thus they have access to it. The only way to stop it would be to update the DNS records to point directly at my home server, which I don't want to do.

Is this correct thinking, or is there some way to prevent the VPS provider from seeing my data?

Again, I'm trying to solve this problem as a hobby exercise. The most sensitive data that I have is stored encrypted at the filesystem level, and I only decrypt it locally on my own machine to work on it. Therefore, the data that would actually cost me a lot if compromised is never available unencrypted on the VPS. Due to the overhead of this encryption and other complications, I don't do this for all my files.
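For completeness, the pass-through I was experimenting with is roughly this, using nginx's stream module on the VPS (the wireguard peer address is a placeholder): a plain TCP proxy, so no TLS termination on the VPS.

```
# nginx.conf on the VPS (sketch); requires the stream module
stream {
    server {
        listen 443;
        proxy_pass 10.0.0.2:443;   # home server over the wireguard tunnel
    }
}
```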

I originally used this too, but in the end I had to write my own python script that basically does the same thing and is also triggered by systemd. The problem I had was that for some reason podman sometimes thinks there is a new image, but when it pulls, it just gets the old image again. That would then trigger restarts of the containers, because auto-update doesn't check whether it actually downloaded anything new. I didn't want those restarts, so I had to write my own script (rough sketch below).

Edit: I do pin the version manually though, e.g. nextcloud 27, and check once a month whether I need to bump it. I do this manually in case the upgrade needs an intervention.
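The gist of the script is just this (a stripped-down sketch, not the real thing; image and unit names are placeholders):

```
#!/usr/bin/env python3
"""Only restart a container's systemd unit if a pull actually changed the image digest."""
import subprocess

# placeholders: the real mapping comes from a config file
SERVICES = {
    "container-nextcloud.service": "docker.io/library/nextcloud:27",
}


def digest(image: str) -> str:
    """Return the digest of the local copy of `image` (empty string if not present)."""
    result = subprocess.run(
        ["podman", "image", "inspect", "--format", "{{.Digest}}", image],
        capture_output=True, text=True,
    )
    return result.stdout.strip() if result.returncode == 0 else ""


for unit, image in SERVICES.items():
    before = digest(image)
    subprocess.run(["podman", "pull", image], check=True)
    if digest(image) != before:
        # only bounce the container when the pull actually brought something new
        subprocess.run(["systemctl", "restart", unit], check=True)
```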