• 8 Posts
  • 25 Comments
Joined 1Y ago
Cake day: Jun 14, 2023


Zero trust, but you have to use Amazon AWS and Cloudflare, and make your own Telegram bot? And have the domain itself managed by Cloudflare.

Sounds like a lot of trust right there… Would love to be proven wrong.


Thank you for these ideas, I will read up on sysstat + sar and give it a go.

Also smart to have the script always running, sleeping, rather than launching it at intervals.

I know all of this is a poor hack, and I should address the cause - but so far I have no clue what's causing it. I'm running a bunch of Docker containers, so it is very likely one of them painting itself into a corner, but after a reboot there's nothing to see, so I am starting by logging the top process. Your ideas might work better.
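For reference, the sysstat route looks roughly like this on Ubuntu (a sketch based on the stock Debian/Ubuntu packaging; paths may differ on other distros):

    # install the sysstat tools (includes sar)
    sudo apt install sysstat

    # enable the periodic collector (samples every 10 minutes by default)
    sudo sed -i 's/ENABLED="false"/ENABLED="true"/' /etc/default/sysstat
    sudo systemctl enable --now sysstat

    # after the next bad episode: load-average history for today
    sar -q
    # or for an earlier day, e.g. the 5th of the month
    sar -q -f /var/log/sysstat/sa05

That way the history survives the reset button, unlike anything still sitting in RAM.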


Nope, haven’t. It says I have 2 GB of swap on a 16 GB RAM system, and that seems reasonable.

Why would you recommend turning swap off?


This issue doesn’t happen very often, maybe every few weeks. That’s why I think a nightly reboot is overkill, and weekly might be missing the mark? But you are right in any case: regardless of what the cron says, the machine might never get around to executing it.


This insane torture is why there are post-it notes under the keyboards.


How to auto-reboot if CPU load too high?
I run an old desktop mainboard as my homelab server. It runs Ubuntu smoothly at loads between 0.2 and 3 (whatever unit that is).

Problem: Occasionally, the CPU load skyrockets above 400 (yes, really), making the machine totally unresponsive. The only solution is the reset button.

Solution:

- I haven't found what the cause might be, but I think that a reboot every few days would prevent it from ever happening. That could be done easily with a crontab line.
- Alternatively, I would like to have some dead-simple script running in the background that simply looks at the CPU load and executes a reboot when the load climbs over a given threshold.

--> How could such a CPU-load-triggered reboot be implemented?

-----

edit: I asked ChatGPT to help me create a script that is started by crontab every X minutes. The script has a kill threshold that does a `kill -9` on the top process, and a higher reboot threshold that ... reboots the machine. Before doing either, or neither, it writes a log line. I hope this will keep my system running, and I will review the log file to see how it fares. Or it might inexplicably break my system. Fun!
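For the record, that cron-launched script could look roughly like this (a sketch only, not the exact ChatGPT output; the thresholds and the log path are placeholders):

    #!/bin/bash
    # load-watchdog.sh - log the 1-minute load, kill the top CPU hog above one
    # threshold, reboot above a higher one. Run from root's crontab, e.g.:
    #   */5 * * * * /usr/local/bin/load-watchdog.sh
    KILL_THRESHOLD=50
    REBOOT_THRESHOLD=100
    LOGFILE=/var/log/load-watchdog.log

    # first field of /proc/loadavg is the 1-minute load average
    LOAD=$(cut -d ' ' -f1 /proc/loadavg)

    # PID and name of the process currently using the most CPU
    read -r TOP_PID TOP_CMD <<< "$(ps -eo pid,comm --sort=-%cpu | awk 'NR==2 {print $1, $2}')"

    echo "$(date '+%F %T') load=$LOAD top=$TOP_CMD($TOP_PID)" >> "$LOGFILE"

    # awk handles the floating-point comparison
    if awk -v l="$LOAD" -v t="$REBOOT_THRESHOLD" 'BEGIN {exit !(l > t)}'; then
        echo "$(date '+%F %T') load > $REBOOT_THRESHOLD, rebooting" >> "$LOGFILE"
        /sbin/reboot
    elif awk -v l="$LOAD" -v t="$KILL_THRESHOLD" 'BEGIN {exit !(l > t)}'; then
        echo "$(date '+%F %T') load > $KILL_THRESHOLD, killing $TOP_CMD ($TOP_PID)" >> "$LOGFILE"
        kill -9 "$TOP_PID"
    fi

The obvious caveat: once the load is already in the hundreds, cron itself may never get a chance to run, which is why the always-running sleep-loop variant suggested in the comments is probably more robust.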

Brother ADS-1700W is amazing!

  • no PC or USB required: place it anywhere
  • WiFi
  • scans a page double-sided to PDF in two seconds!
  • sends file to network share, ready to be consumed by Paperless
  • fully automatic, no button presses needed!
  • tiny footprint
  • document feeder
  • use with separator pages to bulk-scan many documents in one go

😍


PiVPN offers both services, WireGuard and OpenVPN.

What app do you use on Android? And on Windows?


I used ZeroTier before and I still use it now; it is also the solution I am going to continue with.

I wanted to try WireGuard to get away from a centrally managed solution, but if I can't get it working after several hours while ZeroTier took five minutes, the winner is clear.


Obviously :) And make sure to forward to the correct LAN IP address, and that the machine has a static IP (or a DHCP reservation).


PiVPN is elegant. Easy install, and I am impressed with the ASCII QR code it generates.

But I could not make it work. I am guessing that my Android setup is faulty, orrrr maybe something with the Pi? This is incredibly difficult to troubleshoot.
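If anyone else ends up here with the same problem, these are the checks that seem most useful for narrowing down which side is broken (commands as documented by PiVPN and WireGuard; treat this as a generic checklist, not a known-good recipe):

    # on the Pi: PiVPN's built-in self-check / debug report
    pivpn -d

    # on the Pi: does the Android client ever complete a handshake?
    sudo wg show

    # re-display a client profile as a QR code for the Android app
    pivpn -qr

If `wg show` never lists a handshake, the problem is usually port forwarding or the endpoint address in the client profile rather than anything on the phone.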


Help me get started with VPN
*TLDR: VPN newbie wants to learn how to set up and use a VPN.*

**What I have:** Currently, many of my selfhosted services are publicly available via my domain name. I am aware that it is safer to keep things closed and use a VPN to access them, but I don't know how that works.

- Domain name mapped via Cloudflare > static WAN IP > ISP modem > Ubiquiti USG3 gateway > Linux server and Raspberry Pi.
- Ports 80 and 443 forwarded to Nginx Proxy Manager; everything else closed.
- Linux server running Docker and several containers: NPM, Portainer, Paperless, Gitea, Mattermost, Immich, etc.
- Raspberry Pi running Pi-hole as DNS server for LAN clients.
- Synology NAS as network storage.

**What I want:**

- Access services from the WAN via my Android phone.
- Access services from the WAN via my laptop.
- Maybe still keep some things public?
- A noob-friendly solution: it needs to be easy to "grok" and easy to maintain when services change.

Sorry but that’s not true. I have been running Immich for a long time now, and it is solid and stable.

A recent update had a change in the Docker configuration, and if you didn’t know that and just blindly upgraded, it would still run and show a helpful explanation. That’s amazing service.


I have tried Photoprism but was not as impressed by it as Immich.


Among my must-have selfhosting items, in no particular order, I can recommend:

  • Portainer, to keep track of what’s going on.
  • Nginx Proxy Manager, to ensure https with valid certificate to those services I want to have available from the outside.
  • Pihole, of course.
  • Gitea, to store my coding stuff.
  • Paperless-ngx, to store every paper in my life.
  • Immich, an amazingly good replacement for Google Photos.

Didn’t see Paperless in these comments yet. Great way to never again search for documents, bills, receipts, warranties, manuals, et cetera ad nauseam.



Been using it for a few months, works like a charm!


The simple way is to Google ‘yunohost’ and install that on your spare machine, then just play around with what that offers.

If you want, you could also dive deeper by installing Linux (e.g. Ubuntu), then installing Docker, then spinning up Portainer as your first container.
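For the Docker + Portainer part, the published quick-start commands are roughly these (convenience-script install; double-check against the current Docker and Portainer docs):

    # install Docker with the convenience script
    curl -fsSL https://get.docker.com | sudo sh

    # create a volume for Portainer's data, then start Portainer CE
    sudo docker volume create portainer_data
    sudo docker run -d -p 9443:9443 --name portainer --restart=always \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -v portainer_data:/data \
        portainer/portainer-ce:latest

    # then browse to https://<server-ip>:9443 and create the admin account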


[SOLVED] Can’t access my site from WAN despite DNS and port forwarding in place. Help? [ERR_SSL_UNRECOGNIZED_NAME_ALERT]
TLDR:

- Update: the server software has a bug with generating and saving certificates. The bug has been reported; as a workaround I added the local IP to my local 'hosts' file so I can continue (but that does not solve it, of course).
- I suspect there's a problem with running two servers off the same IP address, each with their own DNS name?

Problem:

- When I enter https://my.domain.abc into Firefox, I get an ERR_SSL_UNRECOGNIZED_NAME_ALERT error instead of seeing the site.

Context:

- I have a static public IP address, and a Unifi gateway that forwards ports 80 and 443 to my server at 192.168.1.10, where Nginx Proxy Manager is running as a Docker container. This also gives me a *Let's Encrypt* certificate.
- I use Cloudflare and have a domain `foo.abc` pointed to my static public IP address. This domain works, as do a number of subdomains with various Docker services.
- I have now set up a **second server** running yunohost. I can access it on my local LAN at https://192.168.1.14.
- This yunohost is set up with a DynDNS domain `xyz.nohost.me`. The current certificate is self-signed.
- Certain other ports that yunohost wants (22, 25, 587, 993, 5222, 5269) are also routed directly to 192.168.1.14 by the gateway mentioned above.
- All of the above context is OK. Yunohost diagnostics says that *DNS records are correctly configured* for this domain. Everything is great (except reverse DNS lookup, which is only relevant for outgoing email).

Before getting a proper certificate for the yunohost server and its domain, I need to make the yunohost reachable at all, and I don't see what I am missing. What am I missing?
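One generic way to see which of the two machines actually answers for that name on port 443 (hostnames and addresses below are just the ones from this post):

    # what certificate does the public side serve for the yunohost name?
    # (if this prints nothing, the proxy rejected the name with an SNI alert)
    openssl s_client -connect xyz.nohost.me:443 -servername xyz.nohost.me </dev/null 2>/dev/null \
        | openssl x509 -noout -subject -issuer

    # compare with what the yunohost box itself serves on the LAN
    openssl s_client -connect 192.168.1.14:443 -servername xyz.nohost.me </dev/null 2>/dev/null \
        | openssl x509 -noout -subject -issuer

    # or force the name to resolve to the LAN address for one request
    curl -vk --resolve xyz.nohost.me:443:192.168.1.14 https://xyz.nohost.me/

If the public side answers with the NPM/Let's Encrypt certificate (or only an alert) instead of the self-signed one, then port 443 simply never reaches 192.168.1.14, which would match the ERR_SSL_UNRECOGNIZED_NAME_ALERT symptom.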

Yeah, that’s not that easy, unfortunately, because each end of the network cable passes through an insulated wall, through a hole only as wide as the cable, i.e. smaller than the plug. Even if I find the break, it is likely in the outdoor part of the cable, where I would want an unbroken cable rather than a field repair.


Do not run fsck on a mounted device

So how do I run this on /dev/sda? I can’t very well unmount the OS drive…
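The two usual answers, as far as I know (assuming an ext4 root filesystem; and note that fsck checks a filesystem, so the target is a partition like /dev/sda1 rather than the whole disk):

    # option 1: flag the root filesystem for a check on the next boot
    # (the /forcefsck flag file is honoured by systemd and sysvinit)
    sudo touch /forcefsck
    sudo reboot

    # option 2: boot a live USB, leave the disk unmounted, and check it there
    sudo fsck -f /dev/sda1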


Hmm, interesting! I have a Synology switch, gotta read up on its capabilities.


Actual, not academic. And I agree that a new cable is cheap, which is what I will do. My question is about avoiding throwing a mostly good cable in the trash.


Just finished wiring the garage to the house - and find that the wire is damaged! Now what?
I mean, the simplest answer is to **lay a new cable,** and that is definitely what I am going to do - that's not my question. But this is a long run, and it would be neat if I could salvage some of that cable.

How can I discover where the cable is damaged?

One stupid solution would be to halve the cable and crimp each end, and then test each new cable. Repeat iteratively. I would end up with a few broken cables and a bunch of tested cables, but they might be short.

How do the pros do this? (Short of throwing the whole thing away!)

CPU load over 70 means I can’t even ssh into my server
*edit: you are right, it's the I/O WAIT that is destroying my performance:*

`%Cpu(s): 0,3 us, 0,5 sy, 0,0 ni, 50,1 id, 49,0 wa, 0,0 hi, 0,1 si, 0,0 st`

*I could clearly see it using `nmon > d > l > -`, as was suggested by @SayCyberOnceMore. Not quite sure what to do about it, as it's simply my `sdb1` drive, which is a Samsung 1TB 2.5" HDD. I have now ordered a 2TB SSD, and maybe I will reinstall from scratch on that new drive as sda1. I realize that's just treating the symptom and not the root cause, so I should probably also look for that root cause. But that's for another Lemmy thread!*

I really don't understand what is causing this. I run a few very small containers and everything is fine - but when I start something bigger like Photoprism, Immich, or even MariaDB or PostgreSQL, something causes the CPU load to rise indefinitely. Notably, the `top` command doesn't show anything special: nothing eats RAM, nothing uses 100% CPU. And yet the load is rising fast. If I leave it be, my ssh session loses connection. Hopping onto the host itself shows a load of over 50, or even over 70. I don't grok how a system can even get that high at all.

My server is an older Intel i7 with 16GB RAM running Ubuntu 22.04 LTS. How can I troubleshoot this when `top` doesn't show any culprit and it does not seem to be caused by any one specific container?

(This makes me wonder how people can run anything at all off of a Raspberry Pi. My machine isn't "beefy", but a Pi would be so much less.)
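For anyone else chasing I/O wait: `top` only shows CPU and RAM, so the usual next step is the disk-oriented tools (all in the standard Ubuntu repos; flags per their man pages):

    # per-device utilisation and wait times, refreshed every 2 seconds
    iostat -x 2

    # per-process disk reads/writes (part of sysstat)
    pidstat -d 2

    # accumulated i/o per process, only showing the active ones
    sudo apt install iotop
    sudo iotop -ao

Whichever process keeps topping `iotop` while the load climbs is the one to investigate, even if its CPU use looks harmless in `top`.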

I found that running just a NAS is no good because it lacks performance. And running just a server is no good because of data safety. So I run both.

I run a Synology NAS for storage, and an Ubuntu box for services (Paperless, Mattermost chat, Photoprism gallery, Gitea for code, picoshare, and lots of other small stuff). All services run in Docker (plus Portainer for handling them, plus NPM for certificate handling and subdomains).


I don’t get it - posts like this one appear again and again. Is it spam? Malware? Puzzles? Games?


Docker + Nextcloud = why is it so difficult?
*TLDR: I consistently fail to set up Nextcloud on Docker. Halp pls?*

Hi all - please help out a fellow self-hoster, if you have experience with Nextcloud. I have tried several approaches, but I fail at various steps. Rather than describe my woes, I hope that I could get a "known good" configuration from the community?

**What I have:**

- a homelab server and a NAS, wired to a dedicated switch using priority ports.
- the server is running Linux, Docker, and the NPM proxy, which takes care of domains and SSL certs.

**What I want:**

- a `docker-compose.yml` that sets up Nextcloud *without SSL.* Just that.
- *ideally but optionally,* the compose file might include Nextcloud office components and other neat additions that you have found useful.

Your comments, ideas, and other input will be much appreciated!!
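For what it's worth, the shape of compose file that usually gets posted for this looks roughly like the sketch below: passwords, the host port, and the domain are placeholders, the environment variables are the ones documented for the official `nextcloud` and `mariadb` images, and SSL stays on NPM in front of it.

    version: "3"

    services:
        db:
            image: mariadb:10.11
            restart: unless-stopped
            command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
            environment:
                - MYSQL_ROOT_PASSWORD=changeme-root
                - MYSQL_DATABASE=nextcloud
                - MYSQL_USER=nextcloud
                - MYSQL_PASSWORD=changeme
            volumes:
                - ./db:/var/lib/mysql

        app:
            image: nextcloud:apache
            restart: unless-stopped
            depends_on:
                - db
            ports:
                - "8080:80"   # plain HTTP; NPM terminates SSL in front of this
            environment:
                - MYSQL_HOST=db
                - MYSQL_DATABASE=nextcloud
                - MYSQL_USER=nextcloud
                - MYSQL_PASSWORD=changeme
                - NEXTCLOUD_TRUSTED_DOMAINS=nextcloud.example.com
            volumes:
                - ./nextcloud:/var/www/html

Office components (Collabora/OnlyOffice) are usually added as separate services later; getting the plain instance working behind NPM first keeps the failure surface small.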

I think you are saying what I am also saying, but my post was not clear on this:

The container files live on the server, and I use the volume section in my docker-compose.yml files to map data to the NFS share:

        volumes:
            - '/mnt/nasvolume/docker/picoshare/data:/data'

Would you say this is an okay approach?
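For comparison, the other pattern that comes up is letting Docker mount the NFS export directly as a named volume, so the host-level mount isn't needed (a sketch; the NAS address and export path are placeholders):

    services:
        picoshare:
            # ...image, ports, etc. as before...
            volumes:
                - 'picoshare_data:/data'

    volumes:
        picoshare_data:
            driver: local
            driver_opts:
                type: nfs
                o: 'addr=192.168.1.20,rw,nfsvers=4'
                device: ':/volume1/docker/picoshare/data'

Functionally it ends up in the same place; the main difference is whether the NFS mount lives in /etc/fstab or in the compose file.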


(Why) would it be “bad practice” to separate CPU and storage to separate devices?
TLDR: I am running some Docker containers on a homelab server, and the containers' volumes are mapped to NFS shares on my NAS. **Is that bad for performance?**

- I have a Linux PC that acts as my homelab server, and a Synology NAS.
- The server is fast but has a 100GB SSD.
- The NAS is slow(er) but has oodles of storage.
- Both devices are wired to their own little gigabit switch, using priority ports.

Of course it's slower to run off HDDs compared to an SSD, but I do not have a large SSD. The question is: (why) would it be "bad practice" to separate CPU and storage this way? Isn't that pretty much what a data center also does?
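One rough way to put numbers on it, assuming the NFS share is mounted on the server at /mnt/nasvolume as in the compose snippet above (sequential writes only; database-style random i/o will look worse):

    # 1 GiB sequential write to the NFS share, bypassing the page cache
    dd if=/dev/zero of=/mnt/nasvolume/ddtest bs=1M count=1024 oflag=direct

    # same test against the local SSD for comparison
    dd if=/dev/zero of="$HOME/ddtest" bs=1M count=1024 oflag=direct

    rm /mnt/nasvolume/ddtest "$HOME/ddtest"

Data centers do separate compute and storage, but over much faster links and storage protocols than a single gigabit run to a consumer NAS, which is where the "bad practice" warnings usually come from.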

Request: file-sharing service
*edit: thank you for all the great comments! It's going to take a while to chew through the suggestions. I just started testing **picoshare** which is already looking both easy and useful.*

Hi all! I am looking for a file-hosting / file-sharing service and hope you guys could recommend something? Features I would like to see:

- Docker-compose ready to use.
- multi-user, not just for myself.
- individual file size >2GB.
- shared files should be public, not require a login to download.
- optional: secret shares, not listed but public when the link is known.
- optional: private shares that require either a password or a login.

Thanks in advance for sharing your experiences!