• 5 Posts
  • 13 Comments
Joined 3Y ago
Cake day: May 30, 2021


I settled on Obsidian with the built-in sync. The data is as clean as it gets: it's very agnostic to the editor, as long as it adheres to the Markdown standard (plus flavors). I'm aware that I'm creating a dependency on Obsidian's workflow and plugins, but the cost of switching is very low considering how I use my knowledge base (in the worst-case scenario, I could work with my files using standard Unix tools).
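To illustrate that last point: a plain-Markdown vault stays fully scriptable. A toy sketch (the path and note content here are made up for the example, not my real vault):

```shell
# Create a throwaway "vault" containing one Markdown note
mkdir -p /tmp/vault
printf '# Meeting notes\n- [ ] send invoice\n' > /tmp/vault/meeting.md

# Standard Unix tools operate on the notes directly:
grep -rl 'invoice' /tmp/vault     # which notes mention "invoice"?
wc -l /tmp/vault/meeting.md       # how long is a note?
```

No exporter needed: the files on disk *are* the knowledge base.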

You are free to choose whatever tool works for you; personally, I don't want my notes to be held hostage by a single vendor.

The closest thing to Anytype is Logseq, but SilverBullet.md is also awesome. And if you choose another Markdown editor, you could use rsync/git/Syncthing to synchronize your files.

When it comes to note applications, there is no shortage of them. Just make an informed decision that will serve you well in the long term.


I tried Anytype during the alpha, but I understood early on that the data gets crippled during export, and the self-hosted node is very cumbersome to set up. Also, I had a gut feeling that it could turn into an enshittified product.

For my use case, I could meet my note-taking needs by other more established, libre, and less complex means.



Perfect timing, since Endlessh isn't actively developed anymore.


Me neither, but I’d love to hear those arguments.



This looks really slick! I don't use Ansible though; can I still benefit from running it?

Edit: just realized that your project has a larger scope than this, but it's still awesome to see how you solved the homepage feature.


Appreciation post - envlinks: ultraminimalist homepage / dashboard
I've seen a lot of posts about a lot of different homepages for selfhosters: homepage, homer, homarr (which has a 700 MB image!). I was after something lightweight, simple, and easy to configure and get up and running without all the frills and flashy features. And I found a hidden gem in [envlinks](https://github.com/maxhollmann/envlinks/): a really simple dashboard that is super simple to configure (just env variables in the compose file) and still customisable enough for my needs. Hope it will satisfy the needs of other minimalists out there :-)
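For reference, the whole deployment can look roughly like this. This is only a sketch: the image tag and the `LINK_*` variable names below are illustrative assumptions, not copied from the envlinks docs, so check its README for the exact scheme:

```yaml
# Hypothetical compose file; env-variable names are assumptions
version: "3"
services:
  envlinks:
    image: maxhollmann/envlinks:latest   # image name assumed from the repo
    restart: unless-stopped
    ports:
      - "8080:80"
    environment:
      - LINK_1_NAME=FreshRSS
      - LINK_1_URL=https://rss.example.lan
```

The appeal is exactly this: no config files to mount, just environment variables next to the rest of the stack.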

How do you monitor your servers / VPSes?
Hello selfhosters. We all have bare-metal servers, VPSes, containers, and other things running. Some of them may be exposed openly to the internet, which is populated by autonomous malicious actors, and some may reside on a closed-off network since they contain sensitive data. And there are a lot of solutions for monitoring your servers, since none of us wants our resources to become part of a botnet, mine bitcoins for APTs, or simply have confidential data fall into the wrong hands. Some of the tools I've looked at for this task are check_mk, netmonitor, and monit: all of these monitor metrics such as CPU, RAM, and network activity. Other tools, such as Snort or Falco, are designed specifically to detect suspicious activity. And there are also solutions that are cobbled together, like fail2ban actions combined with Pushover to get notified of intrusion attempts. So my question to you is: how do you monitor your servers, and with what tools? I need some inspiration to decide what tooling to settle on, to be able to detect unwanted external activity on my resources.
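For the fail2ban-plus-Pushover combination mentioned above, a minimal action file can be sketched like this (the token/user values are placeholders you'd fill in from your Pushover account; `<ip>` and `<name>` are fail2ban's standard substitution tags):

```ini
# /etc/fail2ban/action.d/pushover.local (sketch)
[Definition]
actionstart =
actionstop =
actioncheck =
actionunban =
actionban = curl -s \
    -F "token=<apptoken>" \
    -F "user=<userkey>" \
    -F "message=fail2ban banned <ip> in jail <name>" \
    https://api.pushover.net/1/messages.json

[Init]
apptoken = YOUR_PUSHOVER_APP_TOKEN
userkey = YOUR_PUSHOVER_USER_KEY
```

You'd then add `pushover` to the `action` setting of the jail you want notifications from, alongside the normal ban action.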

Like you said, “it depends” 😁

I have a huge data blob that I mirror off-site once a month. I have a few services that provide things for my family; I back them up nightly (and run a "backup-restoration" scenario every six months). For my desktop, none at all, but I have my most critical data synced / documented so it can be restored to a functional state.
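That schedule translates to something like the following crontab sketch (the paths and the off-site host name are placeholders, not my real layout):

```
# Nightly backup of service data at 03:00
0 3 * * * rsync -a --delete /srv/services/ /mnt/backup/services/
# Monthly off-site mirror of the data blob, 1st of each month at 04:00
0 4 1 * * rsync -a --delete /srv/datablob/ offsite-host:/backups/datablob/
```

The restoration drill matters as much as the schedule: an untested backup is only a hope.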


But this is by design; snap containers aren't allowed to read data outside of their confinement. The same goes for Flatpak and OCI containers.

I don’t use snap myself, but it does have its uses. Bashing it just because it’s popular to hate on snap won’t yield a healthy discussion on how it could be improved.


100% agree with your list. I'd also throw in some file-management solution, such as Filebrowser, NFS/Samba, or Syncthing.


Experience with N100 / N200 CPUs?
Hello selfhosters. I'm considering buying a SFF PC to act as a Docker host. The main services / applications I'm going to run are Immich, Filebrowser, a Samba share, and eventually Paperless-ngx. I've been eyeing PCs with an N100 / N200 specifically to run quiet and to conserve energy. I am most likely going for an [Asus PN42](https://webshop.asus.com/se-en/90MR00X2-M00020/ASUS-PN42-BBN200MV-Barebone-Mini-PC) and will have an SSD in it to keep the moving parts to a minimum. To those who are running machines with this CPU and similar workloads: how has your experience been?

Learning the fundamentals first (such as networking) is a good way forward. You will probably need to learn many other subjects along the way, such as how system services are handled, permissions in Linux, Linux system administration in general, and so on.

If you just want the fundamentals of networking, these resources are pretty good:

And my favorite:

Feel free to post to this community with questions, or try finding someone who can be your sounding board. Getting started can be very challenging before you've grasped the basics.


So I managed to smash a few buttons randomly again, and get this solved.

There are a few things to be aware of:

  • Oracle doesn’t like ufw, so I disabled it and uninstalled it. Having ufw installed may result in bad stuff. Link
  • I decided to flush all rules in iptables to start from a clean slate: sudo iptables -F
  • While I was at it, I changed iptables to allow ALL. THE. INBOUND. TRAFFIC: sudo iptables -I INPUT -j ACCEPT
  • One last thing: I changed the firewall from stateful to stateless, still with no restrictions on ingress / egress traffic.

This is, of course, not a recommended setup for a host used in production or holding critical data, but it gave me a host in a working state that I can work with.

Some posts that helped me in this:


[Solved] Change SSH port: no route to host (Oracle Cloud)
Hello all. I'm trying to change the SSH port on an Oracle VM, but I'm getting nowhere and I don't know where to solve the issue. I have changed the SSH port: ``` edit /etc/ssh/sshd_config ``` Entered the port info: ``` Port 5522 ``` I restarted the service: ``` sudo systemctl restart ssh ``` And made sure that the port is open: ``` ss -an | grep 5522 tcp LISTEN 0 128 0.0.0.0:5522 0.0.0.0:* tcp LISTEN 0 128 [::]:5522 [::]:* ``` *** I also allow incoming traffic to 5522: ``` sudo ufw allow 5522/tcp comment 'Open port ssh tcp port 5522' ``` AND just to make sure, I allow 'routed': ``` sudo ufw default allow FORWARD ``` And make sure the FW config is valid: ``` sudo ufw status verbose Status: active Logging: on (medium) Default: deny (incoming), allow (outgoing), allow (routed) New profiles: skip To Action From -- ------ ---- 22/tcp ALLOW IN Anywhere # Open port ssh tcp port 22 5522/tcp ALLOW IN Anywhere 22/tcp (v6) ALLOW IN Anywhere (v6) # Open port ssh tcp port 22 5522/tcp (v6) ALLOW IN Anywhere (v6) # Open real ssh tcp port 22 ``` Yet, I cannot connect to this server. Trying to ssh -vvvv -p 5522 [ip-adress] yields this: ``` OpenSSH_9.0p1 Ubuntu-1ubuntu8.4, OpenSSL 3.0.8 7 Feb 2023 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files debug1: /etc/ssh/ssh_config line 21: Applying options for * debug2: resolve_canonicalize: hostname 129.x.x.5 is address debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/home/x/.ssh/known_hosts' debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/home/x/.ssh/known_hosts2' debug3: ssh_connect_direct: entering debug1: Connecting to 129.x.x.5 [129.x.x.5] port 5522. 
debug3: set_sock_tos: set socket 3 IP_TOS 0x10 debug1: connect to address 129.x.x.5 port 5522: No route to host ssh: connect to host 129.x.x.5 port 5522: No route to host ``` I can connect *just fine* when the port is at 22, but as soon as I change it to 5522, I get the 'no route to host' error. I've made sure I have rules on Oracle Cloud that allow ingress and egress traffic to 0.0.0.0/0 on all protocols, no matter the destination / source. What am I doing wrong? It feels like this problem is host-based (server) rather than client-based, since I'm getting a routing error. Do I need to configure the routing for that port specifically, and if so, how? PS: Also, connecting to localhost:5522 from the server itself works fine. So the problem is not in the configuration, but likely network-related. --- EDIT: This issue is solved; the solution is written in this post: https://lemmy.ml/comment/2787074

How to reverse proxy with caddy, tailscale and docker ?
Hello all, I'm taking my first steps in the realm of self-hosting and am learning as I go. I have a VM running Ubuntu, and I've connected it to a Tailscale network to fend off unwanted visitors. I have also discovered Docker and am using it to deploy two web applications: [FreshRSS](https://github.com/FreshRSS/FreshRSS) and [Podfetch](https://github.com/SamTV12345/PodFetch). I can deploy them through Docker, and they both have their own ports which I can access through an `ipaddress:portnumber` URL in my web browser. But the connection is unsecured HTTP. I'd like to take it a step further and make the connections go over HTTPS. I thought I'd use Caddy as a reverse proxy, as it is supposed to have good support for [Tailscale](https://tailscale.com/blog/caddy/), but I'm not being particularly successful. I can connect to the individual applications (FreshRSS, Podfetch) by using the given Tailscale DNS name (machine.domain.ts.net) and port directly in the browser's URL bar, but going to machine.domain.ts.net only yields a connection error. I've attached the stdout from running Caddy; my spider-sense tells me it is something to do with getting a cert from Let's Encrypt. Over in the Tailscale admin console, I've ensured I have a tailnet name, MagicDNS, and HTTPS certificates enabled. Here's some relevant information; the Caddy log is at the end. Thanks in advance. EDIT: the solution to my problem is at the end of this post. 
--- # sudo docker ps ``` CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 86a72dbd2686 samuel19982/podfetch:latest "./podfetch" 20 minutes ago Up 18 minutes 0.0.0.0:8480->8000/tcp, :::8480->8000/tcp podfetch_podfetch_1 a7dae64308f9 caddy:latest "caddy run --config …" 25 hours ago Up 17 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp, 443/udp, 2019/tcp caddy 141bbf69ad62 freshrss/freshrss "./Docker/entrypoint…" 2 months ago Up 2 months 0.0.0.0:8080->80/tcp, :::8080->80/tcp freshrss ``` # Current Caddyfile: ``` machine.domain.ts.net respond "hello" file_server ``` # docker-compose.yml for Caddy ```yaml version: "3" services: caddy: image: caddy:latest container_name: caddy restart: always ports: - "80:80" - "443:443" volumes: - /home/ubuntu/caddy/caddy_data:/data - /home/ubuntu/caddy/caddy_config:/config - /home/ubuntu/caddy/Caddyfile:/etc/caddy/Caddyfile ``` # log output from running `sudo docker-compose up` in the directory where docker-compose.yml is located ```json Starting caddy ... 
done Attaching to caddy caddy | {"level":"info","ts":1691499456.0689287,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"} caddy | {"level":"warn","ts":1691499456.0720005,"msg":"Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies","adapter":" caddyfile","file":"/etc/caddy/Caddyfile","line":9} caddy | {"level":"info","ts":1691499456.0762668,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origi ns":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]} caddy | {"level":"info","ts":1691499456.0775971,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"} caddy | {"level":"info","ts":1691499456.077673,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection po licies; adding one to enable TLS","server_name":"srv1","https_port":443} caddy | {"level":"info","ts":1691499456.077703,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv1"} caddy | {"level":"info","ts":1691499456.07822,"logger":"http","msg":"enabling HTTP/3 listener","addr":":2016"} caddy | {"level":"info","ts":1691499456.0783753,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB ). 
See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details."} caddy | {"level":"info","ts":1691499456.0794368,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]} caddy | {"level":"info","ts":1691499456.079528,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"} caddy | {"level":"info","ts":1691499456.079708,"logger":"http.log","msg":"server running","name":"srv1","protocols":["h1","h2","h3"]} caddy | {"level":"info","ts":1691499456.0798655,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2 ","h3"]} caddy | {"level":"info","ts":1691499456.0800827,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"} caddy | {"level":"info","ts":1691499456.0801237,"msg":"serving initial configuration"} caddy | {"level":"info","ts":1691499456.0802798,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc00032950 0"} caddy | {"level":"info","ts":1691499456.080402,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"} caddy | {"level":"info","ts":1691499456.0843327,"logger":"tls","msg":"finished cleaning storage units"} ******************** ***** Connection to caddy is made here ******************** caddy | {"level":"warn","ts":1691499478.27926,"logger":"http","msg":"could not get status; will try to get certificate anyway","error":"Get \"http://loc al-tailscaled.sock/localapi/v0/status\": dial unix /var/run/tailscale/tailscaled.sock: connect: no such file or directory"} caddy | {"level":"error","ts":1691499478.2793655,"logger":"tls.handshake","msg":"getting certificate from external certificate manager","remote_ip":"100 .125.48.40","remote_port":"60140","sni":"machine.domain.ts.net","cert_manager":0,"error":"Get \"http://local-tailscaled.sock/localapi/v0/cert/vaulty.tail a5148.ts.net?type=pair\": dial unix /var/run/tailscale/tailscaled.sock: connect: no such file or 
directory"} caddy | {"level":"info","ts":1691499478.2794874,"logger":"tls.on_demand","msg":"obtaining new certificate","remote_ip":"100.125.48.40","remote_port":"60 140","server_name":"machine.domain.ts.net"} caddy | {"level":"info","ts":1691499478.2796874,"logger":"tls.obtain","msg":"acquiring lock","identifier":"machine.domain.ts.net"} caddy | {"level":"info","ts":1691499478.2826056,"logger":"tls.obtain","msg":"lock acquired","identifier":"machine.domain.ts.net"} caddy | {"level":"info","ts":1691499478.2827125,"logger":"tls.obtain","msg":"obtaining certificate","identifier":"machine.domain.ts.net"} caddy | {"level":"info","ts":1691499478.285254,"logger":"tls","msg":"waiting on internal rate limiter","identifiers":["machine.domain.ts.net"],"ca":"h ttps://acme-v02.api.letsencrypt.org/directory","account":"caddy@zerossl.com"} caddy | {"level":"info","ts":1691499478.2852805,"logger":"tls","msg":"done waiting on internal rate limiter","identifiers":["machine.domain.ts.net"]," ca":"https://acme-v02.api.letsencrypt.org/directory","account":"caddy@zerossl.com"} caddy | {"level":"info","ts":1691499479.3021843,"logger":"tls.acme_client","msg":"trying to solve challenge","identifier":"machine.domain.ts.net","cha llenge_type":"tls-alpn-01","ca":"https://acme-v02.api.letsencrypt.org/directory"} caddy | {"level":"error","ts":1691499479.867296,"logger":"tls.acme_client","msg":"challenge failed","identifier":"machine.domain.ts.net","challenge_ty pe":"tls-alpn-01","problem":{"type":"urn:ietf:params:acme:error:dns","title":"","detail":"DNS problem: NXDOMAIN looking up A for machine.domain.ts.net - check that a DNS record exists for this domain; DNS problem: NXDOMAIN looking up AAAA for machine.domain.ts.net - check that a DNS record exists for this domain","instance":"","subproblems":[]}} caddy | {"level":"error","ts":1691499479.867339,"logger":"tls.acme_client","msg":"validating authorization","identifier":"machine.domain.ts.net","prob 
lem":{"type":"urn:ietf:params:acme:error:dns","title":"","detail":"DNS problem: NXDOMAIN looking up A for machine.domain.ts.net - check that a DNS record exists for this domain; DNS problem: NXDOMAIN looking up AAAA for machine.domain.ts.net - check that a DNS record exists for this domain","instance":"", "subproblems":[]},"order":"https://acme-v02.api.letsencrypt.org/acme/order/1247308536/200246894916","attempt":1,"max_attempts":3} caddy | {"level":"info","ts":1691499481.1934462,"logger":"tls.acme_client","msg":"trying to solve challenge","identifier":"machine.domain.ts.net","cha llenge_type":"http-01","ca":"https://acme-v02.api.letsencrypt.org/directory"} caddy | {"level":"error","ts":1691499481.7219243,"logger":"tls.acme_client","msg":"challenge failed","identifier":"machine.domain.ts.net","challenge_t ype":"http-01","problem":{"type":"urn:ietf:params:acme:error:dns","title":"","detail":"DNS problem: NXDOMAIN looking up A for machine.domain.ts.net - che ck that a DNS record exists for this domain; DNS problem: NXDOMAIN looking up AAAA for machine.domain.ts.net - check that a DNS record exists for this do main","instance":"","subproblems":[]}} caddy | {"level":"error","ts":1691499481.7219615,"logger":"tls.acme_client","msg":"validating authorization","identifier":"machine.domain.ts.net","pro blem":{"type":"urn:ietf:params:acme:error:dns","title":"","detail":"DNS problem: NXDOMAIN looking up A for machine.domain.ts.net - check that a DNS recor d exists for this domain; DNS problem: NXDOMAIN looking up AAAA for machine.domain.ts.net - check that a DNS record exists for this domain","instance":"" ,"subproblems":[]},"order":"https://acme-v02.api.letsencrypt.org/acme/order/1247308536/200246898176","attempt":2,"max_attempts":3} ``` EDIT - SOLUTION: many weeks later, I've learned a few things. 
Running Caddy bare-metal removed the complexity of dealing with Docker networks, but it wasn't as robust as I expected (let's just say I ran into a very edge-case issue that ruined my day). The solution to my actual problem was to direct the requests for the URL to the *actual IP address* of the Docker container running the service I want to make available, and to ensure that both Caddy and the service are on the same Docker network. A very obvious solution in hindsight, and to be fair, I think I had the misfortune to run into several other issues before reaching this insight.
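Put concretely, that setup can be sketched as a Caddyfile that proxies to the containers by name (or by container IP, as I did), with the Caddy container attached to the same user-defined Docker network as the services. The container names and ports below are taken from this post, but this is an illustrative sketch rather than my verified config:

```
machine.domain.ts.net {
    handle_path /freshrss* {
        reverse_proxy freshrss:80
    }
    handle_path /podfetch* {
        reverse_proxy podfetch:8000
    }
}
```

`handle_path` strips the matched prefix before proxying. The key point is that names like `freshrss` only resolve when Caddy shares a Docker network with those containers, e.g. by listing the same `networks:` entry in every service's compose file.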