• 2 Posts
  • 24 Comments
Cake day: Jun 13, 2023


I haven’t used an out-of-the-box self-hosted solution for this, but I agree with others that blog or static site generator software could work. I think the main challenges you’ll find though are: 1. Formatting the content/site for long-form readability, and 2. Adding a table of contents and previous/next chapter links without a bunch of manual work.

Fortunately blog and static site software have plugins that can add missing functionality like this. Here’s one for WordPress (that I have no first-hand experience with): https://wordpress.org/plugins/book-press/

I also want to ask: What’s your plan for discovery/marketing? Because one of the benefits of the non-self-hosted web novel sites is that readers can theoretically discover your story there. But if you instead just post it on your own site, how will readers ever find it?


That’s unfortunate about NPM and Proxy Protocol, because plain ol’ nginx does support it.

I hear you about Traefik… I originally came from nginx-proxy (not to be confused with NPM), and it had pretty clunky configuration especially with containers, which is how I ended up moving to Traefik… which is not without its own challenges.

Anyway, I hope you find a solution that works for your stack.


I struggled with this same problem for a long time before finding a solution. I really didn’t want to give up and run my reverse proxy (Traefik in my case) on the host, because then I’d lose out on all the automatic container discovery and routing. But I really needed true client IPs to get passed through for downstream service consumption.

So what I ended up doing was installing only HAProxy on the host, configuring it to proxy all traffic to my containerized reverse proxy via Proxy Protocol (which includes original client IPs!) instead of HTTPS. Then I configured my reverse proxy to expect (and trust) Proxy Protocol traffic from the host. This allows the reverse proxy to receive original client IPs while still terminating HTTPS. And then it can pass everything to downstream containerized services as needed.
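In case it helps anyone, here's roughly what that looks like on the HAProxy side. This is a minimal sketch, not my exact config; the ports and the backend name `traefik` are just example values, and `send-proxy-v2` is the keyword that makes HAProxy speak Proxy Protocol to the backend:

```
# /etc/haproxy/haproxy.cfg (on the host)
# Pass raw TCP through, prepending a Proxy Protocol header
# that carries the original client IP.
frontend https-in
    mode tcp
    bind :443
    default_backend containerized-proxy

backend containerized-proxy
    mode tcp
    # send-proxy-v2 = speak Proxy Protocol v2 to this server
    server traefik 127.0.0.1:8443 send-proxy-v2
```

On the Traefik side you'd then set something like `--entryPoints.websecure.proxyProtocol.trustedIPs=127.0.0.1/32` so it trusts (and parses) the Proxy Protocol header coming from the host.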

I tried several of the other options mentioned in this thread and never got them working. Proxy Protocol was the only thing that ever did. The main downside is that there’s another moving part (HAProxy) added to the mix, and it does need to be on the host. But in my case, that’s a small price to pay for working client IPs.

More at: https://www.haproxy.com/blog/use-the-proxy-protocol-to-preserve-a-clients-ip-address


Maybe…? I’m not familiar with that router software, but it looks plausible to me…


Since this is on a home network, have you also forwarded port 80 from your router to your machine running certbot?

This is one of the reasons I use the DNS challenge instead… Then you don’t have to route all these Let’s Encrypt challenges into your internal network.
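For example (a sketch assuming the Cloudflare DNS plugin and `example.com`; substitute your own provider's plugin and domain):

```
# Requires the certbot-dns-cloudflare plugin, with an API token in the
# referenced credentials file (chmod 600 it!).
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
  -d example.com -d '*.example.com'
```

Validation happens via a DNS TXT record, so nothing needs to reach port 80 on your network, and you can even get wildcard certs.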


It deduplicates aggressively at the block level. So if your files don’t change much, each additional backup takes very little space. And if a file changes a little, Borg only backs up what’s changed instead of the whole file again.
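To illustrate the idea (this is a toy sketch, not how Borg actually works internally; Borg uses content-defined chunking, compression, and encryption on top), here's a fixed-size-chunk deduplicating store:

```python
import hashlib

CHUNK = 4  # toy chunk size; real Borg chunks are much larger and variable-sized

store = {}  # chunk-hash -> chunk bytes (the "repository")

def backup(data: bytes):
    """Store only unseen chunks; return (chunk refs, count of chunks newly stored)."""
    refs, new = [], 0
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        h = hashlib.sha256(chunk).hexdigest()
        if h not in store:
            store[h] = chunk
            new += 1
        refs.append(h)
    return refs, new

# The first backup stores all 4 chunks; the second, with one chunk
# changed, stores only that 1 changed chunk.
refs1, new1 = backup(b"aaaabbbbccccdddd")
refs2, new2 = backup(b"aaaaXXXXccccdddd")
```

Every "archive" is just a list of chunk references, so unchanged data costs almost nothing per additional backup.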

Borg also has a rich ecosystem of wrappers and tools (borgmatic, Vorta, etc.) that extend its functionality and make it easier to use.


Borg Backup would also fit the bill for backups going forward, especially if OP is still backing up to a local server (as opposed to cloud object storage).


Yeah, the constant Docker breakage was one of the main reasons I switched to Podman. FYI you can use Docker Compose directly with Podman.


Ehhh I would say then you have probabilistic backups. There’s some percent chance they’re okay, and some percent chance they’re useless. (And maybe some percent chance they’re in between those extremes.) With the odds probably not in your favor. 😄



I’m not sure that analogy quite holds (it’s not like the Ryobi tools are left connected to the building as a critical component of the HVAC system or something), but I like the image anyway. :D


Yeah, publicly accessible in that it’d be reachable over HTTPS from the internet (and not behind a VPN), but password-protected. Thanks for weighing in on this!


That all seems prudent and reasonable. I guess some of my own anxiety is about how exactly I’ll evaluate projects like you’re talking about. I can (and do) certainly look at whether a project is actively developed before selecting it. Not just for security reasons… I don’t want to bet on a horse that won’t get updated with fixes and features. But for security in particular, I guess I was hoping for ways to evaluate that for a project… without exhaustively poring over its source. Maybe, to your point, the other mitigations you listed should be sufficient, and I should worry more about that side of things than picking the perfect project.


Lol, I really appreciate your thoughts! These are exactly the sort of insights I came here for. I hope this is useful to others too who may be wondering about the same thing.


PHP and security
The recent post about what people are using for webmail got me thinking about a perhaps irrational policy I have with my own self-hosted software: I don't install anything written in PHP, because I have this vague notion that PHP software is often insecure.

I think I probably got this idea because years ago I saw all the vulnerabilities in PHP webmail clients and PHP software like WordPress and decided that it was the language's fault—or at least a contributing factor. Maybe this isn't fair. Maybe PHP is just more accessible to new devs, so they're more likely to gravitate to it and make security mistakes. Maybe my perception isn't even accurate, and webmail / blog software written in other languages is just as bad—but PHP gets all the negative attention because it's so prevalent for web apps. Maybe my policy _was_ a good idea, years ago, but now it's just out of date.

To be clear, I'm not trying to stoke the flames of a language holy war here or anything. I'm honestly asking: Is it maybe time to revisit my anti-PHP policy? I'm looking longingly at some federated software like [Pixelfed](https://pixelfed.org/) and wondering if maybe I'm just being a little too close-minded.

So I'm interested in your own experiences and policies here. Where do _you_ draw the security line for what you will or won't host, and what made you make that choice?


The app that comes to mind as having problems with changing IPs is the Home Assistant app. It would simply lose connectivity when the IP changed and never do another DNS lookup to connect again… I always had to restart it. The “solution” for me was not to change IPs and just leave Wireguard on. It’s cool that Ultrasonic handles it though.



One additional consideration that’s specific to Hetzner (which you may already be aware of): You can only scale down a server after scaling it up if you elected not to increase the (local) disk size. In other words, if you scale up a server with 40 GB storage to one that comes with 80, you can’t actually resize your storage from 40 to 80 if you ever want to scale the server back down later. Kind of obnoxious.


I don’t know about your particular use case, but I’ve found that some apps experience problems when the IP address of a resource they’re using changes out from under them. Like either they experience temporary connectivity issues during the transition or even just stop being able to reach the resource until restarted. However if your setup is working for you, that’s great!


Why not use Wireguard from your phones all the time, even at home? Just performance?


If Docker’s working fine for you and you’re happy with the trade-offs, no reason to switch.


I dunno, I think part of the trick is not learning every single new technology that comes your way. So much of tech these days is just fashion, and you can safely ignore most stuff until there’s a deafening drumbeat bashing down your door. And even then, you should ask if the drumbeat really suits your use cases or if everyone’s in such a fervor over it because it’s fashionable and they’re using it for things it’s not suited for.

Don’t give in to the FOMO. Use your judgment. And don’t worry about Podman if what you’re doing now is working!


Do it! I think there’s a market there. Although the “Podman Compose” name is taken and you’ll have to think of something else…


Using Ansible to spew out systemd service boilerplate seems like a good idea. I’ll have to try that if I can ever give up my Docker Compose security blanket. And I wish you luck with your mega-container Podman conversion. That one sounds like it’ll be… a learning experience.


Podman is awesome—and totally frustrating
So [Podman](https://podman.io/) is an open source container engine like Docker—with "full"^1^ Docker compatibility. IMO Podman's main benefit over Docker is security. But _how_ is it more secure? Keep reading...

Docker traditionally runs a daemon as the root user, and you need to mount that daemon's socket into various containers for them to work as intended. (See: Traefik, Portainer, etc.) But if someone compromises such a container and therefore gains access to the Docker socket, it's game over for your host. That Docker socket is the keys to the root kingdom, so to speak.

Podman doesn't have a daemon by default, although you can run a very minimal one for Docker compatibility. And perhaps more importantly, Podman can run entirely as a non-root user.^2^ Non-root means if someone compromises a container and somehow manages to break out of it, they don't get the keys to the kingdom. They only get access to your non-privileged Unix user. So like the keys to a little room that only contains the thing they already compromised.^2.5^ Pretty neat.

Okay, now for the annoying parts of Podman.

In order to achieve this rootless, daemonless nirvana, you have to give up the convenience of Unix users in your containers being the same as the users on the host. (Or at least the same UIDs.) That's because Podman typically^3^ runs as a non-root user, and most containers expect to either run as root or some other specific user. The "solution"^4^ is user re-mapping. Meaning that you can configure your non-root user that Podman is running as to map into the container as the root user! Or as UID 1234. Or really any mapping you can imagine.

If that makes your head spin, wait until you actually try to configure it. It's actually not so bad on containers that expect to run as root. You just map your non-root user to container UID 0 (root)... and Bob's your uncle.
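For the curious, the simple case looks something like this (a sketch; `alpine` is just an example image):

```
# Rootless Podman's default: your host user appears as root (UID 0)
# inside the container, while other container UIDs come out of your
# /etc/subuid allocation.
podman run --rm docker.io/library/alpine id

# Or keep your own UID inside the container instead
# (handy for volume permissions):
podman run --rm --userns=keep-id docker.io/library/alpine id
```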
But it can get more complicated and annoying when you have to do more involved UID and GID mappings—and then play the resultant permissions whack-a-mole on the host because your volumes are no longer accessed from a container running as host-root....

Still, it's a pretty cool feeling the first time you run a "root" container in your completely unprivileged Unix user and everything _just works_. (After spending hours of swearing and Duck-Ducking to get it to that point.) At least, it was pretty cool for me. If it's not when you do it, then Podman may not be for you.

The other big annoying thing about Podman is that because there's no Big Bad Daemon managing everything, there are certain things you give up. _Like containers actually starting on boot._ You'd think that'd be a fundamental feature of a container engine in 2023, but you'd be wrong. Podman doesn't do that.

Podman adheres to the "Unix philosophy." Meaning, briefly, if Podman doesn't feel like doing something, then it doesn't. And therefore expects you to use systemd for starting your containers on boot. Which is all good and well in theory, until you realize that means Podman wants you to manage your containers entirely with systemd. So... running each container with a systemd service, using those services to stop/start/manage your containers, etc. Which, if you ask me, is totally _bananasland_.

I don't know about you, but I don't want to individually manage my containers with systemd. I want to use my good old trusty Docker Compose. The good news is you _can_ use good old trusty Docker Compose with Podman! Just run a compatibility daemon (tiny and minimal and rootless… don't you worry) to present a Docker-like socket to Compose and _boom_ everything works. Except your containers still don't actually start on boot. You still need systemd for that. But if you make systemd run Docker Compose, problem solved! This isn't the "Podman Way" though, and any real Podman user will be happy to tell you that.
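Concretely, that compatibility-socket setup is something like this (a sketch from memory; the socket path may differ on your distro):

```
# Start the rootless Docker-compatible API socket
systemctl --user enable --now podman.socket

# Point Docker Compose (and the docker CLI) at it
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
docker compose up -d
```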
The Podman Way is either the aforementioned systemd-running-the-show approach or something called Quadlet or even a Kubernetes compatibility feature. Briefly, about those:

Quadlet is "just" a tighter integration between systemd and Podman so that you can declaratively define Podman containers and volumes directly in a sort of systemd service file. (Well, multiple.) It's like Podman and Docker Compose and systemd and Windows 3.1 INI files all had a bastard love child—and it's about as pretty as it sounds. IMO, you'd do well to stick with Docker Compose.

The Kubernetes compatibility feature lets you write Kubernetes-style configuration files and run them with Podman to start/manage your containers. It doesn't actually use a Kubernetes cluster; it lets you pretend you're running a big boy cluster because your command has the word "kube" in it, but in actuality you're just running your lowly Podman containers instead. It also has the feel of being a dev toy intended for local development rather than actual production use.^5^ For instance, there's no way to apply a change in-place without totally stopping and starting a container with two separate commands. What is this, 2003?

Lastly, there's _Podman_ Compose. It's a third-party project (not produced by the Podman devs) that's intended to support Docker Compose configuration files while working more "natively" with Podman. My brief experience using it (with all due respect to the devs) is that it's total amateur hour and/or just not ready for prime time. Again, stick with Docker Compose, which works great with Podman.

Anyway, that's all I've got! Use Podman if you want. Don't use it if you don't want. I'm not the boss of you. But you said you wanted content on Lemmy, and now you've got content on Lemmy. This is all your fault!

^1^ Where "full" is defined as: Not actually full.

^2^ Newer versions of Docker also have [some rootless capabilities](https://docs.docker.com/engine/security/rootless/). But they've still got that stinky ol' daemon.

^2.5^ It's maybe not quite this simple in practice, because you'll probably want to run multiple containers under the same Unix account unless you're _really_ OCD about security and/or have a hatred of the convenience of container networking.

^3^ You _can_ run Podman as root and have many of the same properties as root Docker, but then what's the point? One less daemon, I guess?

^4^ Where "solution" is defined as: Something that solves the problem while creating five new ones.

^5^ Spoiler: Red Hat's whole positioning with Podman is like they see it as a way for buttoned-up corporate devs to run containers locally for development while their "production" is running K8s or whatever. Personally, I don't care how they position it as long as Podman works well to run my self-hosting shit....