I run my containers in an LXC on Proxmox (yes, I've heard I should use a VM, but it works…)

For data storage (Syncthing, Jellyfin, …) I create volumes inside the LXC. But I'm wondering if this is the best way?

I started thinking about restoring backups. The Docker backups can get quite large with all the user data. I was wondering whether a separate "NAS" VM with NFS shares makes more sense. Then restoring/cloning the Docker LXC would be faster for troubleshooting, and the user data could be restored separately.

What do you guys do?

Fermiverse

I use unprivileged LXCs for everything I have running on my Proxmox.

Plex, Syncthing, rclone, MotionEye, and pyLoad all run in separate LXCs on the boot drive.

All their data is on my mirrored RAID, including the LXC backups. The rclone LXC backs up the important data to my cloud drive.

conrad82
creator

Do you use reverse proxy?

One of the reasons I use a single LXC is that I can reverse proxy the containers without exposing ports/HTTP to the LAN; that seemed like a good feature to me.
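A minimal sketch of that pattern, assuming Docker with a Caddy reverse proxy (the container names, network name, and images are hypothetical, not from the thread): the app containers share an internal network, and only the proxy publishes a port to the LAN.

```shell
# Internal network shared by the proxy and the apps
docker network create proxy-net

# No -p flag: Jellyfin is not reachable from the LAN directly
docker run -d --name jellyfin --network proxy-net jellyfin/jellyfin

# Caddy is the single published entry point
docker run -d --name caddy --network proxy-net \
  -p 443:443 \
  -v ./Caddyfile:/etc/caddy/Caddyfile \
  caddy
```

Inside `proxy-net`, Caddy can reach the apps by container name (e.g. `jellyfin:8096`) without any port ever being exposed to the LAN.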

Fermiverse

No reverse proxy. On the LAN everything is visible and accessible.

No ports are open to the WAN; I connect from outside via my router's VPN.

@TCB13@lemmy.world

If you’re using LXC and your filesystem is BTRFS, you can use the built-in snapshots.

conrad82
creator

Yes, before making major changes I usually take a snapshot.

I listened to the https://thehomelab.show/ podcast today, and they mentioned that before doing major upgrades, you could create a clone VM from the latest backup and test the upgrade before doing it for real. That way you both ensure a safe upgrade and verify that your backup is restorable.

It sounded like a good idea, but it got me thinking about the size of my LXC filled with user data… So I was wondering if I'm doing it wrong.
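In Proxmox terms, that test-restore workflow can be sketched with `pct` (the VMID 999, source container 101, backup filename, and storage name are all hypothetical):

```shell
# Restore the latest backup to a throwaway container ID
pct restore 999 /var/lib/vz/dump/vzdump-lxc-101-2024_01_01-00_00_00.tar.zst \
  --storage local-lvm
pct start 999

# ...run the upgrade inside container 999 and check that everything works...

# Throw the clone away afterwards
pct stop 999
pct destroy 999
```

This exercises the backup end to end: if the restore itself fails, you've learned that before you needed it.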

@TCB13@lemmy.world

With BTRFS you can take a snapshot, upgrade, and if things go wrong, roll back to the snapshot. Snapshots are incremental, so you won't have issues with your data.
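As a sketch, assuming the container's data lives in a BTRFS subvolume at a hypothetical path:

```shell
# Take a read-only snapshot before the upgrade (instant, copy-on-write)
btrfs subvolume snapshot -r /srv/appdata /srv/snapshots/appdata-pre-upgrade

# ...run the upgrade; if it goes wrong, roll back:
mv /srv/appdata /srv/appdata-broken
btrfs subvolume snapshot /srv/snapshots/appdata-pre-upgrade /srv/appdata
btrfs subvolume delete /srv/appdata-broken
```

Because snapshots only store the blocks that change afterwards, taking one before every upgrade costs almost nothing.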

I run all my Docker containers in one VM, with persistent volumes over NFS. That way the entire thing could take a dump, and as long as I have the NFS volume, we're Gucci.
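One way to wire that up is a named Docker volume backed by an NFS export (the NAS address, export path, and example image are hypothetical):

```shell
# Named volume whose data actually lives on the NAS
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw,nfsvers=4 \
  --opt device=:/export/appdata \
  appdata

# The VM can be rebuilt at any time; the data stays on the NAS
docker run -d --name syncthing -v appdata:/var/syncthing syncthing/syncthing
```

With this split, restoring the Docker VM is just re-running the containers, and the user data is backed up on the NAS's own schedule.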

@pqdinfo@lemmy.world

It’s not always possible, but it’s generally good practice to configure your applications to use external storage rather than the filesystem: MySQL/PostgreSQL for indexable data, and S3 clones like MinIO for blob storage.

One major reason for this is that these systems generally have data replication and failover redundancy built in. So you can have two or more physical servers, run an instance of each type of server on each, and keep them synchronized. If one server goes down, the disks crash, or you need to upgrade, you can easily rebuild a set of redundant servers without downtime; all you need to do is save the configurations (and take notes!).

Like I said, it's not always possible, but in general the more an application needs to store "user data", the more likely it is to support one of the above as a backend storage system. That will significantly reduce the number of application servers that need to be backed up, and may reduce your need to use NFS etc. to separate the data.
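For applications you write or configure yourself, talking to an S3 clone like MinIO works the same as talking to S3 itself, just with a custom endpoint. A minimal sketch using boto3 (the endpoint, credentials, and bucket name are all hypothetical):

```python
import boto3

# Hypothetical local MinIO endpoint and credentials
s3 = boto3.client(
    "s3",
    endpoint_url="http://minio.lan:9000",
    aws_access_key_id="minioadmin",
    aws_secret_access_key="minioadmin",
)

# Store a blob in the bucket instead of on the local filesystem
s3.upload_file("photo.jpg", "user-data", "photos/photo.jpg")
```

The application server then holds no user data itself, so backing it up means backing up configuration only.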

conrad82
creator

Interesting! I thought S3 was more of a business cloud storage API.

I did a quick search, and it seems neither Syncthing nor Jellyfin is compatible with S3. What do you do in those cases?

Fermiverse

Rclone can do this for you.
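For example, rclone can mount an S3 bucket as a regular filesystem, which file-oriented apps can then read (the remote name, bucket, and paths are hypothetical):

```shell
# One-off: configure an S3 remote interactively
rclone config

# Mount the bucket so file-oriented apps like Jellyfin can use it
rclone mount media-remote:jellyfin-media /mnt/media --daemon

# Or just sync important local data to the cloud, as described above
rclone sync /srv/important media-remote:backup
```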

@pqdinfo@lemmy.world

I’m not directly familiar with either, but Syncthing seems to be about backing up, so I'm not surprised it's file-oriented, and Jellyfin looks less like a store of user-maintained content and more like a server of content. So I'm not entirely surprised neither supports S3/MinIO.

Yeah, it took me a while to realize what S3 is intended to be, too. But you'll find "blob storage" is now a major part of most cloud providers, whether they support the S3 protocol (which is Amazon's) or their own, and it's meant to be used precisely the way we're talking about: user data. Things clicked for me when I was reading the Dovecot manuals and found S3 was supported as a first-class back-end storage system, like maildir.

I'm old, though; I'm used to this kind of thing being done (badly) by NFS et al.…

conrad82
creator

Huh. I recently set up a local Dovecot for archiving old emails, but not with S3.

I'm curious: when you work on a document, how does that work? Is it a file on your hard drive, have you mounted a bucket somehow, or do you sync using a RESTful API?
