I don’t work in IT at all. My self-hosting journey started when I got sick of feeling powerless in the face of big tech companies that are increasingly ripping off customers or violating their right to privacy. There’s also the general mistrust that comes from my data being repeatedly breached or leaked because shareholder profits are more important than investing in basic security.
I run a few servers myself with Proxmox. FYI, there is a script that removes that nag screen and also configures some other useful things for Proxmox self-hosters.
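For anyone who wants to know what the script is actually doing under the hood: the nag removal comes down to patching the subscription check in proxmoxlib.js and restarting the web UI proxy. The exact pattern it has to change varies between PVE releases (which is a good reason to use the maintained script rather than a one-liner), so this is just the shape of the manual approach, not a copy-paste fix:

```shell
# Back up the widget-toolkit file before touching it
cp /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js{,.bak}

# ...edit the "No valid subscription" check in proxmoxlib.js
# (the exact code differs per PVE version)...

# Reload the service that serves the web UI so the change takes effect
systemctl restart pveproxy.service
```

Note that Proxmox updates can overwrite this file, so the change has to be reapplied after upgrades — another thing the script handles for you.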
I have a workstation I use for video editing/vfx as well as gaming. Because of my work, I’m fortunate to have the latest high end GPUs and a 160" projector screen. I also have a few TVs in various rooms around the house.
Traditionally, if I want to watch something or play a video game, I have to go to the room with the Jellyfin/Plex/Roku box to watch something, and I’m limited to the work/gaming rig for games. I can’t run renders and game at the same time, and buying an entire new PC so I can do both is a massive waste of money. If I want to do a test screening of a video I’m working on to see how it displays on various devices, I have to transfer the file around to those devices. This is limiting and inefficient for me.
I want to be able to go to any screen in my house: my living room TV, my large projector in my studio room, my tablet, or even my phone and switch between:
I’m a massive Nextcloud fan and have had a server up and running for many years now.
But I understand all of the downvoted commenters. It is clunky and buggy as hell at times. Maybe it’s less noticeable when you’re running a single-user instance, but once you have non-tech-literate users on it, you begin to notice how inferior it is to the big boys like Google Drive in some respects.
That said, I personally have a decent tolerance for fiddling and slight frustrations as a trade-off for avoiding privacy-disrespecting and arguably evil corporations.
I would recommend everybody looking for a Google Drive, Dropbox, or OneDrive alternative to at least give Nextcloud a go.
Thanks so much for the detailed reply. I have about 20TB of data on the disks otherwise I would take your advice to set up a different scheme. Luckily, as it’s a backup server I don’t need maximum speed. I set it up with mergerfs and snapraid because I’m essentially recycling old drives into this machine and that setup works pretty well for my situation.
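For anyone curious what that setup looks like, the mergerfs + SnapRAID combination comes down to two small pieces of config. The paths and drive labels below are illustrative, not my actual layout:

```conf
# /etc/fstab -- mergerfs joins existing per-drive ext4 mounts into one pool.
# category.create=mfs places new files on whichever branch has the most
# free space, which is why mismatched recycled drives work fine here.
/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/pool  fuse.mergerfs  allow_other,use_ino,cache.files=off,category.create=mfs  0 0

# /etc/snapraid.conf -- parity lives on its own drive (must be at least
# as large as the biggest data drive); content files are duplicated.
parity  /mnt/parity1/snapraid.parity
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/
```

A scheduled `snapraid sync` then keeps the parity up to date. Since parity is computed on a schedule rather than in real time, it suits a backup box like this where write speed and instant redundancy aren’t critical.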
The Proxmox host is the default (ext4/LVM, I believe). The drives are also all ext4. I very recently did a data drive upgrade, and besides some timestamp discrepancies likely due to rsync, the semi-virtualized SCSI passthrough wasn’t an issue. I replaced the old drive with a larger one, hooked the old one up to a USB dongle, passed it through to OMV, and was able to transfer everything and get my new data drive hooked back into the mergerfs pool and SnapRAID. I’ll do a test and see if I can still access the files directly on the Proxmox host, just for educational purposes.
I’ll try to re-mount the NFS share and see where that gets me. I’m also considering switching to a CIFS/SMB share as another commenter suggested, unless that is susceptible to the same ESTALE issue. I won’t be back at that location for about a week, so I might not have an update for a little while.
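In case anyone lands here with the same problem, the remount I have in mind is just the following (hostnames and mount points are examples, not my real ones):

```shell
# On the PBS VM: force-unmount the stale NFS mount, then remount it.
# Fall back to a lazy unmount (-l) if the mount point is busy.
umount -f /mnt/pbs-store || umount -l /mnt/pbs-store
mount -t nfs omv.lan:/export/pbs /mnt/pbs-store

# CIFS/SMB alternative -- SMB re-establishes file handles on reconnect,
# so it doesn't have the NFS stale-file-handle failure mode:
mount -t cifs //omv.lan/pbs /mnt/pbs-store -o credentials=/root/.smbcred,vers=3.0
```

A remount only clears the stale handles; if the underlying cause (e.g. the export changing identity under the client) isn’t fixed, the error can come back.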
Third time posting this reply due to the lemmy server upgrade.
Proxmox on bare metal, with one VM running OMV and one running Proxmox Backup Server. Multiple drives are passed through to OMV, and mergerfs pools them together. That pool has two main shared folders: one is for a remote Duplicati server that connects via SFTP; the other is an NFS share for PBS. The PBS VM uses the NFS shared folder as storage. Everything worked until recently, when I started getting ESTALE errors. Duplicati still works fine.
Looks like my reply got purged in the server update.
Running Proxmox bare metal. Two VMs: Proxmox Backup Server and OMV. Multiple HDDs are passed through directly as SCSI to OMV. In OMV they’re combined into a mergerfs pool. There are two shared folders on the pool: one dedicated to Proxmox backups and the other for data backups. The Proxmox backup shared folder is an NFS share, and the other shared folder is accessed by a remote Duplicati server via SSH (SFTP?). Within the Proxmox Backup Server VM, the aforementioned NFS share is set up as a storage location.
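For context, the OMV-side export is roughly this shape (subnet and paths are illustrative, not my real config). Worth noting: mergerfs’ documentation specifically flags NFS exporting as needing extra options, because inode mappings the kernel drops can come back “stale” to NFS clients:

```conf
# /etc/exports on OMV -- a pinned fsid gives clients a stable file
# handle that survives remounts of the underlying pool
/export/pbs  192.168.1.0/24(rw,sync,no_subtree_check,fsid=1)

# mergerfs mount options relevant to NFS exporting (per mergerfs docs):
#   noforget - don't drop inode mappings that NFS clients still hold
#   use_ino  - report mergerfs-calculated inodes rather than FUSE's
```

I’m not certain this is my root cause, but mergerfs-under-NFS is a known source of exactly this class of error.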
I have no problems with the Duplicati backups at all. The Proxmox Backup Server was operating fine initially as well, but began throwing the ESTALE error after about a month or two.
Is there a way to fix the ESTALE error, and also to prevent it from recurring?
The underlying system is running Proxmox. On top of that I have the two relevant VMs: OMV and Proxmox Backup Server. The hard drives are passed into OMV as SCSI drives; I had to add them from the shell, as the GUI doesn’t give the option. Within OMV the drives are in a mergerfs pool, with a shared folder exported via NFS that is then selected as the storage from within the Proxmox Backup Server VM. OMV has another shared folder that is used by a remote Duplicati server via SSH (SFTP?), but otherwise OMV has no other shared folders or services. Duplicati/OMV has no errors. PBS/OMV worked for a couple of months before the aforementioned error cropped up.
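(For anyone searching later: the shell-side disk add is a single `qm set` per drive. The VM ID and disk serial below are made up.)

```shell
# Attach a whole physical disk to VM 100 as its second SCSI device.
# Use /dev/disk/by-id paths -- they are stable across reboots,
# unlike /dev/sdX names, which can shuffle.
qm set 100 -scsi1 /dev/disk/by-id/ata-ST8000DM004_EXAMPLE1234
```

One disk, one `-scsiN` slot; the mapping then shows up in the VM’s hardware list in the GUI even though it can’t be created there.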
Also possibly relevant: no other processes or services are set up to access the shared folder used by PBS.
You suggested just adding the ISOs to local-lvm. Do you think it would be feasible to simply delete the local storage completely and then extend the local-lvm after, storing the ISOs there? I know extending volumes is much simpler than shrinking. And I imagine deleting completely is also easier than shrinking?
I tried to set up a Nebula network, but it seems to have trouble when your hosts are behind a VPN service. The VPN must block the port or protocol the lighthouse is connecting with, and I can’t figure out a way to bypass the VPN (at least on Mac, split tunneling isn’t supported). I’m assuming you’re familiar with mesh networks… do you have any good YouTube videos or resources you would recommend? The nice thing about a VPN is that it’s crazy simple to set up and seems to work with all types of system configurations. Nebula was pretty simple to set up, but it seems like a pain to troubleshoot so far.
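For reference, this is the part of a Nebula host config that has to reach the lighthouse directly (overlay IPs, hostname, and port below are examples). Nebula’s handshake runs over UDP, port 4242 by default, so if the commercial VPN client captures or blocks all UDP, the lighthouse is simply never reachable:

```yaml
# Nebula node config fragment -- all addresses/ports illustrative
static_host_map:
  # Maps the lighthouse's overlay IP to its real-world address
  "192.168.100.1": ["lighthouse.example.net:4242"]
lighthouse:
  am_lighthouse: false
  hosts:
    - "192.168.100.1"      # the lighthouse's overlay IP
listen:
  host: 0.0.0.0
  port: 4242               # UDP -- must escape the VPN tunnel
```

This is why split tunneling matters here: the UDP traffic to the lighthouse’s real address needs to go out the physical interface, not into the VPN.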
Funny you mention that. I was about to make a post about Nebula earlier. I learned about it through YouTuber apalrd a few months back and it seems perfect. I’m still trying to understand some of the complexities when utilizing a service that requires circumventing the mesh network for public access such as Nextcloud. I’ll probably make a post about this after I’ve done some more research. I think there’s some good discussion to be had about such a setup.
So each time I get shut down, it’s during a large, extended data transfer. I have my VPS set up as a VPN hub that connects multiple servers. Typically, when my traffic gets diverted to a black hole by DO, there had been a consistent stream of roughly 35 MB/s of inbound/outbound VPN traffic through the VPS for 4-5 hours. My server gets shut down for 3-4 hours, and I get an email notice that my server was under a massive DDoS attack and they diverted the traffic to a black hole. I always respond informing them that it’s not a DDoS and explain the situation. They typically respond with “Utilize a service like Cloudflare, which has DDoS protection.”
I’ve been really happy with them as a provider otherwise but this is a dealbreaker for me.
I’m curious if anybody has a more “self-hosted” solution, but I use burner numbers through the MySudo app and simply delete the number and buy a new one every few months.
If you look up Michael Bazzell, he has a strategy for bulk-buying VoIP numbers by tricking VoIP providers into believing you’re a large, established business. You could buy dozens of numbers and just cycle through them. But that method requires a lot of work and social engineering, and the providers are becoming wise to those tactics.
I replaced the drives, installed the newest version of PVE, then restored all of my VMs from a local USB backup. I had to reconfigure a number of things, such as HDD passthrough and other network settings, but in the end the migration was a success.