• 0 Posts
  • 17 Comments
Joined 5M ago
Cake day: Apr 05, 2024


I have been considering just installing Debian on a small PC with the Jellyfin Media Player application set to auto-start. I can think of a few different ways to get this done, maybe with a couple of user accounts.

I like the idea of being able to change the application that automatically starts. Maybe I want to try Kodi again. I would just change the startup app.
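A minimal sketch of the auto-start piece, assuming a desktop session with XDG autostart and that the player binary is called jellyfinmediaplayer (swap the Exec line to kodi to change the startup app):

# Create an autostart entry for the media user's session.
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/mediaplayer.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Media Player
Exec=jellyfinmediaplayer
EOF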


I'm spoiled now. I prefer Ubiquiti equipment for my network, security cameras, and even door access.

However, if you prefer completely open source I can recommend OPNsense and OpenWrt. Personally I prefer a single point of configuration, so it's all Ubiquiti for me. It makes it easy to restore a complete network configuration, which, as you are discovering, is otherwise a pain.

Maybe start with the new Cloud Gateway Max as a router if you are interested.


You might look at gluetun. It lets you configure various VPN services from a Docker container. The interesting part is that you can point other Docker containers at gluetun for networking, essentially piping them through the configured VPN.
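A rough sketch of that pattern with plain docker run commands; the provider, credentials, and the second image are placeholders, so check gluetun's wiki for the exact variables your VPN needs:

# Start the VPN container; NET_ADMIN is required so it can manage routing.
docker run -d --name gluetun \
  --cap-add=NET_ADMIN \
  -e VPN_SERVICE_PROVIDER=<provider> \
  -e VPN_TYPE=wireguard \
  -e WIREGUARD_PRIVATE_KEY=<key> \
  -e WIREGUARD_ADDRESSES=<address/32> \
  -p 8080:8080 \
  qmcgaw/gluetun

# Other containers join gluetun's network namespace, so their traffic exits via the VPN.
# Note: any ports you want reachable must be published on the gluetun container itself.
docker run -d --name qbittorrent --network=container:gluetun lscr.io/linuxserver/qbittorrent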


Another thing to keep in mind with ZFS is that underlying VM disks will perform better if the pool is a mirror or a stripe of mirrors. RAIDZ1/RAIDZ2 type pools are better for media and files. VM disk IO improves dramatically on the mirror-style layouts. Just passing along what I've learned over time optimizing systems.


Bookmark this if you utilize zfs at all. It will serve you well.

https://jrs-s.net/2018/08/17/zfs-tuning-cheat-sheet/

You will be amazed at ZFS performance in Proxmox due to all the tuning that is possible. If this is going to be an existing ZFS pool, keep in mind it's easier to just install Proxmox with the ZFS option and let it create a ZFS rpool during setup. For the rpool, tweak a couple of options: make sure ashift is at least 12 during the install, or 13 if you are using some crazy fast SSD as the Proxmox disk for the rpool.

It needs to be 12 if it's a modern-day spinner, and that's probably a good setting for most SSDs too. Do not go over 12 if it's a spinning disk.

Now beyond that, you can directly import an existing ZFS pool into Proxmox with a single import command, assuming you already have one.
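Roughly, assuming a pool named "tank" (a stand-in name):

zpool import                                  # list pools the system can see but hasn't imported
zpool import tank                             # import the existing pool by name
pvesm add zfspool tank-storage --pool tank    # register it as Proxmox storage (or use Datacenter > Storage in the GUI)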

In this scenario zfs would be fully maintaining disk operations for both an rpool and a media pool.

You should consider tweaking a couple of things to really improve performance via the guide I linked.

Proxmox VMs/zvols live in their own dataset. Before you start getting too crazy creating VMs, make sure you are taking advantage of all the performance tweaks you can. By default Proxmox sets the recordsize for all datasets to 128k. qcow2, raw, and even zvols will benefit from a 64k record/block size because it tends to improve the underlying filesystem performance of things like ext4, XFS, even UFS. Imo it's silly to create VM filesystems like btrfs if your VM is sitting on top of a CoW filesystem.
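As a hedged sketch, assuming a stock install where VM disks live under rpool/data:

zfs get recordsize rpool/data        # check the current value
zfs set recordsize=64K rpool/data    # applies to file-backed images (qcow2/raw) written after the change
# zvol-backed disks use volblocksize instead, which is fixed at creation time;
# in Proxmox that's the "Block Size" field on the ZFS storage entry under Datacenter > Storage.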

Another huge improvement is tweaking the compression algorithm. lz4 is blazing fast and should be your default go-to for ZFS. The newer zstd is pretty good but can slow things down a bit for active operations like live VM disks. So make sure your default compression is lz4 for datasets with VM disks. Honestly it's just a good default to specify for the entire pool; you can select other compression for datasets with more static data.

If you have a media dataset full of files like music, videos, and pictures, setting a recordsize of 1M will heavily improve disk IO.
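Those two knobs as one-liners; "tank" and "tank/media" are placeholder names:

zfs set compression=lz4 tank              # inherited by child datasets unless they override it
zfs set recordsize=1M tank/media          # large sequential files benefit from big records
zfs get -r compression,recordsize tank    # verify what each dataset ended up with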

In Proxmox, ZFS will default to grabbing half of your memory for the ARC. Make sure you change that after install; it's a module-options file that defines zfs_arc_max in bytes. Set the max to something more reasonable if you have 64 gigs of memory. You can also define zfs_arc_min.
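Something along these lines; the 16 GiB / 4 GiB values are just examples:

# /etc/modprobe.d/zfs.conf is the stock OpenZFS module config (values are bytes).
cat >> /etc/modprobe.d/zfs.conf <<'EOF'
options zfs zfs_arc_max=17179869184
options zfs zfs_arc_min=4294967296
EOF
update-initramfs -u -k all                                   # so the limit applies at boot
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max    # apply immediately without a reboot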

Some other huge improvements? If you are using an SSD for your Proxmox install I highly recommend you install log2ram on your hypervisor. It will stop all those constant log writes to your SSD, and it will sync them to disk on a timer and at shutdown/reboot. It's also a huge performance and SSD-lifespan improvement to migrate /tmp and /var/tmp to tmpfs.
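For the /tmp and /var/tmp part, /etc/fstab entries along these lines work (sizes are just examples); log2ram itself installs from its own apt repository, per its README:

tmpfs /tmp     tmpfs defaults,noatime,nosuid,nodev,mode=1777,size=2G 0 0
tmpfs /var/tmp tmpfs defaults,noatime,nosuid,nodev,mode=1777,size=1G 0 0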

So many knobs to turn. I hope you have fun playing with this.


Yup, you can. In fact you likely should, and you will probably find disk IO improving dramatically compared to your original plan. It's better in my opinion to let the hypervisor manage disk operations, which means it should also share files over SMB and NFS, especially if you are already considering NAS-type operations.

Since proxmox supports zfs out of the box along with btrfs and even XFS you have a myriad of options. You combine that with cockpit and you have a nice management interface.

I went the ZFS route because I'm familiar with it and I appreciate its native sharing options built into the filesystem. It's cool to have the option to create a new dataset off the pool and directly pass it into a new LXC container.
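For example (dataset name and container ID are hypothetical):

zfs create rpool/data/media                          # new dataset off the pool
pct set 101 -mp0 /rpool/data/media,mp=/mnt/media     # bind-mount it into LXC 101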


Have you considered the increase in disk IO, and that hypervisors prefer to be in control of all hardware, including disks?

If you are set on proxmox consider that it can directly share your data itself. This could be made easy with cockpit and the zfs plugin. The plugin helps if you have existing pools. Both can be installed directly on proxmox and present a separate web UI with different options for system management.

The safe things here to use are the filesharing and pool management operations. Basically use the proxmox webui for everything it permits first.

Either way have fun.


This is a journey that will likely fill you with knowledge. During that process what you consider “easy” will change.

So the answer right now for you is use what is interesting to you.

Yes, there are plenty of ways to do the same thing. Imo though, right now, jump in and install something. Then play with it.

Just remember modern CPUs can host many services from a single box. How they do that can vary.


That's somewhat true. However, the hardware support in BSD, especially around video, has been blah. If you are interested in playing with ZFS on Linux I would recommend Proxmox. That particular OS is one of the few that allows you to install on a ZFS rpool from the installer. Proxmox is basically Debian with a kernel that's been modified a bit for virtualization. One of the mods made was including ZFS support from the installer.

Depending on what you get, if you go the Prox route you could still install BSD in a VM and play with the filesystem. You may even find some other methods to get Jellyfin the way you like it with LXC, a VM, or Docker.

I started out on various operating systems and settled on debian for a long time. The only reason I use prox is the web interface is nice for management and the native zfs support. I change things from time to time and snapshots have saved me from myself.


Hardware support can be a bit of an issue with bsd in my experience. But if you’re asking for hardware it doesn’t take as much as you may think for jellyfin.

It can transcode just fine with Intel Quick Sync.

So basically any modern Intel CPU, or one slightly older.

What you need to consider more is storage space for your system and if your system will do more than just Jellyfin.

I would recommend a barebones server from Supermicro, something you could throw a few SSDs in.

If you are not too stuck on BSD, maybe have a look at Debian or Proxmox. Either way I would recommend docker-ce, mostly because this particular Jellyfin image is very well maintained.

https://fleet.linuxserver.io/image?name=linuxserver/jellyfin
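A hedged docker run sketch for that image; paths, IDs, and timezone are placeholders, and the /dev/dri passthrough is what enables Quick Sync transcoding:

docker run -d --name=jellyfin \
  -e PUID=1000 -e PGID=1000 -e TZ=Etc/UTC \
  -p 8096:8096 \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/data/media \
  --device /dev/dri:/dev/dri \
  --restart unless-stopped \
  lscr.io/linuxserver/jellyfin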


So you mentioned using Proxmox as the underlying system, but when I asked about the Proxmox filesystem I was referring to whether you kept the defaults during installation, which would be LVM/ext4, or changed to ZFS as the underlying Proxmox filesystem. It sounds like you have additional drives that you used the Proxmox command line to "pass through" as SCSI devices. Just be aware this is not true passthrough; it is slightly virtualized, even though it hands the entire storage of the device to the VM. The only true passthrough without that layer of virtualization would be PCI passthrough utilizing IOMMU.

I have some experience with this specifically because of a client doing similar with a truenas vm. They discovered they couldn’t import their pool into another system because proxmox had slightly virtualized the disks when they added them to vm in this manner. In other words zfs wasn’t directly managing the disks. It was managing virtual disks.

Anyway, it would still help to know the underlying filesystem of the slightly virtualized disks you gave to mergerfs. Are these ext4, xfs, btrfs? mergerfs is just a union filesystem that unifies storage across multiple mountpoints into a single virtual filesystem. Which means you have another couple layers of complexity in your setup.

If you are worried about disk IO you may consider letting the hypervisor manage these disks and storage a bit more directly. Removing some of the filesystem layers.

I could recommend just making a single zfs pool from these disks within proxmox to do this. Obviously this is a pretty big transition on a production system. Another option would be creating a btrfs raid from these disks within proxmox and adding that mountpoint as storage to the hypervisor.

Personally I use zfs but btrfs works well enough. Regardless this would allow you to just hand storage to vms from the gui and the hypervisor would aid much more efficiently with disk io.
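The btrfs variant mentioned above might look roughly like this (device names are hypothetical):

mkfs.btrfs -m raid1 -d raid1 /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
mkdir -p /mnt/tank && mount /dev/disk/by-id/ata-DISK1 /mnt/tank
# then add /mnt/tank as storage in the Proxmox GUI (Datacenter > Storage > Add)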

As for the error, it's typically repaired by unmount/mount operations. As I mentioned before, the cause can vary but is usually a loss of network connectivity or an inability to lock something that is in use.
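On the client side that usually amounts to something like this (the mountpoint is a placeholder):

umount -f /mnt/share || umount -l /mnt/share   # force, then fall back to a lazy unmount
mount /mnt/share                               # remount from /etc/fstab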

My advice would be to investigate reducing your storage complexity. It will simplify administration and future transitions.


Repost to OP, as OP claims his comments are being purged.


Hmm. If you are going to have proxmox managing zfs anyway then why not just create datasets and share them directly from the hypervisor?

You can do that in the terminal, but if you prefer a GUI you can install Cockpit on the hypervisor with the ZFS plugin. It creates a separate web GUI on another port, making it easy to create, manage, and share datasets as you desire.
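A sketch of that install, assuming the plugin in question is 45Drives' cockpit-zfs-manager (check its README for the current steps):

apt install -y cockpit --no-install-recommends                 # avoids pulling NetworkManager onto Proxmox
git clone https://github.com/45drives/cockpit-zfs-manager.git
cp -r cockpit-zfs-manager/zfs /usr/share/cockpit/
# Cockpit serves its own UI on port 9090, separate from the Proxmox UI on 8006.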

It will save resources and simplify zfs management operations if you are interested in such a method.


What is the underlying filesystem of the proxmox hypervisor and how did you pass storage into the omv vm? Also, is anything else accessing this storage?

I ask because…

The “file lock ESTALE” error in the context of NFS indicates that the file lock has become “stale.” This occurs when a process is attempting to access a file that is locked by another process, but the lock information has expired or become invalid. This can happen due to various reasons such as network interruptions, server reboots, or changes in file system state.


If you are somewhat comfortable with the cli you could install proxmox as zfs then create datasets off the pool to do whatever you want. If you wanted a nicer gui to manage zfs you could also install cockpit on the proxmox hypervisor directly along with the zfs plugin to manage the datasets and share them a bit easier. Obviously you could do all of that from the command line too.

Personally I use proxmox now where before I made use of Debian. The only reason I switched was it made vm/lxc management easy. As for truenas it’s also basically Debian with a different gui. These days I’m more focused on optimization in my home lab journey. I hope you enjoy the experience however you begin and whatever applications you start with.


I think I would get rid of that optical drive and install a converter caddy for another drive, like a 2.5" SATA. That way you could get an SSD for the OS and leave the bays for RAID.

Other than that, what you want to put on this beast and whether you want to utilize the hardware RAID will determine the recommendations.

For example, if you are thinking of a file server with ZFS, you need to disable the hardware RAID completely by getting the controller to expose the disks directly to the operating system. Most would investigate whether the RAID controller could be flashed into IT mode for this. If not, some controllers do support a simple JBOD mode, which would be better than utilizing the RAID in a ZFS configuration. ZFS likes to directly maintain the disks. You can generally tell it's correct if you can see all your disk serial numbers during setup.
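A quick way to check, once the OS is installed:

lsblk -o NAME,SIZE,MODEL,SERIAL    # serial numbers showing up here means the OS sees the raw disks
ls -l /dev/disk/by-id/             # stable IDs to build ZFS pools from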

Now if you do want to utilize the RAID controller and are interested in something like Proxmox or just a simple Debian system, I have had great performance with XFS on hardware RAID. You lose out on some advanced copy-on-write features, but if disk I/O is your focus, consider it worth playing with.

My personal recommendation is to get rid of the optical drive and replace it with a 2.5" converter for more installation options. I would also recommend maxing out the RAM and possibly upgrading the network card to a 10Gb NIC if possible. It wouldn't hurt to investigate the power supply; the original may be a bit dated and you may find a more modern supply that is more energy efficient.

My general OS recommendation would be Proxmox installed in ZFS mode with an ashift of 12.

(It’s important to get this number right for performance because it can’t be changed after creation. 12 for disks and most ssds. 13 for more modern ssds.)

Only do zfs if you can bypass all the raid functions.

I would install the rpool in a basic zfs mirror on a couple SSDs. When the system boots I would log into the web gui and create another zfs pool out of the spinners. Ashift 12. Now if this is mostly a pool for media storage I would make it a z2. If it is going to have vms on it I would make it a raid 10 style. Disk I/O is significantly improved for vms in a raid 10 style zfs pool.
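For the spinner pool, the two layouts look roughly like this (four hypothetical disks referenced by stable ID):

# capacity-oriented, good for media:
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# "raid 10" style, better VM disk IO:
zpool create -o ashift=12 tank \
  mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4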

From here for a bit of easy zfs management I would install cockpit on top of the hypervisor with the zfs plugin. That should make it really easy to create, manage, and share zfs datasets.

If you read this far and are considering a setup like this, one last warning: use the Proxmox web UI for all the tasks you can, and do not utilize the Cockpit web UI for much more than ZFS management.

Have fun creating lxcs and vms for all the services you could want.


I like to utilize nginx proxy manager alongside docker-ce and portainer-ce.

This allows you to forward web traffic to a single internal NPM IP. As for setting up the service IPs, I like to utilize the gateway IPs that Docker generates for each service.

If you have docker running on the same internal IP as NPM you can directly configure the docker gateway ips for each service within the NPM web configuration.

This dumps the associated traffic into the container network for another layer of isolation.
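To find the gateway IP of a service's Docker network (the network name here is a placeholder):

docker network ls                                          # list networks to find the service's
docker network inspect myservice_default \
  --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'     # the gateway IP to point NPM at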

This is a bit of an advanced configuration but it works well for my environment.

I would just love some support for QUIC within NPM.


Yup, and it's negligible. If I'm forced to contend with a Windows environment, BitLocker is utilized.

I also utilize a RAM disk in Windows (ImDisk). I migrate temp files and logs onto the RAM disk. It saves on disk writes and increases privacy.

It's pretty straightforward to encrypt if you're utilizing Linux, right from install time.

As for my server I too utilize nextcloud. However, the nextcloud data is on a zfs dataset. This dataset is encrypted.

I did this by installing nextcloud from docker running within a proxmox container. That proxmox lxc container has the nextcloud dataset passed into it.
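A minimal sketch of that arrangement, with hypothetical dataset and container names:

zfs create -o encryption=on -o keyformat=passphrase rpool/data/nextcloud   # prompts for a passphrase
pct set 105 -mp0 /rpool/data/nextcloud,mp=/mnt/ncdata                      # hand it to the Nextcloud LXC
# after a reboot: zfs load-key rpool/data/nextcloud && zfs mount rpool/data/nextcloud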