• 5 Posts
  • 13 Comments
Joined 1Y ago
Cake day: Jun 21, 2023


There are really two reasons ECC is a “must-have” for me.

  • I’ve had some variant of a “homelab” for probably 15 years, maybe more. For a long time, I was plagued with crashes, random errors, etc. Once I stopped using consumer-grade parts and switched over to actual server hardware, these problems went away completely. I can actually use my homelab as the core of my home network instead of just something fun to play with. Some of this improvement is probably due to better power supplies, storage, server CPUs, etc., but ECC memory could very well play a part. This is just anecdotal, though.
  • ECC memory has saved me before. One of the memory modules in my NAS went bad; ECC detected the error, corrected it, and TrueNAS sent me an alert. Since most of the RAM in my NAS is used for a ZFS cache, this likely would have caused data loss had I been using non-error-corrected memory. Because I had ECC, I was able to shut down the server, pull the bad module, and start it back up with maybe 10 minutes of downtime as the worst result of the failed module.

I don’t care about ECC in my desktop PCs, but for anything “mission-critical,” which is basically everything in my server rack, I don’t feel safe without it. Pfsense is probably the most critical service, so whatever machine is running it had better have ECC.

I switched from bare metal to a VM for largely the same reason you did. I was running Pfsense on an old-ish Supermicro server, and it was pushing my UPS too close to its power limit. It’s crazy to me that yours only pulled 40 watts, though; I think I saved about 150-175W by switching it to a VM. My entire rack contains a NAS, a Proxmox server, a few switches, and a couple of other miscellaneous things. Total power draw is about 600-650W, and it jumps over 700W under heavy load (file transfers, video encoding, etc.). I still don’t like the idea of having Pfsense in a VM, though; I’d really like to be able to make changes to my Proxmox server without dropping connectivity to the entire property. My UPS tops out at 800W, so if I do switch back to bare metal, I realistically only have 50-75W to spare.


I have a few services running on Proxmox that I’d like to switch over to bare metal. Pfsense for one. No need for an entire 1U server, but running on a dedicated machine would be great.

Every mini PC I find is lacking in some regard. ECC memory is non-negotiable, as is an SFP+ port or the ability to add a low-profile PCIe NIC, and I’m done buying off-brand Chinese crap on Amazon.

If someone with a good reputation makes a reasonably-priced mini PC with ECC memory and at least some way to accept a 10Gb DAC, I’ll probably buy two.


I think I’m misunderstanding how LDAP works. It’s probably obvious, but I’ve never used it.

If my switch expects a username and password at its web login, how does it get from that to “the LDAP server recognizes this person, and they have permission to access network devices, so I’ll let them in”?

Also, to be clear, I’m referring to the process of logging in and configuring the switch itself, not L2 switching or L3 routing.


Like several people here, I’ve also been interested in setting up an SSO solution for my home network, but I’m struggling to understand how it would actually work.

Let’s say I set up an LDAP server. I log into my PC, and now my PC “knows” my identity from the LDAP server. Then I navigate to the web UI for one of my network switches. How does SSO work in this case? The way I see it, there are two possible solutions.

  • The switch has some built-in authentication mechanism that can authenticate with the LDAP server or something like Keycloak. I don’t see how this would work as it relies upon every single device on the network supporting a particular authentication mechanism.
  • I log into and authenticate with an HTTP forwarding server that then supplies the username/password to the switch. This seems clunky but could be reasonably secure as long as the username/password is sufficiently complex.

I generally understand how SSO works within a curated ecosystem like a Windows-based corporate network that uses primarily Microsoft software for everything. I have various Linux systems, Windows, a bunch of random software that needs authentication, and probably 10 different brands of networking equipment. What’s the solution here?


I decided to give up on it. Looking through the docs, I found they recommend restarting it at least daily, preferably hourly, due to “reasons.” I don’t know if it has a memory leak or some other issue, but that was reason enough for me not to use it.

I installed TubeArchivist, and it suits my needs much better. Not only do I get an archive of my favorite channels, but when a new video is released, it gets automatically downloaded to my NAS and I can play it locally without worrying about buffering on my painfully slow internet connection.


Invidious - Can’t Subscribe
I just set up a local instance of Invidious. I created an account, exported my YouTube subscriptions, and imported them into Invidious. The first time I tried, it imported 5 of my 50 or so subscriptions. The second time, it imported 9. Thinking there might be a problem with the import function, I decided to add each subscription manually. Every time I click "Subscribe," the button switches to "Unsubscribe," then immediately switches back to "Subscribe." If I look at my subscriptions, it was never added.

My first thought was a problem with the PostgreSQL database, but that wouldn't explain why *some* subscriptions work when I import them. I tried rebooting the container, and it made no difference.

I'm running Invidious in an Ubuntu 22.04 LXC container in Proxmox. I installed it manually (not with Docker). It has 100GB of HDD space, 4 CPU cores, and 8GB of memory. What the hell is going on?
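For what it's worth, my next troubleshooting step is to follow the service log while clicking Subscribe and see what error actually comes back (the unit name below is just a guess at what a manual install would register; adjust it for your setup):

# Follow the Invidious log live while reproducing the failing Subscribe click.
# "invidious.service" is a placeholder; use whatever unit your manual install created.
sudo journalctl -u invidious.service -f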

Hosting private UHD video
I have a decent amount of video footage that I'd like to share with friends and family. My first thought was YouTube, but these are all home videos that I really don't want to share publicly. A large portion of my footage is 4k/60, so I'm ideally looking for a solution where I can send somebody a link and it gives a "similar to YouTube" experience when they click on it. By "similar to YouTube," I mean that the player automatically adjusts the video bitrate and resolution based on their internet speed. Trying to explain to extended family how to lower the bitrate if the video starts buffering isn't really an option. It needs to "just work" as soon as the link is clicked; some of the individuals I'd like to share video with are very much *not* technically inclined.

I'd like to host it on my homelab, but my internet connection only has a 4Mbit upload, which is orders of magnitude lower than my video bitrate, so I'm assuming I would need to either use a 3rd-party video hosting service or set up a VPS with my hosting software of choice. Any suggestions? I prefer open-source, self-hosted software, but I'm willing to pay for convenience.

If it’s really impossible to add an extra drive, are you able to attach an external drive or map a networked drive that has space for your VMs and LXCs?

In your situation, what I would probably do is back up all my VMs to my NAS, replace the hard drive in my Proxmox hypervisor, re-install a fresh copy of Proxmox on the new drive, and restore the VMs back to my new Proxmox installation. If you don’t have a NAS, you could do this with a USB-attached hard drive, too.
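As a rough sketch of that workflow from the command line (the VM ID, storage names, and dump filename are placeholders; the Proxmox web UI can do all of this, too):

# Back up VM 100 to a storage target that lives on the NAS (placeholder names).
sudo vzdump 100 --storage nas-backups --mode snapshot --compress zstd

# After reinstalling Proxmox on the new drive, restore the dump to the same VM ID.
# LXC containers use "pct restore" instead of "qmrestore".
sudo qmrestore /mnt/pve/nas-backups/dump/vzdump-qemu-100-example.vma.zst 100 --storage local-lvm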

Ideally, though, you should have separate drives for your Proxmox boot disk and your VM storage. Even if you’re using an SFF PC that doesn’t have an extra drive bay, could you double-sided-tape an SSD to the bottom of the case and use that as your storage drive? I’ve certainly done it before.



I have a full-height server rack with large, loud, power-inefficient servers, so I can’t provide much of a good suggestion there. I did want to say that you might want to seriously reconsider using a single 10TB hard drive.

Hard drives fail, and with a single drive, a failure means your data is gone. Using several smaller drives in an array provides redundancy: if one drive fails, parity information on the other drives lets the array keep going. As long as you replace the failed drive before anything else fails, you don’t lose any data. There are multiple ways to do this, but I’ll use RAID as an example. In RAID5, one drive’s worth of capacity goes to parity (the parity itself is spread across all the drives). If any one drive fails, the array keeps running (albeit slower); you just replace the failed drive and let your controller rebuild the array. So if you have four 4TB drives in RAID5, you get 12TB of usable space. RAID6 can survive two failed drives, but you also lose two drives’ worth of space to parity, so the same four drives would be more fault-tolerant but only give you 8TB.
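On Linux, a software RAID5 like that example might be built with something along these lines (mdadm is just one way to do it, and the device names are placeholders):

# Hypothetical example: four 4TB drives in RAID5.
# Usable capacity = (4 - 1) x 4TB = 12TB; one drive's worth goes to parity.
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde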

There are many different RAID configurations; far too many for me to go into them all here. You also have something called ZFS, which is a file system with many similarities to RAID (and a LOT of extra features… like snapshots). As an example, I have 12 10TB hard drives in my NAS. Two groups of 6 drives are configured as RAIDZ2 (similar to RAID6), for a total of 40TB usable space in each array. Those two arrays are then striped (like RAID0, so that data is written across both arrays with no redundancy at the striped level). In total, that means I have 80TB of usable space, and in a worst-case scenario, I could have 4 drives (two in each array) fail without losing data.
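For reference, a layout like mine boils down to a single zpool command, roughly like this (disk names are placeholders; ZFS stripes data across the two RAIDZ2 vdevs automatically):

# Hypothetical sketch: two 6-disk RAIDZ2 vdevs in one pool.
# Each vdev can lose up to two disks; data is striped across both vdevs.
sudo zpool create tank \
    raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
    raidz2 /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl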

I’m not suggesting you need a setup like mine, but you could probably fit three 4TB drives in a small case, use RAID5 or ZFS RAIDZ1, and still have some redundancy. To be clear, RAID is not a substitute for a backup, but it can go a long way toward ensuring you never need to use your backup.


What kind of issues do you have with your ISP? I live in a rural area, so my ISP options are limited; I have a VDSL connection supplemented by Starlink. Starlink uses CGNAT, so I can’t really host anything over it unless I use something like ZeroTier or Tailscale, but my VDSL connection works pretty well as long as I make sure to drop the bitrate to something that fits in my 4Mbit upload. Anything that accepts incoming connections sits behind an Nginx reverse proxy, and my routing policy forces Nginx onto the DSL connection.
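For anyone curious, on a plain Linux host that kind of source-based policy routing boils down to something like this (the addresses, interface, and table number are placeholders rather than my actual config; I do the equivalent in my router):

# Hypothetical sketch: route traffic sourced from the reverse proxy's IP out
# through the DSL gateway using a dedicated routing table (100).
sudo ip route add default via 192.168.1.1 dev eth0 table 100
sudo ip rule add from 192.168.1.50/32 table 100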

Not really related to my original post, but I’ve spent way too much time tinkering with my network, so I was curious.


I’d guess that Plex uses ffmpeg internally, which would be the same as Jellyfin. I’ve been looking at both the P2000 and P4000, but I’m leaning a bit toward the T1000 because of the newer architecture. Good to hear that the P2000 is working for you.


Do you transcode 4k with tonemapping? My P400 does a great job as long as tonemapping is turned off, but that doesn’t do much to help me play HDR content. A GTX 1070 would be a great solution, and cheaper than some of the other cards I’m looking at, assuming it can do what I need it to.

I usually only ever have 1 concurrent stream, too. It’d be nice to have a GPU that could support 2 just in case both of us in my household want to use Jellyfin at the same time, but it’s certainly not essential.


GPU for 4k Transcoding in Jellyfin
I'm starting to get more and more HDR content, and I'm noticing an issue with my Jellyfin server. In nearly all cases, the HDR content has to be transcoded and tone mapped, and all of it is 4k. My little Quadro P400 just can't keep up. Encoder and decoder usage hovers around 15-17%, but the GPU core usage is pinned at 100% the entire time, and my framerate doesn't exceed 19fps, which makes the video skip so badly it's unwatchable. What's a reasonable upgrade? I'm thinking about the P4000, but that might be excessive. Also, it needs to fit in a low-profile slot.

Edit: I'm shocked at how much good feedback I received on this post. Hopefully someone else will stumble on it in the future and be able to learn something. Ultimately, I decided to purchase a used RTX A2000 for just about $250. It's massively overkill for transcoding/tone mapping 4k, but once I'm brave enough to risk breaking my Proxmox install by setting up vGPU, I'm hoping to take advantage of the Tensor cores for AI object detection in my Blue Iris VM. Also, the A2000 supports AV1, and while I don't need that at the moment, it will be nice to have in the future, I think.

Final Edit: I replaced the Quadro P400 with an RTX A2000 today. With the P400, transcoding 4k HEVC HDR to 4k HEVC (or h264) SDR with tone mapping resulted in a transcode rate of about 19fps at 100% GPU usage. With the A2000, I'm getting a transcode rate of about 120fps at around 30% GPU usage; plenty of room for growth if I add 1 or 2 users to the server. For $250, it was well worth the upgrade.
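For anyone comparing cards, a rough way to sanity-check raw transcode throughput outside of Jellyfin is an ffmpeg run like the one below (the filename is a placeholder, and it skips the tone-mapping filter, so real-world numbers with tone mapping will come in lower):

# Hypothetical benchmark: hardware-decode a 4k HEVC sample, re-encode it with
# NVENC, and discard the output; the fps ffmpeg reports is the transcode rate.
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i sample-4k-hdr.mkv \
       -c:v hevc_nvenc -preset p5 -f null -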

Thanks! Adapting the command you gave to work with snap will be fairly easy. Regarding a backup of config.php, I’ve tried to do that, but with a snap install, I get a permission denied error when I try to enter the config directory, and you can’t “sudo cd.” I’ll try logging in as root or changing permissions.
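For anyone who finds this thread later, here's roughly what I expect the snap-adapted commands to look like (the --type flag and the config path are my best guesses and not verified against the snap build):

# Hypothetical sketch: set the preview limits via the snap-wrapped occ so the
# values are written as integers rather than strings.
sudo nextcloud.occ config:system:set preview_max_filesize_image --value="-1" --type=integer
sudo nextcloud.occ config:system:set preview_max_memory --value="-1" --type=integer

# Reading config.php without "sudo cd" — assumed snap config location.
sudo cat /var/snap/nextcloud/current/nextcloud/config/config.php

I'm not sure occ can write a literal null for preview_max_x/preview_max_y, which is part of why I want to get at config.php directly.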


Nextcloud - Preview Settings as Snap in Ubuntu
I recently set up Nextcloud, and so far I'm really enjoying it. With the exception of Gmail and Backblaze, I'm no longer using any online services that aren't self-hosted on my own hardware; Nextcloud has allowed me to get rid of the last few Google services I was using.

One issue I'm having is that images I upload to Nextcloud don't get thumbnails when the image is large. My phone takes photos at 200MP, so this affects a significant number of my photos. I've been researching the problem, and I think I need to set the following:

'preview_max_x' => null
'preview_max_y' => null
'preview_max_filesize_image' => -1
'preview_max_memory' => -1

I'm running Nextcloud on a Proxmox hypervisor with 32 cores and 128GB of memory, so I'm not concerned about using system resources; I can always allocate more. The issue is that I installed Nextcloud as a snap in Ubuntu Server. The last time I tried to use nextcloud.occ to change a configuration option, it set a string as an array and triggered a bunch of PHP errors. As far as I can tell, I need to do something like this:

sudo nextcloud.occ config:[something, maybe system]:set preview_max_x [some data goes here]

How do I format this so that nextcloud.occ inserts the variable into my PHP config properly? Any examples of a nextcloud.occ command would be very much appreciated.

You’ve gotten some good advice regarding VPNs, so I won’t go into that, but if you do decide to open SSH or any other port, I would encourage you to spend some time setting up a firewall to block incoming connections. I have several HTTPS services open to the world, but my firewall only allows incoming connections from whitelisted IP ranges (basically just from my cell phone and my computer at work). The number of blocked incoming connections is staggering, and even if they’re not malicious, there is absolutely no legitimate reason for someone other than myself or the members of my household to be trying to access my network remotely.
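As an illustration of the idea, the same kind of allow-list on a Linux host might look roughly like this with nftables (the address ranges are documentation placeholders, not my real whitelist):

# Hypothetical allow-list: drop inbound traffic by default, keep loopback and
# established connections, and only let whitelisted ranges reach HTTPS.
sudo nft add table inet filter
sudo nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'
sudo nft add rule inet filter input iif lo accept
sudo nft add rule inet filter input ct state established,related accept
sudo nft add rule inet filter input ip saddr '{ 203.0.113.0/24, 198.51.100.0/24 }' tcp dport 443 accept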


Lemmy bandwidth requirements
I've been considering the idea of hosting my own instance of Lemmy, solely for my own use, maybe with 1 or 2 family members using it as well. I've seen several discussions regarding the requirements for system resources, but not much regarding bandwidth. I have an abundance of processing power, memory, and storage space in my homelab, but my internet connection is terrible. Not much available where I live. I have a 40/3 VDSL connection and a Starlink connection, but neither is particularly good in terms of upload. Seems like a VPS would be a good solution, but to me, that kind of defeats the purpose of self-hosting. I want to use my own hardware. So, for a personal-use Lemmy instance, what kind of bandwidth is recommended? I know my connection would be fine for 1 or 2 users, but I'll admit I'm not entirely sure how servers sync with each other in a federated network, and I could see that using a ton of bandwidth.