Anyone who claims to speak on behalf of the universe is either a liar or a fool
Search engines are your friend
https://old.reddit.com/comments/vj6u54/comment/idhfltv?context=3
From the comment:
This is how I do it:
boot into gparted (make bootable usb https://gparted.org/liveusb.php)
open terminal
run lvdisplay to check the proxmox root and data lv name and path
sudo lvdisplay
resize the root lv (choose size, no curly brackets)
sudo lvreduce --resizefs -L {size}G /dev/pve/root
resize the data lv to use all available space
sudo lvresize -l +100%FREE /dev/pve/data
reboot into proxmox
pray
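Put together, the quoted steps amount to something like this. A sketch only: it assumes the default Proxmox LVM layout with a volume group named pve, and the 40G root size is just an example value.

```shell
# Run from the GParted live environment, NOT the running Proxmox host --
# a mounted root filesystem cannot be shrunk online.
sudo lvs pve                                   # confirm the root and data LVs exist
sudo lvreduce --resizefs -L 40G /dev/pve/root  # shrink filesystem + LV (40G is an example)
sudo lvresize -l +100%FREE /dev/pve/data       # grow the data thin pool into the freed space
sudo lvs pve                                   # verify the new sizes before rebooting
```

These commands are destructive if mistyped, so double-check the LV names from lvdisplay before running anything.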
I do not suggest doing this. Just add another disk.
also: https://forum.proxmox.com/threads/can-i-remove-local-and-local-lvm.122850/#post-534378
Definitely ditch GoDaddy ASAP; they are one of the worst companies to deal with.
I suggest https://njal.la/
So, you’re running two exclusive DNS resolvers, one on your router and one on your pihole box? Or just one on the pihole box and using the local address of it for all LAN dns?
Why have a firewall on the pihole box at all? As long as it isn't in the DMZ you shouldn't need it. I would try disabling it completely and see if DNS on your wg peers starts working.
Try using the lan address of the dns server instead of the wireguard address.
What are you using for dns? You may need to allow access from all interfaces if your dns server is also a wireguard peer
if you’re on pihole: https://docs.pi-hole.net/ftldns/interfaces/
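If you are on Pi-hole v5, the setting behind that docs page lives in setupVars.conf; a minimal sketch (check the linked docs for your version, since the permissive "all" value answers queries from every origin and should only be used behind a firewall):

```shell
# /etc/pihole/setupVars.conf (Pi-hole v5) -- listening behavior.
# "all" = answer queries from all interfaces/origins, which is what a
# WireGuard peer querying the LAN address needs; "local" and "single"
# are the more restrictive defaults.
DNSMASQ_LISTENING=all
```

Then apply the change with `pihole restartdns`.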
Why do you suggest having separate devices for storage/compute?
because storage never needs to be upgraded beyond drive capacities, unless you need a bunch of NVMe storage, which requires more PCIe lanes. The only reason you should have to change a board or CPU in a storage server is if it dies. If you need new hardware for its new features, it's much easier to upgrade a different system than to take your storage offline to do it. Whatever GPU you put in there now is going to be dated in a couple of years when you may want to upgrade.
Have you had instances of memory corruption because you didn’t use ECC? I was under the impression from r/selfhosted that this problem was blown out of proportion.
no, because I don't run big storage pools on desktop hardware. You may be able to run non-ECC memory for a long time and not get any data corruption, but that doesn't mean you won't. Also, it's not always obvious when there's corruption, especially in older data.
The reason I mentioned the E-key slot is because that way, I don’t have to use a PCIe slot for the adapter, which I might use for something else. I have no need for 10Gbe.
what are you going to do with those two x1 sockets? They're really not good for anything other than USB and 1Gbit (maybe 2.5Gbit?) networking, or maybe a sound card. Those M.2 adapters are more suited to mini PCs that don't have any other PCIe expansion options. Not saying you can't or shouldn't do it, but why? Especially when 10Gbit options are much cheaper if you buy used.
If it’s old, it’s not efficient.
Efficiency is relative. I'm not suggesting you get a high-clock, many-core server chip (though you could technically limit the clock speed and TDP so it uses about as much power as a typical desktop); there are plenty of low-power options that are 'old' (read: ~4 years old is not that old). Maybe look into some Xeon-D embedded boards solely for your storage system. Many of those boards were made specifically for storage appliances, and they can be had pretty cheap on eBay or wherever.
If it’s new, it’s prohibitively expensive.
I'd say $120 is too expensive for this motherboard. It seems like it should be ~$60 with those specs, not to mention it being last gen. Even though you're buying new, you have an upgrade ceiling, so why not buy gear a year or two older with more features and expansion?
Consumer hardware solves both of these problems. Yes, we don't have iLO, but if someone is really motivated, they can use PiKVM. I have yet to figure out if I can run PiKVM without the HATs on a different SBC, but I think it can be done.
FYI, iLO is HP's out-of-band management (IPMI) implementation. PiKVM is definitely cool, but it's just adding more cost and another point of failure to your setup.
For me personally, I’ll be using said board in a NAS. With this board, I would no longer need an LSI HBA hogging my x16 port, which means if I ever decide to train ML models, I can get a GPU for myself.
If you want to train models and run other GPU compute stuff like that, I would definitely shoot for a more current-gen box just for that. In my (good) opinion, you should not run heavy compute loads on a server that is also serving/backing up your data,
I do not see why I absolutely need ECC memory for a NAS. I’m not going to store PBs of media/documents, it’ll likely be under 30TB (that’s a conservative estimate). I thought ECC memory is a nice-to-have (this is no enterprise workload).
ESPECIALLY if you’re not going to use ECC memory. No reason to put your important data at risk of corruption like that. I highly recommend holding out for something simple with DDR5 and a discrete GPU of your choosing for any actual compute workloads like that.
Prices for newer hardware like this may fall before you’re even ready to build this system, so keep that in mind. You’ll also have a much easier time selling it over older gen hardware in the future if you change your mind about whatever.
A 2230 E-key slot.
from the original post. Why would you want to do this in a server? If you got a different board whose sockets weren't x1, you could just get a 2.5GbE card… or, you know, 10Gbit.
Don't buy a consumer board for a server unless you aren't using it to store important data. Say, a cloud gaming or Plex server with a GPU would be fine on consumer hardware if you don't need out-of-band management.
Also, ECC RDIMMs are much easier to come by than ECC UDIMMs, but they only work with EPYC or Xeon chips. I say if you're going for storage, definitely buy enterprise gear; if you're going for raw CPU/GPU compute, you should be fine with consumer hardware.
If you just want AM4/AM5 Ryzen chips, ASRock Rack makes some good boards with IPMI.
here's their X570 board; you can browse their site for their AM5 boards too, and they don't have three gimped x16@x1 sockets.
Create a single-drive zpool on each disk, using the same pool name on every node. Then add them in Proxmox: Datacenter -> Storage -> Add -> ZFS, choose the pool you created, and select the nodes with those disks.
Then you can proceed to set up HA and replication.
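A rough sketch of those steps, assuming a pool named "tank" and placeholder disk paths and node names; pvesm is the CLI equivalent of the Datacenter -> Storage dialog:

```shell
# On every node that should participate, create a pool with the SAME name:
zpool create -o ashift=12 tank /dev/disk/by-id/ata-EXAMPLE-DISK

# Then, once, from any node: register the pool as storage,
# restricted to the nodes that actually have it.
pvesm add zfspool tank --pool tank --content images,rootdir --nodes node1,node2
```

With the storage visible on multiple nodes under one ID, the Datacenter -> Replication jobs and HA groups can target it.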