I am building my personal private cloud. I am considering using second-hand Dell OptiPlexes as worker nodes, but they only have one NIC, and I’d need a contraption like this for my redundant network.
Then this wish came to mind. Theoretically, such a one-box solution could be faster than gigabit, too.
If you have a bunch of nodes, what do you need redundant NICs for? The other nodes should pick up the slack.
It’s unlikely for the NIC or cable to suddenly go bad. If you only have one switch, you’re not protected against its failure, either.
There are still tons of reasons to have redundant data paths down to the switch level.
At the enterprise level, we assume even the switch can fail. As an additional note, only some smart/managed switches (typically the ones with removable modules, costing five to six figures USD per chassis) can run a firmware upgrade without interrupting network traffic.
So between outright failures and staying up through an upgrade, you absolutely want two switches if that’s your jam.
On my home system, I actually have four core switches: a Catalyst 3750X stack of two nodes for L3 and 1Gb/s switching, and then all my “fast stuff” is connected to a pair of ES-16-XGs, each of which has a port channel of two 10G DACs back to the Catalyst stack, with one leg to each stack member.
To the point about NICs going bad - you’re right, it’s infrequent, but it can happen, especially with consumer hardware rather than enterprise hardware. Also, at the 10G fiber level, though infrequent, you still see SFPs and DACs go bad at a higher rate than NICs.
I plan to have 2 switches.
Of course, if a switch fails, client devices connected only to that switch would drop out, but any computer connected to both switches should have link redundancy.
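On the Linux side, that kind of dual-homing is usually just an active-backup bond with one leg per switch. A minimal sketch with iproute2, where the interface names (eno1, enp1s0) and the address are placeholders for your own setup:

```
# create an active-backup bond with link monitoring every 100 ms
ip link add bond0 type bond mode active-backup miimon 100

# enslave one NIC per switch (interfaces must be down while being enslaved)
ip link set eno1 down && ip link set eno1 master bond0
ip link set enp1s0 down && ip link set enp1s0 master bond0

# bring the bond up and address it
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0

# see which leg is currently active
cat /proc/net/bonding/bond0
```

Active-backup needs no configuration on the switch side, which is why it works across two completely independent switches.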
Get a 10GbE NIC and Open vSwitch.
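If you go the Open vSwitch route, putting the two uplinks into a bond on an OVS bridge is only a couple of commands. A rough sketch, where br0, bond0, eth0, and eth1 are placeholder names and the bond mode depends on what your switches support:

```
# create a bridge and a two-port bond for the uplinks
ovs-vsctl add-br br0
ovs-vsctl add-bond br0 bond0 eth0 eth1 bond_mode=active-backup

# optional: use miimon instead of carrier detection for link monitoring
ovs-vsctl set port bond0 other_config:bond-detect-mode=miimon

# verify the layout
ovs-vsctl show
```

active-backup works fine across two independent switches; balance-slb or balance-tcp (with lacp=active) generally wants both legs on the same switch or a stacked/MLAG pair.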
If those are gigabit, I think I have that exact adapter. I have never used it in production, but I have not run into any issues using it with a laptop when diagnosing. Theoretically you can connect hosts directly to each other via USB3, à la Level1, and get really fast throughput, but I have not even started investigating this.
Adding to this - I have those adapters too, and FYI they don’t support jumbo frames.
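For anyone wanting to check their own adapter, it only takes a minute; a quick sketch, assuming the USB NIC shows up as eth0 and 192.168.1.1 is another host on the same segment:

```
# try to raise the MTU; adapters without jumbo frame support will refuse
ip link set dev eth0 mtu 9000 || echo "jumbo frames not supported"

# confirm what actually got applied
ip link show dev eth0 | grep mtu

# verify end to end: a 9000-byte MTU leaves 8972 bytes of ICMP payload
ping -M do -s 8972 192.168.1.1
```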
The Level1 video shows Thunderbolt networking, though. It is an interesting concept, but it requires nodes with at least two Thunderbolt ports in order to have more than two nodes.
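For anyone curious, Thunderbolt networking on Linux is fairly simple once the thunderbolt-net driver is loaded; a rough sketch of a point-to-point link between two hosts, where the interface name (often thunderbolt0, but it can vary by distro) and the addresses are placeholders:

```
# on host A (host B mirrors this with 10.77.0.2/30)
modprobe thunderbolt-net            # kernel driver for IP over Thunderbolt
ip link show                        # the link usually appears as thunderbolt0
ip addr add 10.77.0.1/30 dev thunderbolt0
ip link set thunderbolt0 up

# rough throughput check once both ends are up
iperf3 -s                           # on host A
iperf3 -c 10.77.0.1                 # on host B
```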
You are right, I typed the wrong thing.
That product will never exist, as there are only a handful of customers who would want it and even fewer who would pay for it.
Also, look up the MTBF reports. It’s more likely that all your client systems will fail before a switch does.
I’m going to go a different route than your question. If you have a spare M.2 slot and room in your PC, you can install an M.2 network adapter. I recently installed an M.2-to-2.5GbE adapter in a Dell 3060 SFF as a proof of concept at home for getting a Proxmox Ceph cluster working over 2.5GbE.
I used this adapter. https://www.ebay.com/itm/256214788974?mkcid=16&mkevt=1&mkrid=711-127632-2357-0&ssspo=96RQC3CqQ_u&sssrc=4429486&ssuid=9BfwgvpgRMG&var=&widget_ver=artemis&media=COPY
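In case it helps anyone trying the same thing, once the card is seated it’s worth sanity-checking it and giving Ceph its own leg; a rough sketch, where the interface name (enp2s0) and the 10.10.10.0/24 cluster subnet are just placeholders for whatever your node uses:

```
# confirm the M.2 NIC was detected and negotiated 2.5GbE
lspci | grep -i ethernet
ethtool enp2s0 | grep -i speed

# give the Ceph network its own interface and subnet
ip addr add 10.10.10.11/24 dev enp2s0
ip link set enp2s0 up
```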
This is the way to do it for mini PCs, in my experience. The only catch would be if the box you’re using only allows a whitelist of WLAN cards, but I haven’t run into any that does that yet.
Why not just use a separate switch and wireless AP for redundancy? Wi-Fi can be your backup if your wired switch goes down. Assuming your Dells have Wi-Fi cards, that is.