A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.
Rules:
Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.
No spam posting.
Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it’s not obvious why your post topic revolves around self-hosting, please include details to make it clear.
Don’t duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).
No trolling.
Resources:
Any issues on the community? Report them using the report flag.
Questions? DM the mods!
I have no experience with what you are trying to achieve, but RDMA and related technologies (InfiniBand, QLogic, SR-IOV, RoCE) are not it. These are network technologies that permit high-bandwidth/low-latency data transfer between hosts. Most of them bypass the IP stack entirely.
InfiniBand is a network stack that enables RDMA; its only vendor is now NVIDIA, which acquired Mellanox. QLogic was another vendor, but it was acquired by Intel, which tried to market the technology as Omni-Path before it was spun off to Cornelis Networks.
SR-IOV is a way to share an InfiniBand card with a virtual machine on the same host.
RoCE is an implementation of the RDMA software stack over Ethernet instead of InfiniBand.
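If it helps to make this concrete: you can check whether a host exposes any RDMA-capable devices at all (InfiniBand or RoCE) with a quick look at sysfs. This is just a minimal sketch; `ibv_devinfo` from the rdma-core tools is the proper way to dig into per-port details if you have the package installed.

```shell
# Quick check: does this host have any RDMA-capable interfaces?
# The kernel exposes them under /sys/class/infiniband regardless of
# whether the link layer is InfiniBand or Ethernet (RoCE).
if [ -d /sys/class/infiniband ] && [ -n "$(ls -A /sys/class/infiniband 2>/dev/null)" ]; then
    ls /sys/class/infiniband
    # For per-device details (ports, link layer, firmware), install
    # ibverbs-utils and run: ibv_devinfo
else
    echo "no RDMA devices found"
fi
```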
I’m fairly sure there’s a way to provide compatible PCIe devices over IP on a network, or “some network” (if you’re bypassing the IP stack, perhaps). I just don’t know what it’s called, and I’m getting more confused about whether RDMA support can do this or not. Essentially, I want to leverage what SR-IOV allows me to do (create virtual functions of eligible PCIe devices) and pass them over IP or some other network tech to VMs/CTs on a different physical host.
I read a bit more and I’d like to add:
RoCE and iWARP are the technologies with which one would be able to route DMA over the network. The bandwidth of the network is the bottleneck, but we’ll ignore that for now.
SR-IOV is a way to share virtual functions of PCIe devices on the same host.
Regardless of whether one uses IB or iWARP, data can also be routed over the network between a PCIe device attached to one host and another host. I still have to research the specifics, but I’m now positive that it can be done.
Thanks
I believe what you’re looking for is RoCE: https://en.wikipedia.org/wiki/RDMA_over_Converged_Ethernet
But, I don’t know if there’s any FOSS/libre/etc hardware for it.
So it is RDMA.
Indeed, I have come across RoCE, and support seems to be quite active on Debian. I was looking at QLogic hardware for this, and whilst I know that firmware for such stuff is really difficult to find, I’m fine with just FOSS support on Debian.
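For what it’s worth, here’s a rough sketch of getting the RDMA userspace stack onto Debian and smoke-testing connectivity between two hosts. Package names are from the current Debian repos (verify against your release), and the server address is just an example placeholder.

```shell
# rdma-core provides the userspace verbs stack, ibverbs-utils the
# diagnostic tools, rdmacm-utils the rping connectivity tester.
sudo apt install rdma-core ibverbs-utils rdmacm-utils

# On the server side, listen for RDMA ping-pong connections:
rping -s -v

# On a client, point at the server's address (example IP, replace
# with your server's actual address):
rping -c -a 192.168.1.10 -v
```

If `rping` completes its ping-pong exchange, the RDMA path between the two hosts works end to end, independent of whatever you layer on top of it.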
I think I misunderstood what exactly you wanted. I don’t think you’re getting remote GPU passthrough to virtual machines over RoCE without an absolute fuckton of custom work. The only people who can probably do this are Google or Microsoft. And they probably just use proprietary NVIDIA implementations.
Well, I’m not a systems engineer, so I probably don’t understand the scale of something like this.
With that said, is it really hard to slap TCP/IP on top of SR-IOV? That is literally what I wanted to know, and I thought RDMA could do that. Can it not?