• 3 Posts
  • 5 Comments
Joined 2Y ago
Cake day: Nov 25, 2022

Chaining routers and GUA IPv6 addresses
Hey fellow self-hosting lemmoids

*Disclaimer: not at all a network specialist*

I'm currently setting up a new home server in a network where I'm given GUA IPv6 addresses in a /64 subnet (which means, if I understand correctly, that I can set up many devices in my network that are reachable from the outside world via a fixed IP). Everything works so far; my services are reachable.

My problem now is that I need to use the router provided by my ISP, and it's - big surprise here - crap. The biggest concern for me is that I don't have fine-grained control over firewall rules: I can only open ports in groups (e.g. "Web", "All other ports"), and only network-wide, not for specific IPs.

I'm thinking about getting a second router with a better IPv6 firewall and using the ISP router only as a "modem". Now I'm not sure how things would play out regarding my GUA addresses. Could a potential second router also assign devices addresses in that globally routable space directly? Or would I need some sort of NAT? I've seen some modern routers with the capability of "pass-through" IPv6 address allocation, but I'm unsure whether the router's firewall would still work in such a configuration.

In IPv4 I used to have a similar setup, where router 1 would simply forward all packets for certain ports to router 2, which would then decide which device should receive them.

Do any of you have experience with a similar setup? And if so, could you even recommend a router? Many thanks!
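As an aside, the "many devices" part is easy to sanity-check with a few lines of Python. A minimal sketch, using the reserved documentation prefix 2001:db8::/32 as a stand-in for the real delegated prefix:

```python
import ipaddress

# Stand-in for the ISP-delegated /64; 2001:db8::/32 is reserved for
# documentation, so substitute your actual prefix here.
prefix = ipaddress.IPv6Network("2001:db8:0:1::/64")

# A /64 leaves 64 bits for the interface identifier: 2**64 host addresses.
print(prefix.num_addresses)  # 18446744073709551616

# Check whether a given server address actually falls inside the prefix.
server = ipaddress.IPv6Address("2001:db8:0:1::1234")
print(server in prefix)   # True
print(server.is_global)   # False only because this is the documentation
                          # range; a real ISP-delegated GUA reports True
```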


Thanks! Glad to see the 8x7B performing not too badly - I assume that’s a Mistral model? Also, do you know whether the CPU significantly affects inference speed in such a setup?


So you access the models directly via the terminal? Is that convenient? Also, do you get satisfactory inference speed and quality with a 16GB card?


Any of you have a self-hosted AI “hub”? (e.g. for LLM, stable-diffusion, …)
I've been looking into self-hosting LLMs or stable diffusion models using something like [LocalAI](https://localai.io/) and / or [Ollama](https://ollama.com/) and [LibreChat](https://www.librechat.ai/).

Some questions to get a nice discussion going:

- Any of you have experience with this?
- What are your motivations?
- What are you using in terms of hardware?
- Considerations regarding energy efficiency and associated costs?
- What about renting a GPU? Privacy implications?
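If it helps the discussion: here's a minimal sketch of what talking to a self-hosted model looks like, assuming an Ollama instance running on its default port (11434) and an already-pulled model (the model name below is just a placeholder):

```python
import json
import urllib.request

# Query a local Ollama instance via its REST API. Assumes `ollama serve`
# is running on the default port 11434.
payload = {
    "model": "llama3",          # placeholder; use whatever `ollama list` shows
    "prompt": "Why self-host an LLM?",
    "stream": False,            # return one JSON object instead of a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

Frontends like LibreChat basically wrap this kind of API call in a chat UI.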

Why exactly are the IBM dependencies a problem for you?

I guess I just like independent, community-driven distros, since there’s less room for financially motivated enshittification. Shortly after I decided to go with FCOS, Red Hat / IBM decided to close down CentOS, for example.

I can’t really find good resources on how FCOS works and what the benefits are. Does it update the system/kernel automatically, as well as the containers?

The system & kernel, yes. The whole system is basically a read-only “image” for which the devs make sure all the packages play nicely together. Packages are not updated individually; instead, whole system “images” are released periodically, which the system downloads automatically before rebooting (you decide through the config when it actually reboots). If anything goes wrong, the system is rolled back to the previous “image”.

When you go with podman, there’s a systemd service you can enable (podman-auto-update.timer) which will update the containers (i.e. pull the specified image tag). I’m not aware of a similar mechanism for Docker, so there I use Watchtower, which has been working smoothly so far.

Edit:

And what, generally, are the advantages of FCOS in your opinion?

For me, it’s the (quite safely designed) auto-updates of the base system (I just like having less repetitive work to do), the infrastructure-as-code aspect, and the container mindset (as I containerize everything anyway). Also, I just have a weakness for new, fancy stuff.


I use Fedora CoreOS on my home server and a bunch of VPSs. I migrated the home server just recently, but the first VPSs a bit more than a year ago. So far, I’ve had no problems with it. There’s a low-traffic mailing list where the devs announce security issues and breaking changes affecting the whole container stack.

I used Debian before for some years, but at some point became tired of manually updating the system (not having to do that is probably one of the biggest benefits of FCOS). It does, however, take quite some time to put your first Ignition config together, and debugging is tedious, as you have to redeploy to see whether a bug / error is gone (I used a VM for that).

I use podman on some servers and Docker on others (you can’t use both at the same time). Both have been working well so far.

I’d recommend it, but I’d also recommend taking a look at Flatcar Linux, which is more or less the same thing without the IBM dependency (which makes my stomach hurt sometimes).


Migrated my self-hosted Nextcloud to AIO and I absolutely love it
Just wanted to share my happiness. AIO is the new (at least on my timeline) installation method for Nextcloud, where most of the heavy lifting is taken care of automatically. https://github.com/nextcloud/all-in-one
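For anyone curious what "most of the heavy lifting" means in practice: the whole setup boils down to starting one mastercontainer, which then manages all the other Nextcloud containers itself. A rough sketch translated to the Docker SDK for Python; the ports and volume names follow the `docker run` command in the linked README, but check the repo for the authoritative, up-to-date version:

```python
import docker

client = docker.from_env()

# Start the AIO mastercontainer. It orchestrates the remaining Nextcloud
# containers through the Docker socket mounted below.
client.containers.run(
    "nextcloud/all-in-one:latest",
    name="nextcloud-aio-mastercontainer",
    init=True,
    detach=True,
    restart_policy={"Name": "always"},
    ports={
        "80/tcp": 80,      # HTTP, e.g. for the ACME challenge
        "8080/tcp": 8080,  # AIO admin interface
        "8443/tcp": 8443,  # AIO admin interface over HTTPS
    },
    volumes={
        "nextcloud_aio_mastercontainer": {
            "bind": "/mnt/docker-aio-config", "mode": "rw"
        },
        # AIO manages its sibling containers via the Docker socket.
        "/var/run/docker.sock": {
            "bind": "/var/run/docker.sock", "mode": "ro"
        },
    },
)
```

After that, everything else (database, Redis, Office, backups, ...) is set up from the AIO web interface on port 8080.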