I’ve been looking into self-hosting LLMs or Stable Diffusion models using something like LocalAI and/or Ollama as the backend, with LibreChat as a frontend.

Some questions to get a nice discussion going:

  • Do any of you have experience with this?
  • What are your motivations?
  • What are you using in terms of hardware?
  • Considerations regarding energy efficiency and associated costs?
  • What about renting a GPU? Privacy implications?
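
For context, here’s a rough sketch of what the Ollama piece of such a stack can look like once it’s running, using its default local HTTP API on port 11434. The model name is just an example and would need to be pulled first (e.g. with `ollama pull llama3`); a frontend like LibreChat would then talk to this same local endpoint, so prompts never leave the machine:

```python
# Minimal sketch (not a definitive setup): query a locally running Ollama
# server from Python. Assumes Ollama is installed and serving on its default
# port 11434, and that a model has already been pulled, e.g.:
#   ollama pull llama3
# The model name "llama3" is just an example.
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send a single non-streaming prompt to the local Ollama API."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Everything stays on localhost; no prompt data leaves the machine.
    print(ask_local_llm("Why self-host an LLM?"))
```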
@robber@lemmy.ml (creator):

Sounds like a rather frustrating journey for you.

@snekerpimp@lemmy.world:

It has been. I got into this because I liked picking up kick-ass enterprise hardware really cheap and playing around with what it could do. Used enterprise hardware is so damn expensive now that it’s cheaper and easier to do everything with consumer products and use the RX 6700 in my gaming rig. I just don’t want that running LLMs and staying on all the time.
