cross-posted from: https://lemmy.world/post/1474932

Hi there.

I wanted to run LLMs locally on my server (for better privacy), and was wondering if:

  1. I could use an Intel Arc or AMD GPU - these are often less expensive, and AMD has open-source drivers, which is something I like.
  2. A PCIe Gen 3 x4 slot would be enough (it’s a physical x16 slot that runs at x4 speed) - this is an important consideration for me.
  3. 8 GB of memory on the GPU (VRAM, I believe it’s called) would be enough.

I’m looking at language models to fine-tune on my Reddit and Lemmy content, with the aim of making them write like me (and maybe even better than me? Who knows). I don’t quite know which models I’ll train, or how I’ll do so (I certainly won’t be writing anything from scratch), but with the explosion of FOSS AI models, I was wondering whether something like this would be possible within the hardware constraints mentioned above.
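For concreteness, the kind of thing I have in mind is parameter-efficient (LoRA) fine-tuning of an existing small model, roughly along these lines. This is only a rough sketch assuming Hugging Face transformers + peft; the model name and the comments.txt file are placeholders, not anything I’ve settled on:

```python
# Rough sketch: LoRA fine-tuning of a small causal LM on my own comments,
# using Hugging Face transformers + peft. Model name and data file are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_name = "facebook/opt-1.3b"  # placeholder; any small FOSS model could go here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA: freeze the base model and train only small low-rank adapter matrices,
# which keeps the memory needed for gradients and optimizer state very low.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         lora_dropout=0.05,
                                         task_type="CAUSAL_LM"))

# comments.txt (hypothetical): one Reddit/Lemmy comment per line.
dataset = load_dataset("text", data_files={"train": "comments.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_data = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           num_train_epochs=1,
                           fp16=True,          # mixed precision to save VRAM
                           logging_steps=10),
    train_dataset=train_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

For bigger models in 8 GB of VRAM, a quantized setup (QLoRA via bitsandbytes) would probably be needed, but that’s the general shape of what I’m imagining.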

Does the speed of the connection between the GPU and the CPU really matter in such applications?

Thanks!
