I am a teacher and I have a LOT of different literature material that I wish to study, and play around with.

I wish to have a self-hosted, reasonably smart LLM into which I can feed all the textual material I have generated over the years. I would be interested to see whether the model can answer some of the subjective questions I have set on my exams, or write short paragraphs about the topics I teach.

In terms of hardware, I have an old Lenovo laptop with an NVIDIA graphics card.

P.S.: I am not very experienced technically. I run Linux and can do very basic stuff. I have never self-hosted anything other than LibreTranslate and a Pi-hole!

@d416@lemmy.world

The easiest way to run local LLMs on older hardware is Llamafile https://github.com/Mozilla-Ocho/llamafile
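If it helps to see it in action: as I understand the llamafile README, a running llamafile exposes an OpenAI-compatible server on localhost:8080, so you can script against it. A minimal sketch; the port is the documented default and the model name is a throwaway placeholder, so double-check against your version:

```python
# Minimal sketch: query a running llamafile server from Python.
# Assumes llamafile's default OpenAI-compatible endpoint on port 8080;
# check the llamafile README if your version differs.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "local",  # placeholder; the bundled server generally accepts any name
        "messages": [
            {"role": "user", "content": "Summarise the themes of Hamlet in one paragraph."}
        ],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```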

For non-NVIDIA GPUs, WebGPU is the way to go: https://github.com/abi/secret-llama

@dlundh@lemmy.world

I watched NetworkChuck’s tutorial and just did what he did, but on my MacBook. Any recent MacBook (M-series) will suffice. https://youtu.be/Wjrdr0NU4Sk?si=myYdtKnt_ks_Vdwo

NetworkChuck is the man

applepie

You would need a 24GB VRAM card to even start this thing up, and it would probably yield poor results anyway.

Bipta

They didn’t even mention a specific model. Why would you say they need 24GB to run any model? That’s just not true.

applepie

I didn’t say any model. Based on what he is asking, he can’t just run this on an old laptop.

Sims

You need more than an LLM to do that. You need a cognitive architecture around the model that includes RAG (retrieval-augmented generation) to store and retrieve the data. I would start with an agent network (CA) that already includes the workflow you are asking for. Unfortunately I don’t have a name ready for you, but take a look here: https://github.com/slavakurilyak/awesome-ai-agents
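To make the RAG part of that concrete (the agent frameworks in that list wrap this in a fuller workflow), here is a minimal retrieve-then-prompt sketch. The embedding model name is a common sentence-transformers default; the document chunks and the final `ask_llm` call are placeholders for your own material and backend:

```python
# Minimal RAG sketch: embed document chunks, retrieve the closest ones,
# and stuff them into the prompt for a local model.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

chunks = [  # placeholder snippets standing in for your course material
    "Notes on Romantic poetry ...",
    "Exam question bank for Victorian novels ...",
    "Lecture handout on modernist drama ...",
]
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question (cosine similarity)."""
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q  # dot product of normalized vectors = cosine similarity
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

question = "Write a short paragraph on the Victorian novel."
context = "\n\n".join(retrieve(question))
prompt = f"Answer using only this material:\n{context}\n\nQuestion: {question}"
# ask_llm(prompt)  # placeholder: hand the prompt to whatever local model you run
```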

@pushECX@lemmy.world

I’d recommend trying LM Studio (https://lmstudio.ai/). You can use it to run language models locally. It has a pretty nice UI and it’s fairly easy to use.

I will say, though, that it sounds like you want to feed perhaps a large number of tokens into the model, which will require a model made for a large context length and may require a pretty beefy machine.
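One way to gauge that before buying hardware: English prose runs at roughly four characters per token, so you can estimate how much of your material fits a given context window. A rough sketch; the 8192-token window, the reserve size, and the file name are all assumptions to adjust for your model:

```python
# Rough sketch: estimate token counts (~4 chars/token for English prose)
# and split a document so each chunk fits a model's context window.
CONTEXT_TOKENS = 8192   # assumption; check your model's spec
CHARS_PER_TOKEN = 4     # crude heuristic for English text

def split_for_context(text: str, reserve_tokens: int = 1024) -> list[str]:
    """Split text into chunks that leave room for the model's answer."""
    max_chars = (CONTEXT_TOKENS - reserve_tokens) * CHARS_PER_TOKEN
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

with open("course_notes.txt") as f:  # hypothetical file name
    parts = split_for_context(f.read())
print(f"{len(parts)} chunk(s) needed")
```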

umami_wasabi

GPT4All with the LocalDocs plugin?

@RichardoC@lemmy.world

Jan.ai might be a good starting point, or Ollama? There’s https://tales.fromprod.com/2024/111/using-your-own-hardware-for-llms.html, which has some guidance on using Jan.ai for both server and client.

@Evotech@lemmy.world

There are a few.

Very easy if you set it up with Docker.

The best is probably Ollama with Danswer as a frontend. Danswer will do all the RAG work for you, like managing and uploading documents and so on. (A quick sanity check against Ollama’s API is sketched after the links below.)

Ollama is becoming the standard self-hosted LLM runner, and you can add any models you want / can fit.

https://ollama.com/blog/ollama-is-now-available-as-an-official-docker-image

https://docs.danswer.dev/quickstart
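Once the Ollama container is up, a one-off generation against its REST API looks roughly like this. The port is Ollama’s default; the llama3 model name is just an example of something you would have pulled first, so swap in whatever fits your VRAM:

```python
# Minimal sketch: one-off generation against a local Ollama server.
# Assumes Ollama's default port 11434 and a model already pulled,
# e.g. `ollama pull llama3` (model name is just an example).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "In two sentences, what is Romanticism?",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=300,
)
print(resp.json()["response"])
```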

@s38b35M5@lemmy.world

https://matilabs.ai/2024/02/07/run-llms-locally/

Haven’t done this yet, but this is a source I saved in response to a similar question a while back.

@Sekki@lemmy.ml

While this will get you a self-hosted LLM, it is not possible to feed data to it like this. As far as I know, there are two possibilities:

  1. Take an existing model and use the literature data to fine-tune it. The success of this will depend on how much “a lot” means when it comes to the literature.

  2. Train a model yourself using only your literature data.

Both approaches will require some programming knowledge and an understanding of how an LLM works. Additionally, they will require preparing the unstructured literature data into the kind of structured data that can be used to train or fine-tune the model; a rough sketch of that preparation step follows.
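For a sense of what that preparation can look like: many fine-tuning toolkits expect one JSON object per line (a JSONL file) holding a prompt/answer pair. A minimal sketch, assuming the material is already split into question/answer pairs; the field names, file name, and example pairs are placeholders that vary by toolkit:

```python
# Minimal sketch: turn question/answer pairs into a JSONL training file.
# Field names ("prompt"/"completion") differ between fine-tuning toolkits;
# check the docs of whichever one you use.
import json

pairs = [  # hypothetical examples drawn from old exams
    ("Discuss the role of fate in Macbeth.", "Fate in Macbeth operates as ..."),
    ("Compare Austen's and Brontë's narrators.", "Austen favours an ironic ..."),
]

with open("train.jsonl", "w") as f:
    for question, answer in pairs:
        f.write(json.dumps({"prompt": question, "completion": answer}) + "\n")
```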

I’m just a CS student, so not an expert in this regard ;)

@s38b35M5@lemmy.world

Thanks for this comment.

My main drive for self hosting is to escape data harvesting and arbitrary query limits, and to say, “I did this.” I fully expect it to be painful and not very fulfilling…

I’m in the early stages of this myself and haven’t actually run an LLM locally, but the term that steered me in the right direction for what I was trying to do was “RAG” (Retrieval-Augmented Generation).

ragflow.io (terrible name, but a good product) seems to be a good starting point, though it is mainly set up for APIs at the moment. I found this link for local LLM integration and I’m going to play with it later today: https://github.com/infiniflow/ragflow/blob/main/docs/guides/deploy_local_llm.md
