Say I have a large .txt or CSV file with data I want to search, and say I have several such files.

What is the best way to index and make this data searchable? I’ve been using grep, but it is not ideal.

Is there a self-hostable Docker container for indexing and searching this? Or should I maybe just use SQL?

@h0bbl3s@lemmy.world

You can import CSV files directly into an SQLite database. Then you are free to do whatever SQL searches or manipulations you want. In my experience, Python and Go both have great SQLite libraries, but I’d be surprised if there is any language that doesn’t have a decent one.

If you are running Linux, most distros have a pretty powerful SQLite GUI browser in their repos.
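For anyone who hasn’t done this before, here is a minimal sketch of that import using Python’s built-in sqlite3 and csv modules. The file name data.csv, the table name data, and the assumption of a header row are all hypothetical:

```python
import csv
import sqlite3

# Hypothetical file/table names; assumes the CSV's first row is a header.
con = sqlite3.connect("data.db")
with open("data.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    cols = ", ".join(f'"{c}"' for c in header)
    marks = ", ".join("?" for _ in header)
    con.execute(f"CREATE TABLE IF NOT EXISTS data ({cols})")
    # executemany consumes the reader row by row, so the whole
    # file never has to fit in memory.
    con.executemany(f"INSERT INTO data VALUES ({marks})", reader)
con.commit()
con.close()
```

If you’d rather not write code at all, the sqlite3 command-line shell can do the same thing with .mode csv followed by .import yourfile.csv tablename.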

Ephera

> I’d be surprised if there is any language that doesn’t have a decent one.

Yeah, SQLite provides a library implemented in C. Because C doesn’t require a runtime, it’s possible for other languages to call into this C library. All you need is a relatively thin wrapper library, which provides an API that feels good to use in the respective language.
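That is easy to demonstrate from Python itself: instead of going through the built-in sqlite3 wrapper, a toy ctypes sketch can call the C library directly. Library lookup is platform-dependent, so treat this as illustration only:

```python
import ctypes
import ctypes.util

# Locate the same SQLite C library that every language wrapper binds
# against (resolution is platform-dependent).
path = ctypes.util.find_library("sqlite3")
lib = ctypes.CDLL(path)

# sqlite3_libversion() is part of SQLite's C API; it returns a const char*.
lib.sqlite3_libversion.restype = ctypes.c_char_p
print(lib.sqlite3_libversion().decode())  # e.g. "3.45.1"
```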

Import it into Access (Satan whispers quietly into your ear)

Eager Eagle

Excel / OnlyOffice?

I love self-hosted tools, but you can do a lot on a spreadsheet.

By the way, if the files are not too large, you can run SQL-style queries on them without even hosting a database, just by using Pandas. This avoids having to update entries and handle migrations in case the CSVs change over time.
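Strictly speaking, Pandas’ query() syntax is not SQL, but it covers the same kind of filtering. A minimal sketch, where users.csv and its age/country columns are hypothetical:

```python
import pandas as pd

# Hypothetical file and column names, purely for illustration.
df = pd.read_csv("users.csv")

# Roughly: SELECT * FROM users WHERE age > 30 AND country = 'DE'
hits = df.query('age > 30 and country == "DE"')
print(hits.head())
```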

@Dust0741@lemmy.world (creator)

Files won’t change and are hundreds of GBs

Eager Eagle

ok, database it is then

Are they roughly 55GB compressed?

Spill the beans!

@Dust0741@lemmy.world (creator)

👀
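Given immutable files of that size, a one-time chunked import keeps memory usage flat, and since the data never changes you only pay the indexing cost once. A sketch, with big.csv, the records table, and the email column all hypothetical:

```python
import sqlite3
import pandas as pd

# Hypothetical names throughout; chunking means hundreds of GBs
# never need to fit in memory at once.
con = sqlite3.connect("big.db")
for chunk in pd.read_csv("big.csv", chunksize=1_000_000):
    chunk.to_sql("records", con, if_exists="append", index=False)

# The files never change, so build the search index once up front.
con.execute("CREATE INDEX IF NOT EXISTS idx_email ON records (email)")
con.commit()
con.close()
```

SQLite’s documented database size limit is well into the terabyte range, so hundreds of GBs is not a blocker on its own.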

Anna

It depends on the size of the data and the use case. Will you be doing constant updates to it, or just reading? You mentioned you have several files, so do you need joins, and if so, roughly what is the maximum number of joins you’ll be doing per query? You said CSV, so I’m assuming the data is structured rather than semi-structured or unstructured.

A few more questions: do you need fast indexing but aren’t planning on doing any complex operations? Are you going to do a lot of OLTP operations and need ACID, or are you going the OLAP route? Are you planning on a distributed database, and if so, which two do you want from CAP? Do you want batch processing or stream processing?

I’ve got a few dozen other questions, too.
