• 0 Posts
  • 25 Comments
Joined 1Y ago
Cake day: Jun 10, 2023


Extra question for people who have been using it: it says the bandwidth is unlimited, but how unlimited are we talking? I was considering getting one to use as a reverse proxy into my home lab so it's reachable from the outside, which would mean a lot of bandwidth usage, media-streaming amounts.
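
For context, here is a rough back-of-envelope of what I mean by "media-streaming amounts", so there is something concrete to compare any fair-use policy against. The bitrate and viewing hours are just assumptions; adjust them to your own setup.

```python
# Rough monthly traffic estimate for streaming through the proxy.
STREAM_MBPS = 8      # assumed average bitrate of one 1080p stream
HOURS_PER_DAY = 3    # assumed daily viewing time routed through the VPS
DAYS = 30

# Mbit/s -> MB/s, times seconds watched per month, converted to GB.
gb_per_month = STREAM_MBPS / 8 * 3600 * HOURS_PER_DAY * DAYS / 1000
print(f"~{gb_per_month:.0f} GB/month for a single stream")  # ~324 GB/month
```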


That’s the one I use, exactly because of that. I know Compose and I’m not going to learn another program to do the same thing; I just want something that gives me an easier way to edit the files than SSHing into my box and using an editor.


Others have suggested Markdown-based formats; if you’re willing to go that route, you might want to look at Silverbullet.


Not the user you asked, but I’m using Silverbullet and have been loving it; it ticks every box of what I was looking for:

  • Self hosted
  • Stores files in plain markdown text format
  • You can edit those files externally and Silverbullet picks up the changes
  • Allows customization and expansion easily
  • Provides queries that allow you to extend markdown to pull data from other files
  • The queries use an SQLite DB to keep them fast, but if you delete it, it simply gets regenerated
  • Can easily be synchronized across multiple nodes by using Syncthing to sync the Markdown files

I’ve tried several, but I had a major incident and lost all of my recipes to a database corruption.

So I decided against keeping recipes in databases. I migrated to Notion, but kept looking for a replacement since that’s not self-hosted. Eventually I ran across Silverbullet, and I’ve been using it for everything; so far it’s been great. Not exactly what you asked for specifically, but it can be used for this and works great.


Portage has supported binary packages forever; back in 2012 I already had some binary packages on my system. I remember it clearly because certain things were a pain in the ass to compile, so for those I installed the binary version. It’s like Debian supporting source packages: it’s been there forever, but people don’t know about it.



I use Diun and RSS feeds. So far I’ve had different levels of success with different services.

For example, for Immich the RSS feed is a lot more useful because it lets you know when you need to run manual steps.


If you’re going to start from the default Nextcloud instead of AIO, you might as well try it on Docker. Setting it up is easy either way, but if you don’t install it with Docker, keeping it up to date is a pain in the ass.


Yup, Syncthing allows a folder to be synced to multiple places, so I don’t see any problem with that. In fact I have three computers syncing things between themselves.


So? If your laptop is off, there’s no way to sync to it. If you have a server available, you just set up Syncthing there as well.


What’s the problem with syncthing? It can keep those 3 synced perfectly fine, no?


I’ve never used Incus, but it’s not clear to me why you would choose it over Docker. You said it would be preferable if performance were better; I can already tell you it’s not. Best case is equivalent performance (since Docker runs natively), and I doubt any VM can match that.


But are there brute-force prevention mechanisms, e.g. delaying logins by a few seconds?
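
Something along these lines is what I mean; a minimal sketch, not any particular project's real implementation, just to show the idea of delaying attempts after repeated failures:

```python
# Toy brute-force mitigation: each failed login for an account increases the
# delay applied before the next attempt is even checked.
import time

FAILED: dict[str, int] = {}  # username -> consecutive failed attempts

def check_password(username: str, password: str) -> bool:
    """Stand-in for the real credential check (assumption for illustration)."""
    return password == "correct horse battery staple"

def login(username: str, password: str) -> bool:
    failures = FAILED.get(username, 0)
    if failures:
        # Exponential back-off: 1s, 2s, 4s, ... capped at 30s.
        time.sleep(min(2 ** (failures - 1), 30))
    if check_password(username, password):
        FAILED.pop(username, None)
        return True
    FAILED[username] = failures + 1
    return False
```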


Actually, Mermaid seems to be able to do everything I’m doing with PlantUML, and the syntax is very similar; I might give that a try first, since it would also work in offline mode.


I feel like facepalming myself to death for having asked such a stupid question before running an ls -a on the folder.

One last question: I’ve been reading about Plugs, because there’s one thing I use regularly that I think doesn’t exist, and I want to know whether I could implement it myself. It’s PlantUML. Essentially it would be a plug that acts on a specific block of code, like the LaTeX one, POSTs the code to a configurable URL, gets an image in return, and displays that instead.
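
Something like this, roughly. The sketch below is Python just to show the round trip I have in mind (a real plug would be written in the plug format itself), and the endpoint URL is purely hypothetical, it would be whatever the configurable render server expects:

```python
# Sketch: send diagram source to a render server, get back a PNG.
import requests  # third-party: pip install requests

RENDER_URL = "https://plantuml.example.com/render/png"  # hypothetical, configurable

def render_diagram(source: str) -> bytes:
    """POST the diagram source and return the rendered image bytes."""
    resp = requests.post(RENDER_URL, data=source.encode("utf-8"), timeout=30)
    resp.raise_for_status()
    return resp.content

png = render_diagram("@startuml\nAlice -> Bob: hello\n@enduml")
with open("diagram.png", "wb") as f:
    f.write(png)
```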


I said hundreds or thousands; I don’t expect to be creating hundreds of thousands of pages, and from your reply in the other thread SQLite should be more than capable of handling this scale.

Nice to know that you have close to a thousand and it’s still fine. It will take me a long time to get to that number of pages, but if I can get started with this it seems like an awesome way of storing knowledge bases, so I expect it will grow quite rapidly as I migrate all of my different things into it.


SQLite should be more than enough. I can’t find the file in the space folder though; is it created inside the Docker container on server startup? Is there a reason not to store it in the space so it doesn’t need to be regenerated each time?


This looks awesome and exactly like what I have been looking for.

One question about the implementation, just out of curiosity: is there any database? I’m worried that once it gets to hundreds or thousands of pages, querying things becomes slow if it’s just scanning files.


I have used Vultr and I’m quite happy with them; however, I haven’t moved backup-level data onto their servers, so I can’t attest that they’ll work great for you.


You have a 5GB file:

RAID 0: Each of your 5 disks stores 1GB of the data in alternating chunks (e.g. the first disk has chunks 1, 6, 11; the second has 2, 7, 12; etc.), occupying 5GB in total. When you want to access it, all disks read in parallel, so you get 5x the speed of a single disk. However, if one of the disks dies you lose the entire file.

RAID 1: The file is stored entirely on two disks (each has a full copy), occupying 10GB and giving up to 2x read speed, and if one of those disks fails you still have all of your data.

RAID 5: The file is split into chunks that are striped across four of the disks, and the fifth disk in each stripe stores a parity chunk (an XOR of the others), with the parity rotating between disks. The 5GB file occupies about 6.25GB, reads at up to 4x the speed of a single disk, and if any single one of the 5 disks fails you can still reconstruct all of your file. However, if 2 disks fail you lose data.

That’s a rough idea and not entirely accurate, but it’s a good enough picture to understand how they work at a high level.
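
If it helps, here is a toy sketch of the parity idea behind RAID 5. It is purely illustrative, not how a real implementation lays data out, but it shows why losing any one disk is recoverable:

```python
# One stripe: data chunks on four "disks", XOR parity on the fifth.
def xor_chunks(chunks: list[bytes]) -> bytes:
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # chunks on the 4 data disks
parity = xor_chunks(data)                     # stored on the 5th disk

# Simulate losing disk 2: XOR of the surviving chunks plus the parity
# reproduces the missing chunk.
lost = 2
survivors = [c for i, c in enumerate(data) if i != lost]
recovered = xor_chunks(survivors + [parity])
assert recovered == data[lost]
```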


I use Netdata; it’s quick and easy, but I don’t think it monitors Docker containers specifically.


I use it, but it has issues. For example, you need to remember exactly how you wrote the website name; if you ever change your master password you need to change the password on every site; and if you must change the password of a single site, you need to remember that site’s counter.

It’s a cool idea and worth it for generating passwords, but I would still advise having another method as well, and once you have that other method it becomes kind of pointless. Still a very cool idea, and very manageable for a low number of sites.
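
For anyone unfamiliar, the rough idea behind these generators looks something like the sketch below. It is not the exact algorithm any particular tool uses, just an illustration of why the caveats above exist: the password is derived purely from the master password, the site name as you typed it, and a counter.

```python
# Deterministic per-site password derivation (illustrative only).
import base64
import hashlib

def site_password(master: str, site: str, counter: int = 1, length: int = 16) -> str:
    salt = f"{site}:{counter}".encode("utf-8")
    key = hashlib.pbkdf2_hmac("sha256", master.encode("utf-8"), salt, 100_000)
    return base64.b85encode(key).decode("ascii")[:length]

# Same inputs always give the same password; a different counter or a
# different spelling of the site gives a completely different one.
print(site_password("my master password", "example.com"))
print(site_password("my master password", "example.com", counter=2))
```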


Correlation does not imply causation. It could be that the same thing lowering your internet speed is also affecting the LAN: perhaps the router can’t handle the traffic, or something on the network is using a lot of bandwidth and is only active when there is internet (e.g. a download client, or worse, a download client accessing a NAS).

In any case you need to give more info on what your setup looks like, e.g.:

  • is the Jellyfin server wired or wireless?
  • what is the maximum speed you reach when disconnected from the internet?
  • are you accessing via a computer or phone? And if a computer is it wired?
  • does the Jellyfin server have other services running that could be using bandwidth?
  • is there a NAS for that Jellyfin server?
  • is the TP-Link acting as a router or as a switch?
  • how are you measuring speed?
  • have you monitored the Jellyfin server’s bandwidth usage during those tests? Does it drop or stay constant when you disconnect the internet? (If it stays constant, you’re saturating the server’s connection: while it’s connected to the internet it’s only giving you part of its bandwidth and using the rest for something else, and once you disconnect the internet it allocates everything to you because it has nothing else to do. If, on the other hand, the usage increases, it means the router can’t handle the traffic.) See the sketch after this list for one way to watch it.
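
If you want a quick way to watch that on the server itself, something like this rough sketch works on Linux by sampling the NIC counters from /proc/net/dev once per second; the interface name is an assumption, so substitute the server’s real one:

```python
# Print rx/tx throughput for one network interface every second.
import time

IFACE = "eth0"  # assumption: replace with the Jellyfin server's interface

def read_bytes(iface: str) -> tuple[int, int]:
    """Return (rx_bytes, tx_bytes) for the interface from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[0]), int(fields[8])
    raise ValueError(f"interface {iface!r} not found")

prev_rx, prev_tx = read_bytes(IFACE)
while True:
    time.sleep(1)
    rx, tx = read_bytes(IFACE)
    print(f"rx {(rx - prev_rx) * 8 / 1e6:6.1f} Mbit/s   "
          f"tx {(tx - prev_tx) * 8 / 1e6:6.1f} Mbit/s")
    prev_rx, prev_tx = rx, tx
```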

I wouldn’t do Windows; Linux will give you the freedom to use Docker for most things you might want to host. As for which distro, use whatever you find nice; there’s not going to be much difference. Some of the things people are suggesting are great for extremely advanced use cases, but for just spinning up some services, whatever you feel most comfortable with is best.