Interested in Linux, FOSS, data storage systems, unfucking our society and a bit of gaming.

I help maintain Nixpkgs.

https://github.com/Atemu
https://reddit.com/u/Atemu12 (Probably won’t be active much anymore.)

  • 4 Posts
  • 27 Comments
Joined 4Y ago
Cake day: Jun 25, 2020


They are quite solid, but be aware that the web UI is dog slow and the menus are weirdly designed.


If you’re using containers for everything anyways, the distro you use doesn’t much matter.

If Ubuntu works for you and switching away would mean significant effort, I see no reason to switch outside of curiosity.


Do you have a media center and/or server already? It’s a bit overkill for the former but would be well suited as the latter thanks to its dedicated GPU, which your NAS might not have (or which you might not want in your NAS).


Glad I could save you some money :)


I would not buy a CPU without seeing a real-world measurement of idle total system power consumption if you’re concerned about energy (and therefore cost) efficiency in any way, especially on desktop platforms, where manufacturers historically do not care one bit about efficiency. You could easily spend hundreds of euros every year if it’s bad. I was not able to find any measurements for this specific CPU.

Be faster at transcoding video. This is primarily so I can use PhotoPrism for video clips. Real-time transcoding of 4K 80 Mbps video down to something streamable would be nice. Despite getting QuickSync to work on the Celeron, I can’t pull more than 20fps unless I drop the output to like 640x480.

That shouldn’t be the case. I’d look into getting this fixed properly before spending a ton of money on new hardware that you may not actually need. It smells to me like the encode or decode step isn’t actually being done in hardware here.

What codec and pixel format are the source files?
How quickly can you decode them? Try running ffmpeg manually with VAAPI decode and a null sink (`-f null -`) on the files in question.

What codec are you trying to transcode to? Apollo Lake can’t encode 10-bit HEVC. Try encoding a testsrc (testsrc=duration=10:size=3840x2160:rate=30) to 10-bit AVC or 8-bit HEVC.
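To separate the two, you could benchmark decode and encode in isolation; a rough sketch (the device path, input file name, and codec choice are assumptions — adjust to your setup):

```shell
# 1. Pure HW decode benchmark: decode via VAAPI, discard the frames.
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
  -i input.mp4 -f null - -benchmark

# 2. Pure HW encode benchmark: synthesise a 4K test pattern, encode 8-bit HEVC.
ffmpeg -f lavfi -i testsrc=duration=10:size=3840x2160:rate=30 \
  -vaapi_device /dev/dri/renderD128 \
  -vf 'format=nv12,hwupload' -c:v hevc_vaapi -f null - -benchmark
```

If either command reports a speed well below real time, you’ve found the bottleneck.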


I would not “share” it synchronously as @gratux@lemmy.blahaj.zone recommended because in that case the data is only stored on one device and almost always accessed remotely. If the internet connection is gone, you’d no longer have access to the data and if the VPS dies, your data would be gone on all other machines too.

If you want to use Nextcloud anyways, that would be an option.

If all you want to do is have a shared synchronised state between multiple machines though, Syncthing would be a much lighter weight purpose-built alternative.


I have never used it but https://selfprivacy.org/ looks pretty interesting. The way it supposedly works is that their app sets up a VPS for you in a guided manner. It sets up the services you want (e.g. Nextcloud and Bitwarden) and configures things like backups and HTTPS for you.

The technical foundations are sound (NixOS) and they’re funded in part by NLnet.

It might be worth trying out if you want control over your data but don’t want the responsibility of setting up and maintaining your services yourself, while still ultimately being in control of everything.


@const_void@lemmy.ml suggested that HW accelerated video decode doesn’t work, is that the case?

Does GPU accel in general (OpenGL and Vulkan) work?

Does Widevine DRM work?

Highly specific long-shot question but is the Shield TV’s GPU fast enough for https://github.com/bloc97/Anime4K/?


Get yourself a domain name. It doesn’t cost a whole lot and also allows you to complete DNS-01 challenges for SSL certs. It’s also, like, your own. That’s also a requirement for owning your email address.
(If you really don’t want to pay and don’t care about email, you can also use a shared domain from a dynamic DNS service such as dedyn.io.)

You then simply set records to the Tailscale IP addresses of the hosts and you’re good to go. Alternatively, you can also set them to the hosts’ LAN subnet addresses and forward your subnet via a single subnet router; that’s how I do it.
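As an illustration, the records could look like this in a zone file (the domain, host names, and addresses are made-up examples; Tailscale assigns addresses out of 100.64.0.0/10):

```
; Point each host's name at its Tailscale address
nas.example.com.    300  IN  A  100.101.102.103
jelly.example.com.  300  IN  A  100.101.102.104
```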


Actual: How to import data with proper readable payee?
cross-posted from: https://lemmy.ml/post/11150038

> I'm trying out Actual and have imported my bank's (Sparkasse) data for my checking account via CSV. In the CSV import, I obviously had to set the correct fields and was a bit confused because Actual only has the "Payee" field while my CSVs have IBAN, BIC and a free-text name (i.e. "Employer GmbH").
>
> IBAN is preferable because it's a unique ID while the free-text name can be empty or possibly even change(?). (Don't know how that works.)
> OTOH, the free-text name is preferable because I (as a human) can use it to infer the actual payee while the IBANs are just a bunch of numbers.
>
> Is it possible to use the IBAN as well as the free-text name, or have a mapping between IBAN and a display name?
>
> How do you handle that?


Note that some SOHO router appliances block DNS responses with local addresses (“rebind protection”). You may have to explicitly allow-list your domain(s).
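On OpenWrt or anything else running dnsmasq, that exemption is a single config line (the domain is a placeholder):

```
# /etc/dnsmasq.conf – don't apply rebind protection to this domain
rebind-domain-ok=/example.com/
```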


You can always use regular DNS and simply point your domain’s records at hosts on your home’s local network and/or the mesh VPN addresses. I do that with Tailscale.


That is not representative of what you’d get with an Intel card then. While they implement the same standard (AV1), they’re entirely different encoders with entirely different image quality characteristics.


The “av1” numbers, which codec is that? There are many av1 encoders and even for Intel HW accel, there are at least two.


Even has a KVM for emergency access ;)


It’s not and it’s insane. TDP is a fucky “metric”.


It won’t. In fact, it might even make that part worse because the quieter parts would become even quieter.

What you need here is a “midnight mode” which is just a compressor; it reduces the dynamic range. Since dynamic range is an aspect of audio quality, this is not something you generally want.

Gain normalisation just ensures that different audio tracks are, on “average”, the same volume so that you don’t have to change volume all the time to accommodate the different mix of each song.

Spotify has these features, for example, under its “Normalise volume” setting; the first two levels do gain normalisation and the high setting also adds a compressor, I believe.
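If your player offers neither, you can bake either effect into the files with ffmpeg; a rough sketch with the filters left at their defaults (file names are placeholders):

```shell
# Dynamic range compression ("midnight mode"): quiet parts up, loud parts down
ffmpeg -i input.flac -af acompressor compressed.flac

# Loudness normalisation (EBU R128): match average volume across tracks
ffmpeg -i input.flac -af loudnorm normalised.flac
```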


Subnet forwarding does not work in that direction. What you’ve done is allow devices in your Tailnet (i.e. your remote machine) to access 192.168.1.0/24 by using your laptop as a proxy, not the other way around; the Chromecast doesn’t know it could reach your remote machine via your laptop.

This would be a giant hack and it’s unlikely to work but it’s possible you could get the Chromecast to communicate with the remote machine via your laptop by setting the default gateway of the Chromecast’s network connection to the local IP address of the laptop.
The Chromecast would probably lose its internet connection that way; I’m not sure Jellyfin needs one (I don’t think so?).
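For completeness, the laptop would also have to actually forward and NAT the Chromecast’s traffic into the Tailnet for this hack to stand a chance; roughly (interface name and subnet are assumptions):

```shell
# On the laptop: enable IPv4 forwarding and masquerade LAN traffic into Tailscale
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o tailscale0 -j MASQUERADE
```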

I’d rather recommend you look into getting Tailscale onto that Chromecast. I’ve never used these things, so I don’t know whether that’s possible.


It might be underpowered, it might not be. Just test it out? Do you notice performance issues related to your router?


I thought about switching the router to a dedicated one without a wireless access point

Is there a reason for this? Unless it has specific issues you’d like to fix, I’d just keep using the current router and simply disable its WiFi.


Well, unlike us, they’re obviously living in a country which massively subsidises energy cost. But it seems they either haven’t done the math properly or their measuring device is broken because even they shouldn’t be paying pennies per month.

You can do the calculation for yearly cost yourself; it’s not too hard. The two variables you need are energy price and power.

Let’s say you’ve got 30W idle power draw at 0.4€/kWh. That comes out to ~105€/year if you ran it 24/7.

You can plug in arbitrary values yourself: https://numbat.dev/?q=1+year+*+30W+*+(0.4€%2FkWh)⏎
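If you’d rather not leave the terminal, the same calculation is a one-liner (30 W and 0.40 €/kWh are just the example values from above):

```shell
# yearly cost = power (kW) × price (EUR/kWh) × hours per year
awk 'BEGIN { printf "%.0f EUR/year\n", (30/1000) * 0.4 * 24 * 365 }'
# prints: 105 EUR/year
```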


Unless you have specific needs for compute, I’d go with that.

You really ought to look into idle power though. At $0.1/kWh, 1W is about $1/year. You can extrapolate from there.
TDP doesn’t matter here but the i3 is likely more efficient under load.
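The rule of thumb checks out: 1 W drawn continuously for a year at $0.10/kWh comes to roughly a dollar:

```shell
# 1 W for a year: power (kW) × price ($/kWh) × hours per year
awk 'BEGIN { printf "$%.2f per watt-year\n", (1/1000) * 0.10 * 24 * 365 }'
# prints: $0.88 per watt-year
```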

The shipping cost is quite extreme though. Not sure I’d pay that.


Why do they need to be part of your Tailnet? I usually just share a single machine.


Based on that generic request, you’re just going to get everyone’s personal favourite server OS here. You’ll need to give more details to get something tailored to your needs.


I think what they meant is that it was once a pre-built computer.

I believe the swapping was done to show what could be done to improve its efficiency and how much you’d otherwise be missing out on.
Their results were that the swaps don’t decrease power usage by much and that even the cheapest modification makes little financial sense, even under quite ideal conditions (German energy prices).



Please tell us the complete chain of active network components between your device and the server.

E.g. Device -> WiFi -> Access Point -> Switch -> Server


I have multiple devices split across two locations and I end up having to use hard drives to periodically move files back over to my main desktop for sorting and archiving. If I want to access older files, I have to copy them from my main storage on the desktop to a hard drive, my NextCloud, or whatever device I want to access them on. I would like to avoid this drudgery by moving my file storage to a NAS

A NAS is a good idea, but do note that this sort of setup can work as well with the correct tooling.

don’t really even need access outside the network, though it could be useful if I understood it enough to keep it secure

I can highly recommend Tailscale for this purpose.

run a few docker images for things like media server, open project, restyaboard, etc. I’m not sure if it makes sense to do this on the NAS or just get a simple NAS and do this stuff in a VM on my laptop or with a Raspberry Pi.

Depends. Many people host such things on their NAS since the NAS is always on anyways and barely does anything most of the time, so it’s perfectly valid to do that.

Can I purchase/build a simple NAS that I use for storage and serve the files for my media server through a different device like my laptop?

Yes but in a home setting, it usually makes sense to keep the services running on the same device which stores the service’s data.

It sounds like some of the pre-built machines can use drives of different sizes which would allow me to re-use the barely used drives inside of the WD devices. Do any of the self-built solutions allow for this?

Sure. Unless you need assistance setting up a Linux system (I doubt you do) or building a computer, self-built is almost always better. I use a low-power Intel single-board-computer (Celeron J4105) in a small PC case for this purpose.

For pooling different sized drives, I use btrfs but the same could be achieved using ZFS or even LVM.
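A sketch of the btrfs variant (device names are placeholders): mixed-size drives pool fine when data is stored `single`, and mirroring only the metadata keeps the filesystem cheap to scrub while spending no capacity on data redundancy:

```shell
# Pool three different-sized drives; metadata mirrored, data not redundant
sudo mkfs.btrfs -d single -m raid1 /dev/sda /dev/sdb /dev/sdc
```

With `-d single`, losing a drive loses the files stored on it, which is why backups come first.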

Do note that, unless you have specific uptime requirements, RAID is pretty wasteful in a home setting, both in hardware and in the time spent thinking about it.
When it comes to digital hygiene, figure out backups first. 3-2-1!

I would LOVE some book/media/community recommendations for digital hygiene and how to handle store, backup, maintain the deluge of information in our modern lives.

I’ve found git-annex for myself. It’s quite a rabbit hole and takes a lot of effort to understand and really use well but it’s an incredible tool that has greatly aided simplifying my storage setup.

One of the best things about it is that it separates data from metadata. You always have the metadata but you don’t need to hold the data all in one place.
This means you can re-organise files on your laptop without those files actually being present on the laptop. They could be anywhere: on a hard disk sitting unplugged on a shelf, in the cloud, on some other machine that’s turned off, on the NAS etc., but you can move them around in the filesystem as if they were there. If you need some file’s content, you can ask git-annex where it is stored, then e.g. plug in that hard drive and ask it to copy the data over.
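As a taste of that workflow (the path is made up):

```shell
git annex whereis Photos/2019/beach.jpg  # list which repositories hold the content
git annex get Photos/2019/beach.jpg      # fetch the content from one that's reachable
git annex drop Photos/2019/beach.jpg     # free the local copy (still kept elsewhere)
```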

That’s the gist of it (git-annex can do a bunch of other cool stuff), but this really helped me get my shit together w.r.t. storage hygiene.


Hm, DjVu seems like an ancient format and it also only supports JPEG and J2K as far as lossy formats go.

I’d love to use more modern formats such as AVIF, HEIF or even WebP, but Paperless doesn’t support some of them, and images in general can only represent one page while many of my scans have multiple pages.


How do you encode your paper scans?
I assume many of you host a DMS such as Paperless and use it to organise the dead trees you still receive in the snail mail for some reason in the year of the lord 2023. How do you encode your scans?

JPEG is pretty meh for text even at better quantisation levels ("dirty" artefacts everywhere) and PNGs are quite large. More modern formats don't go into a PDF, which means multiple pages aren't possible (at least not in Paperless).

Discussion on GH: https://github.com/paperless-ngx/paperless-ngx/discussions/3756