1. What electricity costs in my area. $0.32/kWh at the wrong time of day.

I assume you have this on a UPS. What about using a smart plug to switch to UPS during the expensive part of the day, then back to mains to charge when it's cheaper? I imagine that needs a bigger UPS than one would ordinarily spec, and that cost would probably outweigh the electric bill, but you never know.


pihole, in front of my own DNS, because it's easier to have it do the domain filtering.

mythtv/kodi, because I’d rather buy DVDs than stream; rather stream than pirate; but still like to watch the local news.

LAMP stack, because I like watching some local sensor data, including fitness equipment, and it’s a convenient place to keep recipes and links to things I buy regularly but rarely (like furnace filters).

Homeassistant, because they already have interfaces to some sensors that I didn’t want to sort out, and it’s useful to have some lights on timers.

I also host, internally, a fake version of quicken.com, because it lets me update stock quotes in Quicken2012 and has saved me having to upgrade or learn a new platform.


Ditto on hardware raid. Adding a hardware controller just inserts a potentially catastrophic point of failure. With software raid and raid-likes, you can probably recover/rebuild, and it’s not like the overhead is the big burden it was back in the 90s.


There are 3rd party plugins for kodi to work with a lot of streaming services, using your account and not ‘cheating’ in any way that’s obvious to me.

Netflix: https://forum.kodi.tv/showthread.php?tid=329767

Fairly extensive collection: https://github.com/matthuisman/slyguy.addons


HA doesn't require 4/4/32; that's just the hardware the HA people sell (which, given that your phone may be 8/16/128, is hardly "robust"). Generally, the Home Assistant crowd targets an audience that's probably already running some kind of home server, NAS, or router, and HA can probably be installed on that device.

Theoretically, there's no reason the HA server couldn't be installed on your phone, except then your smart home functions would only work while your phone is in the house and not sleeping. That kind of defeats the point of a lot of it, unless you're just thinking of a smart home as a "remote control for everything." Regardless, it's a much smaller niche of an already-small market, and apparently not a priority for the dev team.


Ditto. Started 20 years ago with one service I wanted. Complicated it a little more every time some new use case or interesting trinket came up, and now it’s the most complicated network in the neighborhood. Weekend projects once a year add up.

If you have the resources, experiment with new services on a completely different server than everything else. The testing-production model exists for a reason: backups are good, but restoring everything is a pain in the ass.

I also like to keep a text editor open and paste everything I’m doing, as I do it, into that window. Clean it up a little, and you’ve got documentation for when you eventually have to change/fix it.


I'd tried that… this has been going on for five days, and I cannot describe my level of frustration. But I solved it, literally just now.

Despite `systemctl status apparmor.service` claiming it was inactive, it was secretly active. audit.log was so full of sudo entries that I failed to see all of the

apparmor="DENIED" operation="mknod" profile="/usr/sbin/named" name="/etc/bind/dnssec-keys/K[zone].+013+16035.l6WOJd" pid=152161 comm="isc-net-0002" requested_mask="c" denied_mask="c" fsuid=124 ouid=124FSUID="bind" OUID="bind"

That made me realize that when I thought I'd fixed the apparmor rule, I'd used `/etc/bind/dnskey/ rw` instead of `/etc/bind/dnskey/** rw`.
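
In case it helps anyone else, here's a minimal sketch of the fix, assuming Ubuntu's stock profile layout with a local override file (adjust the directory to whatever your named.conf key-directory actually is):

```
# append the rule to the local override (the trailing comma matters in apparmor syntax)
echo '/etc/bind/dnskeys/** rw,' | sudo tee -a /etc/apparmor.d/local/usr.sbin.named

# reload just that profile, then confirm it really took
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.named
sudo aa-status | grep named
```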

The bind manual claims that you don't need to manually create keys or manually include them in your zone file if you use dnssec-policy default, or presumably any other policy with inline signing. It claims that bind will generate its own keys, write them out, and even manage timed rotation or migration to a new policy. I can't confirm or deny that, because it definitely found the keys I had manually created (one of which was $INCLUDEd in the zone file, and one not) and used them. It also edited them and created .state files.
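
A couple of quick checks that the policy is actually signing, once named reloads - a sketch, with a placeholder zone name:

```
# show the keys keymgr is now maintaining for the zone
sudo rndc dnssec -status home.internal

# the DNSKEYs and RRSIGs should now be answerable locally
dig @127.0.0.1 home.internal DNSKEY +dnssec +multiline
dig @127.0.0.1 home.internal SOA +dnssec
```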

I feel like I should take the rest of the day off and celebrate.


Bind 9.18.18 dnssec key location and privileges?
[update, solved] It was apparmor, which was lying about being inactive. Ubuntu's default profile denies bind write access to its config directory. Needed to add `/etc/bind/dnskeys/** rw`, reload apparmor, and it's all good.

Trying to switch my internal domain from `auto-dnssec maintain` to `dnssec-policy default`. The zone is signed but not secure, and the logs are full of `zone_rekey:dns_dnssec_keymgr failed: error occurred writing key to disk`.

key-directory is /etc/bind/dnskeys, owned bind:bind, and named runs as bind. I've set every directory I could think of to 777: /etc/bind, /etc/bind/dnskeys, /var/lib/bind, /var/cache/bind, /var/log/bind. I disabled apparmor, in case it was blocking.

A signed zone file appears, but I can't dig any DNSKEYs or RRSIGs. named-checkzone says there are nsec records in the signed file, so something is happening, but I'm guessing it all stops when keymgr fails to write the key. I tried manually generating a key and sticking it in dnskeys, but it doesn't appear to be used.
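
For anyone landing here from a search, a minimal sketch of the kind of zone stanza involved (zone and file names are placeholders, not my actual config):

```
// named.conf fragment, BIND 9.18
zone "home.internal" {
    type primary;
    file "/var/lib/bind/db.home.internal";
    dnssec-policy default;
    inline-signing yes;
    key-directory "/etc/bind/dnskeys";
};
```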

My guess is Firefox. I'm using Kodi - OSMC/LibreELEC - and it coasts along at 1080p, with plenty of spare CPU to run pihole and some environmental monitors. I haven't tried anything 4k, but supposedly the Pi 4 offloads that to hardware decoding and handles it just fine, as long as the codec is supported.


Pi 4s were hard to get there for a while. Pi 5s are expensive. A lot of other SBCs are also expensive, as in not all that much cheaper than a low-end x86 from 2-3 generations ago. That makes them less attractive for special-purpose computing, especially among people who have a lot of old hardware lying around.

Any desktop from the last decade can easily host multiple single-household computer services, and it’s easier to maintain just one box than a half dozen SBCs, with a half dozen power supplies, a half dozen network connections, etc. Selfhosters often have a ‘real’ computer running 24/7 for video transcoding or something, so hosting a bunch of minimal-use services on it doesn’t even increase the electric bill.

For me, the most interesting aspect of those SBCs was GPIO and access to raw sensor data. In the last few years, ‘smart home’ technology seems to have really exploded, to where many of the sensors I was interested in 10 years ago are now available with zigbee, bluetooth or even wifi connectivity, so you don’t need that GPIO anymore. There are still some specific control applications where, for me, Pi’s make sense, but I’m more likely to migrate towards Pi-0 than Pi-5.

SBCs were also an attractive solution for media/home theater displays, as clients for plex/jellyfin/mythtv servers, but modern smart-TVs seem mostly to have built-in clients for most of those. Personally, I’m still happy with kodi running on a pi-4 and a 15 year old dumb TV.


Traditionally, RAID-0 "stripes" data across two or more disks, splitting the data between them, trying to get a multiple of the I/O speed out of disks that are much slower than the data bus. This also has the effect of looking like one disk the size of all the physical disks combined, but if any disk fails, you lose the whole array.

RAID-1 "mirrors" data across multiple identical disks, writing exactly the same data to all of them; reads can be faster, but the point is redundancy rather than size.

RAID-5 is like a combination of -0 and -1: it stripes data across multiple disks along with parity information for error correction. It requires (n) identical-sized disks but gives you the storage capacity of (n-1), and lets you rebuild the array if any one disk fails.

Any of these look to the filesystem like a single disk.

As @ahto@feddit.de says, none of those matter for TrueNAS. Technically, TrueNAS takes "JBOD" - just a bunch of disks - and uses ZFS to combine all those separate disks into one logical structure. From the user's perspective, it all looks much the same, but ZFS allows for much more complicated distributions of data and more diverse physical disk sizes.
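
If you ever do this at the command line instead of through the TrueNAS GUI, the ZFS equivalents look roughly like this (a sketch; pool and device names are placeholders):

```
# RAID-1-style mirror of two disks
zpool create tank mirror /dev/sda /dev/sdb

# RAID-5-style: three or more disks with one disk's worth of parity
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc
```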


You might be surprised how much attention family will put into your media, especially any pictures, movies, or audio that you created, when you’re gone. It’s a way to commune with their memory of you. My family still regularly trots out boxes of physical photographs of grandparents’ grandparents & homes no one has visited in 70 years.


Others have explained the line.

Worth noting that not all implementations of head accept negative line counts (GNU's `head -n -2` means everything except the last two lines), and you might have to use tail instead.

e.g.: `ls -1 /backup/*.dump | tail -2 | xargs rm -f`
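
Since `head -n -2` and `tail -2` grab opposite ends of the listing, here's a sketch assuming the goal is to delete everything except the two newest dumps, and that the filenames sort chronologically:

```
# GNU head: print everything except the last two lines
ls -1 /backup/*.dump | head -n -2 | xargs rm -f

# portable tail-only version: reverse the sort, then skip the first two lines
ls -1r /backup/*.dump | tail -n +3 | xargs rm -f
```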


I think finding content is the key component of OP’s question. If you host an instance that has only your own subscriptions, the content will feel light, but the extra load on other instances will be minimal and at their convenience. If you load your instance with popular communities so that your All feed pops up weird and interesting content, then the extra load on other instances will be much larger than your personal browsing.


This was a huge benefit of my self-hosting. I started with just a file server and mythTV, but mythTV uses mySQL, and once I had a db running, I found all kinds of other uses for it. I've used Quicken (2012) to track finances, but then I figured out I could use my brokerage's API to get the raw data and make my own graphs. My rowing machine has an API to get all kinds of metrics, including a heart monitor. Environmental sensors. I haven't gone as far as a 'smart' scale, or a wearable that would track sleep. Then a bunch of python to make pretty graphs for web pages.

Honestly, I think it’s the pleasure of seeing new dots show up on the rowing graph that keeps me doing it.


You might be able to solve some of these issues by changing the systemd unit files. Change or add an After= dependency to make sure the network storage is fully mounted before the actual service tries to start.

https://www.golinuxcloud.com/start-systemd-service-after-nfs-mount/
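
A sketch of what that looks like as a drop-in override, with made-up service and mount names:

```
# /etc/systemd/system/jellyfin.service.d/override.conf
# (create with: sudo systemctl edit jellyfin.service)
[Unit]
After=remote-fs.target mnt-media.mount
RequiresMountsFor=/mnt/media
```

RequiresMountsFor= also pulls the mount unit in and keeps the service from starting if the mount fails, which is usually what you want for network storage.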


The main issues are latency and bandwidth, especially on a server with limited RAM. If you're careful to keep just the data on the NAS, it's probably fine, especially if the application's primary job is serving that data over the same network to clients. It will reduce your effective bandwidth, since data has to go NAS->server->client over the same wires/wifi. If the application does a lot of processing, like a database, you'll start to compromise that function more noticeably.

On applications with a low user count, on a home network, it's probably fine, but if you're a hosting company trying to maximize the data served per CPU cycle, then having the CPU wait on the network is lost money. Those orgs will often have a second, super-fast network to connect processing nodes to storage nodes, and that architecture is not too hard to implement at home. Get a second network card for your server, and plug the NAS into it to avoid dual transmission over the same wires. [ed: forgot you said you have that second network]

The real issue is having the application code itself on the NAS. Any time the server has to page apps in or out of memory, you impose millisecond-scale network latency on top of microsecond-scale SSD latency, and you put a 1 Gb/s network cap on top of 3-6 Gb/s of SSD bandwidth. If you're running a lot of containers in a small amount of RAM, there can be a lot of memory paging, and introducing millisecond delays every time the CPU switches context will be really noticeable.
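
If you want to put rough numbers on your own setup, a quick sketch (the mount point and hostname are assumptions):

```
# sequential write throughput, local disk vs the NAS mount
dd if=/dev/zero of=/var/tmp/local.test bs=1M count=1024 conv=fdatasync
dd if=/dev/zero of=/mnt/nas/nas.test bs=1M count=1024 conv=fdatasync
rm -f /var/tmp/local.test /mnt/nas/nas.test

# round-trip latency to the NAS; compare against microsecond-scale SSD access times
ping -c 10 nas.local
```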


Everybody else is already talking about Home Assistant. I'm going to add that there are zigbee, z-wave, and occasionally bluetooth-based alternatives for almost all of the nest/alexa/etc accessories, and those work through a local hub.


Most of what you'll find is user management and system administration, because LDAP is a common backend for user authentication &c at larger sites, but there's no reason it can't store arbitrary data. https://openldap.org/ (slapd) on the backend and maybe https://directory.apache.org/studio/ on the frontend. But since it's built around user authentication, it has layers of security and access control that really complicate understanding the actual system.
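
For a feel of what storing arbitrary data looks like in practice, a hypothetical query against a local slapd (the base DN is a placeholder):

```
# simple bind, dump everything under the base DN
ldapsearch -x -H ldap://localhost -b "dc=home,dc=lan" "(objectClass=*)"
```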


What you’re describing sounds a lot like LDAP, but it could be I’m just triggered by “schemas.” LDAP would be the backend; there’s a whole slew of LDAP browsers, but none of them really seem like they’re targeted at users.


Definitely dual stack if you do. The real benefit of IPv6 is that, supposedly, each of your internal devices can have its own address and be directly accessible, but I don’t think anyone actually wants all of their internal network exposed to the internet. My ISP provides IPv6, but only a single /128 address, so everything still goes through NAT.

Setting it up was definitely a learning process - SLAAC vs DHCP; ISC's dhcpd uses entirely different keywords for v6 than for v4, and you have to run v6 and v4 in separate processes. It's definitely doable, but I think the main benefit is the knowledge you gain.
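
For anyone curious what "different keywords" means in practice, a sketch with placeholder addresses:

```
# dhcpd.conf (v4 daemon)
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
}

# dhcpd6.conf (separate v6 daemon, started with dhcpd -6)
subnet6 2001:db8:1::/64 {
  range6 2001:db8:1::100 2001:db8:1::200;
}
```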


If they're #10 (US) screws, they'd be 4.8mm major diameter and 24 or 32 threads per inch, so something like M4.8-1.06 or M4.8-0.79. If an M5-0.8 threads in halfway, it sounds more like 10-32.

If you're outside the US, that might be why the previous owners resorted to the ugga-dugga. That will (probably) have wrecked those holes for either their factory pitch or whatever the owners used. You might consider getting a 5 mm drill and an M6 hand tap. You might have fair luck with 10-32 screws, depending on how hard they are to get in your country.


Set one up yesterday. My first experience with VPS, but it was straightforward; seems plenty fast, but I haven’t done anything to push either memory or CPU.


racknerd is running a deal on 1 CPU / 1 GB RAM / 25 GB disk / 4 TB bandwidth at $13/year


2 spare drives and a safe deposit box ($10/yr). Swap the drive in the bank box once a month or so. My upstream bandwidth isn't enough to make cloud backups practical, and if anything happens, retrieving the drive is faster than shipping a replacement, never mind restoring from the cloud.

Of course, my system is a few TB, not a few dozen.