• 0 Posts
  • 13 Comments
Joined 1Y ago
Cake day: Jul 30, 2023


Memory unlocked - that’s been a hot minute.

Didn’t Apple use to make their own IR remote for that? Is the IR hardware on board the Mini tied to Apple’s remote, or is it more generic once Linux is installed?


As others said, it depends on your use case. There are lots of good discussions here about mirroring vs single disks, different vendors, etc. Some backup systems may want you to have a large filesystem available that would not otherwise be attainable without RAID 5/6.

Enterprise backups tend to follow the recommendation called 3-2-1:

  • 3 copies of the data, of which
  • 2 are backups, and
  • 1 is off-site (and preferably offline)

On my home system, I have 3-2-0 for most data and 4-3-0 for my most important virtual machines. My home system doesn’t have an off-site copy, but I do have two external hard drives connected to my NAS.

  • All devices are backed up to the NAS for fast recovery access, with RPOs between 24h and 1w
  • The NAS backs up various parts of itself to the external hard drives every 24h
    • Data is split up by role and convenience factor - just putting stuff together like Tetris pieces, spreading the NAS out across the two drives (rough sketch after this list)
    • The most critical data for me to have first during a recovery is backed up to BOTH external disks
  • Coincidentally, the two drives happen to be from different vendors, but I didn’t initially plan it that way: the Seagate drive was a gift and the WD drive was on sale
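
To illustrate that “Tetris” split, here’s a rough Python sketch of the idea - the share names and sizes (in GB) are made up, and in real life I do this by feel rather than by algorithm:

    # Hypothetical illustration only - share names and sizes (GB) are invented.
    # Critical data goes to BOTH drives; the rest is balanced greedily by size.
    critical = {"documents": 120, "photos": 400}
    bulk = {"media": 3000, "vm-archives": 1500, "scratch": 600, "isos": 250}

    drives = {"seagate": dict(critical), "wd": dict(critical)}

    for share, size in sorted(bulk.items(), key=lambda kv: -kv[1]):
        # Drop each remaining share on whichever drive currently holds less.
        target = min(drives, key=lambda d: sum(drives[d].values()))
        drives[target][share] = size

    for name, shares in drives.items():
        print(name, sum(shares.values()), "GB:", sorted(shares))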

Story time

I had one of my two backup drives fail a few months ago. Literally nothing of value was lost - I just went down to the electronics shop, bought a bigger drive from the same vendor (preserving the one-of-each-vendor approach), reformatted the disk, recreated the backup job, then ran the first transfer. Not a big deal at all, since the data was still in 2 other places: the source itself and the NAS primary array.

The most important thing to determine when you plan a backup is how valuable the data is to you - that’s roughly how much you should be willing to spend on keeping it safe.


What platform?

Another user said it - what you’re asking for isn’t a backup, it’s just data transfer.

It sounds like you’re looking for a storage backend that hosts all your data and can download data to the client side on the fly.

If your use case is Windows, Nextcloud Desktop may be what you’re looking for. I have a similar setup with my game clips folder: it detects changes and auto-uploads them, while removing less recently used local copies of data that’s safely stored server-side. This feature might exist on Mac too, but I haven’t tested it.

Backup-wise, I capture an rsync of the Nextcloud database and filesystem server-side and store it on a different chassis. That then gets backed up again to a USB drive I can grab and run.
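
For flavor, here’s a minimal Python sketch of that kind of job - the paths, the hostname, and the MariaDB/MySQL backend are assumptions, and my real setup is basically just cron plus rsync:

    import subprocess
    from datetime import date

    # Assumed paths and destination host - adjust for your own layout.
    # (Assumes the dated destination directory already exists on backup-host.)
    SRC_DATA = "/var/www/nextcloud/data/"
    DEST = f"backup-host:/backups/nextcloud/{date.today():%Y-%m-%d}/"

    # Dump the database first so the files and DB stay roughly consistent.
    with open("/tmp/nextcloud-db.sql", "wb") as dump:
        subprocess.run(["mysqldump", "--single-transaction", "nextcloud"],
                       stdout=dump, check=True)

    # Then push the data directory and the dump to the other chassis.
    subprocess.run(["rsync", "-a", "--delete", SRC_DATA, DEST + "data/"], check=True)
    subprocess.run(["rsync", "-a", "/tmp/nextcloud-db.sql", DEST], check=True)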

Nextcloud also supports external storage, which the server directly connects to: https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/external_storage_configuration_gui.html


I don’t have an immediate answer for you on encryption. I know most of AD’s communication is encrypted in flight, and on disk passwords are stored hashed unless the “store passwords using reversible encryption” option is checked. There are (in Microsoft terms) gMSAs (group-managed service accounts), but other than using one for ADFS (their OAuth provider), I have little knowledge of how they actually work on the inside.

AD also provides encryption key backup services for BitLocker (MS full-partition encryption for NTFS) and the local account manager I mentioned, LAPS. Recovering those keys requires either a global admin account or specific permission delegation. On disk, I know MS has an encryption provider that works with the TPM, but I don’t have any data about whether that system is used (or where the decryptor is located) for these account types with recoverable credentials.

I did read a story recently about a cyber security firm working with an org: the testers had worked their way all the way up to domain admin, but needed a biometric-unlocked Bitwarden vault to pop the final backup server and “own” the org. They indicated that there was native Windows encryption in play, and they managed to break in using a now-patched vulnerability in Bitwarden, recovering a decryption key that became reachable by resetting the domain admin’s password and doing some Windows magic. On my DC at home, all I know is that it doesn’t need my password to reboot, so there’s credential recovery going on somewhere.

Directly to your question about short-term-use passwords: I’m not sure there’s a way to do it out of the box in MS AD without getting into some overcomplicated process. Accounts themselves can have per-OU password expiration policies that are nanosecond accurate (I know because I once accidentally set a password policy to 365 nanoseconds instead of a year), and you can even set whole-account expiry (which, unlike an expired password, the user can’t clear just by changing their password). Theoretically, you could design/find a system that interacts with your domain to set, impound/encrypt, and manage the account and password expiration of a given set of users, but that would likely be add-on software.
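
As a rough idea of what that add-on software might do, here’s a hedged sketch using the ldap3 Python library - the DC hostname, service account, and DN are made up, and it leans on the fact that AD stores accountExpires as 100-nanosecond intervals since 1601 (a real AD will also want LDAPS/signing, omitted here):

    from datetime import datetime, timedelta, timezone
    from ldap3 import Server, Connection, MODIFY_REPLACE

    # AD's accountExpires is 100-nanosecond intervals since 1601-01-01 UTC.
    AD_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

    def to_filetime(dt: datetime) -> int:
        return int((dt - AD_EPOCH).total_seconds() * 10_000_000)

    # Expire this (hypothetical) account one week from now.
    expires = datetime.now(timezone.utc) + timedelta(days=7)

    conn = Connection(Server("dc01.example.lan"),          # hypothetical DC
                      user="EXAMPLE\\svc-automation", password="...",
                      auto_bind=True)
    conn.modify("CN=Temp User,OU=Contractors,DC=example,DC=lan",   # hypothetical DN
                {"accountExpires": [(MODIFY_REPLACE, [str(to_filetime(expires))])]})
    conn.unbind()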


  1. Yes I do - MS AD DC

  2. I don’t have a ton of users, but I have a ton of computers. AD keeps them in sync. Plus I can point services like gitea and vCenter at it for even more. Guacamole benefits a lot from this arrangement since I can set the password to match the AD password, and all users on all devices subsequently auto-login, even after a password change.

  3. I used to run a single domain controller; now I have two (leftover free-forever licenses from college). I plan to upgrade them tick/tock so I’m not frequently spending a fortune on licensing.

  4. With native Windows clients, and I believe with sssd/realmd joins, the default config is to cache the last hash you used to log in. So if you log in regularly to a server, it should have an up-to-date cache should your DC cluster become unavailable. This feature is also used on corporate laptops that need to roam from the building without an always-on VPN. Enterprises will generally also ensure a backup local account is set up (and optionally auto-rotated) in case the domain becomes unavailable in a bad way, so that IT can recover your computer.

  5. I used to run a homemade FreeIPA and an MS AD in a cross-forest trust when I started on the directory stuff ~5-6y ago. Windows and Mac were joined to AD, Linux was joined to IPA. (I tried to join Mac to IPA, but there was only a limited LDAP connector, and AD was more painless and less maintenance.) One user to rule them all, still. IPA has loads of great features - I especially enjoyed setting my shell, sudoers rules, and ssh keys from the directory so they were available everywhere instantly.

But I had some reliability problems (which may be resolved by now, I haven’t followed up) with IPA’s update system at the time, so I ended up burning it down and rejoining all the Linux servers to AD. Since then, the only features I’ve lost are centralized sudo and ssh keys (the shell can still be set in AD if you’re clever). sssd handles six key MS group policies using libini, mapping them into the relevant PAM policies, so you even get some authorization that can be pushed from the DC like in Windows, with relatively sane defaults.

I will warn - some MS group policies (especially service definitions and firewall rules) violate the INI spec that libini expects and can coredump it, so you should put your Linux servers in a dedicated OU with their own group policies and keep only limited settings in the default domain policy.


I’m probably the overkill case because I have AD+vC and a ton of VMs.

RPO 24h for my main desktop and critical VMs like vCenter, domain controllers, DHCP, DNS, the UniFi controller, etc.

Twice a week for laptops and remote desktop target VMs

Once a week for everything else.

Backups are kept roughly this long (give or take a bit; a pruning sketch follows the list):

  • Daily backups for a week
  • Weekly backups for a month
  • Monthly backups for a year
  • Yearly backups for 2-3y
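
As mentioned above the list, here’s a rough Python sketch of that pruning ladder - the exact week/month boundaries are simplified assumptions, and in reality the backup software handles retention for me:

    from datetime import date, timedelta

    def keep(backup_date: date, today: date) -> bool:
        """Roughly: dailies for a week, weeklies for a month,
        monthlies for a year, yearlies for ~3 years."""
        age = (today - backup_date).days
        if age <= 7:
            return True                                    # daily tier
        if age <= 31 and backup_date.weekday() == 6:       # keep Sunday backups
            return True
        if age <= 366 and backup_date.day == 1:            # keep first-of-month
            return True
        if age <= 3 * 366 and backup_date.month == 1 and backup_date.day == 1:
            return True                                    # keep Jan 1st backups
        return False

    today = date.today()
    kept = [today - timedelta(days=n) for n in range(1200)
            if keep(today - timedelta(days=n), today)]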

The software I have (Synology Active Backup) captures data using incremental backups where possible, but if it loses its incremental marker (System Restore in Windows, changed block tracking in VMware, rsync for file servers), it will generate a full backup and deduplicate it (iirc).

From the many times this has saved me from various bad things happening for various reasons, I’d say the RTO is about 2-6h for a VM to restore and about 18h for a desktop, measured from the point at which I decide to roll back to a backup.

Right now my main limitation is that my poor quad-core Synology is running a little hot on the CPU front, so some of those RPOs are farther apart than I’d like.


  1. Where is the server located? Are you looking at an intranet location or internet?

  2. Is the client connected to the VPN concentrator via IPv4 or IPv6?

  3. Is the VPN concentrator connected to the server via IPv4 or IPv6?

What you’re asking may be possible depending on those answers.


Going to summarize a lot of comments here with one - VPNs are very powerful tools that can do lots of things. Traffic can be configured to go in several directions. We really have to know more about your use case to advise you as to what config you might need.

Going to just write a ton of words here - OP, let me know if any of this sounds like what you’re trying to do, and I can try to give a better explanation (and if anything was confusing, let me know too).

“VPN that uses the client’s IP when sending data out of the VPN server”

That’s the specific sentence I’m getting caught on myself. It could mean several things, some of which have been mentioned, some haven’t.

  • Site-to-site VPN: Two (generally) fixed devices operate a VPN connection between them and use some form of non-NAT routing, so that every child device behind each site sees its “real” counterpart without getting NATed. However, NAT is typically still configured for IPv4 facing the internet, so each device shows an internet “exit IP” matching the site it’s on. Typically, the device with the most powerful / most stable / most central / least restricted connection would be the receiver, while the other nodes would be initiators pointed at that receiver. In larger maps, you could build multiple hub/spoke systems as needed.

  • A possible sub-type of site-to-site: one site tunnels all of its data over to the second site, and the second site is the one that provides NAT. This is similar in nature to how GL.iNet routers operate their VPN switch, but IMHO more powerful if you have greater control over the server compared to subscribing to a public VPN service. Notably for your example, the internet NAT exit device can be either the initiator or the receiver.

  • Normal VPN but without NAT: this is another possible reading of what you’ve written, with one word adjusted - it operates the VPN but preserves the client IP as it enters the network. This is how most corporate remote access VPNs operate, since it would be overloaded and pointless to have every remote worker come from a small pool of NATed IP addresses when you don’t even need a NAT engine for intranet traffic.

My remote access VPN for my home lab is of the latter type, and I have a few site-to-site connections floating around with various protocols.

For mine, I have two VPN servers: one internal server that works tightly with my home firewall, and one remote server running inside a VPS. Both the firewall and the VPS apply NAT rules to egress traffic, but internal-bound traffic is not NATed and is simply passed along the site-to-site connections to wherever it needs to go. My home-side remote access VPN is simply a “dumb” VPN server that has the VPN protocol port forwarded back to it and passes almost-raw traffic to the firewall for processing.

For routing, since each VPN requires its own subnet, I use FRR with a mixture of OSPF and iBGP (depending on how old the link is).

For VPN protocols, I’m currently using strongSwan for IPsec, but it’s really easy to slap OpenVPN onto that routing stack I already set up and have the routes propagate inward.


Any VPN that terminates on the firewall (be it site-to-site or remote access / “road warrior”) may be affected, but not all will be - some VPN tech uses ciphers that are efficient without AES acceleration. The notably affected VPNs are OpenVPN and IPsec/strongSwan.

If the VPN doesn’t terminate on the firewall, you’re in the clear. So even if your work provided an OpenVPN client that benefits from AES-NI, the tunnel runs between your work laptop and the work server, so the firewall is not part of the encryption pipeline.

Another affected technology may be some (reverse) proxies and web servers. This would be software running on the firewall like haproxy, nginx, or squid. See https://serverfault.com/a/729735 for one example. In this scenario, you’d be running one of these bits of software on the firewall itself and either exposing an internal service (such as Nextcloud) to the internet, or, in the case of squid, doing some HTTPS filtering for a tightly locked-down network. However, if you just port forwarded 443/TCP to your Nextcloud server (as an example), your Nextcloud server would be the one doing the AES encrypt/decrypt. As with VPNs, what matters is where the AES encrypt/decrypt actually happens.

Personally, I’d recommend you get AES-NI if you can. It makes running a personal VPN easier down the road if you think you might want to go that route. But if you know for sure you won’t need any of the tech I mentioned (including an HTTPS web proxy on the firewall), you won’t miss it if it’s not there.

Edit: I don’t know what processors you’re looking at that are missing AES-NI, but I think you have to go back to some really, really old x86 tech to be missing it. Those chips (especially AMD FX / Opteron parts from the Bulldozer/Piledriver era) may have other performance concerns. Specifically for those old AMD processors (not Ryzen/Epyc), I’d just hard pass if you need something that runs reasonably fast - they’re simply too inefficient.
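
If you want to check a specific box, the quickest way on Linux is to look for the aes CPU flag - grep aes /proc/cpuinfo does it, or as a small Python sketch (Linux only, assumes /proc/cpuinfo exists):

    # Look for the "aes" flag among the CPU flags in /proc/cpuinfo.
    with open("/proc/cpuinfo") as f:
        has_aesni = any("aes" in line.split()
                        for line in f if line.startswith("flags"))

    print("AES-NI supported" if has_aesni else "no AES-NI flag found")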


If you’re ok leaving a monitor plugged in (it can be off), my go-to is Parsec. Bonus points: it works without needing a VPN (it uses UDP NAT hole punching, like Chrome Remote Desktop). If you’ll be far, far away from home, Chrome Remote Desktop tends to be slightly more reliable over high latency than Parsec for me - but that could just be because I tuned mine for super low latency when nearby.

Good news is, you can run both at the same time and see how they treat ya! (Both are free for base use, but Parsec has a handful of premium features you can pay for if you like it.) I have Parsec, CRD, RDP, and SSH all set up in various forms to get back “home” when I’m not.


It’s probably also way cheaper to do it that way. As far as I could tell when I checked in on it some time ago, most of the content goes through a Cloudflare proxy straight to a GCP S3-compatible bucket.



There are still tons of reasons to have redundant data paths down to the switch level.

At the enterprise level, we assume even the switch can fail. As an additional note, only some smart/managed switches (typically the ones with removable modules that cost five to six figures USD per chassis) can run a firmware upgrade without interrupting network traffic.

So both for the failure case and for staying online during an upgrade procedure, you absolutely want two switches if that’s your jam.

On my home system, I actually have four core switches: a Catalyst 3750X stack of two nodes for L3 and 1Gb/s switching, and then all my “fast stuff” is connected to a pair of ES-16-XG switches, each of which has a port channel of two 10G DACs back to the Catalyst stack, with one leg to each stack member.

To the point about NICs going bad - you’re right, it’s infrequent but it can happen, especially with consumer hardware rather than enterprise hardware. Also, at the 10G fiber level, though still infrequent, you see SFPs and DACs go bad at a higher rate than NICs.