• 1 Post
  • 19 Comments
Joined 1Y ago
Cake day: Jun 13, 2023


Did it sound cold? I didn’t mean it that way; I just meant to actually answer the question from my PoV. Just for the record, I also did not downvote you.

So yeah, use whatever footgun you prefer, I don’t judge :)


Yeah, ultimately every container has its own veth interface, so you can do shaping using tc on those.

Edit: I had a look at docker-tc. It does what you want, BUT: unless your use case is complex, I would really think twice about running a tool written in bash that has access to the docker socket (i.e. trivial node escape) and runs with the NET_ADMIN capability.

That’s a lot of power to do something you can also do with a few lines of code executed after you start the container. Again, provided that your use case is not complex.
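
For example, something along these lines (a hypothetical Python sketch, assuming Linux with root, the docker CLI and iproute2 installed; the helper names are made up) is roughly what I mean by “a few lines of code”:

```python
# Hypothetical sketch (not how docker-tc does it): find the host-side veth of a
# running container and throttle it with a token-bucket qdisc via tc.
# Assumes Linux, root, the docker CLI and iproute2 installed.
import pathlib
import subprocess
import sys

def host_veth_for(container: str) -> str:
    # ifindex of the peer (host-side) interface, read from inside the container
    peer_index = subprocess.check_output(
        ["docker", "exec", container, "cat", "/sys/class/net/eth0/iflink"],
        text=True).strip()
    # find the host interface whose ifindex matches that peer index
    for iface in pathlib.Path("/sys/class/net").iterdir():
        idx = iface / "ifindex"
        if idx.exists() and idx.read_text().strip() == peer_index:
            return iface.name
    raise RuntimeError(f"no host veth found for {container}")

def shape(container: str, rate: str = "10mbit") -> None:
    veth = host_veth_for(container)
    # a root qdisc on the host-side veth shapes traffic flowing towards the
    # container; for the opposite direction you would need ingress policing
    # or an ifb device instead
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", veth, "root",
         "tbf", "rate", rate, "burst", "32kbit", "latency", "400ms"],
        check=True)

if __name__ == "__main__":
    shape(sys.argv[1])
```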


Cgroups have the ability to limit TCP and total network bandwidth. I don’t know off the top of my head whether this can be configured at runtime (i.e. via docker run), but you can specify the cgroup parent to use at runtime. This means you can pre-create the cgroup, set the limits, and start the container with that parent cgroup.

You can also run a hook script after launch that adds the PID to a cgroup every time the container is launched (a rough sketch follows below), or possibly use tc.

I am not aware of the ability to only limit uplink bandwidth, but I have not researched this.
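
For the hook-script route, a minimal sketch of the mechanics (Python, cgroup v2, run as root; which limit files you actually set on the cgroup is up to you, and the network part is exactly what I have not verified) could look like this:

```python
# Hedged sketch of the post-launch hook idea (cgroup v2, run as root).
# It only shows the PID migration; configuring the cgroup's limit files
# is left to you.
import pathlib
import subprocess
import sys

def move_to_cgroup(container: str, cgroup_name: str) -> None:
    # main PID of the container as seen from the host
    pid = subprocess.check_output(
        ["docker", "inspect", "-f", "{{.State.Pid}}", container],
        text=True).strip()
    cgroup = pathlib.Path("/sys/fs/cgroup") / cgroup_name
    cgroup.mkdir(parents=True, exist_ok=True)
    # writing a PID into cgroup.procs migrates that process (and its future
    # children, but not already-running ones) into the cgroup
    (cgroup / "cgroup.procs").write_text(pid)

if __name__ == "__main__":
    move_to_cgroup(sys.argv[1], sys.argv[2])
```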


I think k8s is a different beast that requires way more domain-specific knowledge on top of basic server/Linux administration. I do run it, but it’s an evolution of a need, specifically when you want to manage a fleet of machines running containers.


Because the lxc way is inherently different from the docker/podman way. It’s aimed at running full systems rather than mono-process containers. It has its use cases, but they are not as common IMHO.



I would say Docker. There is no substantial benefit in running podman, while docker is a widely adopted tool (which means more tooling in the ecosystem, easier to find answers to questions etc.). The difference is not huge tbh, and some time ago the biggest advantage for podman was being able to run rootless, while docker was stuck with a root daemon. This is not the case anymore (docker can run rootless), so I would say unless you have some specific argument to use podman, stick with docker.


I personally package the files in a scratch or distroless image and use https://github.com/static-web-server/static-web-server, which is a tiny server written in Rust. It is very similar to nginx or httpd, but the static nature of the binary removes clutter, reduces the attack surface (because you can use smaller images) and reduces the size of the image.


OK, but how do you solve the problem? Trusting an image is not so different from downloading a random deb and installing it, which maybe configures a systemd unit as well. If not containers, you still have to run the application somehow.

Ultimately my point is that containers allow you to do things securely, exactly like other tools. You don’t even have to trust the image, you can build your own. In fact, for almost every tool I add to my lab, I end up opening a PR for a hardened image and a tighter helm chart.

In any case, I would not expose such an application outside of a VPN, which is a blanket security practice that most selfhosters should follow for most of their services…


They are not as secure, because there are fewer controls for ENV variables. Anybody in the same PID namespace can cat /proc/PID/environ and read them. For files (say, a config file) you can use mount namespaces and regular file permissions to restrict access.

Of course you can mess up a secrets implementation, but a file chmod’d to 600 and owned by another user requires some sort of arbitrary-read vulnerability or privilege escalation to access (assuming another application on the same host is compromised, for example). If you get low-privileged access to the host, chances are you can dump the ENV for all processes.

Security-wise, ENV variables are worse compared to just a mounted config file, for example.
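
To make it concrete, this is the kind of trivial sweep (a hypothetical Python sketch) that any process running as the same user, or as root, can run against /proc, which is why ENV is a weak place for secrets:

```python
# Hypothetical demo of why ENV is weak: any process running as the same user
# (or as root) can walk /proc and read other processes' environments, while a
# 0600 config file owned by another user needs an actual vulnerability to read.
import pathlib

def environ_of(pid_dir: pathlib.Path) -> dict[str, str]:
    raw = (pid_dir / "environ").read_bytes().decode(errors="replace")
    # environ is a NUL-separated list of KEY=VALUE entries
    return dict(item.split("=", 1) for item in raw.split("\0") if "=" in item)

for entry in pathlib.Path("/proc").iterdir():
    if not entry.name.isdigit():
        continue
    try:
        env = environ_of(entry)
    except OSError:
        continue  # no permission, or the process is gone
    for key in env:
        if any(word in key.upper() for word in ("PASSWORD", "SECRET", "TOKEN")):
            print(f"pid {entry.name}: {key} is exposed")
```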


The problem is in fact in the applications. If these support loading secrets from a file, then the problem does not exist. Even with the weak secrets implementation in kubernetes, it is still far better than ENV variables.

The disappointing thing is that in many “selfhost” apps, the credentials to specify are often either db credentials or some sort of initial password, which could totally be read from a file or generated randomly at first run.

I agree that the issue is information disclosure, but the problem is that ENV variables are stored in memory, are accessible to many other processes on the same system, etc. They are just not a good way to store sensitive information.
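
As an example of how cheap the fix is on the application side, here is a hedged sketch of the usual “*_FILE” convention, with a random first-run fallback (names like DB_PASSWORD_FILE are purely illustrative):

```python
# Hedged sketch of the "*_FILE" convention on the application side; the names
# DB_PASSWORD / DB_PASSWORD_FILE are just illustrative.
import os
import pathlib
import secrets

def load_secret(name: str, generate_if_missing: bool = False) -> str:
    file_path = os.environ.get(f"{name}_FILE")   # e.g. DB_PASSWORD_FILE=/run/secrets/db_password
    if file_path:
        return pathlib.Path(file_path).read_text().strip()
    value = os.environ.get(name)                 # legacy fallback: plain ENV variable
    if value:
        return value
    if generate_if_missing:
        return secrets.token_urlsafe(32)         # random initial password at first run
    raise RuntimeError(f"secret {name} not provided")

db_password = load_secret("DB_PASSWORD", generate_if_missing=True)
```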


In general, a mounted file would be better, because it is easier to restrict access to filesystem objects both via permissions and via namespacing. It is also more future-proof, as the actual ideal solution is to use a secret manager like Vault (overkill for many hobbyists), which can render secrets to files (or to ENV, but the same security issue applies there).


Absolutely not. Many applications used ENV variables for sensitive stuff even before containers. Let’s remember that the vulnerability here is being able to execute phpinfo remotely.

Containerization can do good for security, in general.


The only thing that makes this case worse in docker is that more info is in ENV variables. The vulnerability has nothing to do with containers though, and using ENV variables to provide sensitive data is in general a bad decision, since they can be leaked to any process with /proc access.

Unfortunately, ENV is still a common way to pass data to applications inside containers, but it is not in any way a requirement imposed by the tech.


Shared folders are an extremely rare use case for me (in fact, I generally don’t even use folders, as I always rely on search), but the way I achieve it is by creating collections for the people I want to share with. For example, I have a “sister” collection in which I put the stuff I want to share with her, and I give her access. Each user can also have their own collection that they manage and give other people access to (so far, this has never been the case for me).


Would you be able to tell me a little more about your work? Also, what role/path in security would you recommend for a Cloud admin/System Admin?

Well, I started as an IT ops person. I got lucky because my first job was already in a fairly modern environment, and I got introduced to k8s, containers and Linux administration (we were running k8s on bare metal). Slowly I moved more and more towards security, specifically infrastructure/platform security, which to be honest is not too far from a regular Cloud/System admin. However, the big difference is in mindset and priorities, which shift from availability to mostly confidentiality and integrity. My job essentially consists of supporting the security of whatever Kubernetes clusters we run, both managed and on bare metal, with the usual sprinkle of network security in the middle and a strong focus on secure computation (i.e. container security). The actual work can range from research and experimentation, to concrete setup or development of new tooling, to developing standards and guidelines.

(Cloud) Security Engineering seems an obvious path for a cloud/system admin, and I don’t think it’s extremely hard to build the necessary security knowledge on top of a solid engineering background!


Sorry, I am not super clear on what you are asking.

You have gluetun, which is used to connect to NordVPN. Then you have wireguard, to which you connect from somewhere, and you essentially want:

client -> wireguard -> wireguard container -> gluetun container -> internet?


I work in security, so there is no real devops/sysadmin prospect for me. That said, I use ansible and (mostly) terraform professionally and for my lab, so that’s a good idea nevertheless. I don’t have much BSD experience; what do you think are the key reasons to go that route instead of Linux?


Choosing a hypervisor
Hello everyone! During one of those illuminated evenings, I got the idea to move my small server on Scaleway to a more powerful server on Hetzner. If I make the move, I am thinking of splitting the server into various VMs to host services that belong to different trust boundaries, for example:

  • A Lemmy/writefreely instance
  • Vaultwarden/Gitea
  • A Wireguard tunnel to my home infrastructure
  • Blogs and other convenience services

In order to achieve the best level of separation, I was thinking of using VMs. My default choice would be Proxmox, because I used it in the past and because I generally trust it; however, I am trying to evaluate multiple options, and maybe someone has good or better experiences to share. Other options I thought about are:

  • Run everything in Docker. I am going to do this regardless, but Docker escapes are always possible, especially with public-facing images that I did not write myself and/or that require a host volume.
  • KVM directly? I am OK even without a GUI, to be honest. I am not aware of an ansible module or, even better, a Terraform provider for this, which would be great. (EDIT: I found https://registry.terraform.io/providers/dmacvicar/libvirt/0.7.1 which seems awesome!)
  • ESXi? I have no experience with this solution.

Any idea or recommendation?

For Kubernetes you can use Velero. I tried it, but I didn’t like it (overly complex for my use case), so I wrote my own tool.

Essentially the strategy for me is fairly straightforward, but it depends on the data you have.

I have mostly 2 types:

  • manifests and configuration. This I have all in git (as I am using flux).
  • persistent volumes. I use openEBS, but for a low-resource cluster I use host volumes only. For these I have written my own tool that simply runs as a daemonset with the whole root of the host mounted read-only and the DAC_READ_SEARCH capability, queries the API for volumes and backs up the whole PV to Backblaze using restic (a rough sketch is below). Incidentally, this is also the same way I do all my other backups outside k8s (i.e. borg or restic to b2).

I chose b2 mostly for the price, but any S3-compatible storage will do. Since everything I upload there is encrypted anyway, I don’t need to worry about the privacy implications of a third party potentially having access to my data.
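
For what it’s worth, the core of my tool boils down to something like this (a hypothetical sketch, assuming the official kubernetes Python client, restic on PATH, the host root mounted read-only at /host, and the restic repository/credentials passed via environment variables):

```python
# Hypothetical sketch of the daemonset's core loop. Assumes the official
# kubernetes Python client, restic on PATH, the host filesystem mounted
# read-only at /host, and RESTIC_REPOSITORY / credentials passed via env.
import os
import subprocess

from kubernetes import client, config

config.load_incluster_config()

for pv in client.CoreV1Api().list_persistent_volume().items:
    source = pv.spec.host_path or pv.spec.local
    if source is None:
        continue                          # only hostPath/local volumes handled here
    path = "/host" + source.path          # host root is mounted read-only at /host
    # node-affinity filtering omitted for brevity: a real daemonset would only
    # back up the volumes that live on its own node
    subprocess.run(
        ["restic", "-r", os.environ["RESTIC_REPOSITORY"],
         "backup", path, "--tag", pv.metadata.name],
        check=True)
```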