𝘋𝘪𝘳𝘬

Somewhere between Linux woes, gaming, open source, 3D printing, recreational coding, and occasional ranting.

🔗 Me, but elsewhere

🇬🇧 / 🇩🇪

  • 3 Posts
  • 30 Comments
Joined 2Y ago
Cake day: Jun 09, 2023


There – of course – won’t be a singular official source stating “Hey guys, we’re open core now”. You need to put this together bit-by-bit.

Here are some links for research

  • Official statement on the takeover
  • Gitea Enterprise/Gitea Cloud hiding features behind a cloud solution and a paywall which makes Gitea itself open-core
  • Open Letter to the new Gitea owners with a summary and a reply, signed by a lot of Gitea devs and FOSS scene people.
  • As @gratux@lemmy.blahaj.zone mentioned: A fork under the name Forgejo was created because the new Gitea owners did not care much about the concerns. (It started as a soft fork, but with 10.0 it became a hard fork.)
  • The Gitea owners made it mandatory to remove copyright headers and set the corporation as the copyright holder. Here, here, and here

Never heard of 99% of the ones in that list.

Also, Gitea should not be there. It is a corporate-owned open-core project that was hostilely taken away from the community.


Authentication with NPM is pretty straightforward. You basically just configure an ACL, add your users, and configure the proxy host to use that ACL.

I found this video explaining it: https://youtu.be/0CSvMUJEXIw?t=62

NPM unfortunately has a long-standing bug, dating back to 2020, that requires you to add a specific configuration when setting up the ACL as shown in the video.

At the point where he is on the “Access” tab with all the allow and deny entries, you need to add an allow entry with 0.0.0.0/0 as the IP address.

Other than that, the setup shown in the video works in the most recent version.
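
For the curious: under the hood NPM just renders nginx directives, so the ACL from the video roughly boils down to something like the snippet below. This is only an illustration – the file NPM actually generates looks different, and the htpasswd path and upstream address here are made up.

```nginx
location / {
    # "Satisfy All": a request must pass BOTH the IP rules AND basic auth.
    satisfy all;

    # The workaround entry: allow every source IP so the IP check always
    # passes and only the basic auth check remains.
    allow 0.0.0.0/0;
    deny  all;

    # The users added to the ACL end up in an htpasswd-style file.
    auth_basic           "Restricted";
    auth_basic_user_file /data/access/1;

    proxy_pass http://myserver.local:12345;
}
```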


How do you handle SSL certs and internet access in your setup?

I have NPM running as a “gateway” between my LAN and the Internet and let it handle all of my certificates using the built-in Let’s Encrypt features. None of my hosted applications know anything about certificates in their Docker containers.

As for your questions:

  1. You can and should – it makes managing the applications much easier. You should use some kind of containerization. Subdomains and correct routing will be handled by the reverse proxy. You basically tell the proxy “when a request for foo.example.com comes in, forward it to myserver.local, port 12345”, where 12345 is the port the container communicates over (see the sketch after this list).
  2. 100% depends on your use case. I purchased a domain because I host stuff for external access, too. I just have my setup report its external IP address to my domain provider. It basically is a dynamic DNS service, but with a “real domain”. If you plan to just host for yourself and your friends, some generic subdomain from a dynamic DNS service would do the trick. (Using NPM’s Let’s Encrypt configuration will work with that, too.)
  3. You can’t. Any geo-restriction can be circumvented. If you want to restrict access, use HTTP basic auth. You can set that up using NPM, too. Users then authenticate against NPM, and only when that succeeds is the request routed to the actual content.
  4. You might want to look into Cloudflare Tunnel to hide your real IP address and protect against DDoS attacks.
  5. No 🙂
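
For point 1, here is a minimal sketch of what that rule means in plain nginx terms. NPM builds an equivalent config for you through its web UI, so you never write this by hand – the hostnames, port, and certificate paths are just placeholders:

```nginx
server {
    listen 443 ssl;
    server_name foo.example.com;   # the subdomain the request comes in on

    # Certificates are handled centrally by the proxy (Let's Encrypt in NPM).
    ssl_certificate     /etc/letsencrypt/live/foo.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/foo.example.com/privkey.pem;

    location / {
        # Forward everything to the container behind the proxy.
        proxy_pass http://myserver.local:12345;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```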


What do you think the “v” in “vps” stands for?


You don’t need to freeze the state of the RAM, you freeze the whole virtual machine - including the virtual RAM.


If it is in the RAM, they can read it. Since it is a virtual server, they can freeze and clone the current state, connect to that copy, and read all data that is currently decrypted/opened – without you even knowing.
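
As a hedged illustration of how little effort that is for the provider: on a KVM/libvirt host something like the following takes a memory snapshot of a guest (the domain name and output path are made up):

```sh
# Dump the guest's RAM into an ELF core file; with --live the VM keeps
# running and the guest has no reliable way to notice.
virsh dump --memory-only --live --format elf my-vps /tmp/my-vps-memory.core

# That file now contains everything that was in the VM's memory at that
# moment: disk encryption keys, open documents, session tokens, and so on.
```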


Does it support logging in to YouTube to have access to purchased content, premium content, subscribers-only content, etc?



I remember ZoneMinder.

A full-featured, open source, state-of-the-art video surveillance software system.

https://zoneminder.com/

Is this still a thing nowadays?



It’s absurdly complex and annoying and lacks proper documentation.

There currently is no sane way to deploy it via Docker since it needs half a dozen different containers, volumes, and networks to barely work at all - overwriting/ruining your already existing setup while doing so.

The cleanest approach would likely be to set up a VM, install Docker inside it, and let Lemmy do whatever it wants in there.



“Temporary workaround applications/scripts becoming de-facto standards” sounds familiar. They disabled loading script files in PowerShell, but you can still copy & paste the file’s content …

People have no idea how absurd IT in corporations is.


Big international corporation, IT security hired through personal connections instead of skill, IT security people who never worked in daily business.

The fun thing is that they refer to the NIST guidelines. Which is even funnier, because NIST says 12 characters are enough, user-generated passwords of 8 characters are fine, no complexity rules, and password changes only “when necessary” (i.e. after security breaches).

https://sprinto.com/blog/nist-password-guidelines ff.


They are so heavy on security I have a Citrix environment that takes me 3 logins

My daily routine:

  1. Take laptop out of locked shelf
  2. Start Laptop and enter boot password
  3. Enter Bitlocker password
  4. Enter username (not saved) and password
  5. Open Citrix website and login with different username and password
  6. Enter MFA token to access said website
  7. Start server connection
  8. Enter different username/password (not saved) to access server
  9. Enter different MFA token for the server login
  10. Start the business-specific application with a 3rd set of different login data (not saved either)

They also plan to make MFA mandatory for the laptop login.

Passwords need to be at least 15 characters long for laptops, 30 for servers, and 10 for the business-specific application. All need to contain uppercase, lowercase, numbers, and special characters, need to be changed every 60 days (for the server login), and cannot match any of the last 30 passwords.


Any small Linux distro would do. Just install Docker and maybe Portainer (as a container itself, of course) if you want a web UI.
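
For reference, the usual steps look roughly like this (the convenience script and image tag are the commonly documented ones, double-check the official docs for your distro):

```sh
# Install Docker via the official convenience script
curl -fsSL https://get.docker.com | sh

# Run Portainer CE itself as a container, with a persistent data volume
docker volume create portainer_data
docker run -d \
  --name portainer \
  --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```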


Yes, Freenginx should/would/will be a drop-in replacement, at least in the beginning. We’ll see how this works out over time. Forks made purely out of frustration have never lived long enough to gain a user base and attract devs. But it’s an “anti corporate bullshit” fork, and that alone puts it on my watchlist.


Thanks, this actually looks pretty great. From the description it’s basically BusyBox httpd but with Nginx’s stability, production-readiness, and functionality. It also seems to be actively developed.


Best way to dockerize a static website?
I'm currently researching the best method for running a static website from Docker. The site consists of a single HTML file, a bunch of CSS files, and a few JS files. Nothing needs to be preprocessed on the server side. The website uses JS to request some JSON files, though. Handling of the files is done via client-side JS; the server only needs to serve the files.

The website is intended to be used as a self-hosted web application and is quite niche, so there won't be much load or many concurrent users.

I boiled it down to the following options:

  1. BusyBox in a self-made Docker container, manually running `httpd`, or [The smallest Docker image ...](https://lipanski.com/posts/smallest-docker-image-static-website)
  2. `php:latest` (ignoring the fact that the built-in webserver is meant for development and not for production)
  3. Nginx serving the files ([but this](https://thenewstack.io/freenginx-a-fork-of-nginx/))

For all of the variants I found information online. From the options I found I actually prefer the BusyBox route because it seems the cleanest with the least amount of overhead (I just need to serve the files, the rest is done on the client).

Do you have any other ideas? How do you host static content?
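
For option 1, this is roughly what the self-made BusyBox container could look like (port and user name are arbitrary choices, loosely following the linked article):

```dockerfile
FROM busybox:stable

# Run as an unprivileged user instead of root.
RUN adduser -D -u 1000 static
USER static
WORKDIR /home/static

# Copy the HTML/CSS/JS/JSON files into the image.
COPY . .

# -f keeps httpd in the foreground (required in a container), -p sets the port.
CMD ["busybox", "httpd", "-f", "-p", "3000"]
```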


How do others handle situations like this?

A company I worked for had an external storage drive with the needed capacity stored in a safe deposit locker and every Friday someone drove to the bank, got the drive, drove back to the office, performed the backup and brought the drive back to the bank.


Imagine you’re so much against dubbed media that you pay a shady site 60 dollars a year to give you pirated and unofficial subtitles of questionable quality, some of which are generated by an AI.


How do YOU create your Docker images?
Currently I’m planning to dockerize some web applications, but I didn’t find a reasonably easy way to create the images to be hosted in my repository so I can pull them on my server.

What I currently have is:

  1. A local computer with a directory where the application that I want to dockerize is located
  2. A “docker server” running Portainer without shell/ssh access
  3. A place where I can upload/host the Docker images and where I can pull the images from on the “Docker server”
  4. Basic knowledge on how to write the needed `Dockerfile`

What I now need is a sane way to build the images WITHOUT setting up a fully featured Docker environment on the local computer. Ideally something where I can build the images and upload them, but without *that something* “littering Docker-related files all over my system”. Something like a VM that resets on every start, maybe? So … build the image, upload it to the repository, close the terminal window, and forget that anything ever happened.

What is YOUR solution to create and upload Docker images in a clean and sane way?
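
Whatever tool ends up doing the work, the result has to be equivalent to this standard build-and-push flow (registry URL and image name are placeholders):

```sh
# Build the image from the Dockerfile in the application directory ...
docker build -t registry.example.com/myapp:1.0 .

# ... and push it to the repository that the "Docker server" pulls from.
docker push registry.example.com/myapp:1.0
```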

You need to reimplement TOTP on a per-service basis. There are hardware tokens available, so you could use one of them (Token2, maybe?) on the user side instead. You still need to allow custom secrets for your services so you can enter the token ID there. Are you sure you meant a (TOTP) token and not single sign-on?
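
To illustrate: TOTP is nothing more than a shared secret plus the current time, so any service or token that knows the secret produces the same codes. A quick sketch with oathtool (the Base32 secret is a made-up test value):

```sh
# Generate the current 6-digit TOTP code from a Base32-encoded shared secret.
# A hardware token programmed with the same secret shows the same code.
oathtool --totp --base32 "JBSWY3DPEHPK3PXP"
```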


Have a look at Forgejo, a soft fork of Gitea that is run by a nonprofit organization, whereas Gitea itself is owned by a for-profit company.

https://forgejo.org/

It needs very few system resources and still gives you all the common features you know from commercial Git hosting providers.

And yes, you can mirror existing Git repos using a web UI.


Use open-source software! Do not rely on “someone else’s computer”. Build your own locally hosted cloud! If you can use open-source hardware when doing so: awesome. If not, at least make sure that everything needed to run the system is open.


NPM is such a blessing! It works absolutely flawlessly!


I’ve been running it for a few weeks now and have already moved all my publicly hosted repositories there.


I STILL can’t get my own instance of Lemmy running. The instructions are unclear. They have bugs in their docker-compose.yml file. It’s really bad.

It’s a whole mess, yes. Also they want to create random containers and random volumes all over the place with random IDs for names and by default suggest messing with upstream files and configuration before creating the containers.

I hope the devs will one day provide a proper container with environment variables for configuration.


and a Mastodon instance

An actual Mastodon instance, or just something to interact with Mastodon-compatible services? For the latter, maybe have a look at GoToSocial, especially when you host it as a single-user instance just for you.

It needs very few resources and fits in a single Docker container with a single volume. All you need to keep in mind is that it is alpha software and not every feature is fully supported yet. You also need a client, because GoToSocial is just a server/back-end.
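
A rough sketch of how small that setup is – the image name, port, and GTS_* variables are how I remember them from the docs, so verify against the official example compose file:

```yaml
services:
  gotosocial:
    image: superseriousbusiness/gotosocial:latest
    environment:
      GTS_HOST: social.example.com       # the domain the instance is served on
      GTS_DB_TYPE: sqlite                # no separate database container needed
      GTS_DB_ADDRESS: /gotosocial/storage/sqlite.db
    ports:
      - "8080:8080"                      # put your reverse proxy in front of this
    volumes:
      - gts_data:/gotosocial/storage     # the single volume holding DB and media

volumes:
  gts_data:
```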

how much time investment do you think is needed to keep everything running smoothly

For GoToSocial it took me around a weekend, including learning how to use Docker and trying things out a lot.


[Rant] … all I want is dropping some files into a directory and call it a day
I can't help but feel overwhelmed by the sheer complexity of self-hosting modern web applications (if you look under the surface!)

Most modern web applications are designed to basically run standalone on a server. Integration into an existing environment is a real challenge, if not impossible. They often come with their own set of requirements and dependencies that don't easily align with an established infrastructure.

“So you have an already running and fully configured web server? Too bad for you, bind me to port 443 or GTFO. Reverse-proxying by subdomain? Never heard of that. I won’t work. Deal with it. Oh, and your TLS certificates? Screw them, I ship my own!”

Attempting to merge everything together requires meticulous planning, extensive configuration, and often annoying development work and finding workarounds. Modern web applications, with their elusive promises of flexibility and power, have instead become a source of maddening frustration when they are not the only application being served.

My frustration about this is real. Self-hosting modern web applications is an uphill battle, not only in terms of technology but also when it comes to setting up the hosting environment.

I just want to drop some PHP files into a directory and call it a day. A PHP interpreter and a simple HTTP server – that’s all I want to need for hosting my applications.
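
For comparison, this is the kind of minimal setup meant here – the whole “hosting environment” in two lines, using the official php image variant that ships Apache (the tag is an assumption, check Docker Hub for the current ones):

```dockerfile
# A PHP interpreter and a simple HTTP server, nothing else.
FROM php:apache

# Drop the PHP files into the web root and call it a day.
COPY . /var/www/html/
```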