• 8 Posts
  • 35 Comments
Joined 1Y ago
Cake day: Jun 27, 2023

SOLVED - Vikunja Email Config Help
Does anyone have a working Vikunja instance sending emails through Gmail? I’ve enabled the mailer options and entered the info, but the test_email function times out. I’ve checked all the information and even tried different ports. Honestly, at this point it doesn’t have to be Gmail (it’s just the workflow I’m most familiar with); I just need my Vikunja instance to send emails.

Edit: I was able to solve my issue. You can only create Gmail app passwords if you have 2FA enabled. I also had the wrong address (it’s smtp.gmail.com, not smtp.google.com).
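For anyone landing here later, this is roughly what the working setup looks like as container environment variables. It’s a sketch, not a copy-paste config: Vikunja maps its config keys to VIKUNJA_*-prefixed env vars, but verify the exact names, image, and any extra options against the Vikunja docs for your version.

```sh
# Minimal sketch (assumes Vikunja's config-to-env mapping; verify names against the docs).
# Gmail app passwords only exist once 2FA is enabled on the Google account,
# and the SMTP host is smtp.gmail.com on port 587 (STARTTLS).
docker run -d --name vikunja \
  -e VIKUNJA_MAILER_ENABLED=true \
  -e VIKUNJA_MAILER_HOST=smtp.gmail.com \
  -e VIKUNJA_MAILER_PORT=587 \
  -e VIKUNJA_MAILER_USERNAME=you@gmail.com \
  -e VIKUNJA_MAILER_PASSWORD='sixteen-char-app-password' \
  -e VIKUNJA_MAILER_FROMEMAIL=you@gmail.com \
  vikunja/vikunja
```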

I don’t have an answer for you, but I have a question instead. When I attempted to set up a swarm, my biggest challenge was shared storage. I was trying to run the swarm with shared storage on a NAS and literally could not run apps; I ran into a ton of problems running stacks (for the NAS share I tried both SMB and NFS). How did you get around this problem?



Love that username tho!! Yeah, might just do RSS. I already run FreshRSS and its ability to filter stuff would probably come in handy too.
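As a concrete example of the RSS route, GitHub exposes a releases feed for every repository that can be added straight to FreshRSS (the repos below are just examples):

```sh
# Every GitHub repo has a releases feed at /releases.atom; add these URLs to FreshRSS.
# Quick sanity check that a feed resolves before subscribing:
curl -s https://github.com/FreshRSS/FreshRSS/releases.atom | head -n 5
curl -s https://github.com/photoprism/photoprism/releases.atom | head -n 5
```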


This sounds like the simplest and most effective solution. Thanks!


Notification when new app versions are released
TL;DR: is there an app that can alert me when a new version of some other app is available?

I have about 12-15 services (FreshRSS, Heimdall, PhotoPrism, Wordpress, etc.) running with Docker Compose, spread across 4 hosts. Through my self-hosting journey I’ve been burned a few times using “latest” images, so I now pin app image versions within compose. The problem then becomes that every couple of weeks I have to go out to the different GitHub repos, Docker Hub, etc. to see if a new update for each service is available. That gets a bit tedious with 12-15 services, so I need a centralized and more efficient way of keeping up.

Is there some type of app that can track whether an app/service has a new version available? Ideally it would send me some type of notification, be self-hostable, and ideally not be Portainer.

Copy/paste from another comment

“Just to be clear I just need to track my sales/revenue (even if input is manual) and track expenses (bonus if I could upload a picture of a receipt).

I don’t need to actually send an invoice (I do this straight from my website and it’s a seamless integration so not looking to reinvent this wheel, yet!)

Given the above, is InvoiceNinja still a good candidate?”


Simple sales and expense tracker software
I’m running a very small business and now have a need to start tracking my sales and expenses for the business. I’m not looking for a full-blown QuickBooks type of thing, but if that’s all that’s available then no big deal, I can just use what I need and ignore the rest. Obviously, I have to self-host this. Hardware available varies, but I have several Raspberry Pis lying around not doing much (3, 4 & 5). Ideally dockerized. My research turned up GnuCash, Akaunting and Odoo. What does this awesome community recommend?

P.S. I tried spinning up Akaunting on an RPi 5 and encountered a breaking bug (already reported to their GitHub).

Well that’s kinda why I came here to the greater community as I wasn’t really sure if there would be any performance gains or other upsides I’m not aware of. Based on general feedback, it appears that there’s no clear upside to incus.


Sanity Check. Docker vs Incus (LXD)
My apologies for the long post. I have a single server running Unraid with about 12 services (Pihole, Wordpress, Heimdall, Jellyfin, etc.), all running on Docker. This server also acts as my home lab NAS. Everything runs fine for my use case (at least for right now), but I’ve been thinking about creating some type of compute cluster for my services instead of a single server.

Recently, I saw a discussion about Proxmox, Docker, LXD and Incus where a user felt that Incus was a better option than all the others. Curious, I started reading up on Incus, playing around with it, and contemplating switching all my services from Docker on Unraid to an Incus cluster (I’m thinking around 3 nodes), leaving the Unraid server to act as a NAS only.

In a nutshell, Incus/LXD appears to me to be a lightweight version of a VM, in which case I would have to manually install and configure each service I have running. Based on the services I have running, that seems like a ton of work to switch to Incus when I could just do 3 physical servers (Debian) in Docker swarm mode, or a Proxmox cluster with 3 Debian VMs running Docker in swarm mode. If at all possible, I would like to keep my services containerized rather than run them in actual VMs.

What has me thinking that a switch to Incus may be worth it is performance. If the performance of my services is significantly better in Incus/LXD containers than in Docker, then that’s worth it to me. I have not been able to find any type of performance comparison between Incus/LXD and Docker, and I don’t know if there are other reasons for “Incus over Proxmox and Docker”, which is why I’m asking the greater community.

Here’s my question: based on your experience and taking into consideration my use case (home lab/home use), do the pros and cons of Incus outweigh accomplishing my goal with a cluster of standalone hosts or a Proxmox cluster?

It’s not working because it is against Cloudflare’s ToS unfortunately.

First I would ask, do you really have to make Jellyfin publicly accessible?

If yes, are you able to set up a VPN (e.g. WireGuard) and access Jellyfin through that instead?

If you don’t want to go the VPN route, then isolate the NPM and Jellyfin instances from the rest of your server infrastructure and run the setup you described (open ports directly to the NPM instance). That is how most people who don’t want to use Cloudflare are running public access to self-hosted services. But first, ask yourself the questions above.


Honestly what really matters (imo) is that you do offsite storage. Cloud, a friends house, your parents, your buddy’s NAS, whatever. Just get your data away from your “production/main” site.

For me, I chose cloud for two main reasons. First, convenience: I could use a tool to automate the process of moving data offsite in a reliable manner, keeping my offsite backups almost identical to my main array and making retrieval easy should I need it. Second, I don’t really have family or friends nearby and/or with the hardware to support my need for offsite storage.

There are lots of pros and cons to each, even before you add your specific needs and circumstances on top.

If you can use the additional drives later on in your main array, some other server, or for a different purpose, then it may be worthwhile exploring the drive route (my concern would be the ease of keeping the offsite data up to date with the main data). If you don’t like it for one reason or another, you can always repurpose the drives and give cloud storage a try. Again, the important thing is to do it in the first place (and encrypt it client side).


Well, here’s my very abbreviated conclusion (provided I remember the details correctly) from when I did the research about 3 months ago.

Wasabi - okay pricing, reliable, S3-compatible, no charges to retrieve my data, but you pay in 1 TB blocks (wasn’t a fan of this) and there’s a penalty for retrieving data before a “vesting” period (if I remember correctly, you had to leave the data there for 90 days before you could retrieve it at no cost; also not a big fan of this one).

AWS - I’m very familiar with it due to my job, pricing is largely influenced by access requirements (how often and how fast I want to retrieve my data), very reliable, S3, but it charges for everything (list, read, retrieve, etc.). This is the real killer and the largely unaccounted-for cost of AWS.

Backblaze - okay pricing, reliable, S3-compatible, free retrieval of data up to the same amount that you store with them (read below), pay by the gig (much more flexible than Wasabi). My heartburn with Backblaze was that retrieval stipulation. However, they have recently increased it to free retrieval of up to 3x what you store with them, which is super awesome and made my heartburn go away really quickly.

I actually chose Backblaze before the retrieval policy change and it has been rock solid from the start. It works seamlessly with the vast majority of utilities that can leverage S3-compatible storage. Pricing-wise, I honestly don’t think it’s that bad.

Hope this helps


I’m currently using Backblaze. I also researched Wasabi and AWS.


Can’t speak for those but I tried Kopia and it did the job okay. Ultimately tho I landed on rclone.


Lots of answers in the comments about this particular storage type/vendor. Regardless, to answer your original question: rclone, hands down. If you spend 30-60 minutes actually reading their documentation, you’re set and will understand so much more of what’s going on under the hood.


Fair point. I failed to mention features in my previous comment. Things like WHOIS privacy are essential to me, and I imagine they are for most of us (self-hosters).


In my opinion it really comes down to support, price (first year and renewal) and ethics.

For the ethics piece, if you think Google is an evil company then avoid Google Domains, as an example.


Did not know about this one! Just added it to my pi hole instance. Thank you!


When you created your containers, did you create a “frontend” and a “backend” Docker network? Typically I create those two networks (or whatever names you want), connect all my services (gitlab, Wordpress, etc.) to the “backend” network, then connect nginx to that same “backend” network (so it can talk to the service containers), but I also add nginx to the “frontend” network (typically of host type).

What this does is let you map Docker ports to host ports for that nginx container ONLY. Since nginx is also on the network that can talk to the other containers, you don’t have to forward or expose any other ports (like 3000 for gitlab) from the outside world into your services. Your containers will still talk to each other on their native ports, but only within that “backend” network (which has no forwarded/mapped ports).

You would want to set up your proxy hosts exactly like you have them in your post, except that in your Forward Hostname you would use the container name (gitlab, for example) instead of the IP. There’s a rough command sketch of this below the flow.

So basically it goes like this

Internet > gitlab.domain.com > DNS points to your VPS > Nginx receives requests (frontend network with mapped ports like 443:443 or 80:80) > Nginx checks proxy hosts list > forwards request to gitlab container on port 3000 (because nginx and gitlab are both in the same “backend” network) > Log in to Gitlab > Code until your fingers smoke! > Drink coffee
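If it helps, here’s a rough shell sketch of that layout. The container names and images are just placeholders, and I’m using ordinary bridge networks for illustration:

```sh
# Two user-defined networks: "frontend" faces the outside, "backend" carries container-to-container traffic.
docker network create frontend
docker network create backend

# Service containers join only the backend network; no ports are published on the host.
docker run -d --name gitlab --network backend gitlab/gitlab-ce

# Only nginx publishes ports on the host. It sits on both networks, so it alone is reachable
# from outside while still resolving "gitlab" by container name over the backend network.
docker run -d --name nginx --network frontend -p 80:80 -p 443:443 nginx
docker network connect backend nginx
```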

Hope this helps!

Edit: Fix typos


Sad indeed. Maybe raising an issue on GitHub? Even if you don’t end up using cloudbeaver, it’s worth reporting it. Maybe they don’t know there’s a problem with this component of their app.


I do remember being a bit lost with the initial connection to a Postgres database when I first spun up the app. I clicked around for a few minutes, but after that it has been very handy. My use case was extremely basic, as I just needed to manipulate some records that I didn’t know the right query for and to visualize the rows I needed.


Have you taken a look at CloudBeaver? I’m not sure I understand what an ERD is, but I’ve used this to manage and work with databases before. Pretty easy, the UI is not bad at all, and it’s self-hostable (through Docker). I don’t know if it meets your criteria 100%, but it’s worth checking out.


When I was looking for a DMS I ran across MayanEDMS. I never got a chance to stand up any DMS but it may be worth checking out their site in case it meets your needs.

Not exactly DMS but I have a WikiJS instance running with MFA enabled and access control. For example, my wife and I can access a set of documents we deem sensitive but other users can’t. I use WikiJS for all my documentation needs.


Try through the browser first, as suggested by someone else. If you are running the Docker container, check your port mappings.


How does it obfuscate the point? A layered approach to security.


I am running SMB, although it’s not publicly available and it’s set up with specific users having specific access to specific shares.

Good note on crowdsec


Not sure this is what I’m looking for as it appears to be an XDR SIEM vendor.



Hahahahaha this actually made me chuckle. Thanks for that!


Okay, sure, same thing as Windows. If you aren’t reckless with the things you install and run then you are likely fine, BUT there’s always a chance. All it takes is one slip-up. Same logic as having a lock on the door knob and a deadbolt. By your logic (and many others’), the lock on the door knob is sufficient, and that may be okay with you, BUT I’m gonna put a deadbolt on too just in case.

We can argue about this all day long. You will have valid points and so will I.


Alternative to ClamAV?
TL;DR - What are you running as a means of “antivirus” on Linux servers?

I have a few small Debian 12 servers running my services and would like to enhance my security posture. Some services are exposed to the internet and I’ve done quite a few things to protect the services and the hosts. When it comes to “antivirus”, I was looking at ClamAV as it seemed to be the most recommended. However, when I read the documentation, it stated that the recommended RAM was at least 2-4 GB. Some of my servers have more power than others, but some do not meet this requirement; the lower-powered hosts are RPi 3s and some Lenovo Tinys. When I searched for alternatives, I came across rkhunter and chkrootkit, but they seem to no longer be maintained, as their latest releases were several years ago. If possible, I’d like to run the same software across all my servers for simplicity and uniformity.

If you have a similar setup, what are you running? Any other recommendations?

P.S. If you are of the mindset that Linux doesn’t need this kind of protection, then fine, that’s your belief, not mine. So please just skip this post.

Just went searching for something like this as my wife wanted to start a “journal”. The requirements were simple: private, nothing too crazy complicated to use, web interface, easy setup and teardown (in case she didn’t like it). Started up an instance of Ghost (way overkill), looked at WriteFreely, and ended up standing up an instance of Bookstack. She’s trying it out now; nothing bad to report so far. The hierarchy is a bit confusing to grasp at first, but when you put it in the context of something like shelf = My Journal, book = 2023 Vacation or 2023 or Homeschooling, chapter = 1st week of vacation or first year homeschooling, page = today’s date, it started clicking with her a bit more. If you find something better, please report back!


Well, that’s good news. For now, I’ve created a different path in my array, reconfigured PhotoPrism to look at this new path for the originals, and cleared out the database one more time. I’m in the process of fully re-uploading/resyncing my devices (two phones). Once I have that, I will write up a script to see which objects are missing from the old path versus the new one (and vice versa) to figure out why I’m short ~5,000 objects. Once I have that list, I can re-upload the missing objects and I’m back in business (hopefully).
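For reference, this is roughly the kind of comparison I have in mind. The paths are placeholders, and it matches on file names only, so it would need refining if the same name can appear in different folders:

```sh
# List file names under each location, then diff the sorted lists.
find /mnt/array/originals-old -type f -printf '%f\n' | sort > /tmp/old.txt
find /mnt/array/originals-new -type f -printf '%f\n' | sort > /tmp/new.txt

# Present in the old path but missing from the new one:
comm -23 /tmp/old.txt /tmp/new.txt

# Present in the new path but not in the old one:
comm -13 /tmp/old.txt /tmp/new.txt
```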


That’s the thing: if I do a count of the objects in the actual storage I get 27k, but based on the counts on the two devices that I back up using PhotoPrism, I should have at least 31k between the two phones. So somehow I’ve lost ~5k. It wouldn’t have been a big deal to just do a full sync with PhotoSync again to copy over whatever was missing between the two phones and the storage, BUT given that I had to rebuild PhotoPrism’s database, I’m not confident that the new database has the same unique ID for each picture as before. So if I kick off another full sync with PhotoSync, it may copy everything again because “the new database doesn’t have a record of that picture”.

I reindexed everything once the new database was built, but again I’m not sure whether the new unique IDs (or however PhotoPrism knows that it already has an object) will match and skip the upload, or whether it will just accept everything as new objects.


Photoprism rebuild issues
TL;DR - I had to rebuild my PhotoPrism database and now my originals count is off by ~5,000. Can I do a full sync of my devices and have it only upload what is missing?

Hello gurus, I’ve been running PhotoPrism for quite some time and I’m happy with it. I ran into an unrelated issue with my database (MariaDB) and had to rebuild the database. PhotoPrism uses this instance of MariaDB, so naturally the metadata was gone. The original pictures (originals) were stored on a separate array, so at a minimum I still have all my pictures. I rebuilt the database and PhotoPrism (Docker container) and pointed it at the array for the originals. Once that was done, I logged in to the PhotoPrism UI and performed a complete rescan and index of my originals. When it finished, I noticed that my originals count was 27,000, but I should have 31,000 objects (according to a picture of the PhotoPrism UI I took the night before rebuilding the database). So I started digging a bit:

- The array itself (where my originals are stored) shows 27,000 objects.
- The picture I took the night before rebuilding the database and PhotoPrism containers says the count of originals was ~31,000.
- The two main devices backing up media to PhotoPrism are my phone and my wife’s phone. My phone shows ~4,500 and my wife’s shows ~26,500.
- Since these two phones were fully backed up a few weeks before the rebuild, I should have ~31,000 objects in the originals.

My question is: can I redo a full backup sync of both phones (through PhotoSync) and have it only copy the objects that are not in the originals? Since the database had to be rebuilt, I fear that if I do another full sync, it will just copy everything again and I’ll end up with ~60,000 objects rather than the ~31,000 I should have. What can I do to see which objects are missing between my devices and PhotoPrism, and how can I copy only those over to PhotoPrism?

UPDATE: Decided to give rclone a try and automate it all through scripts. So far I have the rclone script checking for errors, logging to a file, and sending Discord notifications.
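A stripped-down sketch of what that script does; the remote name, paths, and webhook URL are placeholders for illustration:

```sh
#!/usr/bin/env bash
set -u

LOG=/var/log/rclone-backup.log
WEBHOOK="https://discord.com/api/webhooks/CHANGE_ME"

# Sync the array share to the Backblaze B2 remote, writing details to a log file.
rclone sync /mnt/user/backups b2:my-bucket/backups --log-file "$LOG" --log-level INFO
rc=$?

# Build a message from rclone's exit code and post it to a Discord webhook.
if [ "$rc" -eq 0 ]; then
  msg="Unraid -> Backblaze sync finished OK"
else
  msg="Unraid -> Backblaze sync FAILED (rclone exit $rc), see $LOG"
fi

curl -s -H "Content-Type: application/json" \
  -d "{\"content\": \"$msg\"}" "$WEBHOOK" > /dev/null
```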


Unraid to Backblaze
For those of you running Unraid and backing up your data to Backblaze, how are you doing it? I’ve been playing a bit with KopiaUI, but what is the easiest and most straightforward way? Bonus points if I can have the same “client/software/utility” back up data from non-servers (Windows, macOS and Linux) on my network to said Unraid server. I don’t want to complicate the setup with a bunch of dependencies and things that would make the recovery process long and tedious or require in-depth knowledge to use it for recovery.

To recap:
- Workstations/laptops > back up data with what? > Unraid
- Unraid > back up data with what? > Backblaze

Alright I’ll give it a try and see what happens. Thanks for your help!


Yea, I always try to dedicate networks to each app, and if it’s a full-stack app, then one for the frontend (nginx and app) and another for the backend (app and database).

I didn’t think about spinning up the alpine container to troubleshoot so that’s another great pointer for future soul crushing and head bashing sessions!


Yea, not sure why it didn’t just crash instead of hiding behind all kinds of success messages.

Fair enough! If I create a secondary config as you are suggesting, wouldn’t it conflict with the server blocks in default.conf? If I remember correctly, default.conf has a server block listening on 80 that points to localhost (which in my case wouldn’t be the correct path, since the app is in another container), so wouldn’t nginx get confused because it doesn’t know which block to follow???

Or maybe I saw the block in default.conf but it was all commented out out of the box. Idk, I had to step away for a sec. As you can imagine, I’ve been bashing my head for hours and it turned out to be some BS; I should have probably read the entire log stream. So I’m pretty angry/decompressing at the moment.


  1. 192.168.0.3 is the IP of the Django app container (checked with `docker inspect app | grep IP` and `docker logs nginx`, which shows blah blah upstream http://192.168.0.2:8020 blah blah).
  2. I created a “frontend” network. The nginx and app containers are both connected to this network, but only nginx has the forwarding (0.0.0.0:80 and 0.0.0.0:443). The app container is set to EXPOSE 8020 in the Dockerfile and docker compose, and the entrypoint.sh has this line after the usual Django commands: `gunicorn app.wsgi:application --user www-data --bind 0.0.0.0:8020 --workers 3`.

SOLVED… ALMOST THERE??? There were no signs of an issue (in docker logs app) until I scrolled all the way to the very top (way past all the successful migrations, tasks run at boot, and other success messages). There was an uncaught exception when booting the gunicorn workers, caused by middleware I had removed from my dependencies a few days ago. I searched through my code and removed any calls and settings for this middleware package, redeployed the app, and now I can hit the public page.

What now? Now that it looks like everything is working, what is the best practice for the nginx config? Leave it all in /etc/nginx/nginx.conf (with user as root)? Restore the out-of-box nginx.conf and /etc/nginx/conf.d/default.conf and just override default.conf? Or add a secondary config like /etc/nginx/conf.d/app.conf and leave default.conf as configured out of the box? What is the best practice around this?


Defeated by NGINX
Heads up! Long post and lots of head bashing against the wall.

Context: I have written a Python app (Django). I have dockerized the deployment and the compose file has three containers: app, nginx and postgres. I’m currently trying to deploy a demo of it on a VPS running Debian 11. Information below has been redacted (IPs, domain name, etc.).

Problem: I keep running into 502 errors. Locally things work very well, even with nginx (but running on 80). As I try to deploy this, I’m trying to configure nginx the best I can, redirecting HTTP traffic to HTTPS with SSL certs. The nginx logs simply say "connect() failed (111: Connection refused) while connecting to upstream, client: 1.2.3.4, server: demo.example.com, request: "GET / HTTP/1.1", upstream: "http://192.168.0.2:8020/", host: "demo.example.com"". I have tried just about everything.

What I’ve tried:
- Adding my server block configs to /etc/nginx/conf.d/default.conf
- Adding my server block configs to a new file in /etc/nginx/conf.d/app.conf and leaving default.conf at its out-of-box config
- Putting the above configs (default.conf and app.conf) in sites-available (/etc/nginx/sites-available/*, not at the same time tho)
- Recreating /etc/nginx/nginx.conf by copy/pasting the out-of-box nginx.conf and then adding my server blocks directly in nginx.conf
- Running nginx -t inside the nginx container (syntax and config were "successful")
- Running nginx -T after recreating /etc/nginx/nginx.conf
- nginx -T when the server blocks were in /etc/nginx/conf.d/* led me to think that, since there were two server listen 80 blocks, I should ensure only one listen 80 block was being read by the container, hence the recreated /etc/nginx/nginx.conf above
- Restarting the container each time a change was made
- Changing the user directive from nginx (no dice when using nginx as user) to www-data, root and nobody
- Deleting my entire docker data and redeploying everything a few times
- Double-checking the upstream block 1,000 times
- Confirming the upstream container is running and on the right exposed port
- Checking access.log and error.log, but they were both empty (not sure why, tried cat and tail)
- Probably forgetting more stuff (6 hours deep in the same error loop by now)

How can you help: Please take a look at the nginx.conf config below and see if you can spot a problem, PLEASE!
This is my current /etc/nginx/nginx.conf:

```
user www-data;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    upstream djangoapp {
        server app:8020;
    }

    server {
        listen 80;
        listen [::]:80;
        server_name demo.example.com;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name demo.example.com;

        ssl_certificate /etc/letsencrypt/live/demo.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/demo.example.com/privkey.pem;
        #ssl_protocols TLSv1.2 TLSv1.3;
        #ssl_prefer_server_ciphers on;

        location / {
            proxy_pass http://djangoapp;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            #proxy_set_header Upgrade $http_upgrade;
            #proxy_set_header Connection keep-alive;
            proxy_redirect off;
        }

        location /static/ {
            autoindex on;
            alias /static/;
        }
    }
}
```

- EDIT: I have also confirmed that both containers are connected to the same docker network (docker network inspect frontend)
- EDIT 2: Solved my problem. See my comments to @chaospatterns. TLDR there was an uncaught exception in the app but it didn’t cause a crash with the container. Had to dig deep into logs to find it.
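For anyone who hits a similar “config looks fine but still 502” loop, the thing that finally surfaced the error was reading the app container’s log from the very beginning instead of just the tail:

```sh
# The gunicorn worker boot failure was near the top of the log, buried under
# later "successful" messages, so page through the whole stream:
docker logs app 2>&1 | less
```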