Copy/paste from another comment
“Just to be clear I just need to track my sales/revenue (even if input is manual) and track expenses (bonus if I could upload a picture of a receipt).
I don’t need to actually send an invoice (I do this straight from my website and it’s a seamless integration so not looking to reinvent this wheel, yet!)
Given the above, is InvoiceNinja still a good candidate?”
It’s not working because it is against Cloudflare’s ToS unfortunately.
First I would ask, do you really have to make Jellyfin publicly accessible?
If yes, are you able to set up a VPN (e.g. WireGuard) and access Jellyfin through that instead?
If you don’t want to go the VPN route, then isolate the NPM and Jellyfin instances from the rest of your server infrastructure and run the setup you described (open ports directly to the NPM instance). That is how most people who don’t want to use Cloudflare are running public access to self-hosted services. But first, ask yourself the questions above.
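If it helps to picture the isolation, here’s a rough sketch using plain docker commands (the network name, images and paths are just examples, adjust to your actual setup):

# isolated network, only NPM publishes ports to the host
docker network create jellyfin_edge
docker run -d --name npm --network jellyfin_edge \
  -p 80:80 -p 443:443 -p 81:81 \
  -v ./npm/data:/data -v ./npm/letsencrypt:/etc/letsencrypt \
  jc21/nginx-proxy-manager:latest
# Jellyfin joins the same network but publishes nothing
docker run -d --name jellyfin --network jellyfin_edge \
  -v ./jellyfin/config:/config -v ./media:/media \
  jellyfin/jellyfin:latest

The NPM proxy host then points at http://jellyfin:8096 over that shared network, and nothing else on the box is reachable from the outside.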
Honestly what really matters (imo) is that you do offsite storage. Cloud, a friend’s house, your parents’, your buddy’s NAS, whatever. Just get your data away from your “production/main” site.
For me, I chose cloud for two main reasons. First, convenience: I could use a tool to automate moving data offsite in a reliable manner, keeping my offsite backup almost identical to my main array with easy retrieval should I need it. Second, I don’t really have family or friends nearby and/or with the hardware to support my need for offsite storage.
There are lots of pros and cons to each, even before you add your specific needs and circumstances on top of it.
If you can use the additional drives later on in your main array, some other server, or for a different purpose, then it may be worthwhile exploring the drives (my concern would be the ease of keeping the offsite data up to par with the main data). If you don’t like it for one reason or another, you can always repurpose the drives and give cloud storage a try. Again, the important thing is to do it in the first place (and encrypt it client side).
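As a concrete example of the client-side encryption piece, something like restic encrypts before anything leaves your box, regardless of whether the target is a spare drive or a cloud bucket (the paths below are placeholders):

restic -r /mnt/offsite-drive/backups init          # one-time, sets the encryption password
restic -r /mnt/offsite-drive/backups backup /mnt/user/photos /mnt/user/documents
restic -r /mnt/offsite-drive/backups snapshots     # sanity check what actually made it over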
Well, here’s my very abbreviated conclusion (provided I remember the details correctly) from when I did the research about 3 months ago.
Wasabi - okay pricing, reliable, s3 compatible, no charges to retrieve my data, pay in 1TB blocks (wasn’t a fan of this one), penalty for data retrieval prior to a “vesting” period (if I remember correctly, you had to leave the data there for 90 days before you could retrieve it at no cost; also not a big fan of this one).
AWS - I’m very familiar with it due to my job, pricing is largely influenced by access requirements (how often and how fast I want to retrieve my data), very reliable, s3, charges for everything (list, read, retrieve, etc.). This is the real killer and the largely unaccounted-for cost of AWS.
Backblaze - okay pricing, reliable, s3 compliant, free retrieval of data up to the same amount that you store with them (read below), pay by the gig (much more flexible than Wasabi). My heartburn with Backblaze was that retrieval stipulation. However, they have recently increased it to free up to 3x of what you store with them which is super awesome and made my heartburn go away really quickly.
I actually chose Backblaze before the retrieval policy change and it has been rock solid from the start. Works seamlessly with the vast majority of utilities that can leverage s3 compliant storage. Pricing-wise, I honestly don’t think it’s that bad.
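For what it’s worth, pointing an s3-compatible tool at B2 is pretty painless. With rclone it looks roughly like this (remote name, endpoint, keys and bucket are placeholders for your own):

rclone config create b2s3 s3 \
  provider Other \
  access_key_id YOUR_KEY_ID \
  secret_access_key YOUR_APPLICATION_KEY \
  endpoint s3.us-west-002.backblazeb2.com
rclone sync /mnt/user/backups b2s3:my-offsite-bucket --progress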
Hope this helps
When you created your containers, did you create a “frontend” and “backend” docker network? Typically I create those two networks (or whatever names you want) and connect all my services (gitlab, Wordpress, etc.) to the “backend” network, then connect nginx to that same “backend” network (so it can talk to the service containers), but I also add nginx to the “frontend” network (typically of host type).
What this does is let you map docker ports to host ports for that nginx container ONLY. And since nginx is on the network that can talk to the other containers, you don’t have to forward or expose any of the other ports (like 3000 for gitlab) from the outside world into your services. Your containers will still talk to each other over their native ports, but only within that “backend” network (which does not have any forwarded/mapped ports).
You would want to set up your proxy hosts exactly like you have them in your post, except that in the Forward Hostname you would use the container name (gitlab, for example) instead of the IP.
So basically it goes like this
Internet > gitlab.domain.com > DNS points to your VPS > Nginx receives requests (frontend network with mapped ports like 443:443 or 80:80) > Nginx checks proxy hosts list > forwards request to gitlab container on port 3000 (because nginx and gitlab are both in the same “backend” network) > Log in to Gitlab > Code until your fingers smoke! > Drink coffee
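If it helps, the same layout in plain docker commands looks roughly like this (images are just examples, and 3000 assumes gitlab is listening there like in your post):

docker network create frontend
docker network create backend
# gitlab only joins "backend": nothing published to the host
docker run -d --name gitlab --network backend gitlab/gitlab-ce:latest
# nginx joins both networks and is the only container with published ports
docker run -d --name nginx --network frontend -p 80:80 -p 443:443 nginx:latest
docker network connect backend nginx
# in the proxy host, forward to http://gitlab:3000 (the container name resolves on the shared "backend" network)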
Hope this helps!
Edit: Fix typos
I do remember being a bit lost with the initial connection to a Postgres database when I first spun up the app. I clicked around for a few minutes, but after that it has been very handy. My use case was extremely basic as I just needed to manipulate some records that I did not know the right query for and to visualize the rows I needed.
Have you taken a look at CloudBeaver? I’m not sure I understand what an ERD is, but I’ve used this to manage and work with databases before. Pretty easy, the UI is not bad at all, and it’s self-hostable (through docker). I don’t know if it meets your criteria 100% but it’s worth checking out.
When I was looking for a DMS I ran across MayanEDMS. I never got a chance to stand up any DMS but it may be worth checking out their site in case it meets your needs.
Not exactly a DMS, but I have a WikiJS instance running with MFA enabled and access control. For example, my wife and I can access a set of documents we deem sensitive but other users can’t. I use WikiJS for all my documentation needs.
Okay, sure, same thing as Windows. If you aren’t reckless with the things you install and run then you are likely fine, BUT there’s always a chance. All it takes is one slip-up. Same logic as having a lock on the door knob and a deadbolt. By your logic (and many others’), the lock on the door knob is sufficient, and that may be okay with you, BUT I’m gonna put a deadbolt on too, just in case.
We can argue about this all day long. You will have valid points and so will I.
Just went searching for something like this as my wife wanted to start a “journal”. The requirements were simple: private, nothing too crazy complicated to use, web interface, easy setup and teardown (in case she didn’t like it). Started up an instance of Ghost (way overkill), looked at WriteFreely, then stood up an instance of Bookstack. She’s trying it out now, nothing bad to report so far. The hierarchy is a bit confusing to grasp, but when you put it in the context of something like Shelf = My Journal, Book = 2023 Vacation or 2023 Homeschooling, Chapter = 1st week of Vacation or First year of Homeschool, Page = Today’s date, it started clicking with her a bit more. If you find something better, please report back!
Well that’s good news. For now, I’ve created a different path in my array. I’ve reconfigured PhotoPrism to look at this new path for the originals and cleared out the database one more time. I’m in the process of fully re-uploading/re-syncing my devices (two phones). Once I have that, I will write up a script to see which objects are missing from the old path to the new and vice versa, to figure out why I’m short ~5,000 objects. Once I have that list, I can re-upload the missing objects and I’m back in business (hopefully).
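For the comparison script, something dead simple like this might be enough as a starting point (paths are placeholders; it compares by filename only, so hashing would be more reliable if names repeat across folders):

find /mnt/array/old-originals -type f -printf '%f\n' | sort > /tmp/old.txt
find /mnt/array/new-originals -type f -printf '%f\n' | sort > /tmp/new.txt
comm -23 /tmp/old.txt /tmp/new.txt   # files only in the old path
comm -13 /tmp/old.txt /tmp/new.txt   # files only in the new path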
That’s the thing: if I do a count of the objects in the actual storage I get 27k, but based on the count of the two devices that I back up using PhotoPrism I should have at least 31k between the two phones. So somehow I’ve lost ~5k. It might not have been a big deal to just do a full sync with PhotoSync again to copy over whatever was missing between the two phones and storage, BUT given that I had to rebuild PhotoPrism’s database, I’m not confident the new database will have the same unique ID for each picture as before. So if I kick off another full sync with PhotoSync it may copy everything again because “the new database doesn’t have a record of that picture”.
I reindexed everything once the new database was built, but again I’m not sure if the new unique IDs (or however PhotoPrism knows that it already has that object) will match and skip the upload, or if it will just accept it as a new object.
Yeah, I always try to dedicate networks to each app, and if it’s a full-stack app, then one for the frontend (nginx and app) and another for the backend (app and database).
I didn’t think about spinning up the alpine container to troubleshoot so that’s another great pointer for future soul crushing and head bashing sessions!
Yeah, not sure why it didn’t just crash instead of hiding behind all kinds of success messages.
Fair enough! If I create a secondary config as you are suggesting, wouldn’t it create a conflict with the server blocks in default.conf? If I remember correctly, default.conf has a server block listening on 80 and pointing to localhost (which in my case wouldn’t be the correct path since the app is in another container), so wouldn’t nginx get confused because it doesn’t know which block to follow?
Or maybe I saw the block in default.conf but it was all commented out out of the box. Idk, I had to step away for a sec. As you can imagine, I’ve been bashing my head for hours and it turned out to be some BS; I should have probably read the entire log stream. So I’m pretty angry/decompressing at the moment.
gunicorn app.wsgi:application --user www-data --bind 0.0.0.0:8020 --workers 3
SOLVED… ALMOST THERE? There were no signs of an issue in the logs (docker logs app) until I scrolled all the way to the very top (way past all the successful migrations, the tasks run on boot, and the success messages). There was an uncaught exception when booting the gunicorn workers because of middleware I had removed from my dependencies a few days ago. Searched through my code, removed any calls and settings for that middleware package, redeployed the app, and now I can hit the public page.
What now? Now that it looks like everything is working, what is the best practice for the nginx config? Leave it all in /etc/nginx/nginx.conf (with user set to root), restore the out-of-the-box nginx.conf and /etc/nginx/conf.d/default.conf and just override default.conf, or add a secondary config like /etc/nginx/conf.d/app.conf and leave default.conf as configured out of the box?
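To be clear, for the secondary config option I’m picturing something roughly like this (the server_name and the “app” container name are placeholders; 8020 is from the gunicorn command above):

# /etc/nginx/conf.d/app.conf
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://app:8020;   # "app" = gunicorn container name on the shared docker network
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}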
I don’t have an answer for you, but I have a question instead. When I attempted to do swarm, my biggest challenge was shared storage. I was attempting to run a swarm with shared storage on a NAS. I literally could not run apps and ran into a ton of problems running stacks (tried the NAS share over both SMB and NFS). How did you get around this problem?