That said, what's the best way to reduce or offload the maintenance surface area in a way that's most trivial to budget? I've been away from infrastructure myself for about 10-15 years now, and am just barely familiar with what Docker, k8s, and all this other stuff is.
Is there some way to box up the application code and libraries and let it run on a remote machine that someone else maintains? Amazon Fargate seems close but I'm terrified about misusing their services and ending up with a huge bill. Heroku might work too if I can slot everything I'm doing into their services. I really would like to have a timeseries DB like Influx or Timescale, but the load is so tiny that Sqlite or Postgres might do just fine. The cloud offerings from Influx and Timescale are just too expensive -- what we're doing would run just fine on an Rpi4 with a USB3 attached drive, but now we're back to a machine that needs to be maintained.
One thing I'm quite wary of is pay per request. I would much rather pay a fixed cost per month and make users wait than have the price per month be a question mark. Again, it's an internal app -- I'm not trying to hit any kind of latency target, and I can build clients with some degree of robustness to services going down or timing out.
The most important things to me are hands-offness and fixed cost. Again, I would love to stick an Rpi in the corner and forget about it if this were not such an irresponsible thing to do.
Any thoughts?
Simplicity is the best for long term maintenance. I work with a legacy app bundled into docker containers. We simply have to at this point; otherwise deploying to a new host or dev box would be impossibly complex. Instead we have bundled that complexity into the docker build. It's still there and still requires a lot of maintenance. Don't do this if you can avoid it.
If you want your app to live for a long time you simply need support. You need engineers to keep it running. This is true of a lot of infrastructure from buildings to cars.
If you're in a university, can you get support from the IT team? I know research teams probably don't want to deal with the politics of getting them involved, but if you can make it their responsibility you will have the best chance of keeping it going for 10+ years.
Most universities will have internal servers for hosting this sort of thing.
I have something very similar set up to run a few Game Servers and provide a file and container management user interface.
It's been running for a few years; whenever I hear about a security incident I go check, and it's always been updated thus far.
Theoretically you should be able to do something very similar using AWS CloudFormation. Every time you update, you would create a new image, install your dependencies, save it, bring up an instance of the new image, divert new traffic to the new instance, then shut down and delete the old instance.
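That image-swap cycle can be sketched with the plain AWS CLI -- the instance IDs, AMI ID, and target-group ARN below are placeholders, and this assumes traffic is diverted via a load balancer target group:

```shell
# Bake a new image from the updated instance
aws ec2 create-image --instance-id i-0abc123 --name "app-v2"

# Launch an instance from the new image
aws ec2 run-instances --image-id ami-0def456 --instance-type t3.micro

# Divert traffic: register the new instance, then drain the old one
aws elbv2 register-targets --target-group-arn <tg-arn> --targets Id=i-0new789
aws elbv2 deregister-targets --target-group-arn <tg-arn> --targets Id=i-0abc123

# Retire the old instance once it's drained
aws ec2 terminate-instances --instance-ids i-0abc123
```

CloudFormation would wrap these steps into a stack update rather than you running them by hand, but the underlying sequence is the same.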
If you can swing a budget of, say, 10 USD per month, then you can get something of comparable scale from any of the VPS providers.
Put a simple Linux firewall in front of it, have a think about backups, and that should be it.
If you can, stick the service behind a free Cloudflare proxy to hide the IP and filter out a bit of the internet nastiness.
Whether you docker it or not is personal preference. Def wouldn't do k8s if an rpi in a corner is your ideal vision.
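If you do dockerize it, a single `docker run` with a restart policy is about as close to "plug it in and forget it" as containers get -- the image name and ports here are placeholders:

```shell
# Start the container detached; restart it after crashes and host reboots
docker run -d --restart unless-stopped --name myapp -p 443:8080 myapp:latest
```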
AWS App Runner (built on Fargate technology) would be another option, but using AWS is definitely more involved than Heroku. With the exception of network traffic, which is pay-as-you-go, you can calculate max pricing for CPU/memory depending on the app configuration.
I use both of them at $dayjob. Unless there are good reasons to choose AWS, I would always recommend to start with Heroku.
Build on the heroku-20 stack and you are good for 5 years, till 2025: https://devcenter.heroku.com/articles/stack
Have you considered running your software on a Synology box or similar high-end NAS?
Not sure it would work well if exposed to the internet, but if it's a small group and limited network access it could be quite a simple solution.