Any counter / pro arguments are welcome.
I have an ultra-cheap VPS instance that I run WireGuard on, and I expose these servers to the internet through it. The mini-PCs are like NUCs, so they hardly consume any power, and so far I have paid less than six months' worth of comparable AWS costs to own and run them.
The two biggest issues I have are power backup (the UPS lasts only 3-4 hours, after which the servers shut down) and internet connectivity (I have two 100 Mbps fiber lines load balanced, but the reliability of consumer internet leaves something to be desired).
I spend roughly 2-3 hours every other month to maintain the whole thing, which is pretty much hands-off. I'd say it's been totally worth it for me, but I still use AWS for mostly S3 and SES.
- Get a static IP from your ISP
- Point the DNS record at the static IP
- On the server run NGINX + Gunicorn + Django
That setup will have >99.5% uptime and handle 1,000-10,000 concurrent users, depending on the complexity of the website.
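If it helps, the Gunicorn piece is usually driven from a small Python config file; a minimal sketch (the file name and worker counts are placeholders to tune for your own box), with NGINX reverse-proxying to the bind address:

```python
# gunicorn.conf.py -- minimal sketch; tune workers for your hardware.
import multiprocessing

# Bind to localhost only; NGINX terminates TLS and proxies to this address.
bind = "127.0.0.1:8000"

# Common rule of thumb: 2 * cores + 1 sync workers.
workers = multiprocessing.cpu_count() * 2 + 1

# Recycle workers periodically to guard against slow memory leaks.
max_requests = 1000
max_requests_jitter = 100

# Log to stdout/stderr so your supervisor (systemd, etc.) captures it.
accesslog = "-"
errorlog = "-"
```

Start it with something like `gunicorn -c gunicorn.conf.py myproject.wsgi` (the project name is hypothetical) and point an NGINX `proxy_pass` at 127.0.0.1:8000.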
You can even run LocalStack on both machines to get mocks of AWS services that you can use the AWS CLI against.
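Pointing clients at LocalStack is just a matter of overriding the endpoint; a rough sketch with boto3, assuming LocalStack's default edge port of 4566 (the bucket name is made up, and the CLI equivalent is `aws --endpoint-url=http://localhost:4566 s3 ls`):

```python
# Sketch: talk to a local LocalStack instance instead of real AWS.
# Assumes LocalStack is listening on its default edge port, 4566.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",       # LocalStack accepts dummy credentials
    aws_secret_access_key="test",
)

s3.create_bucket(Bucket="my-test-bucket")
print(s3.list_buckets()["Buckets"])
```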
You can even run Kubernetes on your nodes and run whatever you need on them. Plenty of folks do that; check out the homelab scene.
I personally don't, because for $5/mo I can run several "serverless" functions behind API gateways that communicate through message queues (some on a schedule), host some websites with globally accessible DNS records, and hold a ton of backups for stuff I care about, all without doing a ton of systems administration to keep the lights on.
(Most of the cost is DNS, actually! Everything else falls within free tier limits. Genius move on AWS's part, as I know AWS well and have recommended it to several large companies; they've made their money from me for sure)
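For a sense of what that looks like, the "functions talking through message queues" part is just queue-triggered handlers; a minimal sketch of one in the AWS Lambda + SQS style (the work inside the loop is purely illustrative):

```python
# Minimal sketch of a queue-triggered "serverless" function (Lambda + SQS style).
# The event shape is what SQS delivers to Lambda; the work itself is illustrative.
import json

def handler(event, context):
    for record in event["Records"]:
        payload = json.loads(record["body"])
        # ... do the actual work here: resize an image, send an email, etc. ...
        print(f"processed message {record['messageId']}: {payload}")
    # Returning normally tells Lambda the batch succeeded and can be deleted.
    return {"status": "ok"}
```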
I did that back in the day (ran my own email server). I'd rather do other things with my time now.
That said, here is an incomplete list of things in the greater system that self-hosters frequently need to think about, and that AWS sometimes does a good job of handling for you:
- Routing/DNS/Static IP
- Administrative access control
- Firewall
- Load balancing
- Redundant storage
- Server failover
- Power redundancy
- Network redundancy
In particular, some of these things benefit a lot more from economies of scale than the server hosting itself.
I do what you are describing for fun, but I would never recommend it for a business, even if it is reliable.
To get around the reliability problems, I have the on-prem laptop environment update Cloudflare entries with keep-alive timestamp messages. I then have a Google script that monitors the keepalive, and if too much time has passed, a "failover" is done and everything that was running on-prem is spun up in DigitalOcean. The failover script completes by updating DNS to point at DO.
I don't allow the system to fail back; I investigate every time there is a failover, but it rarely happens and is usually due to power outages.
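As a rough illustration of the idea (the URL, threshold, and trigger_failover() helper are all hypothetical, not my actual setup), the monitor side boils down to reading the last heartbeat timestamp, comparing it to now, and kicking off the failover if it's stale:

```python
# Rough sketch of the keepalive monitor -- all names and URLs are hypothetical.
# The on-prem side periodically publishes a Unix timestamp; this checks its age.
import time
import urllib.request

HEARTBEAT_URL = "https://example.com/heartbeat.txt"  # wherever the timestamp is published
MAX_AGE_SECONDS = 15 * 60                            # how stale is "too stale"

def trigger_failover():
    # Hypothetical: spin up the DigitalOcean environment and repoint DNS.
    print("heartbeat stale -- starting failover")

def check_heartbeat():
    with urllib.request.urlopen(HEARTBEAT_URL, timeout=10) as resp:
        last_seen = float(resp.read().decode().strip())
    age = time.time() - last_seen
    if age > MAX_AGE_SECONDS:
        trigger_failover()
    else:
        print(f"heartbeat ok, {age:.0f}s old")

if __name__ == "__main__":
    check_heartbeat()
```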
But they might not be quiet enough to avoid disturbing you while you sleep.
Another con is that they aren't rackmountable, so if you want to stack them you need some non-standard solution for that. And if an HDD/SSD fails, you need to order a replacement yourself, etc.
It’s totally doable. But is it worth the hassle? That’s up to you.
If you use VMs, you have to update and maintain the hypervisor. You update it and it may break. You need to take care of networking, firewalls, snapshotting, etc. If you use AWS, there are a lot of external services available, e.g., add block storage as needed, create an S3 bucket and connect it, and so on. A static IP address is another good thing.
Technology has, and hasn't, advanced. There's WordPress and PHP, and the world is still the same.
You can build all that stuff from scratch. Go to Best Buy, build a computer, set it up at home, get another computer for redundancy, get another ISP for backup, then get a battery backup in case the power goes out. The pieces are there; it's all available to you.
If you're just discovering now that you can do for free what someone else is charging you money for, then you're one of today's lucky 10,000. https://xkcd.com/1053/
If you already know how to run the desired workloads on the laptops, then most of what remains is a way to make it accessible to the rest of your infrastructure. Cloudflared, nebula, or wireguard tunnels are a few options.
I use nebula (https://github.com/slackhq/nebula) to network all the machines of my homelab, regardless of how they're connected to the internet, including my laptops. I do think there's space for more lightweight and batteries-included options to take advantage of connected resources in this model.
The unfortunate reality, at least in Australia, is that fast internet with the same level of uptime as what you'd get in a datacenter costs more than just renting a server on Digital Ocean.
Perhaps there's a market out there for small cloud providers who could amortise this expensive internet cost across dozens or hundreds of servers, but for a one-off you're either going to need to deal with the unreliability and low upload speeds of home internet or just cough up for a cloud instance.
Back in the day I ran a bunch of sites off of spare equipment. Now my expectations are higher. But for backend stuff it'd be fine; just design with failure in mind.
IPv6 works, so long as you have a jump box to get you from a CGNAT'ed IPv4 network into the global IPv6 world.
If you want to skip the jump box, you could give up some convenience and go with Tor to get yourself back to your home-AWS setup.
I would say this is the biggest barrier to utilizing home compute.
As for reliability, you could just program your stuff to tolerate dependency failure rates of 5% (uptime of 95%).
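In practice that mostly means wrapping calls to flaky dependencies in retries; a toy sketch of the idea (call_dependency() is a stand-in for whatever you're actually hitting):

```python
# Toy sketch: if a dependency fails ~5% of the time independently,
# a few retries with backoff make the combined failure rate tiny (0.05**3 ~ 0.01%).
import random
import time

def call_dependency():
    # Stand-in for a real network call that succeeds ~95% of the time.
    if random.random() < 0.05:
        raise ConnectionError("dependency unavailable")
    return "ok"

def call_with_retries(attempts=3, base_delay=0.5):
    for attempt in range(attempts):
        try:
            return call_dependency()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff

print(call_with_retries())
```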
I would not run any Docker containers or Kubernetes, as that would be too much work. I would stick to bare metal and not bother with isolation. Just make sure you are using key or certificate authentication for SSH. Do not use a username and password, or expect to be pwned.
Lastly, I would definitely consider using gitlab.com to host code and running a GitLab runner from home. That makes the previous parts of what I wrote moot: the runner just connects when it can and runs jobs.
It's not that difficult if you know some Linux and router configuration. Most of the time the difficult part will be exposing your computer to the internet, as you probably don't have a fixed IP.
There are lots of easy ways to do it. It would take less than an hour to set up, depending on how you wanted to configure it (longer if you were going to colo it).
Managing your own hardware does come with tradeoffs though. If it's not already clear how to do this the tradeoffs may not be very pleasant.
With AWS or other cloud providers you're in large part paying for convenience (as well as reliability and a few other odds and ends).
If you don't mind a learning experience though, there is nothing wrong with using them this way. Just understand that it will be a learning experience, probably with associated downtime as you learn.
You've got availability concerns (power, network, hardware failure, etc.), and the networking to get a static IP where the laptop lives might be a bit tricky, but you can absolutely do this.
The main downside is lack of remote controls. If it craps out, I can't remotely reset it.
This is essentially what the "hybrid cloud" approach is: you have some things in the cloud and some other things in a physical datacenter where you have your own machines.
There are some issues and limitations that you'll find, though:
- networking (bandwidth & latency)
- OS updates and security
- reliability (power outages)
- thermals & cooling
and probably something more that I'm forgetting. As long as you're fine with the tradeoffs, you can definitely do that.