HACKER Q&A
📣 grocketeer

Why can't my old laptop be an AWS replacement?


I have two old MacBook Pros with 8 vCPUs and 16 GB of RAM each. A comparable instance on AWS would cost approximately $100 a month. Why isn't there an easy way to utilize my unused hardware for my production servers (if not for mission-critical stuff, then perhaps just for background jobs)?

Any counter / pro arguments are welcome.


  👤 deltasquare4 Accepted Answer ✓
They'd be adequate for running a bunch of services, but they can't match the connectivity and reliability that AWS provides. I've been running a bunch of Chinese mini-PCs to the same effect for the last few years, and I've found the Reddit homelab community to be a source of inspiration.

I have an ultra-cheap VPS instance that I run WireGuard on, and I expose these servers to the internet through it. The mini-PCs are like NUCs, so they hardly consume any power, and so far I have paid less than six months' worth of comparable AWS costs to own and run them.
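A sketch of that pattern, in case it helps: the home machines dial out to the VPS, so nothing at home needs a public IP or port forwarding. The keys and addresses below are placeholders, not anyone's actual config.

```ini
# /etc/wireguard/wg0.conf on the cheap VPS (the only box with a public IP)
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# a home mini-PC; it initiates the tunnel outbound, so the home
# router needs no port-forward and can sit behind CGNAT
PublicKey = <mini-pc-public-key>
AllowedIPs = 10.8.0.2/32
```

The VPS then forwards incoming public traffic to 10.8.0.2 over the tunnel, e.g. with an nginx reverse proxy or iptables DNAT rules.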

The two biggest issues I have are power backup (the UPS lasts only 3-4 hours, after which the servers shut down) and internet connectivity (I have two 100 Mbps fiber lines load-balanced, but the reliability of consumer internet leaves something to be desired).

I spend roughly 2-3 hours every other month maintaining the whole thing; otherwise it's pretty much hands-off. I'd say it's been totally worth it for me, though I still use AWS, mostly for S3 and SES.


👤 sjducb
It is easy. In the olden days it was the only way.

- Get a static IP from your ISP

- Point the DNS record at the static IP

- On the server run NGINX + Gunicorn + Django

That setup will have >99.5% uptime and handle 1000 - 10,000 concurrent users, depending on the complexity of the website.
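The third step might look something like this; the domain, paths, and project name are illustrative, not prescribed.

```nginx
# /etc/nginx/sites-available/myproject — reverse-proxy to Gunicorn
server {
    listen 80;
    server_name example.com;  # the DNS record from step 2

    location /static/ {
        alias /srv/myproject/static/;  # serve Django static files directly
    }

    location / {
        proxy_pass http://127.0.0.1:8000;  # Gunicorn bound to localhost
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Gunicorn runs separately behind it, e.g. `gunicorn myproject.wsgi --bind 127.0.0.1:8000 --workers 4`.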


👤 nunez
You can totally do that.

You can even run LocalStack on both machines to get mocks of AWS services that you can point the AWS CLI at.

You can even run Kubernetes on your nodes and run whatever you need on them. Plenty of folks do that; check out the homelab scene.

I personally don't, because for $5/mo AWS runs several "serverless" functions for me via API gateways (communicating through message queues, some on a schedule), hosts some websites with globally accessible DNS records, and holds a ton of backups for stuff I care about, without me having to do a ton of systems administration to keep the lights on.

(Most of the cost is DNS, actually! Everything else falls within free-tier limits. A genius move on AWS's part: I know AWS well and have recommended it to several large companies, so they've made their money from me for sure.)

I did that back in the day (ran my own email server). I'd rather do other things with my time now.


👤 peeters
If you're just talking about this as an alternative to renting metal in EC2, it's probably worth acknowledging why that exists in the first place. Bare metal server hosting doesn't exist to provide some unique functionality. It's outsourcing server maintenance. It's an answer to the question: "why can't we just pay somebody else to keep this server available for us?". If you don't want to outsource that task, you're not fundamentally missing out on anything by not using it.

That said, here is an incomplete list of things in the greater system that self-hosters frequently need to think about, and that AWS sometimes does a good job of handling for you:

- Routing/DNS/Static IP

- Administrative access control

- Firewall

- Load balancing

- Redundant storage

- Server failover

- Power redundancy

- Network redundancy

In particular, some of these things benefit a lot more from economies of scale than the server hosting itself.


👤 ratg13
You can do whatever you want; it's all a question of your tolerance for risk and the complexity of the environment you want to maintain.

I do what you are describing for fun, but I would never recommend it for a business, even if it is reliable.

To get around the reliability problems, I have the on-prem laptop environment update Cloudflare entries with keepalive timestamp messages. I then have a Google script that monitors the keepalive, and if too much time has passed, a "failover" is done and everything that was running on-prem is spun up in DigitalOcean. The failover script finishes by updating DNS to point at DO.
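The monitoring half of that setup can be sketched in a few lines. The timeout value and the `spin_up_cloud`/`update_dns` callbacks below are hypothetical stand-ins for the DigitalOcean and DNS API calls, not the actual script:

```python
import time

KEEPALIVE_TIMEOUT = 300  # seconds without a heartbeat before failing over


def should_failover(last_heartbeat: float, now: float,
                    timeout: float = KEEPALIVE_TIMEOUT) -> bool:
    """Return True when the on-prem environment has been quiet too long."""
    return (now - last_heartbeat) > timeout


def monitor(last_heartbeat: float, spin_up_cloud, update_dns) -> bool:
    """One monitoring pass: if the heartbeat is stale, fail over to the cloud.

    Deliberately no automatic fail-back, matching the policy described
    above: a human investigates after every failover.
    """
    if should_failover(last_heartbeat, time.time()):
        spin_up_cloud()   # recreate the on-prem services in the cloud
        update_dns()      # point the public DNS records at the new host
        return True
    return False
```

Running this on a schedule (cron, or a hosted scheduler like the Google script mentioned) gives you a crude but effective dead-man's switch.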

I don't allow the system to fail back automatically; I investigate every time there is a failover. It rarely happens, though, and is usually due to power outages.


👤 cpach
You can just install Linux on them and put them in your wardrobe.

But they might not be quiet enough to avoid disturbing your sleep.

Another con is that they aren't rackmountable, so if you want to stack them you need some non-standard solution. And if an HDD/SSD fails, you need to order a replacement yourself, etc.

It’s totally doable. But is it worth the hassle? That’s up to you.


👤 aborsy
One issue is uptime. Sometimes the power goes out, so you need a UPS. And if you have LUKS enabled, you need to be physically at the server to enter the passphrase whenever it restarts.

If you use VMs, you have to update and maintain the hypervisor, and an update may break things. You need to take care of networking, firewalls, snapshotting, etc. With AWS, a lot of external services are available: add block storage as needed, create an S3 bucket and connect it, and so on. A static IP address is another nice thing.


👤 fragmede
I mean, it is. There's a bunch of little details to get right, but it is. You're paying Amazon for the pleasure of not having to deal with those details, but those details are just life. My power's out so my website's down? That's fine for my website. I'm not running Google.com here. It's just my lil personal site.

Technology has, and hasn't, advanced. There's WordPress and PHP, and the world is still the same.

You can build all that stuff from scratch. Go to Best Buy, build a computer, set it up at home, get another computer for redundancy, get a second ISP for backup, then get a battery backup in case the power goes out. The pieces are there; it's all available to you.

If you're just discovering now that you can do for free what someone else is charging you money for, then you're one of today's lucky 10,000. https://xkcd.com/1053/


👤 kerbyhughes
Part of what you're paying for with the AWS instance is "easy", documented integration with the rest of the AWS ecosystem, including things like access control and monitoring.

If you already know how to run the desired workloads on the laptops, then most of what remains is a way to make it accessible to the rest of your infrastructure. Cloudflared, nebula, or wireguard tunnels are a few options.

I use nebula (https://github.com/slackhq/nebula) to network all the machines of my homelab, regardless of how they're connected to the internet, including my laptops. I do think there's space for more lightweight and batteries-included options to take advantage of connected resources in this model.


👤 AussieWog93
I've thought the exact same thing recently, after getting some mini PCs dirt cheap from a mate who deals in used IT equipment.

The unfortunate reality, at least in Australia, is that fast internet with the same level of uptime as what you'd get in a datacenter costs more than just renting a server on Digital Ocean.

Perhaps there's a market out there for small cloud providers who could amortise this expensive internet cost across dozens or hundreds of servers, but for a one-off you're either going to have to deal with the unreliability and low upload speeds of home internet or just cough up for a cloud instance.


👤 mannyv
I put ESXi on my old Mac Pro 5,1, which had 64 GB of RAM, and used it for a test Hadoop cluster. I also had something like eight Mac Pro 1,1s and 3,1s as nodes. Now I use an old ThinkCentre with 128 GB or 192 GB of RAM as my main testbed.

Back in the day I ran a bunch of sites off spare equipment. Now my expectations are higher. But for backend stuff it'd be fine; just design with failure in mind.


👤 gbraad
I use some older Lenovo ThinkCentre Tinys for my homelab. At some point they ran OpenStack; now they run containers. So yeah, you can.

👤 favflam
Yes, but you need a static IP address, resilient server software components, and an acceptance of occasional inconvenience.

IPv6 works, as long as you have a jump box to get you from a CGNAT'ed IPv4 network into the global IPv6 world.

If you want to skip the jump box, you could give up some convenience and use Tor to get yourself back to your home-AWS setup.

I would say this is the biggest barrier to utilizing home compute.

As for reliability, you could just program around dependency failure rates of 5% (uptime of 95%).
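One way to read "program around a 5% failure rate" is to wrap every call to a flaky dependency in retries with backoff. A sketch (the attempt count and delays are illustrative, not a recommendation):

```python
import random
import time


def call_with_retries(fn, attempts=4, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying on failure with exponential backoff plus jitter.

    With an independent 5% failure rate per try, four attempts push the
    overall failure probability down to roughly 0.05**4, about 0.0006%.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure to the caller
            # back off 1s, 2s, 4s... plus jitter so retries don't synchronize
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```

The `sleep` parameter is injected so the backoff is easy to stub out in tests; anything idempotent (background jobs, queue consumers) tolerates this pattern well.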

I would not run Docker containers or Kubernetes, as that would be too much work. I would stick to bare metal and not bother with isolation. Just make sure you are using key or certificate authentication for SSH; do not use a username and password, or expect to be pwned.

Lastly, I would definitely consider using gitlab.com to host code and running a GitLab runner from home. That makes most of the above moot: the runner just connects when it can and runs jobs.


👤 harrygeez
> Why isn't there an easy way to utilize my unused hardware for my production servers

It's not that difficult if you know some Linux and router configuration. Most of the time, the difficult part will be exposing your computer to the internet, as you probably don't have a fixed IP.


👤 f0e4c2f7
> Why isn't there an easy way to utilize my unused hardware for my production servers

There are lots of easy ways to do it. It would take less than an hour to set up, depending on how you want to configure it (longer if you were going to colo it).

Managing your own hardware does come with tradeoffs, though. If it's not already clear how to do this, those tradeoffs may not be very pleasant.

With AWS or other cloud providers you're in large part paying for convenience (as well as reliability and a few other odds and ends).

If you don't mind a learning experience, though, there is nothing wrong with using them this way. Just understand that it will be one, probably with associated downtime as you learn.


👤 QuinnyPig
Running our services on computers is exactly what we as an industry did in the old-timey days.

You've got availability concerns (power, network, hardware failure, etc.), and the networking to get a static IP where the laptop lives might be a bit tricky, but you can absolutely do this.



👤 ryukoposting
I do a similar thing with my desktop. When I'm travelling, I switch on Hamachi before I leave. That way, I can access it (and the various services I run on it) remotely.

The main downside is the lack of remote control: if it craps out, I can't remotely reset it.


👤 achempion
You can absolutely do this for your CI if it allows self-hosted runners. For production, as others have pointed out, you need connectivity and reliability when something goes down at 3am (power, internet, disk failure).

👤 throwaway4good
It can be. Or you could use a Mac Mini, which is well suited to this. And I don't think it is particularly hard to configure your local network and get a static IP from your ISP.

👤 DarthNebo
You can always set up a Kafka or other message-queue consumer on this sort of hardware. I'm also testing my RTX card for offloading some free-tier volume.

👤 danjoredd
You can do that, sure, but the point of AWS is how easily you can expand. If you don't need that elasticity, a laptop can totally work as a server.

👤 quickthrower2
Resilience is going to be a big factor here. Laptops are built assuming they won't be used as dedicated servers.

👤 muzani
It can, but it doesn't work with autoscaling. If you have something that scales, you can probably work it into the architecture, treating the laptops like spot instances (though spot instances are already much cheaper than the standard cost).

👤 znpy
You can. Create a VPN connection to your AWS VPC (either via a custom solution or a managed one), install and configure your laptops, and there you go.

This is essentially what the "hybrid cloud" approach is: you have some things in the cloud and other things in a physical datacenter where you have your own machines.

There are some issues and limitations that you'll run into, though:

- Networking (bandwidth and latency)

- OS updates and security

- Reliability (power outages)

- Thermals and cooling

...and probably something more that I'm forgetting.

As long as you're fine with the tradeoffs, you can definitely do that.


👤 marenkay
Here's a quick one: old MacBooks run modern Linux just fine. You could run something like Proxmox on both, and you suddenly have room for a few virtual machines running whatever you like.

👤 shortrounddev2
Dynamic IPs and uptime