For the past few years I've been doing this across AWS and DigitalOcean, but I'm starting to think I can probably get more bang for my buck by colocating a rack server and spinning up a few VMs. The up-front cost of a used rack server doesn't seem too bad, and the monthly colocation fee buys a lot more than the same spend on a cloud provider.
I'm pretty happy to spend the extra time and energy on the management side since it's mostly side projects and experimentation that I'll be doing with it.
Before I take the plunge and give it a go, I'm wondering if there are any gotchas I should know about that aren't immediately obvious to someone who's not done this before? I'll be looking to colocate probably a single 1U or 2U server somewhere in the UK, if that makes any difference.
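For what it's worth, a quick back-of-the-envelope break-even calculation makes the comparison concrete. All the numbers below are placeholder assumptions, not real quotes; plug in your own hardware price, colo fee, and current cloud bill.

```python
# Back-of-the-envelope break-even sketch. Every figure here is a made-up
# placeholder; substitute real quotes from your colo provider and your
# current cloud bill before drawing conclusions.
hardware_cost = 800.0   # one-off: used 1U/2U server (assumed)
colo_monthly = 60.0     # 1U colo incl. power + bandwidth, per month (assumed)
cloud_monthly = 180.0   # cloud spend for comparable VMs, per month (assumed)

monthly_saving = cloud_monthly - colo_monthly
breakeven_months = hardware_cost / monthly_saving

print(f"Saving per month:  {monthly_saving:.0f}")
print(f"Break-even after:  {breakeven_months:.1f} months")

# Over a 3-year horizon:
months = 36
print(f"3-year colo total:  {hardware_cost + colo_monthly * months:.0f}")
print(f"3-year cloud total: {cloud_monthly * months:.0f}")
```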
Check out Web Hosting Talk for deals: https://www.webhostingtalk.com/forumdisplay.php?f=36
Pick something with low latency to you... and start there.
I’ve rented hundreds of servers over the years and only done colo a handful of times. It's a fine experience to have, but it's easier to learn the ropes on a rented dedicated server first: you'll discover that you need KVM/IPMI access, and you'll get a feel for hardware failures without owning the hardware.
Total cost in Australia was about $1200 for hardware and it will last me 3-5 years depending on availability of RAM upgrades in the future. My last homelab lasted 8 years on 32GB but then I started playing with things like Elasticsearch in docker :D
I spent more time than I can account for on the command line of colocated machines. It's a great learning experience, but you're really on your own. Nothing teaches you that as much as exposure to a real server.
I'll never forget installing Asterisk via apt-get back then just because it seemed fun, forgetting about it, and a year later getting a call from the abuse department because my server was doing weird things.
Logging in, seeing strange processes, not knowing any better than to nuke it from orbit, and losing two of my customers' websites that had no recent backups.
I wouldn't take a vanilla colocated machine these days over any solid cloud provider, but then again, everything I've learned about WHY I wouldn't do that, and about the internals of Linux, came from tinkering.
Maybe it's because I'm spoiled by magically working load balancers, reliable DNS, and live migration away from failed machines.
Maybe it's because I just feel too old to call someone about failed hard disks, or because I'm on call for a site with 40 million users.
So sure, if you don't have much to lose, or your load is _extremely_ heavy on compute and traffic, go ahead; it's a great way to get started.
If your goal is to build the next Facebook, though, do yourself a favor and start on a reliable cloud provider. The massive difference in pricing exists for _something_: mainly so you can stop worrying and work on your core product instead. Which is good; opportunity cost is real.
2. Do some research into which provider gives you what you need for the best price. I tested many and finally stayed with Hetzner; your mileage may vary.
3. If you do anything serious, think about backups and failover first.
4. Don't treat RAID as a backup. Multiple drives can fail at once - and it really does happen.
5. Be prepared for a drive failure. It doesn't happen often, but when it does, you'd better start diagnostics and rebuild the array straight away.
6. Think about your failover strategy: will you be fine if your machine burns in a datacenter fire? If not, rent at least two machines in different locations. Making sure they're in sync is your duty and an interesting challenge in itself.
7. How are you going to send e-mail? If not via a smarthost (an external relay by Amazon, Google etc.), you need to configure your server and DNS properly (SPF, DKIM, DMARC), add specific quirks for Google and Microsoft, and be patient - building reputation takes time. (A minimal smarthost sketch follows after this list.)
8. Do you need ECC? If you don't, you can use desktop CPUs with much better bang for the buck. [0]
Most other quirks are related to your use cases, e.g. if you plan to do streaming, there are other factors at play that might influence your choice.
[0] https://jan.rychter.com/enblog/cloud-server-cpu-performance-...
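On point 7: if you go the smarthost route, the application side stays simple; you hand outbound mail to an authenticated relay and let it worry about deliverability (you'll still want SPF and DKIM records pointing at the relay). A minimal sketch using Python's standard smtplib, where the relay hostname, port and credentials are placeholders for whatever provider you pick:

```python
# Minimal sketch of relaying outbound mail through an authenticated smarthost.
# The relay hostname and credentials below are placeholders, not a real service.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alerts@example.org"
msg["To"] = "me@example.org"
msg["Subject"] = "Test via smarthost"
msg.set_content("Hello from the colo box.")

with smtplib.SMTP("smtp.relay.example.net", 587) as smtp:  # assumed relay host
    smtp.starttls()                                         # upgrade to TLS
    smtp.login("relay-user", "relay-password")              # assumed credentials
    smtp.send_message(msg)
```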
Once you have the business internet connection, run a Pi cluster, get a NUC on the faster side and load it with VMs, serve your resume from a static site off your old not-used-anymore PC, do whatever you want.
If you want to play with the hardware, it's best to self-host it, assuming your internet pipe is good enough for whatever tasks you envision. I have a fat pipe so I self-host, but I also keep standby and some other dedicated servers on Hetzner.
I've learned a ton (and saved a bunch of money) transitioning from EC2, RDS, and Elastic Cloud to a single OVH dedicated server running Proxmox to host multiple VMs.
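Once Proxmox is on the box, VM creation can be scripted against its HTTP API rather than clicked through the web UI. A rough sketch using the third-party proxmoxer client (pip install proxmoxer requests); the host, node name, bridge, storage pool and ISO path are all assumptions you'd replace with your own:

```python
# Sketch: create a VM via the Proxmox VE API using the "proxmoxer" client.
# Hostnames, node/storage names and credentials are placeholders.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("pve.example.org", user="root@pam",
                     password="secret", verify_ssl=False)   # assumed credentials

node = "pve"  # assumed node name
proxmox.nodes(node).qemu.create(
    vmid=101,
    name="sandbox-vm",
    memory=2048,                                  # MB of RAM
    cores=2,
    net0="virtio,bridge=vmbr0",                   # assumed bridge name
    scsi0="local-lvm:16",                         # assumed storage pool, 16 GB disk
    ide2="local:iso/debian-12.iso,media=cdrom",   # assumed ISO already uploaded
)
```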
Colocation in the UK tends to be pretty expensive. Unless you're able to get it for free / at a significant discount on mates rates, it's generally unviable.
If you've reached the point where it's economically viable for you (considering the up-front cost of the hardware + ongoing colo costs), you now need to factor in the ongoing support costs of the hardware. You may get lucky and have nothing break (happened to me twice over ~8 years), or you may get unlucky and lose PSUs, DIMMs, disks, fans, controllers, etc. The severity of this depends on your particular hardware configuration. When things break, you either need spares on site, or you need to take/ship spares to site. You then need to either get remote hands to fix it for you, or fix it yourself.
Honestly, you're almost always better off just using one of the cheaper cloud providers and/or renting bare metal than trying to do this.
However, if you've determined that you're in that sweet spot where it does actually make more sense to colo a server (as I did for many years), then go for it, and have fun!
Like a lot of other commenters have noted, make sure you have an IPMI/iLO/DRAC interface to your server. You probably want that sitting behind a firewall rather than directly exposed to the internet (so now you need a real firewall as well, to protect your IPMI interface), unless you want your server pwned in no time.
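One common pattern for keeping the IPMI off the public internet while still reachable is to put it on a private management network and tunnel to it through an SSH jump host (which could be the firewall itself). A rough sketch with the sshtunnel package (pip install sshtunnel), where every hostname, key path and address is a placeholder; the equivalent plain-OpenSSH command would be ssh -L 8443:192.168.100.10:443 me@gateway.example.org:

```python
# Sketch: reach a firewalled IPMI web UI through an SSH jump host instead of
# exposing it directly. All hostnames, key paths and addresses are placeholders.
from sshtunnel import SSHTunnelForwarder

with SSHTunnelForwarder(
    ("gateway.example.org", 22),                  # assumed firewall / jump host
    ssh_username="me",
    ssh_pkey="/home/me/.ssh/id_ed25519",
    remote_bind_address=("192.168.100.10", 443),  # assumed IPMI address on the mgmt network
    local_bind_address=("127.0.0.1", 8443),
) as tunnel:
    print("IPMI web UI now reachable at https://127.0.0.1:8443")
    input("Press Enter to close the tunnel...")
```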
If you're thinking you can't do it at home due to space / noise constraints, look at smaller units like NUCs, laptops, or other SFF PCs that can be used as virtualisation hosts. You won't get as much bang for your buck as you would from a used rackmount server, but the administrative overhead costs are a lot lower than dealing with DCs.
1. Startup: You use AWS / DigitalOcean because it’s so easy and $5-10/month isn’t much
2. Startup or consultancy with traction: either you have too many projects or they start getting real traffic, so instance sizes and bandwidth / storage costs grow
3. Small company: You realize self-hosting is probably cheaper and the machines are way faster self-hosted
4. Medium-size company: You self-host and seem to be saving money, but eventually you scale to the point of needing a team to support the infrastructure itself
5. Large company / enterprise: You go back to AWS because the costs are high, but that is still preferable to having a bad version of AWS written ad hoc internally :)
For you it probably doesn’t matter, as you’re probably at stages 1-3 above. So it mostly depends on whether you like managing infrastructure yourself. The cost savings are likely there, but only if it’s easy and fun for you to support it
A Dell R240 or R340 would be fine and on the smaller side, or, if you're really pressed for space, look at some of the Supermicro machines.
You'll gain local network speeds by keeping it in your home, and save a bunch of money.
Remember that colo means that if you have a hardware issue, you have to go there, so pick a facility close to you. Otherwise, there are "remote hands" you can hire by the hour.
Also, if you don't have a specific hardware need, it's likely that a dedicated server would be enough. If there's a problem, you can always swap it for another one in a very short time. With your own hardware, you're responsible for everything, and that might not work out in your favour.