HACKER Q&A
📣 exabrial

Advice on Colocating Servers?


Looking at our VPS provider bills each month makes me cringe. I was thinking maybe we could offload non-critical systems to our own colocated servers onsite or in a cabinet in a local data center.

Has anyone done this and what was your experience?

How do you select a colo and what do you look for?

How do you manage the hardware and how much savings in time/$ is there really?


  👤 indymike Accepted Answer ✓
> Looking at our VPS provider bills each month makes me cringe.

Hosting costs often go up slowly over the years, and eventually, you have an unsustainable price. Just get quotes from a few other providers and go back to your current host and ask what they can do about the 70% price difference.

> Has anyone done this and what was your experience?

Two of the three companies I own are on AWS. The third is on dedicated, colocated hardware. The one on dedicated hardware gets zero benefit from CDN and cloud services, as it's just a Django/MySQL monolith where every request and response is going to be different. We moved it off of AWS because there was little benefit, and moving cut our hosting costs to a few hundred dollars a month for 20x more hardware performance.

> How do you manage the hardware and how much savings in time/$ is there really?

For the two companies on AWS, it saves us roughly three $100k/year salaries. So, yes, it's more expensive than colocated hardware, but a lot less expensive than colocated hardware plus the three additional people required to provide support and ensure service levels. For the colocated hardware, we use Fabric (an old Python SSH automation library) to manage the servers and make heavy use of systemd's logging and systemctl for managing processes and logs. It works well, and there's maybe an hour a month of actual admin work, mostly dealing with OS updates and the occasional deployment hiccup.
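
To give a flavor of it, here's a rough sketch of that kind of Fabric + systemd routine (this uses the newer Fabric "Connection" API rather than the old fabric.api interface we actually run, and the hostnames, unit name, and apt commands are placeholders, not our real setup):

  # rough sketch only -- inventory, unit name and package commands are placeholders
  from fabric import Connection

  HOSTS = ["app1.example.com", "app2.example.com"]   # hypothetical inventory
  SERVICE = "myapp.service"                          # hypothetical systemd unit

  def patch_and_restart(host):
      c = Connection(host)                 # assumes SSH keys and passwordless sudo
      c.sudo("apt-get update", hide=True)  # OS updates (Debian/Ubuntu assumed)
      c.sudo("apt-get -y upgrade", hide=True)
      c.sudo(f"systemctl restart {SERVICE}")
      c.run(f"systemctl is-active {SERVICE}")   # raises if the unit didn't come back
      print(c.sudo(f"journalctl -u {SERVICE} -n 20 --no-pager", hide=True).stdout)

  for h in HOSTS:
      patch_and_restart(h)

Leaning on systemd/journald like this means process supervision and log collection come with the OS, so there's nothing extra to install or babysit.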


👤 linsomniac
We have presence both at AWS and in a colo facility.

I wouldn't say it's a slam dunk for either one. The colo monthly bill is much lower than AWS, but when you add in our initial and yearly hardware expenses, the totals are similar. They run different workloads; not smaller or larger, just different.

I'd generally say that maintenance at the colo has been low, but we've had some instances where managing our own hardware delayed rollouts (getting a vendor to replace some bad RAM on a brand-new box), and where we put huge amounts of time into investigating hardware problems.

In particular, we have a couple of machines that have been in service a year or two that started experiencing data corruption and performance issues. We have really good hardware support, and they tried, but honestly I think the issue is that the SSDs they were sourcing weren't great. I've spent probably 100+ hours on that issue alone.

There's also the cost of maintaining your own setup. The number of hours we've poured into Kafka, ElasticSearch, and Kubernetes that probably could have been reduced by just clicking a button at AWS is pretty high.

Also, it's very nice to just have resources. We are spending a lot of money on primary and backup copies of some data at S3, plus egress. It would be cheaper to serve it from the Colo, but then we need ~50TB of storage. Provisioning that amount of storage at the colo wouldn't be cheap. Disks are cheap, storage is not.


👤 jve
I'd like to chime in from the perspective of a data center operator.

I don't see technical support being talked about here so much as the lack of it (hardware failures, etc.). I don't know if it is the norm in the industry or not (I'm just a techie), but we actually do have 24/7 support available, with the ability to call support over the phone. First-line support is easy to get past, precisely when you ask about questions or operations outside their competence. Some premium customers even call upper-level support directly. It's not exactly official, but that's what happens when relationships get developed with customers.

So basically, depending on the support level, you can get your hardware troubleshot/replaced and appropriate actions carried out at the OS level (if we administer your OS or you provide access just in time). We actually have value-added support, meaning we manage infrastructure for customers to a certain level: AD, SCCM, MySQL, MS SQL, networking, Linux stuff, private cloud, etc. I was actually a techie (until taking another position 2 months ago) for various MS things, including SQL Server performance troubleshooting; when an application was developed badly and that had an impact on performance, it was my job to identify and/or mitigate the issues at hand.

I don't know if value-added support is standard in the industry or not, but it's certainly something we try to make our specialty.

Another point


👤 aparks517
I colo at a local ISP. I've been with them for about a year and I'm happy. Selection was easy: I wanted a local shop and there's only one in town. I had worked with them before on other projects and figured we would get along well.

I manage the hardware myself, mostly remotely. Occasionally I'll go onsite to upgrade or repair something. I buy used servers and parts from a little over five years ago or so. A lot of folks buy new and replace every five years, so this is a sweet-spot for performance per dollar. Kinda like buying a car just off-lease.

Working a cycle behind has its own benefits. If you pick a very common model you'll get cheap and easy-to-find parts. I was able to get a full set of spares for far less than what a typical hardware maintenance contract would cost per year (and I have them on hand rather than in 2-4 hours). Drivers (especially open source drivers) will be better developed and less buggy and you can probably find someone else's notes about anything that usually goes wrong.

Of course if you need cutting-edge gear, this won't be a good option. But I don't, so I take advantage!

I think whether you'll save money depends a lot on how you do things. There are efficiencies of scale that big providers benefit from, but there are also efficiencies you can leverage if you're small and flexible (like tailoring hardware to your use-cases and not doing/supporting stuff you don't personally use).

I didn't make the move to save money, but to get more power and flexibility. So far, so good!

Good luck! If you decide to dive in, I hope you'll come back and let us know how it goes!


👤 johngalt
Don't. It doesn't make sense to try to jump into an area rapidly being commoditized. At most, rent bare metal/dedicated servers rather than VPS.

> Has anyone done this and what was your experience?

It works as expected. Big cost/time upfront to get all the equipment setup, then things mostly just worked. The primary challenge is transferability and standardization.

> How do you select a colo and what do you look for?

Bandwidth availability. Proximity to the team supporting it. Reputation among customers.

> How do you manage the hardware and how much savings in time/$ is there really?

Whatever custom stack is built in the colo must be well documented and have solid maintenance schedules and change-control procedures. The cost savings exist, but they're never as much as you think.

It's difficult to articulate the specific drawbacks. It's more organizational than operational. Imagine that you have a server dropping offline at random times. The logs say something ambiguous, and now you have to become a full-time sysadmin chasing some elusive problem. No one in your organization will want to deal with these sorts of issues, nor will they want you to burn time on them. There will inevitably be other tasks higher on the list. Operational issues gradually build up like other forms of technical debt. No one receives applause for updating the server BIOS over the weekend. Operational discipline/maintenance inevitably becomes the last priority.


👤 PaywallBuster
Usually it goes the other way around.

Your system critical servers are too costly/too resource intensive and you move them to dedicated.

If you're simply looking to reduce costs, why do you want colo?

You can rent a dedicated server from almost anywhere; Hetzner and OVH were already mentioned, and there are many others.

- Want cheap storage? Backblaze?

- Want cheap VPS? Vultr?

- Cheap storage VPS? Time4VPS or something else?

- Cheap dedicated server? Hetzner/OVH

- Cheap bandwidth? Maybe a CDN provider like bunny.net or a dedicated server provider like https://100gbps.org/ can offload traffic

Plenty of options for whatever you're looking to optimize for; you just need to google.

Colo is a whole different game, so why go in that direction?


👤 runako
Suggestion: share a ballpark of your VPS bill to get better advice. The best guidance will depend on whether your VPS bill is on the order of $500/mo, $5k/mo, $50k/mo, or higher.

It also might help to share some characteristics of your workload. Is it CPU or disk-intensive? What kind of uptime expectations do you have? How effectively do you expect to be able to predict your requirements in the future?


👤 traceroute66
Colo is great.

Sure all the cloud fanbois will tell you that you should only ever use the cloud. But frankly if you have decent tech competence in-house and you do the math honestly, the cloud is far more expensive than colo, particularly for 24x7 loads over a multi-year period.

If you buy quality kit and you put it in quality colo facilities then your hardware management should be minimal.

Your main problem is going to be with your "How do you select a colo and what do you look for?" question.

Whole books could be written on the subject of choosing a colo facility.

If you are serious about colo, you should start by doing a few site tours with different operators. Doing that you can (start to) develop a feel for what differentiates facilities.

There are lots of things you could look at, but the two near the top of your list should be:

     - Connectivity: Ideally you want a carrier-neutral facility with a good selection of carriers to choose from.
     - Power: Self-explanatory, really. "How much do I get?" and "Is it a true A&B feed?" are two good starter questions.
If you look at those two items first, then you can start to draw up a shortlist from that. Once you have the shortlist, you can start nitpicking between the options.

👤 bcrosby95
> How do you manage the hardware and how much savings in time/$ is there really?

Probably depends upon how many servers you need. We colocate around 2 dozen servers and between things like updates, maintenance, and hardware, we average maybe 1 work day per month managing them.

By far our most common failure is just hard drives. We have a box of spares. Our second most common is motherboard failures - popped capacitors - from servers that are 10+ years old.

Last time anything failed was about 9 months ago. Before that, we went a few years without a single hardware failure. But back in the '00s we got a bad batch of power supplies that had a tendency to catch fire; those were interesting times.

The colo center is just a 5 minute drive from our office. And there's remote hands for simple things.


👤 dahfizz
A company I worked for maintained a very large on-prem data center with much success. We maintained some production / user-facing infra, but it was mostly for internal stuff.

We had one massive compute cluster running VMware as our personal cloud. Any dev could spin up VMs as they needed. Once this was set up (which was a lot of work), the maintenance cost in $ and time was basically zero. We also had an assortment of bare-metal servers used for all sorts of things.

One of the reasons I think it worked so well for us is because IT/Linux/sysadmin skills were very high throughout the company (which I have since learned is rare). Any engineer could manage VM images, recable a server, configure VLANs, etc. If this wasn't the case, we probably would have needed double the IT team, and a lot of the cost savings would disappear.


👤 kijin
If VPS/cloud costs too much for you, try renting dedicated (bare metal) servers. There are lots of options out there, from tiny RPi-type boards all the way to multi-CPU behemoths. You don't need to bear the upfront cost, you're not on the hook for replacing faulty parts down the road, and the total cost of rental over ~5 years isn't massively different from the cost of buying, colocating, and fixing your own hardware.

I know someone who rents a 3990X monster for less than $1K/mo. Renting comparable compute capacity from any well-known VPS/cloud provider would cost at least 10 times as much. I also know someone who rents a cluster of servers with several large disks, pushing hundreds of TB of outbound transfer, again for less than $1K/mo. The bandwidth alone would cost many times as much on AWS, not to mention the storage. Of course you'd be missing the redundancy of AWS, but whether you really need it for your specific use case is your decision to make. Anyway, the point is that most of the savings are realized in the move from VPS/cloud to bare metal, not in the move from rental to colo.


👤 Nextgrid
Do you actually need colocation or can you do with a middle-ground of just renting a bare-metal server from Hetzner or OVH?

👤 cardine
We specifically colo GPU hardware due to how expensive it is.

For instance, this year we spent ~$500k (rough estimate) on purchasing GPU hardware. With cloud, that would have cost us ~$80k/mo, so after ~6 months we break even on the hardware and after that just have to pay the much smaller colo bill. We expect the hardware to last us 3+ years, so we come out far ahead by buying.
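
The back-of-the-envelope version of that math (the colo line below is a made-up placeholder, not our actual bill):

  # illustrative numbers only
  hardware_cost = 500_000          # one-time GPU hardware purchase (~)
  cloud_monthly = 80_000           # what equivalent cloud rental would run
  colo_monthly  = 5_000            # hypothetical colo/power/remote-hands bill

  months_to_break_even = hardware_cost / (cloud_monthly - colo_monthly)
  savings_over_3_years = 36 * (cloud_monthly - colo_monthly) - hardware_cost

  print(f"break even after ~{months_to_break_even:.1f} months")        # ~6.7
  print(f"net savings over 3 years: ~${savings_over_3_years:,.0f}")    # ~$2,200,000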

Our work is so specialized that if we went cloud we would still need the same devops staff, so there is not much cost difference on that side of things. We had to invest a little bit more upfront in deployment, but in the end it is peanuts compared to the $80k/mo that we are saving in cloud costs.

The only trouble is when we need to scale beyond our currently owned hardware. In that case, once we are above the capacity of the servers we own, we make another purchase and use cloud to bridge the gap in the interim.

In the end purchasing hardware ourselves is one of the highest ROI moves we have made and I would highly recommend it to others. Of course if your cloud bill is much lower you might find the cost savings to not be enough to be worth the hassle.


👤 electric_mayhem
Colo is fraught with peril.

Do your due diligence in vetting any company before committing your gear and uptime to their care.

There’s a whole lot of clowns operating as colo resellers. And by clowns I mean a lot of them range from incompetent to outright scammy.


👤 kodah
Lots of folks focus on compute and disk cost when they're in the cloud because those are usually the two biggest items on the bill. That's reasonable, but when transitioning a distributed system to a hybrid infrastructure model (e.g., cloud plus a dedicated colo), it's important to factor in network cost. Cloud providers usually charge for ingress or egress and have a marginal cost for inter-DC traffic (e.g., availability zone to availability zone on AWS). Distributed systems are chatty by nature, so if requests are constantly leaving your cloud and entering your new DC, you're potentially paying twice for the same transaction. This cost adds up fairly quickly. The same thing happens if you operate an application in two regions on AWS and have a lot of region<>region activity.
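
As a toy illustration of how that adds up (every figure below is a made-up placeholder, not any provider's actual price sheet):

  # rough estimate of cross-boundary traffic cost in a hybrid setup
  requests_per_month = 500_000_000    # chatty service-to-service calls
  bytes_per_request  = 20_000         # average payload incl. headers
  egress_rate_per_gb = 0.09           # hypothetical cloud egress $/GB

  gb_per_month = requests_per_month * bytes_per_request / 1e9
  monthly_egress_cost = gb_per_month * egress_rate_per_gb

  print(f"{gb_per_month:,.0f} GB/month leaving the cloud "
        f"=> ~${monthly_egress_cost:,.0f}/month in egress alone")   # 10,000 GB, ~$900

And that's only one direction; anything the colo sends back may be billed again on its side, and cross-AZ or cross-region chatter inside the cloud stacks on top.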

👤 0xbadcafebee
Don't manage your own gear. It's a bad idea that almost never helps you, like doing all your car maintenance yourself. Changing your oil yourself? Cheap and easy. Replacing your entire A/C system? Pay someone to do it. Life's too short, and you won't do as good a job as a pro who does it all day.

Things a VPS provider does:

  - Buy hardware (every 3 years)
  - Rack hardware
  - Network the hardware
  - Power the hardware
  - Troubleshoot and replace buggy/shitty/dying hardware
  - Provide an API to:
    - out-of-band power management
    - out-of-band console management
    - remote deployment of VMs
    - out-of-band network maintenance
  - Patch security holes
  - Deal with internet routing issues
  - Move everything to a new colo once the old colo becomes more unstable and expensive
If you want to do all of that, you might as well get someone to pay you to co-lo their stuff too, and then provide an interface for them to use, and then you're a VPS provider.

There is only one good reason to colo your own gear: if you have to. If what you want to do simply isn't possible at a VPS provider, or is too costly, or you wouldn't get the guarantees you need. It's the last resort.


👤 mgbmtl
I helped manage dedicated servers in colo for a small company for a few years, and I don't have very good memories about it.

We had a cabinet of interconnected machines, so we had our own routers, load balancers, two uplinks, etc. It's a lot of stuff to manage for maybe 25 servers (which ran VMs).

Currently I use OVH and Hetzner and find it to be the best of both worlds. Hardware isn't my problem, and the costs are low. We run ZFS and KVM, so moving a VM around is easy.

On another project though, we split $5k of hardware for a server with 20TB of disks, 4U. The server has been running for over 5 years without issues. For a single, special-use server, colo can be nice.


👤 chatmasta
Check the WebHostingTalk forums [0] for deals and promotions, especially from small providers (assuming your business case is compatible with that).

You might also want to start by renting a “dedicated server” instead of colocating your own hardware. The savings will still be significant (especially with unmetered bandwidth, if you’re comparing to cloud-vendor price gouging).

As for personal recommendations, I’ve had good experience with Psychz.net FWIW.

[0] https://www.webhostingtalk.com/


👤 robcohen
One trick I've found for colos: find a small local ISP or WISP and get to know the owners. Do research on local colos and what they charge for, say, one rack. Then halve that price, split it amongst 2-3 friends, and make an offer to the ISP. More often than not they'll accept, especially if you can pay 3+ months in advance.

👤 AndyJames
Find a colo with 24/7 access, high uptime (good networking, multiple connections to the internet from independent providers, and a proper setup for running when there's no power), and "remote hands" in case you need a server manually reset.

The rest of the questions you have to answer yourself. The initial server cost will be way higher than a VPS, plus maintenance, and paying for colocation is also not cheap. Servers will have to be upgraded every ~5 years depending on the scale, and you have to buy machines for the worst-case scenario; there's no automatic scaling, so if you sometimes need 32 cores and 1TB of RAM you have to buy that even if 99% of the time it sits idle.

I would rather find cheaper VPSes for non-critical systems or work on optimizing the current solution.


👤 throwaway7220
I've done this off and on for the better part of two decades. I know a few good colos across the US if you're interested.

Much of it is going to depend on your workloads. If you're just running ephemeral VMs on something like VMware or another hypervisor, you won't run into much of a problem.

Things start getting a bit more complicated if you are going to be using HA storage/databases. But again, that depends on your workload. And some datacenters will be happy to manage that for you.

There is a lot of money that can be saved when your workloads are fairly static, though. The key is putting together some hardware automation (jumpstart your hypervisors) and handling the depreciation cycle.


👤 deeblering4
To the people suggesting that renting or installing a few servers in a leased rack space with redundant cooling, power, conditioning and 24x7 security is somehow dangerous or hard, please go home and sleep it off. You are drunk off cloud kool aid.

👤 edude03
I’ve done both and am currently doing both. Like other commenters have said it depends heavily on a lot of your specific circumstances.

I’d be happy to give you more advice if you can say more about who’d be managing it, what your current costs are, and roughly what your use case is, etc. For some generic advice though, I’d say renting dedicated servers is typically the way to go unless you have a specific reason you want your own hardware.

In my case, my reason is experimenting with RDMA over Converged Ethernet for machine learning, and I couldn’t find machines with GPUs and capable interconnects for rent. If you don’t have specialized requirements, though, any provider is probably fine.


👤 cpach
I’ve heard good things about Packet.com. They were acquired and are now part of Equinix Metal. Might be worth having a look: https://metal.equinix.com/

👤 a2tech
I have a pretty even split of customers in cloud providers and hosting on rented servers in a data center. Using something like Hivelocity kind of splits the difference: they provide the machine for a low premium and you do all the management. They'll handle all hardware swaps and physical poking/prodding. The price beats the pants off hosting in AWS/DO, but it's pricier than just outright buying the hardware. All things have tradeoffs.

👤 Ologn
> I was thinking maybe we could offload non-critical systems to our own colocated servers onsite

Definitely have a strategy for cooling the servers in place. If you put two dozen servers of a certain type in a room, how much are they going to warm the room up? How are you going to cool the room off in the summer (and other seasons)? What will the temperature of that room be on a Saturday at 6 PM in summer, and will anybody be around on Saturday at 6 PM, or Sunday at 4 AM, if needed? If you have a ventless portable air conditioner in the server room (not that I am recommending it, but I have walked into many on-site server rooms with them), does condensation form in it? If it drops condensation into a bottle, who empties the bottle? What do you do if the condensation bottle fills up at 6 AM on a Saturday in July, the A/C shuts off, and the temperature rises through the day?

It's good you are thinking about this and planning this, because I have seen this happen in an unplanned manner many times. Two or three "non-critical" (until they crash) systems are put in a room on-site. Then without much planning, another server is added, and then another. Then it starts to get too hot and a ventless portable air conditioner is put in. Then the condensation bottle fills up and you suddenly have a meltdown in the room as one server after another overheats and goes on the fritz. I have seen this happen at small companies, I have seen this happen at Fortune 1000 companies.

So my advice: have cooling fully planned out, and be aware that once you set up a server room on-site and it's working, other divisions will start wanting to put servers in there, so pre-plan the room's maximum capacity. I suppose electricity, racking, access, security, and such need to be planned out as well. The main problem I have seen is overheating, as people, without planning, keep squeezing just one more server into the room.
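
If you want to put rough numbers on the heat: essentially every watt the servers draw ends up as heat in the room, and converting that to A/C capacity is simple (the server count and wattage below are just example figures):

  # every watt drawn ends up as heat in the room
  servers = 24
  avg_watts_per_server = 350               # example draw under load

  heat_watts = servers * avg_watts_per_server
  heat_btu_per_hr = heat_watts * 3.412     # 1 W = 3.412 BTU/hr
  cooling_tons = heat_btu_per_hr / 12_000  # 1 ton of cooling = 12,000 BTU/hr

  print(f"{heat_watts} W of heat ~= {heat_btu_per_hr:,.0f} BTU/hr "
        f"~= {cooling_tons:.1f} tons of A/C, running 24/7")   # 8400 W, ~2.4 tons

A single ventless portable unit is typically only around a ton, which is why those rooms fall over the moment somebody squeezes in a few more boxes.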


👤 nextweek2
Don't do it; the last thing you want is a RAID card malfunction whilst you are on holiday. You'll be talking downtime of days, not minutes.

You have to plan for worst-case scenarios and how your team mobilises. These things are the "insurance" you are buying when you make the hardware someone else's problem.


👤 malux85
It can work for some usage patterns, but it’s important to understand the trade offs

Hosted gives you easy hardware replacement, easier redundancy, 24/7 monitoring, etc.

But a hybrid approach worked well for one place I worked at.

Their work pattern was:

Job scheduled by scheduler and pushed into queue (all AWS systems)

Local cluster of 20 machines pull from that queue, compute the job, and store the result back.

It worked because:

1) if a job failed, the scheduler would re-schedule it

2) the local machines were treated as disposable, if one broke it was repaired or thrown away

3) deployment to the local nodes was simply copying a binary and config file

4) the latency didn’t matter as most effort of the jobs was compute

5) the bandwidth didn’t matter because the jobs were small and didn’t require much data

6) the tasks were embarrassingly parallel, so throughput scaled linearly with nodes

Sometimes it can work, but without knowing the specifics of your compute jobs the best I can do are the above generalisations
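
For illustration, a stripped-down version of what one of those local workers looked like, assuming the AWS queue was SQS (the queue URL and the do_work step are placeholders, not the real job code):

  # disposable local worker pulling jobs from an AWS queue (SQS assumed)
  import boto3

  QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # placeholder
  sqs = boto3.client("sqs", region_name="us-east-1")

  def do_work(body: str) -> None:
      ...  # the actual compute; results get written back to AWS-side storage

  while True:
      resp = sqs.receive_message(
          QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20
      )
      for msg in resp.get("Messages", []):
          do_work(msg["Body"])
          # delete only after success; if the box dies mid-job the message
          # becomes visible again and gets picked up by another node
          sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

The delete-after-success pattern is what makes the local machines genuinely disposable: a dead node just means its in-flight job reappears on the queue.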


👤 citizenpaul
The experience is pretty much the same, apart from differences that depend on how secure the facility is.

Look for? Mostly costs, unless you have regulatory requirements.

Savings are going to be heavily dependent on what you do with the stuff there. If you have legacy stuff or huge full-server usage (looking at you, accounting systems), a colo can be a lifesaver. If you are 100% whiz-bang 2.0 or whatever, there is probably little reason to have a colo at all.

Just make sure there is some provision for the fact that it's not a one-time cost: servers need to be replaced every 5 years or so, and parts need replacing too. You don't want the "computers don't have moving parts, why should we replace them" conversation when a critical server is on its last legs at 10 years old or more.


👤 71a54xd
Anyone have suggestions for sites in the tri-state area?

The biggest advantage of colo is crystal-clear and consistent broadband. FiOS fiber is as close as you'll get in an apartment or office, but in most cases "gigabit" service is around 800Mbps down and a meager 40-50Mbps up :(.

Also, if anyone here is looking to split a rack in NYC or within 1hr of Manhattan I'd be very interested in splitting a full rack of colo. I have some gpu hardware for a side-project I need to get out of my apt and connect to real broadband. My only requirement is 1G/1G broadband and around 1300W of power. (email: hydratedchia at gmail dot com !)


👤 bullen
I solve this by building a 3x VPS (one node per continent)/2x home fiber combination (for redundancy). That way you get the best of both worlds:

For real-time stuff you use VPS, for bandwidth/storage you use home since those are cheaper there!

I use all passively cooled, lead-acid-backed hardware: an Atom (load balancer / fast disk) + Raspberry Pi 2/4 (slow disk / workhorse) and my own stack that uses Java.

This week Google had an outage, so now I'm going to make my backups (AWS in Asia and IONOS in the US) part of the default DNS.

The only problem I have is that IONOS doesn't easily scale up, so I don't know what to do for a low-latency VPS workhorse in the US!


👤 ab_testing
Why not rent one or more dedicated servers rather than colocating? That way you don’t have to tend to hardware issues while still being cheaper than the cloud.

👤 jtchang
These days you have to treat colocation as a fungible resource. It's good for batch processing and things like that where you can distribute the work.

👤 jonathanbentz
How does your billing work? Are you on a plan for that VPS with dedicated, fixed burstable, or 95th percentile burstable? You may be able to find some less cringe worthy bills if you change how you are billed. Although that might mean you have to change providers, too.

👤 altmind
I won't give you any advice on the feasibility and price, but if you want cheap colo, HE offers a full cabinet in California for $400/mo. That's a great deal. In Chicago, just a rack without internet is around $1200.

👤 throwaway7220
I would recommend that you consider Lunavi. www.lunavi.com

It all depends on your company structure. And it does depend on your workload as well.

I do keep my workloads up and working well in their infrastructure. They have private clouds, they have public clouds, and they have my cloud.

I use their private cloud for all of my critical services. We're an engineering company that is always on, and they always keep the services up. They have migrated it a few times, and the downtime was always sub-millisecond.

I have been known to call them late at night. They do take care of things.

We are just living in an era that is so focused on the cloud. Nobody gets pissed if an Amazon hypervisor shuts down. Yet the technology has been there for 20 years to keep that from happening.

Amazon is Walmart. Go find a Lunavi.


👤 nicolaslem
Did you consider moving this non-critical load to a cheaper VPS provider? You are unlikely to be able to beat them on price when taking into account hardware and engineering time.

👤 dzonga
Colo rather than stuffing it in an onsite cabinet, please. A decent colo provider should be able to handle the rest of the stuff for you after you send them the hardware.

👤 doublerabbit
> Has anyone done this and what was your experience?

I have 2x 2U in two different datacentres in the UK, connected together via IPsec using pfSense.

4U is common, and a rack would be nice.

Your costs will be based on rack units, bandwidth, and transit/connection speeds. 1Gbit is going to cost more than 100Mbit, but you can normally negotiate this.

Hardware costs are up-front, but once a server is humming along, it lasts for a good period of time.

You don't normally gain DDoS protection so if you require it, factor that in.

> How do you select a colo and what do you look for?

Choose whether you would like:

Commercial providers (Cogent, Peer1, Rackspace), who own actual buildings and run a datacentre as a datacentre. They try to provide the greatest service, hand you an account manager, and give you the ability to ticket the DC-op monkeys.

Or independents, who own industrial units they call datacentres. They tend to offer decent internet feeds and have more of an independent feel. However, they may lack the 24/7 support you may need, or the enterprise features you'd find with a commercial provider.

In terms of selection, I recommend taking a look at WebHostingTalk.com under their colocation section. Google yields good results. Find a provider, get in contact, and take a tour of the DC to see how it feels.

> How do you manage the hardware and how much savings in time/$ is there really?

My servers are all second-hand eBay purchases and have had a lot of TLC to get up to scratch. Once tamed, they purr. The amount of work on them is next to none: create a new VM, update the OS; general sysadmin stuff.

I would recommend that if you're looking for sanity: buy new. eBay servers are hit or miss and you never know what condition they will arrive in.

iLO/LOM is configured so I always have remote management unless the DC goes dark. The servers are resilient too; I have one server with two failing RAM sticks that is still operating with no issues.

Network issues can arise, and you have to ensure it's not you causing the mishap before you can get actual help. You most likely won't get DDoS protection with independent-owned facilities, but it is a growing trend, so you may get lucky.

I moved from VPS to colocation and refuse to use the cloud. The cloud has its purposes, but for me, I'd rather have bare metal and hypervisors. Paying another company to host my information, where I may not even have full control, just doesn't sit right with me if I am to provide client services. Plus, I can actually own the internet space these servers sit on. No restrictions on what services I desire.

My OS is FreeBSD, hosting virtual machines with bhyve within a jail. I will always advocate for colocation.

   FreeBSD 11
   5:09PM  up 791 days, 18:17, 8 users, load averages: 0.66, 0.82, 0.88

   FreeBSD 12
   5:11PM  up 174 days, 21:19, 2 users, load averages: 0.59, 0.68, 0.70
   5:12PM  up 174 days, 20:03, 1 user, load averages: 0.12, 0.11, 0.09

👤 matchagaucho
How critical is "critical"?

Colo is justified solely on physical security, compliance, and/or uptime these days.

There are no net cost savings, as you would own the ongoing maintenance and upkeep.

The major IaaS players run very efficiently. The monthly bill is only the tip of the iceberg; there's far more involved beneath the surface.


👤 inumedia
I transitioned from VPS to rented dedicated servers years ago, which was significantly more cost-effective.

If you do this, I recommend keeping your stack as portable as possible. It was relatively easy for me since I was already using Docker, and I started testing Rancher/K8s on the dedicated servers. This was years ago and I'm fully committed to K8s at this point.

This year I actually took it a step further and ended up building a little 4U server that I colocated at a smaller data center I was already renting dedicated servers from. I needed it for high data volume with latency as minimal as possible (CPU and storage together) while keeping recurring costs minimal.

For your questions:

> Has anyone done this and what was your experience?

Relatively straightforward: a lot of up-front cost, but it has been overall about the same / breaking even, with higher performance and results. I went with one that allowed me to rent IPv4 addresses without doing any peering or extra work; essentially just supply the computer, set it up, and let it go.

> How do you select a colo and what do you look for?

For me, cost and latency. I've been looking into colocating another server in Asia but haven't had a lot of luck picking a specific data center yet.

> How do you manage the hardware and how much savings in time/$ is there really?

Honestly, management has been pretty minimal. My server was entirely new so nothing has broken; I just keep tabs on it every couple of weeks and make sure my ZFS storage hasn't turned unhealthy.
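
Something along these lines covers the "make sure ZFS is healthy" part (a rough sketch, not my exact script; zpool status -x prints "all pools are healthy" when nothing is wrong, and the alerting side is left as a stub):

  # tiny health check: "zpool status -x" only reports pools that have problems
  import subprocess, sys

  out = subprocess.run(["zpool", "status", "-x"],
                       capture_output=True, text=True).stdout.strip()

  if out != "all pools are healthy":
      # hook your email/Slack/whatever alerting in here
      print("ZFS problem detected:\n" + out, file=sys.stderr)
      sys.exit(1)
  print("zfs ok")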

For some absolute numbers for you, my server specs and cost:

- 4U, colocated
- 4x 8TB HGST HDD (set up essentially as RAID 10, so 16TB usable space)
- 2x 2TB NVMe SSD (one actually isn't used currently, but is in the slot and available)
- AMD Ryzen 9 (32 threads / 16 cores)
- 4x 32GB G.Skill RAM (128GB)

I also have a spare 256GB Samsung 2.5in SSD on standby (literally in the case, just unplugged) in-case something happens to the NVMe drives.

All-in, up-front was around $4k USD, monthly is $95 USD (all costs included), and I really only need to check on it every now and then and let Rancher/K8s take it from there. Previous costs were around $200-300/mo for a few different dedicated servers and S3 storage.

There have been incidents at the data center I went with, which is definitely something you'd need to plan for; it seems to average one incident every 1-2 years. There was an incident a couple of months ago (a power outage), and something happened with my server that actually required reformatting the NVMe drives and setting everything up again over the data center's supplied portable IPMI-ish interface, which required them to schedule a time to hook it up and then use it. Not every data center will have this or be as cooperative about it.

---

I'd definitely caution against jumping over to colocation; start with renting dedicated servers at the very least.


👤 kaydub
> How do you manage the hardware and how much savings in time/$ is there really?

There isn't any. It will cost you more.