Also forgot to mention - for compute providers, other storage, bandwidth, etc., look at their Bandwidth Alliance. Free bandwidth to/from any of the members.
Plus you get better disk access speed (NVMe) than something like an AWS EBS-backed instance. You can also order new instances within a matter of minutes.
You don't have to use them, but being aware of what is available there would be a really good idea. I think some workloads could easily be deployed there at a fraction of the cost of a cloud provider.
Those are considerably cheaper, suffer less from unpredictable performance (the "noisy neighbour" problem), and are less likely to lock you into some API/feature that would become a huge headache if you ever decide to move.
Heroku - For a more "abstracted" cloud that just works
Vultr/DigitalOcean - Cheap VMs, additionally some services like managed DB and K8s
Hetzner/OVH - Cheap dedicated hosts
Netlify/Cloudflare Pages/etc - for static websites + functions/3rd party services for dynamic stuff
You get internal networking out of the box with per-app-region internal DNS entries, and with their CLI tool it's one command to SSH into your container or connect your laptop to the internal network.
As a nice bonus, their pricing is very competitive. You pay about as much as you would for the same size Linode or droplet (minimum is 256 MB RAM for $1.94/mo), which is a welcome relief vs other providers. Great free plan, too - 3 apps at 256MB RAM running 24/7 in perpetuity.
Although, my dealings with them have been entirely negative. Their strategy is to clone every service AWS has, except the billing system, which hides a lot of majorly expensive pitfalls.
Also, if you choose one of their mainland-hosted zones, getting your data out is even more expensive than from the other zones.
So in this case, "knowing more about" is really "tread carefully".
- no nonsense prices
- solid hardware, with good, consistent performance & low steal (even on shared nodes, but dedicated is there if you need it)
- great support (humans you can call)
- adding more and more managed services over the last few years
* Alibaba Cloud
* AWS
* Azure
* Azure Stack HCI
* Baidu Cloud
* BYOH
* Metal3
* DigitalOcean
* Exoscale
* GCP
* Hetzner
* IBM Cloud
* KubeVirt
* MAAS
* Nested
* Nutanix
* OpenStack
* Equinix Metal (formerly Packet)
* Sidero
* Tencent Cloud
* vSphere
We at DigitalOcean tend to see a lot of SaaS builders on our cloud. We are obviously smaller compared to the big 3, but in our experience most people starting out with an idea don't really need all the bells and whistles.
Price-competitive, performant, and easily-automatable IaaS (compute/network/storage), and some managed services on top of it, are sufficient for a vast majority of builders.
I think about this thread all the time: https://news.ycombinator.com/item?id=22310879
Here is some marketing material with more information: - https://www.digitalocean.com/blog/how-to-scale-your-saas-pro... - https://www.digitalocean.com/blog/forrester-total-economic-i...
If you need many of their services, AWS is very difficult to beat. Nobody else quite matches up with them on breadth of offerings. If you don't, do not use AWS due to the cost and the mental overhead of management.
If you just need to spin up some servers and want them to be fast and cost effective, Hetzner is the current champ in the US market (thanks to their new US datacenter). DigitalOcean, Linode and Vultr are entirely reasonable options after Hetzner.
If you have a $50-$100 / month or less budget, know how to set up and secure a Linux server, and want to stretch your dollars as far as you can with a top-notch cloud provider, Hetzner wins at present. And then throw Cloudflare out in front of it until or unless you can justify paying for a service. Keep your costs low, keep your infrastructure simple, keep your runway long, focus on selling selling selling (customers).
For ease of use, I like GCP.
Their serverless story (Oracle Functions) is pretty clunky, and the Terraform provider is quite verbose, but for standing up instances it works in a sane and logical manner.
The hyperscalers are the three you mentioned.
Then you have Alibaba, Tencent and Baidu. ( All Chinese )
Oracle and IBM Cloud
OVH, Hetzner,
Linode, Digital Ocean, UpCloud, Vultr, Scaleway. Possibly Fly.io ( not sure )
They all own at least part of their datacenters, network and hardware. You have other services like Render and Heroku.
I am not aware of any potential upcoming players in this space (tell me if you know of any). But I think Cloudflare will likely be the big disruptor in the next 5 years. They will own most of what we call "Cloud" services and the network, and leave only the capital-intensive compute to others.
Then you have many players that are the size of Linode or DO but don't operate as a "Cloud".
I am still sad that Heroku didn't improve or move up/down the ladder. They seem to be happy where they are.
Looking at YCombinator's alumni, there are a few interesting ones - for example, CloudThread.io (no affiliation) tracks cost efficiency of teams and applications, providing "technical cloud cost unit metrics as a service."
My company, Usage.AI, automatically buys and sells Reserved Instances to cut EC2 costs by up to 57%.
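To illustrate where a figure like "up to 57%" comes from, here is a quick back-of-the-envelope comparison of on-demand vs reserved pricing. The hourly rates below are hypothetical placeholders, not actual AWS prices:

```python
# Hypothetical rates for illustration only - not actual AWS pricing.
on_demand_hourly = 0.0832   # placeholder on-demand rate
reserved_hourly = 0.0358    # placeholder long-commitment reserved rate

hours_per_month = 730       # ~24 * 365 / 12
on_demand_monthly = on_demand_hourly * hours_per_month
reserved_monthly = reserved_hourly * hours_per_month
savings_pct = (1 - reserved_hourly / on_demand_hourly) * 100

print(f"On-demand: ${on_demand_monthly:.2f}/mo, "
      f"reserved: ${reserved_monthly:.2f}/mo, "
      f"saving {savings_pct:.0f}%")
```

The catch, of course, is the commitment: the discount only materializes if the instance actually runs for the reserved term, which is exactly the gap that buying/selling RIs tries to close.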
Been using Tilaa (if you can tolerate being hosted in Netherlands) for nearly 10 years, I'm quite happy with them.
Vultr is not as good, but it's decent enough.
But what separates a cloud provider from a VPS/Hosting provider?
I might argue some kind of object storage, as that's the differentiating feature AWS had when it came out. In which case, I'm not sure what there is outside of AWS and GCP, sadly.
OpenStack is the open source democratization of the cloud. In Europe there are a dozen such providers, in the US there's only one it seems. Hoping they continue to grow and other hosters break away from basic VPS hosting and also provide OpenStack clusters.
Pure IaaS ( Infra as a Service ) players are:
Oracle Cloud, DigitalOcean
It’s compatible with S3’s API but way, way cheaper. Free egress.
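Because it speaks the S3 API, existing AWS tooling usually only needs an endpoint override to point at an alternative store. A hypothetical AWS CLI v2 config fragment (the profile name, service-block name, and URL are placeholders for whatever your provider gives you):

```ini
; ~/.aws/config - hypothetical profile for an S3-compatible provider
[profile alt-storage]
services = alt-s3

[services alt-s3]
s3 =
  endpoint_url = https://s3.example-provider.invalid
```

Then `aws --profile alt-storage s3 ls` (or the equivalent SDK client with that profile) talks to the alternative endpoint instead of AWS.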
- Oracle Cloud - IBM Cloud - Alibaba Cloud
Why do you think it's important to know more than the one you run on?
What's the SaaS going to look like technically? Lots of compute processing? Or maybe large volumes of data? Perhaps machine learning? Or is it just a simple website and app deal?
Then there's the question of the technical competence on the ground (and the opinions that tag along for the ride). On the one hand, what's the expertise level? On the other hand, what do people prefer using? (And separately, what do people have experience with?)
I wonder (I don't have enough context to competently move the slider to "proper accusation") if somewhere along the way a general search for un-turned-over rocks got specialized/narrowed into "the overhead's all in the sheet metal benders" (the hardware). If this *is* the case, the only correct course of action (IMO) is to wind the train of thought backwards (oohc oohc) back to that frame of reference, then apply https://en.wikipedia.org/wiki/Five_whys until you're somewhere alien and interesting.
(Reiterating the caveat at the start of the previous paragraph, this is all massive conjecture and just-in-case assumption.)
If there was in fact a point at which hardware was specifically called out as a primary focus of optimization, I would loudly note that this type of hyperfocus leaves space for entire forests' worth of trees to fall over without ever being noticed, in this case for all of the software. Not just certain bits of it but like the whole kit and caboodle, unnoticed.
A related alternate possibility is that hardware optimization might be being treated like an axis point to orbit around, which can contribute to seeing things like immovable Mt Everests, and the construction of great and confusing Rube Goldberg machines to work around... perceived resistance that isn't really there.
To offer a 180-degree counterpoint that sorta flies in the face of the abstraction-away you might be trying to do here, if you want to look at different providers, I would suggest adopting a pat-answer strategy of keeping the architecture cloud-agnostic *where reasonable to do so*, and further suggest trying multiple providers - how does their support help out when it's 3am and you have no attention span and you accidentally something ridiculously straightforward? What's the performance like relative to the workload? What do all the engineers that are going to be headdesking against this stuff every day think about different options? Etc.
It's quite possible this reply is entirely misguided, in which case please ignore. I'm still learning the art of reading intent through text, I have a long way to go.
Hetzner
At some point 'the other clouds' aren't as relevant.
Think about the following interactions instead:
- CNCF compatibility (interpret as deep and as wide as you like)
- Infrastructure vs. Platforms vs. Services
- Legal boundaries
- Locality (can interact with legal limits, but also latency, transfer costs)
- Scope of services vs. scope of what you actually need
A lot of providers are good at a thing, and bad at everything they tack on to it. Some providers are reasonably good at many things, but win on integration between those things. Others are simply too dissimilar to orchestrate, so either you'll have to bring your own orchestration or not use them in orchestrated scenarios.

Instead of knowing about the clouds, know about requirements engineering. Fitting your needs to the services you pay for is way more important than the details of those needs and services.
If you just need some random compute (read: a shell into an OS, a complete VM, a container, things like that) and nothing else, do NOT use some cloud. It will require you to do a lot of other things as a side-effect of using those services at all, and will cost a lot for what you need.
On the other hand, if you need to be highly elastic, have completely managed RDBMSs available on demand, and orchestrate networking, IAM, object storage, block storage, compute and ingress, do start out with a cloud.
Regardless of what you are building, make sure you know ahead of time:
- What your scaling is going to depend on (usage, work hours peaking, seasonal peaking, tenants)
- What your scaling is going to be like (horizontally scale and spread the load? vertically scale for a few weeks until you reach the scaling limit and then rebuild the application the right way instead? deploy one instance per customer?)
- What availability rules are you going to have? (downtime? data loss? time to recover?)
- What legal limits will it have?
Example for an MVP SaaS: say you want to manage shopping lists for consumers, you might call it Shoppr and build a PWA and an app-wrapped PWA so you get immense reach. You mostly have front-end engineers, but you do know a bit about metrics and scaling. I'd say that means:
- Downtime for a few hours unlikely to tank the business
- Legal limits are basically just generic data protection
- Scaling is likely linear
- Since your data is mostly basic CRUD, any read-replicated system will do
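For basic CRUD like this, getting the indexes right is most of the battle. A minimal sketch with Python's built-in sqlite3 (the "Shoppr" schema is hypothetical) showing how to verify that a common query actually hits an index rather than scanning the table:

```python
import sqlite3

# Hypothetical Shoppr schema: shopping-list items keyed by user and list.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE items ("
    "  id INTEGER PRIMARY KEY,"
    "  user_id INTEGER NOT NULL,"
    "  list_id INTEGER NOT NULL,"
    "  name TEXT NOT NULL)"
)
# Without this index, every "show my list" query scans the whole table.
con.execute("CREATE INDEX idx_items_user_list ON items (user_id, list_id)")

# EXPLAIN QUERY PLAN tells you whether SQLite will use the index.
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT name FROM items WHERE user_id = ? AND list_id = ?",
    (1, 1),
).fetchall()
print(plan)  # the plan should reference idx_items_user_list, not a full scan
```

The same check exists in some form on every serious database (`EXPLAIN` in Postgres/MySQL), and it's worth running against whatever SQL your ORM emits.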
This can be built using any stack, and as long as your persistence can keep up you're golden. Don't fuck it up with an ORM that doesn't know how to create the proper indexing rules on tables and you can easily get a couple of million customers on an IaaS-only provider that just has virtual machines or containers, and only has one flavour of persistence store. Plonk Cloudflare in front of it and done.

You can make this infinitely more complicated, but as an example, add this feature to make this entire setup suck and the entire infrastructure incompatible with the needs of the application: international receipt scanning to recommend/autocomplete shopping lists for customers. Suddenly your requirements are expanded with:
- Incoming upload queue
- Object storage for image blobs
- OCR or ML pipeline to process images
- ML or Analysis pipeline to make sense of the contents of the now processed/read images
- Instances or expansion for all of the above per region
To make your traffic bill not suck you'll need endpoints in most major regions, and you'll probably want to prevent cross-region transfers, so you'll want multi-region compute. Since you don't really have to do the processing in realtime, you'll be doing some queue work and perhaps have a DLQ that needs human intervention or QA analysis for product improvement. All that stuff also needs a 'control panel' for lack of a better word, so you'll be adding backoffice systems too, and those will have a workflow that doesn't compare to consumers at all and should never share any interaction with them, so now your application tenancy requirements change as well, which flows down into infrastructure requirements. At the same time you'll also need training data or validation data, and you'll want to be able to do all of that elastically to not go bankrupt paying for 100% of capacity that you'll use 50% of the time at best. Suddenly 99% of the vendors are unable to provide what you need and the 'big three' remain (well, not exactly, but for illustrative purposes this will do).

Considering a good SaaS might grow, make revenue, and be sold or be valuable etc., there will be requirements stacked on top of everything else about redundancy, durability and availability, and those will be increasingly hard to guarantee in an IaaS-only or PaaS-only provider scenario.
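The retry-then-dead-letter pattern described above can be sketched in a few lines. This uses in-process queues as stand-ins for a real broker (SQS, RabbitMQ, etc.), and the "unreadable receipt" failure is a made-up example:

```python
import queue

# In-process stand-ins for a real work queue and dead-letter queue.
MAX_ATTEMPTS = 3
work, dlq = queue.Queue(), queue.Queue()

def process(job):
    """Pretend OCR step: fails on a hypothetical unreadable receipt."""
    if job["payload"] == "unreadable-receipt":
        raise ValueError("OCR failed")
    return job["payload"].upper()

for payload in ["receipt-1", "unreadable-receipt", "receipt-2"]:
    work.put({"payload": payload, "attempts": 0})

results = []
while not work.empty():
    job = work.get()
    try:
        results.append(process(job))
    except ValueError:
        job["attempts"] += 1
        # Retry a few times, then park the job for human/QA review.
        (work if job["attempts"] < MAX_ATTEMPTS else dlq).put(job)

print(results)      # successfully processed receipts
print(dlq.qsize())  # jobs awaiting human intervention
```

A real broker adds the parts that matter in production (visibility timeouts, redelivery counts, per-region queues), but the control flow is the same: bounded retries, then a DLQ that a backoffice tool drains.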
When you 'start' with a SaaS, you'll need to know what you need now, and what you might need in the near future, and make sure that whatever you do now doesn't paint you into a corner within weeks. That means that setting up IIS on a Windows desktop by hand on a VPS at some IaaS hosting provider is highly unlikely to be a good place to start. On the other hand, an equally janky setup with a random Ubuntu VM where you manually install Docker and say, Nomad might actually not paint you into a corner too much since a container can easily be run on a container-PaaS and Kubernetes beyond that.