1 - $5 Linode/DigitalOcean machines at different locations (CA, TX, NJ, London, wherever you want them)
These machines would probably run Nginx serving up static content. They'd get this data pushed to them from a central place over something like rsync/scp.
It would answer for the same domains, with the same content at each location.
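The push step could be a dead-simple shell loop on the central box; a rough sketch (hostnames, user, and paths are all placeholders):

```shell
#!/bin/sh
# Push the built site from the central box to every edge node.
# Hostnames and paths below are made up for illustration.
SITE_DIR=/var/builds/mysite/
for host in ca.edge.example.com tx.edge.example.com \
            nj.edge.example.com london.edge.example.com; do
    # --delete keeps each edge in exact sync with the build output
    rsync -az --delete "$SITE_DIR" "deploy@$host:/var/www/mysite/"
done
```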
2 - DNS Route53 / GeoDNS / Round-Robin
Servers in all those locations are cool, but we want to get users close to them. I think Route53 does GeoIP DNS, routing users to the geographically closest machine.
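Route53 does support geolocation routing policies. A hedged sketch of what one such record might look like via the AWS CLI (zone ID, domain, and IP are placeholders; you'd add one record per edge node plus a default catch-all):

```shell
# Point European users at the London node.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z123EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "mysite.example.com",
        "Type": "A",
        "SetIdentifier": "london",
        "GeoLocation": { "ContinentCode": "EU" },
        "TTL": 60,
        "ResourceRecords": [{ "Value": "203.0.113.10" }]
      }
    }]
  }'
```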
3 - $10/$20 Linode at a central location for build/deploy.
I'd probably look at Drone.io as the thing to take git repos and then build the website. This could handle JAMstack stuff pretty easily. Once it has the artifacts created, you ship them off to the Linodes in step one to be served.
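A Drone pipeline for that could be a short YAML file; a sketch (image names, build commands, and the deploy helper script are assumptions, not anything specific to your repo):

```yaml
# .drone.yml (Drone 1.x format) - build the static site, then push it out
kind: pipeline
type: docker
name: build-and-deploy

steps:
  - name: build
    image: node:18
    commands:
      - npm ci
      - npm run build              # emits the static site into ./dist

  - name: deploy
    image: alpine:3
    commands:
      - apk add rsync openssh-client
      - ./scripts/push-to-edges.sh dist/   # hypothetical helper that rsyncs to the edge nodes
```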
This is kind of the broad view of where I would start for technologies to help with that kind of thing. Then comes a lot of glue and programs to make it easier to use.
The form submission stuff they made seems pretty cool. You could make something that scans the artifacts looking for forms with a given data attribute, and when you find one, build a matching data model and stuff in a POST address. Making that would be really fun.
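The scanning half of that is pretty small. A minimal sketch using the stdlib HTML parser, assuming a made-up `data-collect` attribute marks the forms you care about:

```python
# Scan built HTML artifacts for forms tagged with a data attribute
# (the attribute name "data-collect" is hypothetical) and build a simple
# field model you could later wire up to a generated POST endpoint.
from html.parser import HTMLParser

class FormScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.forms = []        # collected form models
        self._current = None   # form currently being parsed, if any

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form" and "data-collect" in attrs:
            self._current = {"name": attrs["data-collect"], "fields": []}
        elif tag == "input" and self._current is not None:
            name = attrs.get("name")
            if name:
                self._current["fields"].append(name)

    def handle_endtag(self, tag):
        if tag == "form" and self._current is not None:
            self.forms.append(self._current)
            self._current = None

html = """
<form data-collect="contact">
  <input name="email" type="email">
  <input name="message" type="text">
</form>
"""
scanner = FormScanner()
scanner.feed(html)
print(scanner.forms)  # [{'name': 'contact', 'fields': ['email', 'message']}]
```

From the collected models you'd then generate the backend handler and rewrite the form's `action` in the artifact.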
I hope this helps, I also dig this kind of stuff =)
For example, we recently built something to the same effect to deploy our single page applications to AWS. It creates a new CloudFormation stack for each site, sets up CodePipeline for continuous deployment, places build artifacts in a public S3 bucket, and then serves them through CloudFront as the CDN. Beyond that you just need to set up Route53 for your domain and use Certificate Manager to handle SSL.
I know that's pretty specific to AWS but hopefully it gives you an idea of where you'd start.
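A stripped-down sketch of what that per-site CloudFormation template might contain (names are placeholders, and a real template needs more: a bucket policy, aliases, a certificate ARN, and so on):

```yaml
Resources:
  SiteBucket:
    Type: AWS::S3::Bucket
    Properties:
      WebsiteConfiguration:
        IndexDocument: index.html
        ErrorDocument: 404.html

  SiteDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        DefaultRootObject: index.html
        Origins:
          - Id: s3-origin
            DomainName: !GetAtt SiteBucket.RegionalDomainName
            CustomOriginConfig:
              OriginProtocolPolicy: http-only
        DefaultCacheBehavior:
          TargetOriginId: s3-origin
          ViewerProtocolPolicy: redirect-to-https
          ForwardedValues:
            QueryString: false
```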
Basically, to build it from scratch you'd need:
- A service that can respond to GitHub webhooks, build projects in an isolated environment based on a config, and push build artifacts out.
- A content distribution network that you can deploy those artifacts to.
- A DNS server to route traffic.
- An SSL service like Let's Encrypt (via Certbot) to issue and renew certificates periodically.
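The first ingredient starts with parsing the webhook. A minimal sketch of a handler that takes a GitHub push-event payload and decides what to build; the field names follow GitHub's push-event JSON, but the returned build-request shape is our own invention:

```python
# Turn a GitHub push webhook body into a build request (or None to skip).
import json

def handle_push_event(body: bytes):
    payload = json.loads(body)
    ref = payload.get("ref", "")          # e.g. "refs/heads/master"
    if not ref.startswith("refs/heads/"):
        return None                        # ignore tag pushes etc.
    return {
        "repo": payload["repository"]["clone_url"],
        "branch": ref.removeprefix("refs/heads/"),
        "commit": payload["after"],
    }

event = json.dumps({
    "ref": "refs/heads/master",
    "after": "abc123",
    "repository": {"clone_url": "https://github.com/example/site.git"},
}).encode()
print(handle_push_event(event))
# {'repo': 'https://github.com/example/site.git', 'branch': 'master', 'commit': 'abc123'}
```

From there the service would clone the repo at that commit, run the build inside a container, and push the artifacts to the CDN layer.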
To me the interface is the IDE, and important changes in source code happen when I merge branches to master. From there the ingredients are a well-developed CI/CD pipeline plus some integrated infrastructure as code (e.g. CloudFormation, Terraform, Ansible). The infrastructure as code can get a little complex, and you need to know the underlying components quite well or it might cost you more than you think. You could achieve this with a VPS too if you had one. One of the reasons I use Jekyll is that I can host it on S3, which lets us focus our operational capabilities on our main business: Servana does DevOps and cloud services.
I use Jenkins with a declarative pipeline plus CloudFormation, which configures all the infra like the S3 bucket, CloudFront distribution, etc. I kept it simple but still pro, with the main goal that a developer does not have to log into anything to trigger updates. The pipeline creates a release and tags master merges.
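The Jenkins side of that setup might look roughly like this declarative Jenkinsfile; the bucket name, distribution ID, and build command are placeholders, not my actual config:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'jekyll build'   // static site lands in ./_site
            }
        }
        stage('Deploy') {
            when { branch 'master' }   // only master merges go live
            steps {
                sh 'aws s3 sync _site/ s3://example-site-bucket --delete'
                sh 'aws cloudfront create-invalidation --distribution-id EXAMPLE --paths "/*"'
            }
        }
    }
}
```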
When I build these workflows the developer experience is very important, so I always make the process completely automated with no interruptions if it can be helped. My focus stays on one thing: the code and the development experience.
If you want to build one just for yourself, that is probably a weekend project. Doing multitenant safely is likely where it gets a lot more complex.
On the other end, though, you could rent small storage units and put Raspberry Pi 4s all around the world to mimic the CDN-ness of it. You'd need a central build server (it doesn't matter where it physically is) that pushes JAM builds out to your CDN network, and then a DNS server that routes requests to the right CDN node based on a GeoIP lookup of the client's address.
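The routing decision itself is tiny once the GeoIP lookup is done. A toy sketch, assuming something like MaxMind GeoLite2 has already resolved the client IP to a country code (node hostnames are invented):

```python
# Map a resolved country code to the nearest edge node; fall back to a
# default node for countries we haven't mapped.
EDGE_NODES = {
    "US": "tx.edge.example.com",
    "GB": "london.edge.example.com",
}
DEFAULT_NODE = "nj.edge.example.com"

def route(country_code: str) -> str:
    return EDGE_NODES.get(country_code, DEFAULT_NODE)

print(route("GB"))  # london.edge.example.com
```

A real GeoDNS server would return this hostname's A record (with a short TTL) in its DNS response.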
[0]: https://www.toptal.com/github/unlimited-scale-web-hosting-gi...
Especially the certificate fetching and distribution across the edge servers part.