Anyone have some detailed insight on how they were all built so quickly?
If an integration breaks, in my experience, your application telemetry alerts you to it. Parts of the system are paused, a fix is implemented and shipped to your fleet (the compute responsible for processing webhooks, polling API endpoints, or both), and the system is gracefully unpaused: your queue of scheduled polling tasks begins to fill again, or you let ingested webhooks flow through to the processing stages instead of holding them in a queue.
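A minimal in-process sketch of that pause/unpause gate. In a real fleet the flag would live somewhere shared (a feature-flag service, Redis, etc.) and the holding queue would be SQS/Kafka/whatever, so everything here is illustrative:

```python
import queue
import threading

# Paused integrations still ingest webhooks into a holding queue;
# nothing flows to the processing stages until the flag is set again.
processing_enabled = threading.Event()
processing_enabled.set()  # healthy by default

held = queue.Queue()  # webhooks accumulate here while paused

def ingest(webhook: dict) -> None:
    # Ingestion never stops; we only gate what happens next.
    held.put(webhook)

def worker() -> None:
    while True:
        webhook = held.get()
        processing_enabled.wait()  # blocks while the integration is paused
        process(webhook)           # the real pipeline stages go here

def process(webhook: dict) -> None:
    print("processed", webhook.get("id"))

def pause() -> None:    # called when telemetry fires an alert
    processing_enabled.clear()

def unpause() -> None:  # called once the fix has shipped to the fleet
    processing_enabled.set()
```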
In theory, you’re “just” serializing and deserializing JSON. In practice, it’s a constant grind keeping the machine running.
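Concretely, the grind tends to look like defensive deserialization. The field names below are made up, but every integration I've worked on accumulates code like this as upstream APIs drift:

```python
import json

def parse_contact(raw: bytes) -> dict:
    payload = json.loads(raw)
    # Third parties rename fields, nest them, or change types between
    # API versions, so every extraction ends up defensive like this.
    email = (
        payload.get("email")
        or payload.get("email_address")
        or payload.get("contact", {}).get("email")
    )
    if email is None:
        raise ValueError("no recognizable email field; park for review")
    return {"email": str(email).strip().lower()}
```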
Perhaps https://news.ycombinator.com/item?id=40469773 might assist in your efforts. Good luck!
The input chunk (say Gmail) is one module, then the JSON processing chunk is another, then the output (like an API webhook) is another. Individual services can and do break, but the other parts still work.
If you have a workflow like "Read incoming emails, extract HTML data to JSON, extract object from JSON, transform shape of JSON, send to API", that's probably (at least) 4 different modules. If the incoming email part breaks, they can fix that separately. Likewise, you can pretty quickly hook up another output module, like if you wanted a direct connection to the Airtable API instead of a generic webhook.
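A toy sketch of that module split. The stage bodies are stubbed, since the point is the boundaries: a failure surfaces in exactly one stage, and swapping send_webhook for a hypothetical Airtable module means replacing one function:

```python
import json
from typing import Callable

def read_email(raw: str) -> str:
    return raw  # e.g. fetch the HTML body via the Gmail API

def html_to_json(html: str) -> dict:
    return {"order": {"id": "123", "total": "9.99"}}  # real code would parse

def extract_object(doc: dict) -> dict:
    return doc["order"]

def transform_shape(obj: dict) -> dict:
    return {"order_id": obj["id"],
            "amount_cents": round(float(obj["total"]) * 100)}

def send_webhook(payload: dict) -> None:
    print("POST", json.dumps(payload))  # swap in an Airtable module here

PIPELINE: list[Callable] = [read_email, html_to_json, extract_object,
                            transform_shape, send_webhook]

def run(raw: str) -> None:
    value = raw
    for stage in PIPELINE:
        value = stage(value)  # an exception points at exactly one module
```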
I also don't know that the process of adding integrations is "quick". Zapier has been around forever and in the beginning they barely supported any third-party integrations. Make.com was previously Integromat and they also had few integrations at first – I remember requesting a bunch a few years back, when I was looking for a tool in this space.
But then serverless workers really took off and it became easy to write your own functions-as-a-service instead of waiting for the no-code tools to update.
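One of those custom steps might be nothing more than a small AWS Lambda behind API Gateway: receive a webhook, reshape the JSON, forward it downstream. The endpoint and field names here are placeholders:

```python
import json
import urllib.request

DOWNSTREAM_URL = "https://example.com/ingest"  # placeholder endpoint

def handler(event, context):
    # API Gateway proxy integrations deliver the raw body as a string.
    body = json.loads(event["body"])
    payload = json.dumps({"email": body["email"].lower()}).encode()
    req = urllib.request.Request(
        DOWNSTREAM_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return {"statusCode": resp.status, "body": "forwarded"}
```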