The idea behind edge computing is to provide faster response times by running logic close to the end user, reducing data travel distance. However, when you involve a database, this advantage starts to fade away. In fact, things can even get worse because the amount of data moving between the edge and the database is often more than what's exchanged between the edge and the user.
To tackle this, databases have started appearing at the edge too. Companies like turso.tech, Cloudflare (D1), and Fly.io let us deploy databases at various edge locations worldwide. But even with this approach, we're only solving part of the problem (!)
Now we face new challenges: figuring out which data should be moved to the edge, finding efficient ways to transfer data from centralized data centers to the edge, and keeping everything in sync.
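One common (if crude) answer to "keeping everything in sync" is bounded staleness: the edge serves a cached copy and only pays the round trip to the central database when the copy expires. Here's a minimal, vendor-neutral read-through cache sketch illustrating that trade-off; all the names (`EdgeCache`, the dict standing in for the origin database) are illustrative, not any provider's API.

```python
import time

class EdgeCache:
    """Minimal read-through cache sketch for an edge node.

    'origin' stands in for the centralized database; entries expire
    after ttl seconds, trading real-time sync for bounded staleness.
    """

    def __init__(self, origin, ttl=30.0):
        self.origin = origin      # dict simulating the central DB
        self.ttl = ttl
        self.store = {}           # key -> (value, fetched_at)
        self.origin_reads = 0     # counts round trips to the center

    def get(self, key):
        hit = self.store.get(key)
        if hit is not None:
            value, fetched_at = hit
            if time.monotonic() - fetched_at < self.ttl:
                return value      # served at the edge, no long haul
        # Miss or stale: pay the round trip to the central database.
        self.origin_reads += 1
        value = self.origin[key]
        self.store[key] = (value, time.monotonic())
        return value


origin_db = {"user:1": {"name": "Ada"}}
cache = EdgeCache(origin_db, ttl=30.0)
cache.get("user:1")        # first read goes to the origin
cache.get("user:1")        # second read is served at the edge
print(cache.origin_reads)  # → 1
```

The hard parts the question raises (which keys to keep warm, how to invalidate on write) are exactly what this sketch punts on with a TTL.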
Are there any companies working on this? Or am I missing something? I'd love to hear your thoughts and insights.
Why would you need to do this? D1 is meant to be a complete database, and the intention is to host everything there.
> However, when you involve a database, this advantage starts to fade away.
Why? Define database and what your needs are. DynamoDB or Cosmos DB, for example, can be multi-region read/write databases. You can deploy them in every supported region if you need to. What is missing?
Other examples...
- MongoDB Atlas (hosted MongoDB) supports global multi-region.
- AstraDB (hosted Cassandra) supports global multi-region.
- CockroachDB supports global multi-region. It has a more interesting approach of allowing you to specify the "home" region per row.
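The per-row "home" region idea can be sketched without any particular vendor: each row records its home region, and a request is served locally when the row lives where the request arrives, and takes a cross-region hop otherwise. Everything below (`ROWS`, `route`) is illustrative pseudodata, not CockroachDB's actual API.

```python
# Vendor-neutral sketch of "home region per row" routing.
ROWS = {
    "user:emea:7": {"home_region": "eu-west",  "name": "Ines"},
    "user:apac:3": {"home_region": "ap-south", "name": "Ravi"},
}

def route(key, local_region):
    """Return how a read for `key` would be served from `local_region`."""
    row = ROWS[key]
    if row["home_region"] == local_region:
        return ("local", row)   # fast path: row lives where we are
    return ("remote", row)      # cross-region hop

print(route("user:emea:7", "eu-west")[0])  # → local
print(route("user:apac:3", "eu-west")[0])  # → remote
```

The appeal is that latency-sensitive rows (a user's own data) stay near that user, while the table as a whole remains globally queryable.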
[0] lunar would also work
[1] I consistently get <300ms pings to servers 12 time zones away. But I grew up in an era when long-distance (tens of km) phone calls were exorbitantly expensive, so we would wait for night rates to transfer data — YMMV.
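For context on that figure, a back-of-the-envelope calculation: light in fiber travels at roughly c / 1.5 ≈ 200,000 km/s, and a server 12 time zones away is close to antipodal, about half of Earth's ~40,000 km circumference along the surface. The constants below are rough assumptions; real routes are longer than great-circle paths.

```python
# Theoretical minimum round-trip time to a near-antipodal server.
FIBER_SPEED_KM_S = 200_000  # ~c / 1.5; approximate
one_way_km = 40_000 / 2     # ~half of Earth's circumference

rtt_ms = 2 * one_way_km / FIBER_SPEED_KM_S * 1000
print(rtt_ms)  # → 200.0
```

So <300ms is already close to the physical floor — which is the whole argument for moving data, not just compute, toward the user.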