So far seen:
- DuckDB (guessing it makes things faster, but no experience with it)
- Tinybird (real-time data APIs?)
- GraphQL (learning curve?)
- and many more
I am relatively new to this stuff but a long-time user of Postgres. Looking to learn, but wildly confused and overwhelmed about what I should implement for the best UX[1]
[1] Best UX defined as fast loading times for a large number of rows, the ability to load data dynamically (sockets?), and real-time data communication
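For the "fast loading of a large number of rows" part specifically, keyset (a.k.a. cursor) pagination over the Postgres you already have is usually the first win, before reaching for a new engine. A minimal sketch of the idea, using Python's built-in sqlite3 purely as a stand-in for Postgres (the table, column, and function names here are made up for illustration; with Postgres you'd do the same via psycopg and an index on the sort key):

```python
import sqlite3

# Stand-in for Postgres: the keyset idea is identical there.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
db.executemany("INSERT INTO events (payload) VALUES (?)",
               [(f"row-{i}",) for i in range(1000)])

def fetch_page(last_id, page_size=100):
    """Keyset pagination: seek past the last id the client saw.

    OFFSET-based paging re-scans every skipped row, so it gets slower
    the deeper the user scrolls; an indexed seek on id stays fast no
    matter how far in they are.
    """
    return db.execute(
        "SELECT id, payload FROM events WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, page_size),
    ).fetchall()

page1 = fetch_page(0)                 # first page
page2 = fetch_page(page1[-1][0])      # client sends back the last id it saw
```

For "dynamic loading," the client just holds the last id as its cursor and asks for the next page over whatever transport you like (plain HTTP works fine; sockets only become necessary for server-pushed real-time updates).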
https://github.com/prettydiff/wisdom/blob/master/performance...
Warning: every time I post this, people claim to want superior performance but then whine when they realize they have to actually write code (as opposed to letting NPM or React or jQuery do 99% of everything).
What matters more than your storage engine: your provider, your architecture and design, and your plans for scaling (read: plans, not actions).
There are lots of companies that can't seem to grasp that multi-tenancy would be a win for them. You remove a bunch of pain from customer-facing code; your internal admin and reporting tools take that weight instead. It means scaling problems are likely to impact fewer customers (and will be easier to address). It means your costs map more directly to your (hopeful) income.
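One common reading of the above: key everything by tenant, so customer-facing queries are always scoped to one tenant (and a hot tenant can later be moved to its own shard, which is how scaling problems end up touching fewer customers). A toy sketch, with sqlite3 standing in for Postgres — which can additionally enforce this server-side with row-level security — and all names invented for illustration:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (tenant_id TEXT, amount INTEGER)")
db.executemany("INSERT INTO invoices VALUES (?, ?)",
               [("acme", 100), ("acme", 250), ("globex", 999)])

def invoices_for(tenant_id):
    # Customer-facing code always filters by tenant_id; only internal
    # admin/reporting tools run cross-tenant queries. Because every row
    # carries its tenant key, a single noisy tenant can be relocated to
    # its own database without touching the application code.
    return db.execute(
        "SELECT amount FROM invoices WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()

acme_rows = invoices_for("acme")   # acme never sees globex rows
```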
None of this should stop you from playing with other data engines. If you have the capital, just build a Linux box (a few hundred bucks), throw Proxmox on it, spin up an extra VM with limited cores and memory, and see what some of these DBs can do. Can you install them? What are the interfaces like? Can you build a cluster of them? What is their network IO versus their data IO? Is this a perfect test? No. If one is painful to write good code against and another is easier, where do you take that into account?
If you're going to have REAL high-bandwidth applications, then you're going to end up out of the cloud and on your own hardware. If you can't install these things, there is zero point in pursuing them.