Anyone have any real world experience with it?
If your main concern is "faster than JSON" then you're better off using Protocol Buffers simply because they're way more popular and better supported. FlatBuffers are cool because they let you decode on demand. Say you have an array of 10,000 complex objects. With JSON or Protocol Buffers you're going to need to decode and load into memory all 10,000 before you're able to access the one you want. But with FlatBuffers you can decode item X without touching 99% of the rest of the data. Quicker and much more memory efficient.
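A rough sketch of the idea (this is not FlatBuffers' actual wire format, just a hand-rolled fixed-width layout to show why random access beats parse-everything):

```typescript
// Decode-on-demand illustration: 10,000 records of two int32 fields
// each, packed into one ArrayBuffer. Reading record X touches 8 bytes,
// not the whole buffer. FlatBuffers achieves the same with offsets and
// vtables; this fixed-width layout is just the simplest version of it.
const COUNT = 10_000;
const RECORD_SIZE = 8; // two int32 fields per record

const buf = new ArrayBuffer(COUNT * RECORD_SIZE);
const view = new DataView(buf);

// "Encode" side: write an id and a score for each record.
for (let i = 0; i < COUNT; i++) {
  view.setInt32(i * RECORD_SIZE, i, true);         // id
  view.setInt32(i * RECORD_SIZE + 4, i * 2, true); // score
}

// "Decode" side: jump straight to any record without parsing the rest.
// With JSON you'd have to JSON.parse the full payload first.
function readRecord(v: DataView, index: number) {
  const off = index * RECORD_SIZE;
  return { id: v.getInt32(off, true), score: v.getInt32(off + 4, true) };
}

console.log(readRecord(view, 9_999)); // { id: 9999, score: 19998 }
```

Same principle, minus the schema evolution and variable-length fields that make the real format more involved.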
But it's not simple to implement. You have to write a schema then turn that schema into source files in your target language. There's an impressive array of target languages but it's a custom executable and that adds complexity to any build. Then the generated API is difficult to use (in JS at least) because of course an array isn't a JavaScript array, it's an object with decoder helpers.
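For reference, the schema-plus-codegen step looks roughly like this (field names here are illustrative, not from any real project):

```
// monster.fbs — an illustrative FlatBuffers schema
table Monster {
  name: string;
  hp: int = 100;
  inventory: [ubyte];
}
root_type Monster;
```

You then run the `flatc` executable over it (e.g. `flatc --ts monster.fbs` for TypeScript output), and it's that generated accessor API, not plain objects or arrays, that your code has to go through.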
It's also quite easy to trip yourself up in terms of performance by decoding the same data over and over again rather than re-using the first decode like you would with JSON or PB. So you have to think about which decoded items to store in memory, where, for how long, etc... I kind of think of it as the data equivalent of a programming language with manual memory management. Definitely has a place. But the majority of projects are going to be fine with automatic memory management.
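A minimal sketch of that "decide what to keep decoded" layer; `decodeItem` here is a hypothetical stand-in for a generated FlatBuffers accessor, which re-walks the buffer on every call:

```typescript
// Memoize decoded items so repeated access pays the decode cost once.
// `decodeItem` is a placeholder for a generated accessor call.
type Item = { id: number; name: string };

function decodeItem(data: Item[], index: number): Item {
  // Pretend this walks offsets in a binary buffer on each call.
  return { ...data[index] };
}

class DecodedCache {
  private cache = new Map<number, Item>();
  constructor(private data: Item[]) {}

  get(index: number): Item {
    let item = this.cache.get(index);
    if (item === undefined) {
      item = decodeItem(this.data, index); // decode once, keep the result
      this.cache.set(index, item);
    }
    return item;
  }
}

const source: Item[] = [{ id: 0, name: "sword" }, { id: 1, name: "shield" }];
const cache = new DecodedCache(source);
console.log(cache.get(1) === cache.get(1)); // true: one decode, reused
```

The unbounded Map is the "manual memory management" part: in a real app you'd also have to decide when to evict, which JSON never makes you think about.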
Games, data visualization, ... numerically heavy applications mainly.
On a side note: JSON has been somewhat of a curse. The developer ergonomics of it are so good that web devs completely disregard how they should lay out their data. You know, sending a table as a bunch of nested arrays, that sort of thing. Yuck.
In web apps, data is essentially unusable until it has been unmarshalled. Fine for small things, horrible for data-heavy apps, which really so many apps are now.
Sometimes I wonder if it will change. I'm optimistic that the popularity of mem-efficient formats like this will establish a new base paradigm of data transfer, and be adopted broadly on the web.
gRPC and Thrift are mostly backend service interconnects, used in lieu of RESTful APIs.
Cap'n Proto is also awesome.
(I work for Google but don't speak for it.)
Basically, we use a rather bandwidth-constrained link between our services running in the cloud and Particle-based IoT devices deployed in many locations. Some locations are remote, some are urban.
I personally haven't had to touch the Flatbuffers code since I joined the company two years ago. It's written and hasn't needed to be maintained.
Seems like a neat idea, but as another commenter said, the use cases where it's the best choice seem pretty narrow.
1) SQLite with BLOB storage gives you binary benefits for file layout, plus database solutions for metadata, versioning, and indexing into large structures.
2) FlexBuffers look like a more flexible solution within the FlatBuffers library.
FlatBuffers was designed around schemas, because when you want maximum performance and data consistency, strong typing is helpful.
There are however times when you want to store data that doesn't fit a schema, because you can't know ahead of time what all needs to be stored.
For this, FlatBuffers has a dedicated format, called FlexBuffers. This is a binary format that can be used in conjunction with FlatBuffers (by storing a part of a buffer in FlexBuffers format), or also as its own independent serialization format.
https://google.github.io/flatbuffers/flexbuffers.html
https://stackoverflow.com/a/47799699/1020467
3) You might also look at previous discussions of serialization formats on HN:
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...