Many of the APIs of projects I've worked with look essentially like RPC-style calls. The URLs are structured as action/method names to do specific things. I like many of the benefits of an RPC approach compared to REST, GraphQL, or gRPC-Web. Unfortunately, the client code generation with the popular options feels clunky when you want to use them in an RPC style. It seems like JSON-RPC would be better.
Consider writing an HTTP-based RPC server. I parse the first line; it tells me the location and the method. That's followed by a bunch of key-value headers where both the key and the value are strings. So far so good: I could write all of the above in C or Go pretty easily. Finally comes the body. I can unmarshal this in Go or parse this in C easily, because I know what to expect in the body by the time I get there: I already know the method and location.
Contrast this with JSON RPC. I have to partially parse the entire object before I know what function is being called. Only then can I fully parse the arguments because I don't know what type they are (or in other words, which struct the arguments correspond to) until I know what function is being called, what version of RPC is being used, etc.
Super annoying. And HTTP is just sitting there waiting to be used.
HTTP allows for incremental parsing. I can parse the first few lines separately from the body. It makes handling input really nice.
Having everything in a single JSON object doesn't allow for incremental parsing, because the standard doesn't guarantee the order of key-value pairs within an object.
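Roughly, the HTTP side of this looks like the following Go sketch (the /orders/create route and the CreateOrderRequest struct are placeholder names, not from any particular framework): dispatch happens on the request line, and only then is the body decoded into a struct whose type is already known.

```go
package main

import (
	"encoding/json"
	"net/http"
)

// Placeholder request type: the URL has already told us which call this
// is, so the body can be decoded straight into the right struct.
type CreateOrderRequest struct {
	SKU      string `json:"sku"`
	Quantity int    `json:"quantity"`
}

func main() {
	mux := http.NewServeMux()

	// Method and path are known before the body is read, so dispatch
	// happens here, not after parsing JSON.
	mux.HandleFunc("/orders/create", func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodPost {
			w.WriteHeader(http.StatusMethodNotAllowed)
			return
		}
		var req CreateOrderRequest
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		// ... handle req ...
		w.WriteHeader(http.StatusNoContent)
	})

	http.ListenAndServe(":8080", mux)
}
```

With JSON-RPC, by contrast, none of that is known until the body itself has been parsed: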
- The method name is part of the body itself, so you have to parse the body before you can even decide how to dispatch the request (see the envelope sketch after this list). That's an extra cost.
- The error code is part of the response body, so you have to parse it every time to figure out whether the call succeeded. Also, some clients just return an HTTP error code instead, so you have to handle both.
- Requests can also be batched, which introduces a lot of under-specified cases.
- For example, how are you supposed to dispatch a batch? Should all the calls go to the same upstream? Can they be parallelized? Does the order have to be preserved?
- With batches, the slowest request always blocks the in-flight response: you have to wait for every call to finish before producing the final response. That's a huge performance bottleneck. And what do you do if some of them never finish?
- There's a lot of uncertainty around the ID. It can be either an int or a string, which sometimes means two different code paths. There's no guarantee that IDs don't repeat within the same batch. To make it worse, some clients rely only on request/response order and ignore the IDs entirely.
- Another problem is that it has no notion of metadata, which matters in many real systems: for auth, caching, and other optimizations, for example.
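To make that concrete, here is a rough Go sketch of what a JSON-RPC 2.0 server has to do before it can even pick a handler; the "add" method and its params struct are invented for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// The generic envelope: params and id can't be given concrete types yet,
// because the shape of params depends on the method and the id may be a
// number or a string, so both stay as raw JSON for now.
type rpcRequest struct {
	JSONRPC string          `json:"jsonrpc"`
	Method  string          `json:"method"`
	Params  json.RawMessage `json:"params"`
	ID      json.RawMessage `json:"id"`
}

type addParams struct {
	A int `json:"a"`
	B int `json:"b"`
}

func handle(raw []byte) error {
	var req rpcRequest
	if err := json.Unmarshal(raw, &req); err != nil {
		return err
	}
	// Only after decoding the envelope do we know which concrete type
	// the params have.
	switch req.Method {
	case "add":
		var p addParams
		if err := json.Unmarshal(req.Params, &p); err != nil {
			return err
		}
		fmt.Println(p.A + p.B)
	default:
		return fmt.Errorf("method not found: %q", req.Method)
	}
	// The id is echoed back verbatim so this code never has to decide
	// whether it was an int or a string.
	fmt.Printf("id: %s\n", req.ID)
	return nil
}

func main() {
	_ = handle([]byte(`{"jsonrpc":"2.0","method":"add","params":{"a":1,"b":2},"id":"abc"}`))
}
```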
So as a result, everyone just builds two layers. At the HTTP level you have auth, routing, error handling, multiplexing, etc. And then you realize that JSON-RPC only introduces extra complexity.
HTTP GET is idempotent, which seems trivial, but it's worth more than people think. With JSON-RPC you can't safely retry any RPC, and that matters at scale! (If the transport is HTTP, I'm pretty sure all JSON-RPC calls are POSTs. JSON-RPC can be good for local communication like language servers, where you don't really care about retrying or caching.)
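As a rough illustration of why that's worth something: a client (or proxy) can blanket-retry idempotent methods without knowing anything about the application, which isn't possible when every call is an opaque POST. This is only a sketch; the retry policy and URL are made up.

```go
package main

import (
	"net/http"
	"time"
)

// retryTransport retries GET and HEAD requests on transport errors.
// POSTs are not replayed, because the server may already have applied
// the call once.
type retryTransport struct {
	next     http.RoundTripper
	attempts int
}

func (t retryTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	idempotent := req.Method == http.MethodGet || req.Method == http.MethodHead
	var resp *http.Response
	var err error
	for i := 0; i < t.attempts; i++ {
		resp, err = t.next.RoundTrip(req)
		if err == nil || !idempotent {
			return resp, err
		}
		time.Sleep(time.Duration(i+1) * 100 * time.Millisecond)
	}
	return resp, err
}

func main() {
	client := &http.Client{Transport: retryTransport{next: http.DefaultTransport, attempts: 3}}
	_, _ = client.Get("https://example.com/orders/42") // safe to retry blindly
}
```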
FWIW Google had/has a very RPC-like model, and when I left there many years ago, I remember they had come around to giving explicit guidance to use a REST-like model (which you can do in an RPC system). That is, one where you have more nouns than verbs, and you GET nouns.
Methods can be expressed as the HTTP method, resources can be expressed in the URL, and response status can be expressed with HTTP status codes.
Some people prefer that for the lower payload size. Others prefer byte-encoded payloads like msgpack instead of JSON; I've noticed that in trading platform APIs.
Also, using URL-based resources lets you route these requests at the network proxy level (Caddy, HAProxy, nginx, Envoy, Traefik, etc.).
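As a sketch of that mapping (the /orders resource is just an example): the verb lives in the HTTP method, the resource in the URL, and the outcome in the status code. Because the resource is in the path, a reverse proxy in front can route on the /orders prefix without understanding the payload at all.

```go
package main

import (
	"encoding/json"
	"net/http"
	"strings"
)

type Order struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

var orders = map[string]Order{"42": {ID: "42", Status: "shipped"}}

func main() {
	http.HandleFunc("/orders/", func(w http.ResponseWriter, r *http.Request) {
		id := strings.TrimPrefix(r.URL.Path, "/orders/")
		switch r.Method {
		case http.MethodGet: // the "verb" is the HTTP method, not a word in the URL
			order, ok := orders[id]
			if !ok {
				http.Error(w, "not found", http.StatusNotFound) // outcome as a status code
				return
			}
			json.NewEncoder(w).Encode(order)
		case http.MethodDelete:
			delete(orders, id)
			w.WriteHeader(http.StatusNoContent)
		default:
			w.WriteHeader(http.StatusMethodNotAllowed)
		}
	})
	http.ListenAndServe(":8080", nil)
}
```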
1. Binary support for non-web clients
2. Robust code generation
3. First-class cancellation/deadline concepts
4. More guidance around a standard HTTP transport (kind of related to 2)
5. Simple schema/type definitions to assist in code generation and API ecosystems
Ultimately, we went with gRPC and it has been pretty nice. The code generation is mostly good and where it isn't, we can make our own wrapping code. There have definitely been some downsides and gotchas when it comes to the servers, specifically around client streaming and bi-di requests. Also, grpc-web is really not great and is the biggest downside so far. Overall though, it feels worth it for our use case. Protobuf has been great.
1. REST positioned itself in opposition to RPC. The concept of RPC is a negative one in many engineers’ minds.
2. JSON-RPC isn’t that much better than passing your own JSON formatted messages over HTTP. So I don’t think it gets you too much incremental value these days.
With that said I love JSON-RPC and tend to reach for it pretty often for prototypes or greenfield projects.
There isn't much tooling for JSON-RPC in general, but it is very simple. I wrote my own client library wrapper that did the connection handling and request/response mapping. You'll probably want to do something like that to handle requests that take a long time, requests that need to be retried, or websockets that get closed for some reason and need to be re-opened. But if you have that, it's OK.
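For flavor, a minimal sketch of that kind of wrapper. Caveat: this assumes plain HTTP POST rather than the websocket transport described above, the method and parameter names are invented, and the naive retry is only appropriate when the caller knows the method is safe to repeat.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"sync/atomic"
)

type rpcClient struct {
	url    string
	nextID atomic.Int64
	http   *http.Client
}

type rpcResponse struct {
	Result json.RawMessage `json:"result"`
	Error  *struct {
		Code    int    `json:"code"`
		Message string `json:"message"`
	} `json:"error"`
	ID int64 `json:"id"`
}

// Call marshals the request, sends it, and maps the response back,
// checking that the response id matches the request id.
func (c *rpcClient) Call(method string, params, result any) error {
	id := c.nextID.Add(1)
	body, err := json.Marshal(map[string]any{
		"jsonrpc": "2.0", "method": method, "params": params, "id": id,
	})
	if err != nil {
		return err
	}
	var resp *http.Response
	for attempt := 0; attempt < 2; attempt++ { // naive retry; only safe for idempotent methods
		resp, err = c.http.Post(c.url, "application/json", bytes.NewReader(body))
		if err == nil {
			break
		}
	}
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	var out rpcResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return err
	}
	if out.ID != id {
		return fmt.Errorf("response id %d does not match request id %d", out.ID, id)
	}
	if out.Error != nil {
		return fmt.Errorf("rpc error %d: %s", out.Error.Code, out.Error.Message)
	}
	return json.Unmarshal(out.Result, result)
}

func main() {
	c := &rpcClient{url: "http://localhost:8080/rpc", http: http.DefaultClient}
	var sum int
	_ = c.Call("add", map[string]int{"a": 1, "b": 2}, &sum)
}
```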
I was not aware of the open-rpc 'schema' stuff; that looks very interesting, simple, and useful.
I've never used JSON-RPC but from the little I've read, the part it is useful for is exactly the part people often customize.
We use a (compatible) extension of JSON-RPC for async generators: yielding elements individually for longer array responses, transparently to the client, to avoid head-of-line blocking (we haven't open-sourced this yet, but will soon).
Other than allowing the error code to be a string, it's all pure JSON-RPC 2.0.
Supporting things like adaptive throttling (which dynamically adapts to the client's connection speed) was very easy to implement as well.
There are two correct responses to this:
- Use a decent RPC framework to hide HTTP's semantics (which are not appropriate for RPC) and to get generally better but still widely-accessible tooling; I use gRPC but at this point there are a dozen good-enough ones (JSON-RPC not being among them).
- Design a REST, or at least REST-ish, API. Map your URLs to entities, not actions or methods. Make each entity have a canonical URL. Use content negotiation. And so on. It's more work, but your API is also usable by a lot more real HTTP clients.
There is also a wrong answer:
- Continue trying to do RPC over HTTP semantics with an inefficient format and assume that having a standards document per se solves literally any problem.
Protobufs and things like optional fields etc help, but not that much.
One of the problems is that RPC is procedural and treats the data as parameters to the function.
REST says that the "data" (the resource) is the primary element and is oriented towards the original OO concept of message passing.
RPC has been bad since the days of XDR and SunRPC, and has repeated the same problems with CORBA, WS-, XML-RPC, JSON-RPC, Java RMI, etc.
Programmers love it because it *looks* like a standard function call, but the problems are hidden under the covers (versioning, network failures, idempotency, retries, etc.).
Whereas if you do need something deeply hierarchical, there's a good chance it's something massive, so you'll probably just pass a URL to it as a document rather than embed it as parameters.
So why use JSON for input if you don't need it? It's just using the simplest tools for the job. Since any web request already parses GET/POST parameters, why would you add JSON into the mix when the parameters can usually already handle what you're doing?
(But if you did have an API where the input was necessarily deeply and flexibly hierarchical, JSON-RPC could very well be the perfect tool for the job. Also, compare with API output, which often is deeply hierarchical, which is why JSON is so popular on the output side.)
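A tiny Go sketch of that point: for flat inputs, the already-parsed query/form parameters are enough, and nothing on the input side needs a JSON body (the parameter names here are made up).

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
)

func main() {
	// GET /search?q=widgets&limit=10 — flat inputs arrive already parsed,
	// so there's no need for a JSON request body.
	http.HandleFunc("/search", func(w http.ResponseWriter, r *http.Request) {
		q := r.URL.Query().Get("q")
		limit, err := strconv.Atoi(r.URL.Query().Get("limit"))
		if err != nil || limit <= 0 {
			limit = 20 // default page size
		}
		// The output, by contrast, is often hierarchical, which is where
		// JSON earns its keep.
		fmt.Fprintf(w, `{"query":%q,"limit":%d,"results":[]}`, q, limit)
	})
	http.ListenAndServe(":8080", nil)
}
```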
Those structures that do are often persistent, so the right trade-off might be to make your protocols more lightweight (I use |;, separation of just text) and then use JSON for your database objects (it fits well with my |;, separators).
Here are two examples of "packets":
Movement: "move|
Storage: "save|
You can read more here: http://fuse.rupy.se/doc/game_networking_json_vs_custom_binar...
gRPC uses protocol buffers, which had the goal of being smaller and faster to parse than XML. The issue it was trying to fix was the performance of XML.
The JSON-RPC protocol itself doesn't seem to solve anything that the other options don't solve as well.
Lately I've been experimenting with using a generic proxy object on the client corresponding to a server-side object that is the actual API, and defining a TypeScript interface that both the proxy and the server version implement. It makes invoking backend APIs a lot nicer.
I think tRPC sort of implements a transport layer for (a superset of) this, no?
The schema definition in it is really, really ugly, and I could barely get it to work. Also, at the time (2015) the language support for that schema definition wasn't great. I also tried Swagger at the time, and there were quite a few things I just couldn't get it to do. If I couldn't get it to behave in a targeted research spike, it was really going to be a non-starter for new employees trying to slam out features. These issues meant people would go with the loosest typing or just "object" for a lot of items. One problem is that they both have the flexibility of XSD, but whether a given language will support that validation or do it correctly was spotty at best.
We went with our own dumbed-down version of gRPC called Twirp, which is very, very simplified. The schema language for proto3, which Twirp uses, is a lot simpler to write, uses close to just native typing, and does a lot less. Doing less was actually nice because it meant fewer rules. I was able to implement the prototype Ruby version, which someone else (thanks cyrus) put into production, in under a day because there were so few rules, but what it had gave us type safety.
For years people really liked REST because it was "self-describing via the API," but it turns out no one cares; what they want is a client in the language they're using that works. They really don't care that much how the client is generated. So to me, Interface Description Languages (IDLs) and a code gen step are what you want, with some simple basic rules.
We built a similar tool for config checking, and most of the features weren't used. But Twirp and this schema language removed a whole class of issues that had plagued us and caused outages for years around string vs. int in configs and APIs.
Also, at our scale, for some services, proto or another binary format has substantial benefits. I'm not saying it matters for your service, and human-readable is great (we had to learn and build up tools to debug things, and kept JSON support in Twirp exactly for the poor humans), but when you're sending a million requests a second or so to a service, there is a noticeable cost, equivalent to an engineer's salary in infra costs. But most people are not running at that scale.
On the front end, GQL worked better for us; it pushed the flexibility to the front-end devs while we sorted out the back end over several years. The level of coupling we'd have had if we had run Twirp, Swagger, or anything else straight from front end to back end would have made the work 100 times harder. I was skeptical when they started, but it's been a boon, and I was smart enough to let smart people be smart in an area I'm not smart in, e.g. I stfu.
Anyway, TL;DR (at the end, of course): there were better options for me, such as Amazon's internal one, gRPC, Twirp, roll-your-own proto-based, and about a billion others.
I'll also note the v1 spec wiki page link is broken.
Good luck trying to propose any kind of API that violates the blurry image other devs and architects have of REST(ful) APIs.
No one ever got fired for going with a REST(ful) API.