HACKER Q&A
📣 vyrotek

Why isn't JSON-RPC more widely adopted?


JSON-RPC / OpenRPC seems like a great option for APIs. Why isn't it adopted more widely?

Many of the APIs of projects I've worked with look essentially like RPC-style calls. The URLs are structured as action/method names that do specific things. I like many of the benefits of an RPC approach compared to REST, GraphQL, or gRPC-Web. Unfortunately, client code generation with the popular options feels clunky when you want to use them in an RPC style. It seems like JSON-RPC would be a better fit.

https://open-rpc.org

https://www.jsonrpc.org


  👤 djha-skin Accepted Answer ✓
Parsing makes it hard.

Consider writing an HTTP-based RPC server. I parse the first line; it tells me the location and the method. Then come the headers: key-value pairs where both the key and the value are strings. So far so good; I could write all of the above in C or Go pretty easily. Finally comes the body. I can unmarshal it in Go or parse it in C easily, because by the time I get there I already know the method and location, and therefore what to expect in the body.

Contrast this with JSON-RPC. I have to partially parse the entire object before I know what function is being called. Only then can I fully parse the arguments, because I don't know their type (in other words, which struct the arguments correspond to) until I know what function is being called, what version of the protocol is being used, and so on.

Super annoying. And HTTP is just sitting there waiting to be used.

HTTP allows for incremental parsing. I can parse the first few lines separately from the body. It makes handling input really nice.

Having everything in a single JSON object doesn't allow for incremental parsing, because the standard says the order of key-value pairs in an object can't be guaranteed.
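A minimal Python sketch of the difference (the handler table and method names here are invented for illustration): a JSON-RPC server has to buffer and parse the whole envelope before it can even pick a handler, whereas an HTTP server learns the method and path from the very first line.

```python
import json

# Hypothetical handler table; with plain HTTP, the request line's
# method/path would select one of these before the body is ever read.
HANDLERS = {
    "sum": lambda params: sum(params),
    "echo": lambda params: params,
}

def dispatch(raw: bytes) -> dict:
    # With JSON-RPC the whole envelope must be parsed first: only then
    # do we know the method, and only then the expected params shape.
    envelope = json.loads(raw)
    rid = envelope.get("id")
    handler = HANDLERS.get(envelope.get("method"))
    if handler is None:
        return {"jsonrpc": "2.0", "id": rid,
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": rid,
            "result": handler(envelope.get("params", []))}
```

(The -32601 code is the spec's "Method not found"; everything else above is just a sketch of the buffering problem, not a conforming server.)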


👤 splix
I have experience working on a JSON-RPC load balancer, and I can tell you it's one of the worst choices for an API:

- The method name is part of the body itself, so you have to parse the body just to decide how to dispatch the request. That's an extra cost.

- The error code is part of the response, so you have to parse every response to figure out whether it succeeded. Also, some implementations just return an HTTP error code instead, so you have to handle both.

- Requests can also be batched, which introduces a lot of unspecified use cases.

- For example, how are you supposed to dispatch a batch? Should it all go to the same upstream? Can it be parallelized? Should the order be preserved?

- With batches, the slowest request blocks the whole in-flight response: you have to wait for all calls to finish before producing the final response. That's a huge performance bottleneck. And what do you do if some of them never finish?

- There's a lot of uncertainty around the id, especially because it can be of different types (int or string), which sometimes means two different handlers in the code. There's also no guarantee that ids don't repeat within the same batch. To make it worse, some clients rely only on request/response order, ignoring the ids.

- Another problem is that it has no concept of metadata, which is important for many real systems: for example, for auth, or for caching and other optimizations.

So as an outcome, everyone just builds two layers. At the HTTP level you have auth, routing, error handling, multiplexing, etc. And then you realize that JSON-RPC only introduces extra complexity.
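To make the id problem above concrete, here's a sketch (the function name is made up) of the only safe way to pair batch responses with their requests. The 2.0 spec allows a server to return batch responses in any order, so position in the array can't be trusted; only the id links request and response.

```python
import json

def correlate(batch_request: str, batch_response: str) -> dict:
    """Match batch responses back to their requests by id.

    JSON-RPC 2.0 lets the server return batch responses in any order,
    so the only link between a request and its response is the id.
    """
    requests = {req["id"]: req
                for req in json.loads(batch_request) if "id" in req}
    paired = {}
    for resp in json.loads(batch_response):
        req = requests.get(resp.get("id"))
        if req is not None:
            paired[resp["id"]] = (req["method"], resp)
    return paired
```

Requests without an id (notifications) get no response at all, and nothing here can save you from a server that reuses ids within one batch, which the spec doesn't forbid.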


👤 chubot
Read about the importance of idempotence in distributed systems.

HTTP GET is idempotent, which seems trivial, but it's worth more than people think. With JSON-RPC you can't safely retry any RPC, and that matters at scale! (If the transport is HTTP, I'm pretty sure all JSON-RPC calls are POSTs. JSON-RPC can be good for local communication like language servers, where you don't really care about retrying or caching.)
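JSON-RPC itself defines no retry semantics, but one application-level pattern (not part of any spec; class and method names here are invented) is a server-side dedup cache keyed on the request id, so that a retried call replays the cached response instead of re-executing the side effect. The client must reuse the same id when it retries for this to work.

```python
class IdempotentDispatcher:
    """Sketch: make retries safe by caching responses per request id."""

    def __init__(self, handler):
        self.handler = handler   # callable(method, params) -> result
        self.seen = {}           # request id -> cached response

    def handle(self, request: dict) -> dict:
        rid = request["id"]
        if rid in self.seen:     # a retry: replay, don't re-execute
            return self.seen[rid]
        response = {"jsonrpc": "2.0", "id": rid,
                    "result": self.handler(request["method"],
                                           request.get("params"))}
        self.seen[rid] = response
        return response
```

A real version would bound and expire the cache, but the point stands: without some convention like this, a timed-out JSON-RPC POST can't be retried safely.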

FWIW Google had/has a very RPC-like model, and when I left there many years ago, I remember they had come around to giving explicit guidance to use a REST-like model (which you can do in an RPC system). That is, one where you have more nouns than verbs, and you GET nouns.


👤 joshxyz
For me, tradeoffs.

The method can be expressed in the HTTP method, the resource in the URL, and the response status in HTTP status codes.

Some people prefer that for the lower payload size. Others prefer byte-encoded payloads like MessagePack instead of JSON; I've noticed that in trading platform APIs.

Also, using URL-based resources lets you route these requests at the network proxy level (Caddy, HAProxy, nginx, Envoy, Traefik, etc.).


👤 tptacek
Perhaps, by the time they decide on a formal RPC standard, people conclude they might as well use a compact optimized RPC standard, like gRPC. REST-ish JSON APIs work fine and are universally familiar. What's the big advantage of encoding whole requests into JSON blobs, rather than just URL routing? If I'm going to do that, why wouldn't I just use GraphQL, and get the graph traversal part for free?

👤 erdaniels
For my current job, I spec'd out what was needed for an RPC solution. I ended up basically arriving at what JSON-RPC provides, but found it was missing a few things:

1. Binary support for non-web clients
2. Robust code generation
3. First-class cancellation/deadline concepts
4. More guidance around a standard HTTP transport (kind of related to 2)
5. Simple schema/type definitions to assist in code generation and API ecosystems

Ultimately, we went with gRPC and it has been pretty nice. The code generation is mostly good and where it isn't, we can make our own wrapping code. There have definitely been some downsides and gotchas when it comes to the servers, specifically around client streaming and bi-di requests. Also, grpc-web is really not great and is the biggest downside so far. Overall though, it feels worth it for our use case. Protobuf has been great.


👤 peterhunt
IMO it’s two things.

1. REST positioned itself in opposition to RPC. The concept of RPC is a negative one in many engineers’ minds.

2. JSON-RPC isn't that much better than passing your own JSON-formatted messages over HTTP, so I don't think it gets you much incremental value these days.

With that said I love JSON-RPC and tend to reach for it pretty often for prototypes or greenfield projects.


👤 roywashere
I previously worked on an embedded device with a JSON-RPC-over-WebSockets API, which I needed to use from a web browser context.

There is not much tooling for JSON-RPC in general, but it is very simple. I wrote my own client library wrapper that did the connection handling and request/response mapping. You'll probably want something like that anyway, to handle requests that take a long time, requests that need to be retried, reopening closed websockets, and so on. Once you have that, it's fine.

I was not aware of the OpenRPC 'schema' stuff; that looks very interesting, simple, and useful.
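The core of such a wrapper is small. A rough Python sketch (the class and transport callback are invented for illustration, not taken from any library): assign an id per call, remember the pending callback, and route each incoming response back by id.

```python
import itertools
import json

class RpcClient:
    """Minimal JSON-RPC 2.0 client core: assigns ids, remembers pending
    calls, and routes each incoming response to its callback. Transport
    concerns (websocket reconnects, timeouts, retries) layer on top."""

    def __init__(self, send):
        self.send = send                 # callable taking the wire string
        self.ids = itertools.count(1)
        self.pending = {}                # id -> callback(result, error)

    def call(self, method, params, callback):
        rid = next(self.ids)
        self.pending[rid] = callback
        self.send(json.dumps({"jsonrpc": "2.0", "id": rid,
                              "method": method, "params": params}))

    def on_message(self, raw):
        msg = json.loads(raw)
        callback = self.pending.pop(msg["id"], None)
        if callback is not None:
            callback(msg.get("result"), msg.get("error"))
```

Wrapping `call` in a promise/future and sweeping `pending` for timed-out entries is the rest of the work the comment describes.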


👤 michaelsalim
Honestly, I think most projects just use a mix of whatever the author likes most (except for frameworks where it's predetermined). Most would say they use REST, but if you look closer they actually don't. I'm also of the opinion that it doesn't really matter as long as it's documented well.

I've never used JSON-RPC but from the little I've read, the part it is useful for is exactly the part people often customize.


👤 mirekrusin
We've been using JSON-RPC over websockets in production in trading services for many years. It works very well. We use lightweight libraries that look like this [0] and this [1]. It's lightweight, fast, type-safe, easy to maintain and debug, etc.

We use a (compatible) extension of JSON-RPC for async generators: yielding elements individually for longer array responses, transparent to the client, to avoid head-of-line blocking (we haven't open-sourced this yet, but will soon).

Other than allowing the error code to be a string, it's all pure JSON-RPC 2.0.

Supporting things like adaptive throttling (that dynamically adapts to client's connection speed) was very easy to implement as well.

[0] https://github.com/preludejs/jsonrpc

[1] https://github.com/preludejs/refute


👤 talideon
I think "A Critique of the Remote Procedure Call Paradigm - 30 years later" (https://blog.carlosgaldino.com/a-critique-of-the-remote-proc...) is worth reading, as well as Tanenbaum's original paper: https://www.cs.vu.nl/~ast/Publications/Papers/euteco-1988.pd.... Sure, it's old, but it's still very relevant. gRPC fares better against those criticisms than the likes of JSON-RPC does.

👤 morelisp
> Many of the APIs of projects I've worked with look essentially like RPC-style calls.

There are two correct responses to this:

- Use a decent RPC framework to hide HTTP's semantics (which are not appropriate for RPC) and to get generally better but still widely-accessible tooling; I use gRPC but at this point there are a dozen good-enough ones (JSON-RPC not being among them).

- Design a REST, or at least REST-ish, API. Map your URLs to entities, not actions or methods. Make each entity have a canonical URL. Use content negotiation. And so on. It's more work, but you're also accessible to a lot more real HTTP clients.

There is also a wrong answer:

- Continue trying to do RPC over HTTP semantics with an inefficient format and assume that having a standards document per se solves literally any problem.


👤 rswail
RPC is a fundamentally broken concept because it relies on both server and client being synchronized, which is brittle and requires whole-system upgrades.

Protobufs and things like optional fields etc help, but not that much.

One of the problems is that RPC is procedural and treats the data as parameters to the function.

REST says that the "data" (the resource) is the primary element and is oriented towards the original OO concept of message passing.

RPC has been bad since the days of XDR and SunRPC, and has repeated the same problems with CORBA, WS-*, XML-RPC, JSON-RPC, Java RMI, etc.

Programmers love it because it *looks* like a standard function call, but the problems (versioning, network failures, idempotency, retries, etc.) are hidden under the covers.


👤 crazygringo
Most APIs don't need to receive deeply hierarchical data. Function calls almost always take just a handful of values, and if you need the occasional array, that's quite easy to manage with duplicated keys ("...&filter=price<300&filter=color:red"), numbered suffixes ("...&filter5=price<300&filter6=color:red"), or delimiters ("...&filter=price<300,color:red"), as preferred.
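For example, Python's standard query-string parser already turns duplicated keys into a list, with no JSON involved:

```python
from urllib.parse import parse_qs

# Duplicated keys come back as a flat list, ready to use:
params = parse_qs("filter=price<300&filter=color:red&sort=price")
# params == {'filter': ['price<300', 'color:red'], 'sort': ['price']}
```

Equivalent helpers exist in essentially every web framework, which is the point: the parsing is already done for you.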

While if you do need something deeply hierarchical, there's a good chance it might be something massive and so you'll probably just pass a URL to it as a document, rather than embed it as parameters.

So why use JSON for input if you don't need it? It's just using the simplest tool for the job. Since any web framework already parses GET/POST parameters, why add JSON into the mix when the parameters can usually already handle what you're doing?

(But if you did have an API whose input was necessarily deep and flexibly hierarchical, JSON-RPC could very well be the perfect tool for the job. Also, compare with API output, which often is deeply hierarchical; that's why JSON is so popular on the output side.)


👤 bullen
I would say not many data structures require hierarchy, which is kinda the point of JSON (or XML before it).

Those structures that do are often persistent, so the right trade-off might be to make your protocols more lightweight (I use |;, separation of plain text) and then use JSON for your database objects (which fits well with my |;, separators).

Here are two examples of "packets":

Movement: "move||,,|,,,|walk" (where x,y,z,w are floats)

Storage: "save||{"name": "torch", "amount": 3}"

You can read more here: http://fuse.rupy.se/doc/game_networking_json_vs_custom_binar...
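As an illustration only (the exact field layout above is the author's own custom format, so this decoder is a guess): splitting such a packet on the first two pipes and JSON-decoding only the persistent payload might look like this.

```python
import json

def parse_packet(packet: str):
    """Rough decoder for the pipe-delimited packets above. The real
    field layout is the author's own, so treat this as a sketch: an
    opcode, a middle field, and a payload that is JSON only when it
    carries a persistent object."""
    opcode, middle, payload = packet.split("|", 2)
    if payload.startswith("{"):
        return opcode, middle, json.loads(payload)  # persistent: JSON object
    return opcode, middle, payload                  # transient: plain text
```

This captures the trade-off the comment describes: cheap text splitting for high-frequency packets, full JSON parsing only for the objects that end up in the database.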


👤 nitwit005
What problem are you trying to solve?

gRPC uses protocol buffers, whose goal was to be smaller and faster to parse than XML. The issue they were trying to fix was the performance of XML.

The JSON-RPC protocol itself doesn't seem to solve anything that the other options don't solve as well.


👤 vbezhenar
Fashion. A lot of modern development practice is fashion-driven, and JSON-RPC sounds too old.

👤 twunde
Generally speaking, JSON-RPC compares unfavorably to REST and to other RPC standards like gRPC or Thrift. REST is more popular, in large part because of Ruby on Rails, but also because it provides semantic meaning and a system that dictates what an API call should be named; there's no need to look up the remote function, as you must with JSON-RPC. For those who prefer RPC calls, why not use an RPC system that compresses data efficiently and uses IDLs for code generation? Anecdotally, I found that many developers had difficulty understanding JSON-RPC precisely because it was so close to REST APIs.

👤 dvh
The problem with using a standard like JSON-RPC instead of an ad hoc API (i.e. "fetch this URL and you'll receive this JSON") is that one day some third party will use some obscure feature of the standard (like tunneling via email) and you'll be forced to implement it. Eventually every third party will have a slightly different implementation of the standard and you'll have to make customer-specific workarounds. With an ad hoc API, they tend to consume it as-is and don't make odd requests.

Also, every customer updates to a different version of the standard at a different time, which is a nightmare, whereas an ad hoc API everybody writes once and never touches again. I find ad hoc APIs vastly superior and more stable (as long as they remain simple).

👤 z3t4
Because it's stateful, and a JSON-RPC call is not guaranteed to get anything in return. When abstracted with timeouts and callbacks it works nicely, though. The big advantage is that it can run over any transport protocol, whereas HTTP has the transport built in.

👤 RedShift1
I use JSON-RPC as a standard to command devices that connect to a message bus (I use NATS: https://nats.io/). I wouldn't use it as an alternative to REST or GraphQL, they have different goals/use cases.

👤 benatkin
Lack of tooling, plus gRPC being faster and safer than JSON-RPC while being just as convenient, apart from requiring tooling. Also, JSON-RPC brands itself as an alternative to REST, while gRPC is branded more as complementary to REST. Disliking REST is a huge hang-up for me.

👤 unilynx
I find it useful for linking tightly coupled front and back ends, where scalability/load balancers aren't a great concern and the API isn't intended to be public. Otherwise I'd go for REST and OpenAPI.

Lately I’ve been experimenting with using a generic proxy object on the client corresponding to a server side object that is the actual API, and defining a typescript interface that both the proxy and server version implement. It makes invoking backend APIs a lot nicer.


👤 nsteel
The spec promotes it as transport-agnostic. Out of interest, has anyone come across JSON-RPC transported over anything other than HTTP?

👤 hestefisk
I like RPC because of its simplicity: a function call done over the wire. I want to just invoke a function somewhere without worrying about the constraints of a "style" (REST, GraphQL, etc.). Whether it's JSON, XML, gRPC, etc. underneath doesn't really matter to me.

👤 carom
Every time I go to use JSON, it's great until I remember it can't reliably represent 64-bit integers.

👤 brap
>It is transport agnostic in that the concepts can be used within the same process, over sockets, over http, websockets, or in many various message passing environments.

I think tRPC sort of implements a transport layer for (a superset of) this, no?


👤 austin-cheney
I am using something similar to JSON-RPC over websockets in a personal project. It is far less of a hassle than HTTP, 8x faster, and less prone to failure.

👤 jcubic
I was told by the author of tRPC that they use JSON-RPC behind the scenes.

👤 blank_fan_pill
How is gRPC-Web not an RPC approach?

👤 grogenaut
I tried JSON-RPC, Swagger, and JSON Schema for a few use cases at work when we were deciding on our serialization format.

The schema definition in it is really, really ugly, and I could barely get it to work. At the time (2015) the language support for that schema definition wasn't great either. I also tried Swagger at the time, and there were quite a few things I just couldn't get it to do. If I couldn't get it to behave in a targeted research spike, it was going to be a non-starter for new employees trying to slam out features. These issues meant people would go with the loosest typing, or just "object", for a lot of items. One problem is that they both have the flexibility of XSD, but whether a given language will support that validation, or do it correctly, was spotty at best.

We went with our own dumbed-down version of gRPC called Twirp, which is very, very simplified. The schema language, proto3 (which Twirp uses), is a lot simpler to write, uses close-to-native typing, and does a lot less. Doing less was actually nice because it meant fewer rules. I was able to implement the prototype Ruby version, which someone else (thanks Cyrus) put into production, in under a day, because there were so few rules; but what it had gave us type safety.

For years people really liked REST because it was "self-describing via API", but it turns out no one cares; what they want is a client in the language they're using that works. They really don't care how the client is generated. So to me, interface description languages (IDLs) and a code-gen step are what you want, with some simple basic rules.

We built a similar tool for config checking, and most of the features weren't used. But Twirp and this language removed a whole class of issues that had plagued us and caused outages for years around string-vs-int in configs and APIs.

Also, at our scale, for some services, proto or another binary format has substantial benefits. I'm not saying it matters for your service, and human-readable is great (we had to learn and build up tools to debug things, and kept JSON support in Twirp exactly for the poor humans), but when you're sending a million requests a second to a service, there is a noticeable cost, equivalent to an engineer's salary in infra costs. But most people are not running at that scale.

On the front end, GQL worked better for us: it pushed the flexibility to the front-end devs while we sorted out the back end over several years. The level of coupling we'd have had if we ran Twirp, Swagger, or anything else directly from front to back end would have made the work 100 times harder. I was skeptical when they started, but it's been a boon, and I was smart enough to let smart people be smart in an area I'm not smart in, i.e. I STFU.

Anyway, TL;DR (at the end, of course): there were better options for me, such as Amazon's internal one, gRPC, Twirp, roll-your-own proto-based, and about a billion others.

I'll also note the v1 spec wiki page link is broken.


👤 Traubenfuchs
rest(ful) cargo cult

Good luck trying to propose any kind of API that violates the blurry image other devs and architects have of rest(ful) APIs.

No one ever got fired for going with a rest(ful) API.


👤 hknmtt
i would, because ** REST for APIs, but i'm already too used to writing my APIs in protocol buffers, so even though I have abandoned gRPC I still use PB, and so I need a REST-ish approach.

👤 jongjong
Because JSON and RPC are two separate standards and don't need to be intermingled into a combined standard. It's relatively easy for clients to adapt to any JSON interface when integrating.