If you feel you can contribute, please read https://github.com/orgs/json-schema-org/discussions/295 and share your thoughts.
The current options are:
- The specifications that have been published to date provide no stability guarantees, but we will be adding explicit guarantees with the next publication.
- Apply the stability guarantees starting with the most recent publication (draft 2020-12) so that the next publication contains no breaking changes.
I have a system I built that compiles TS types to JSON Schema, which then validates data coming into my endpoints. This way I am type-safe at compile time (TypeScript API), but if someone hits my REST endpoint without using my library, I still get runtime goodness.
The number of different ways that JSON Schema can be programmatically generated, and therefore expressed, is a bit high; different tools generate very different JSON Schemas.
Also, the error messages JSON Schema gives back are kind of trash; then again, the JSON Schema on one of our endpoints is over 200 KB in size.
I’m invested in it. I’m using it to provide implementation-specific validation of requests to/from a third party API.
I wish there were a good macOS editor or IDEA plugin for it, with autocomplete etc. The static generators from examples are obscure, ugly, minimal, and can't account for variations. It isn't pleasant to write; it's tedious and slow.
Nevertheless I’d rather write API validation this way, in a document, than in code.
And I say that as a conscious XML user. We are going in circles. At least XML has comments.
Then I can use the same schema in the backend to validate the data, both sent in via the form and directly with the application/json content-type. It's a pretty smooth flow, and reduces a lot of redundancy.
[0] My biggest gripe is that it's not well defined what to do with multipleOf when the number isn't an exact integer.
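A minimal sketch of why this corner is ambiguous, in Python (the naive check below is illustrative, not taken from any particular validator):

```python
# Naive "multipleOf" check vs. a decimal-based one. 10.5 is
# mathematically a multiple of 0.1, but binary floats disagree --
# which is the underspecified behavior complained about above.
from decimal import Decimal

def naive_multiple_of(value, divisor):
    # What a straightforward float implementation might do.
    return value % divisor == 0

def decimal_multiple_of(value, divisor):
    # One common workaround: do the arithmetic in decimal instead.
    return Decimal(str(value)) % Decimal(str(divisor)) == 0

print(naive_multiple_of(10.5, 0.1))    # False: float rounding error
print(decimal_multiple_of(10.5, 0.1))  # True
```

Whether a validator is supposed to behave like the first function or the second is exactly the kind of thing implementations disagree on.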
And maybe schema URIs weren't exactly the best idea. Not a fan of how many libraries make me register a schema with some fake URL instead of just feeding them the schema document and simply not caring where it came from at all. But since it's already there - OK, fine, it's a minor nuisance.
Otherwise, it just works.
P.S. I have no idea about the relation to OpenAPI/Swagger/RESTful APIs/whatever. I use vanilla JSON Schema as a convenient "cross-platform" DSL for JSON-serializable data structure validation, and I think it does an excellent job at this. Would love to see it stay in that scope.
Keeping things compatible is essentially always preferable. Making breaking changes is in isolation a bad thing but can be worth it for what you get out of it.
Is there a wishlist of "if only we could break compatibility, we'd do X/Y/Z"?
> The specifications that have been published to date provide no stability guarantees, but we will be adding explicit guarantees with the next publication.

However, it seems disingenuous to promise "no breaking changes" while including breaking changes.
Not really; this seems pretty par for the course for any project hitting a 1.0 release: announcing that things are now stable. Perhaps that's a better framing. You wouldn't release something saying "no breaking changes" while it contains breaking changes, but releasing something that has breaking changes as a newly stable thing makes a lot of sense.
I have not worked for an organization with public APIs. Not a single one of the APIs I work on is completely stable. Figuring out what our APIs do requires techniques from archeology, anthropology, and sociology.
At one point I did, but then discovered RAML[0] and it subsumed the value of what JSON Schema provides as well as being easier to work with than OpenAPI[1]. Also, generating JSON Schema from RAML definitions has proven to be a fairly straightforward process.
The usual caveats apply... Your mileage may vary, my experiences do not speak for any others, my opinion does not detract from the value of JSON Schema, etc.
0 - https://github.com/raml-org/raml-spec/blob/master/versions/r...
So I'm not sure if my feedback is valid, but I sure hope that the jsonschema crate follows the spec! Otherwise I'll never use jsonschema, but instead something-not-exactly-jsonschema. In other words: you'd better not break anything.
Also, lack of inheritance support. (For example, I want a way to specify that my JSON object should be deserialized as Dog, not as Animal.)
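For what it's worth, the usual workaround is a discriminator field plus `oneOf`; a rough Python sketch (the Dog/Cat schemas and the `pick_variant` helper are made up for illustration):

```python
# Emulating Dog-vs-Animal dispatch with a "kind" discriminator and
# "oneOf" -- JSON Schema has no inheritance, but you can select the
# subschema to deserialize against by matching a "const" field.

dog_schema = {"properties": {"kind": {"const": "dog"}, "barks": {"type": "boolean"}}}
cat_schema = {"properties": {"kind": {"const": "cat"}, "lives": {"type": "integer"}}}
animal_schema = {"oneOf": [dog_schema, cat_schema]}

def pick_variant(payload, schema):
    # Dispatch on the discriminator instead of trying every subschema.
    for sub in schema["oneOf"]:
        if payload.get("kind") == sub["properties"]["kind"]["const"]:
            return sub
    raise ValueError("no matching variant for payload")

print(pick_variant({"kind": "dog", "barks": True}, animal_schema) is dog_schema)
```

It works, but it's boilerplate you have to invent yourself, which is presumably the complaint.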
Since we have tooling[0] that validates requests and responses at runtime, the clients can be absolutely sure of what they receive (we throw a 500 if the server attempts to respond with an undocumented response), and the server is also sure about the shape of the requests. This allows us to validate everything at compile time too, generating TypeScript types for both client and server.
And since we have similar tooling regarding our data stores (typescript types for sql queries) most of the time if there is a bug, the code would simply not compile - pretty nifty!
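A toy sketch of the fail-closed response check described above (the mini-validator here is a stand-in for the real tooling, which does far more):

```python
# If a handler is about to emit a response that doesn't match the
# documented schema, raise instead of sending it -- the "throw 500 on
# undocumented response" behavior described above. The checker handles
# only required keys and primitive types.

def matches(schema, payload):
    prim = {"string": str, "integer": int, "boolean": bool}
    for key in schema.get("required", []):
        if key not in payload:
            return False
    for key, spec in schema["properties"].items():
        if key in payload and not isinstance(payload[key], prim[spec["type"]]):
            return False
    return True

user_response_schema = {
    "properties": {"id": {"type": "integer"}, "name": {"type": "string"}},
    "required": ["id", "name"],
}

def send(payload):
    if not matches(user_response_schema, payload):
        raise RuntimeError("500: response does not match documented schema")
    return payload

print(send({"id": 1, "name": "Ada"}))
```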
In most cases where I want to do some validation on JSON, I find that I usually have a class/struct/object that represents the payload, and I want to unpack JSON into it (or dump that class to JSON). Ultimately, there are already nice tools to do this (e.g. marshmallow on the Python side). So unless I'm crossing language boundaries, writing a separate JSON Schema and using that is more work, and I have to keep it up to date.
And in the cases where I do want to cross language boundaries, at this point there's less and less of a compelling case to go with JSON rather than, e.g., flatbuffers/protobuf/thrift/capnproto, since you're writing a schema anyway.
No, but I don’t do much backend stuff, anymore.
I’m totally anal about Quality, so I want to do things like jsonSchema. I used to deliberately publish APIs in both JSON and XML, so that I could use XML Schema to validate the data, but JSON, to actually use it.
XML Schema is a huge PITA. I don’t like auto-generated Schema (see “anal,” above), so I tend to hand-tune (or write code to dynamically generate) my Schema.
The main reason I don’t use jsonSchema, is that I don’t have a real use for it, these days.
I mostly have internal APIs (proprietary backends), so there’s no need. I do have one backend that I recently wrote, that is public, and I may consider adding a published schema to it.
https://github.com/chapmajs/dynamic_dns
My main interest in using JSON Schema in the above project was security related: this service sits on the public Internet, by nature I cannot restrict the sources that connect to it (road warrior type systems couldn't send DNS updates!). Having a strict schema is another layer of sanitization on what one nowadays must assume is a malicious source.
Making sure there aren't major breakages from one version to the next sure would be nice, yes. We hit some snags as we attempted to upgrade from, I believe, draft 4 to draft 7, especially because we'd have to deploy native apps into the wild to use the newer version, and getting native mobile apps deployed everywhere quickly is an exercise in futility.
So being able to be confident that draft X will work with draft X+1 would be pretty excellent.
For libraries that build/offer specific JSON outputs (say, an online editor that dumps out a JSON file on export), I would also expect the product to come with a client library whose TypeScript types give type-safe access to the JSON data.
Maybe JSON schema is useful for RESTful resources to provide a payload definition of responses? And I guess consumers could then generate client definitions from the JSON schema? It seems weird to do that at runtime, so I'm guessing it would also be a compile step to generate clients from JSON schema? Or is there an intentional runtime use case?
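It is typically a compile step. A very rough Python sketch of the idea, with an illustrative (much simplified) schema-to-TypeScript mapping; real generators handle vastly more of the spec:

```python
# Turn a simplified JSON Schema object into a TypeScript interface
# string -- the kind of compile-step client generation speculated
# about above.

TS_TYPES = {"string": "string", "integer": "number",
            "number": "number", "boolean": "boolean"}

def to_ts_interface(name, schema):
    required = set(schema.get("required", []))
    lines = [f"interface {name} {{"]
    for prop, spec in schema["properties"].items():
        opt = "" if prop in required else "?"
        lines.append(f"  {prop}{opt}: {TS_TYPES[spec['type']]};")
    lines.append("}")
    return "\n".join(lines)

user_schema = {
    "properties": {"id": {"type": "integer"}, "name": {"type": "string"}},
    "required": ["id"],
}
print(to_ts_interface("User", user_schema))
```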
Are there popular APIs/libraries/etc that use JSON Schema? I don't see a "used by" section on this site which could help folks understand where this sits in the modern software development industry.
It's mildly infuriating. I've taken to using Cuelang[0] to write my own validators based on their specifications. (Using Cue because JSON isn't the only data format I have to support.)
I wish there were an easier way to take some documentation and generate JSON Schema from it. I can take the sample JSON in the docs and generate one, but those samples don't usually contain all the edge cases that the systems complain about, so it's not super useful.
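Naive inference from a sample is easy to sketch; the catch, as noted, is that it only sees the shapes present in that one sample (toy code, not any particular generator):

```python
# Infer a very rough JSON Schema from one sample document. Edge
# cases absent from the sample -- nulls, optional fields, mixed-type
# arrays -- are exactly what this cannot discover.

def infer(value):
    if isinstance(value, bool):  # must check before int: True is an int
        return {"type": "boolean"}
    if isinstance(value, int):
        return {"type": "integer"}
    if isinstance(value, float):
        return {"type": "number"}
    if isinstance(value, str):
        return {"type": "string"}
    if isinstance(value, list):
        return {"type": "array", "items": infer(value[0]) if value else {}}
    if isinstance(value, dict):
        return {"type": "object",
                "properties": {k: infer(v) for k, v in value.items()}}
    return {}

print(infer({"name": "x", "tags": ["a"], "active": True}))
```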
https://github.com/sshine/library-recommendations/blob/main/...
My current impression is that JSON Schema is nicer in theory than in practice.
As is often the case, good domain knowledge can help to choose which features are "real" and can stand and which are more doubtful and have a higher chance to fall out of favor later.
Part of my motivation to build the tool was to learn more about JSON Schema.
Like, you would expect people to realize that "additionalProperties" means additional to "properties", but pretty much every doc or code-gen tool I have used has some problem getting that right (various problems, not always the same one).
I mainly seem to have now leapfrogged the need for this.
There was a time when JSON parsers across different platforms were so finicky that schemas seemed like a solution, but the parsers got better at not breaking on unexpected data types or JSON structures.
I'm also usually able to do system design so that we return a smaller subset of data, with consistent data types.
Or someone else's JSON output has its own documentation that already tells you what something is and how it should be parsed.