POST /resource
PATCH /resource/:id
GET /resource/:id
DELETE /resource/:id
The reason for the ask comes when someone needs to operate on thousands (or more) records at a time, and it's more efficient to just do it at once rather than be at the whims of request latencies and rate limits. So I understand the reason behind it. Yet I don't see a lot of great APIs offering endpoints like this.

Does anyone have thoughts on whether this is an OK pattern (or an anti-pattern)? I would be curious to see example API docs from any companies that do this particularly well.
(For reference, this is our API: https://docs.moderntreasury.com/)
My suggestion: either pre-validate and return a structured response, or return a 202 Accepted with a job id plus a separate endpoint for checking the status of the batch. Another option is to let clients subscribe to a webhook that reports status back (success and error).
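A minimal sketch of the 202-plus-job-id option, assuming Flask and an in-memory job store (the /resource/batch path and the response fields are invented for illustration, not from the original post):

    import uuid
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    jobs = {}  # in-memory job store; a real system would use a database or queue

    @app.route("/resource/batch", methods=["POST"])
    def submit_batch():
        items = request.get_json()
        job_id = str(uuid.uuid4())
        # Record the work; actual processing happens outside the request cycle.
        jobs[job_id] = {"status": "processing", "items": items, "results": []}
        # 202 Accepted: the batch was received but has not been processed yet.
        return jsonify({"job_id": job_id, "status": "processing"}), 202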
If you want to stay in the HTTP request/response lifecycle like typical APIs, your system could validate the entire batch first, which is time-consuming, and then respond with an appropriate status for your API - say a 4xx error for invalid input if even 1 out of 1000 items in the batch is invalid. If you accept the batch and begin processing immediately, then your response needs to include the status of each transaction. Either way, you will probably run into limitations around timeouts in your stack (web server, load balancer, other network elements) and around how large a batch you can accept.
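As a rough illustration of the validate-everything-first variant, the shape might look like this (the "amount" and "currency" checks are made-up validation rules, not anything from the thread):

    def validate_batch(items):
        # Validate every item up front; reject the whole batch if any item fails.
        errors = []
        for index, item in enumerate(items):
            amount = item.get("amount")
            if not isinstance(amount, (int, float)) or amount <= 0:
                errors.append({"index": index, "error": "amount must be a positive number"})
            if not item.get("currency"):
                errors.append({"index": index, "error": "currency is required"})
        if errors:
            # Even 1 invalid item out of 1000 fails the request with a 4xx.
            return 422, {"errors": errors}
        return 200, {"results": [{"index": i, "status": "accepted"} for i in range(len(items))]}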
An idea to make this easier on your system would be to accept the batch and respond with a unique job identifier. Clients can then check a different endpoint for status and another, paginated endpoint to retrieve results. This lets you process large batches in the background, include per-item status in the results, and avoid running into timeouts.
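Continuing the job-id sketch from above, the status and paginated results endpoints could look roughly like this (again Flask with an in-memory store; the page and per_page parameter names are assumptions):

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    jobs = {}  # populated by the batch submission endpoint shown earlier

    @app.route("/batches/<job_id>", methods=["GET"])
    def batch_status(job_id):
        job = jobs.get(job_id)
        if job is None:
            return jsonify({"error": "batch not found"}), 404
        # e.g. {"status": "processing", "processed": 412, "total": 1000}
        return jsonify({"status": job["status"],
                        "processed": len(job["results"]),
                        "total": len(job["items"])}), 200

    @app.route("/batches/<job_id>/results", methods=["GET"])
    def batch_results(job_id):
        job = jobs.get(job_id)
        if job is None:
            return jsonify({"error": "batch not found"}), 404
        page = int(request.args.get("page", 1))
        per_page = int(request.args.get("per_page", 100))
        start = (page - 1) * per_page
        # Per-item results are paginated so large batches stay cheap to fetch.
        return jsonify({"page": page,
                        "results": job["results"][start:start + per_page]}), 200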
I've also seen this done async. You send a request, get back a 202 with an ID, and then poll another endpoint with the ID for the result.
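On the client side, that poll loop can be as simple as the following (using the requests library; the base URL, paths, and timing are placeholders):

    import time
    import requests

    BASE = "https://api.example.com"  # placeholder base URL

    # Submit the batch and read the job id out of the 202 response.
    resp = requests.post(f"{BASE}/resource/batch",
                         json=[{"amount": 100, "currency": "USD"}])
    assert resp.status_code == 202
    job_id = resp.json()["job_id"]

    # Poll the status endpoint until the batch finishes, waiting between checks.
    while True:
        status = requests.get(f"{BASE}/batches/{job_id}").json()
        if status["status"] in ("completed", "failed"):
            break
        time.sleep(5)

    # Fetch the per-item results page by page.
    results = requests.get(f"{BASE}/batches/{job_id}/results",
                           params={"page": 1}).json()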
But you could create a job-style API, where they submit a batch and you return an id. Then they check the status of the batch with a GET on the job id.