HACKER Q&A
📣 VikingCoder

Does async/await exist for transient processes?


I enjoy writing async/await-style code, but I wonder: has this been extended to work for systems that run essentially "Out Of Core"?

For instance, if I had a bunch of back-end servers that stored state in a DB... And as long as the back-end servers were all running the same version of my code... I could write async/await code... And all of my state could be stored in the DB... And when input came in from the user, whichever server the event got load-balanced to could just continue processing.

I could imagine doing this with a Virtual Machine, where I store the full stack and heap as a chunk of data somewhere on disk or in a DB.

But I could also imagine a programming language modified to let me just write async/await code and have the state survive restarts, reboots, migration to another server, etc., as long as I don't modify the code in the call stack of the async/await while it's in the middle of executing.
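
To make that concrete, here is roughly the shape of code I'm imagining, sketched in TypeScript. None of this API exists as far as I know; ctx.step, ctx.waitForEvent, and the helper functions are names I just made up to illustrate the idea:

  type Ctx = {
    // Runs fn once, persists its result, and replays the saved result if re-executed.
    step<T>(name: string, fn: () => Promise<T>): Promise<T>;
    // Suspends until an external event arrives; the suspension would survive restarts.
    waitForEvent<T>(name: string): Promise<T>;
  };

  // Hypothetical back-end calls, declared only so the sketch compiles.
  declare function createUser(email: string): Promise<string>;
  declare function sendEmail(email: string, userId: string): Promise<void>;
  declare function activateUser(userId: string): Promise<void>;

  async function signupWorkflow(ctx: Ctx, email: string) {
    const userId = await ctx.step("createUser", () => createUser(email));
    await ctx.step("sendConfirmation", () => sendEmail(email, userId));

    // The process could be killed, restarted, or moved to another server here;
    // the imagined runtime would resume from persisted state when the event arrives.
    const clicked = await ctx.waitForEvent<{ userId: string }>("confirmationClicked");

    await ctx.step("activate", () => activateUser(clicked.userId));
  }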

Does such a thing exist?


  👤 PaulHoule Accepted Answer ✓
See https://www.jbpm.org/ for one approach to the problem of long-running workflows, which is based on the standard

http://www.omg.org/bpmplus/

which is not quite the programming model you want, but it similarly breaks "functions" up into small bits and serializes the state so that execution can span long periods of time.
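
The general shape of that pattern, just as a sketch in TypeScript (to be clear, this is not jBPM's API; the record shape and the Store interface are made up):

  interface WorkflowRecord {
    id: string;
    step: "reserveStock" | "charge" | "ship" | "done";
    data: Record<string, unknown>;
  }

  // Each step is a small, independently runnable function that returns the updated record.
  const steps: Record<string, (r: WorkflowRecord) => Promise<WorkflowRecord>> = {
    reserveStock: async (r) => ({ ...r, step: "charge", data: { ...r.data, reserved: true } }),
    charge:       async (r) => ({ ...r, step: "ship",   data: { ...r.data, charged: true } }),
    ship:         async (r) => ({ ...r, step: "done",   data: { ...r.data, shipped: true } }),
  };

  // Hypothetical persistence layer.
  interface Store {
    load(id: string): Promise<WorkflowRecord>;
    save(r: WorkflowRecord): Promise<void>;
  }

  // Any server running the same code can advance any workflow instance,
  // because each step's output is written back to the store.
  async function advance(store: Store, id: string): Promise<void> {
    const record = await store.load(id);
    if (record.step === "done") return;
    await store.save(await steps[record.step](record));
  }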

What you want has been done on an experimental basis, but maybe not industrialized; see

https://stackoverflow.com/questions/734638/language-that-sup...

https://www.reddit.com/r/ProgrammingLanguages/comments/145du...


👤 adrianmsmith
The Arc language, a variant of Lisp that HN is written in, has something like that. I just googled and alas wasn't able to find any examples. But, as far as I remember, you can just write code that produces a link and sort of say "on click, run this Lisp function" (like a callback in JS etc.), and when the link is clicked, the function is run. I think it relies on all the state being kept in memory, so I'm not sure whether that state (and thus the ability to click the links) survives a server restart, but I could be wrong.
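
If I've got the mechanism right, the pattern is roughly a table of closures keyed by random ids, with the id embedded in the link. A sketch in TypeScript rather than Arc, with made-up names:

  // The table lives in memory, so (as noted above) it would not survive a restart.
  const pendingClosures = new Map<string, () => string>();

  // Produce a link whose handler is a closure capturing whatever we need.
  function makeLink(onClick: () => string): string {
    const fnid = Math.random().toString(36).slice(2);
    pendingClosures.set(fnid, onClick);
    return `/x?fnid=${fnid}`;
  }

  // When a request for /x?fnid=... arrives, look up and run the stored closure.
  function handleClick(fnid: string): string {
    const fn = pendingClosures.get(fnid);
    return fn ? fn() : "Unknown or expired link.";
  }

  // Usage: the closure captures `user`, much like a JS callback would.
  const user = "pg";
  const link = makeLink(() => `Hello again, ${user}!`);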

👤 slashdev
I'm not sure I understand what you're imagining here.

It sounds like you're talking about having in-memory per-user state that persists between requests on the server side. Each time a request comes in, it's handled in the context of the state from previous requests?

That's a lot like Cloudflare's Durable Objects.

But it's also not much different from storing state for a user in the database and/or cache and fetching it on demand.

It seems like you're thinking of having the ability to yield a response from code and then have a later request come back in and resume execution at that point. I'm not sure that makes much sense over just looking up some state from somewhere on each request.
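
The look-it-up version is pretty mundane; something like this, where the SessionStore interface and the quiz-style state are just placeholders I made up:

  interface SessionStore {
    get(userId: string): Promise<{ step: number; answers: string[] } | null>;
    put(userId: string, state: { step: number; answers: string[] }): Promise<void>;
  }

  // Any server can handle any request, because the state round-trips through the store.
  async function handleAnswer(store: SessionStore, userId: string, answer: string): Promise<string> {
    const state = (await store.get(userId)) ?? { step: 0, answers: [] };
    state.answers.push(answer);
    state.step += 1;
    await store.put(userId, state);
    return state.step < 3 ? `Question ${state.step + 1}?` : "All done.";
  }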


👤 ac2u
What you're probably looking for is continuations. Combine them with serialisation and you can have stateless workflow processing (stateless in the sense that the state lives in a persistent store and can be picked up by any application server).

You stumbled upon a drawback yourself though:

>And as long as the back-end servers were all running the same version of my code

The continuation could be serialised and then picked up later by a server whose business logic has since changed, whether through changing requirements or bugfixes.

So personally (I'm not saying there aren't successful business implementations of continuations, this is just my own observation), in the wild I've mostly seen long-running async workflows put together in one of two ways: either state-machine modelling in whatever language the programmers are using, so that workers can run through multi-stage processes, or a workflow engine that hoists the orchestration up to a tooling layer, where the regular "programming" happens in the building blocks.
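
For what it's worth, the state-machine flavour tends to look something like this (TypeScript only for illustration; the db interface and the order states are made up):

  type OrderState = "awaitingPayment" | "awaitingShipment" | "complete" | "cancelled";
  type OrderEvent = "paymentReceived" | "shipped" | "cancel";

  // Transition table: (current state, event) -> next state. Unknown combinations are rejected.
  const transitions: Record<OrderState, Partial<Record<OrderEvent, OrderState>>> = {
    awaitingPayment:  { paymentReceived: "awaitingShipment", cancel: "cancelled" },
    awaitingShipment: { shipped: "complete", cancel: "cancelled" },
    complete:  {},
    cancelled: {},
  };

  // Any worker can apply an event: read the current state, look up the transition, persist the result.
  async function apply(
    db: { read(id: string): Promise<OrderState>; write(id: string, s: OrderState): Promise<void> },
    orderId: string,
    event: OrderEvent,
  ): Promise<OrderState> {
    const current = await db.read(orderId);
    const next = transitions[current][event];
    if (!next) throw new Error(`Event ${event} not allowed in state ${current}`);
    await db.write(orderId, next);
    return next;
  }

Because the persisted state here is plain data rather than a serialised call stack, code that has been redeployed since the workflow started can still pick it up, which sidesteps the versioning drawback above.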