Any packages that would handle running things async, doing retries, handling the occasional hung connection, and logging performance?
Tried helicone, but I don't think it handles the hung connections.
Just doing it manually for now, but there must be an existing solution?
Your state machine could be as simple as: New, Processing, Failed, Succeeded. Outer loop will query the collection every ~second for items that are New or Failed and retry them. Items that are stuck Processing for more than X seconds should be forced to Failed each loop through (you'll retry them on the next pass). Each state transition is written to a log with timestamps for downstream reporting. Failures are exclusively set by the HTTP processing machinery with timeouts being detected as noted above.
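Below is a minimal Python sketch of that loop, using an in-memory list instead of a real collection; the state names and the one-second poll follow the description above, while call_api(), the 60-second stuck threshold, and the threading setup are assumptions just to make it runnable.

```python
# Minimal sketch of the New/Processing/Failed/Succeeded loop described above.
import time
import logging
import threading

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

NEW, PROCESSING, FAILED, SUCCEEDED = "New", "Processing", "Failed", "Succeeded"
STUCK_AFTER_SECONDS = 60  # assumed threshold for forcing Processing -> Failed

items = [{"id": i, "state": NEW, "started": None} for i in range(3)]

def set_state(item, state):
    # Every transition is logged with a timestamp for downstream reporting.
    item["state"] = state
    item["started"] = time.time() if state == PROCESSING else item["started"]
    logging.info("item %s -> %s", item["id"], state)

def call_api(item):
    # Placeholder for the real HTTP call; only this path sets Failed/Succeeded.
    try:
        ...  # e.g. requests.post(..., timeout=30)
        set_state(item, SUCCEEDED)
    except Exception:
        set_state(item, FAILED)

while any(i["state"] != SUCCEEDED for i in items):
    now = time.time()
    for item in items:
        if item["state"] == PROCESSING and now - item["started"] > STUCK_AFTER_SECONDS:
            set_state(item, FAILED)  # stuck item; retried on the next pass
        elif item["state"] in (NEW, FAILED):
            set_state(item, PROCESSING)
            threading.Thread(target=call_api, args=(item,)).start()
    time.sleep(1)
```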
Using SQL would make iterating on your various batch processing policies substantially easier. Using a SELECT statement to determine your batch at each iteration would let you add constraints over aggregates. For example, you could cap the number of simultaneous in-flight requests, or abandon all hope and throw if the statistics look poor (e.g., an OpenAI outage).
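A rough illustration of that idea using sqlite3; the requests_queue table, its columns, and the caps/thresholds are all hypothetical, but the queries show how an in-flight cap and a failure-rate circuit breaker could be expressed in SQL.

```python
# Sketch of SQL-driven batch selection with aggregate constraints (assumed schema).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE requests_queue (
    id INTEGER PRIMARY KEY, state TEXT, updated_at REAL)""")

MAX_IN_FLIGHT = 20       # cap on simultaneous Processing rows (arbitrary)
MAX_FAILURE_RATE = 0.5   # bail out if things look like an outage (arbitrary)

in_flight = conn.execute(
    "SELECT COUNT(*) FROM requests_queue WHERE state = 'Processing'"
).fetchone()[0]

failed, total = conn.execute(
    "SELECT SUM(state = 'Failed'), COUNT(*) FROM requests_queue"
).fetchone()

if total and failed / total > MAX_FAILURE_RATE:
    raise RuntimeError("failure rate too high; likely an outage, stopping")

# Pick the next batch, leaving room under the in-flight cap.
batch = conn.execute(
    "SELECT id FROM requests_queue WHERE state IN ('New', 'Failed') "
    "ORDER BY updated_at LIMIT ?",
    (max(0, MAX_IN_FLIGHT - in_flight),),
).fetchall()
```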
It's a wrapper for OpenAI's Python sample script plus adjacent functionality like cost estimation and binpacking multiple inputs into one request.
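For the binpacking part, something like the following greedy grouping is roughly what's meant; estimate_tokens() and the budget here are stand-ins (a real version might use tiktoken), not the tool's actual API.

```python
# Greedily pack inputs so each request stays under an assumed token budget.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # rough heuristic: ~4 characters per token

def binpack(inputs, budget=3000):
    batches, current, used = [], [], 0
    for text in inputs:
        cost = estimate_tokens(text)
        if current and used + cost > budget:
            batches.append(current)   # start a new request once the budget is hit
            current, used = [], 0
        current.append(text)
        used += cost
    if current:
        batches.append(current)
    return batches

print(binpack(["short", "a" * 8000, "medium " * 50]))
```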
I would consider using one of the models with a 32k context window.
Define a delimiter for the paragraphs, prefix the prompt with an instruction to process each one, and have the model write out each result separated by the same delimiter.
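Something along these lines; the delimiter, the instruction wording, and the stand-in for the model call are all made up for illustration.

```python
# Join many paragraphs into one prompt, then split the answer on the same delimiter.
DELIM = "\n###\n"

paragraphs = ["First paragraph...", "Second paragraph...", "Third paragraph..."]

prompt = (
    "Summarize each section below in one sentence. "
    f"Separate your answers with '{DELIM.strip()}' in the same order.\n\n"
    + DELIM.join(paragraphs)
)

# response_text = call_model(prompt)            # hypothetical API call
response_text = DELIM.join(["s1", "s2", "s3"])  # stand-in for the model output
results = [part.strip() for part in response_text.split(DELIM.strip())]
print(results)
```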
Maybe wrap your call in a simple try catch and do exponential back-off.
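For example, something like this; make_request stands in for whatever API call you're wrapping, and the retry count and delays are arbitrary.

```python
# Try/except with exponential backoff and jitter around an arbitrary callable.
import time
import random

def call_with_backoff(make_request, max_retries=5, base_delay=1.0):
    for attempt in range(max_retries):
        try:
            return make_request()
        except Exception:  # in practice, catch rate-limit/timeout errors specifically
            if attempt == max_retries - 1:
                raise
            delay = base_delay * 2 ** attempt + random.uniform(0, 1)  # add jitter
            time.sleep(delay)

# usage (illustrative): call_with_backoff(lambda: some_api_call(prompt))
```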
For async and retries I just asked the bot to write proper code, and it worked fine. Could do it myself, but it would take longer.
As for hung connections - I didn't experience those, but handling them should also be straightforward.
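For what it's worth, one straightforward way to do it in async code is a hard timeout around each call; fetch_completion() here is just a placeholder for whatever async client call you're using.

```python
# Guard against hung connections by giving each async call a hard timeout.
import asyncio

async def fetch_completion(prompt: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for the real API call
    return "ok"

async def fetch_with_timeout(prompt: str, timeout: float = 30.0) -> str | None:
    try:
        return await asyncio.wait_for(fetch_completion(prompt), timeout=timeout)
    except asyncio.TimeoutError:
        return None  # caller can retry, same as any other failed request

print(asyncio.run(fetch_with_timeout("hello")))
```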