HACKER Q&A
📣 sergF

Do you still run Redis and workers just for background jobs?


Hi HN,

I'm working on small SaaS projects and keep running into the same issue: background jobs require a lot of infrastructure. Even for simple things like delayed tasks or scheduled jobs I end up running Redis, queue workers, cron, retries, monitoring, etc. For bigger systems this makes sense, but for small apps it feels like too much.

I'm thinking about building a small service that would let you send a job via API and get an HTTP callback when it's time to run, without running your own queue or workers. Basically: no Redis, no workers, no cron, no queue server.

Would something like this actually be useful, or am I trying to solve a problem that isn't really there?


  👤 figassis Accepted Answer ✓
Use Go: it has built-in goroutines and likely has libraries that let you implement your own workers.

If you're running a single instance, you don't even need any synchronization. If you're running multiple instances of your app, try implementing locking. (This actually works in any language, not just Go. Go just helps with running multiple long-running workers in one process; with other languages, just run multiple instances.)

Process:

1. Each worker starts up with its own ID; it can be a random UUID.

2. When you need to create a task, add it to the tasks table, do nothing else and exit.

3. Each worker, running on some loop or cron, sets a lock on a subset of the tasks. Like:

update tasks set workerId = myUUID, lockUntil = now() + interval '10 minutes' where (workerId is null or lockUntil < now()) and completed = false

Or you can do a SELECT ... FOR UPDATE or whatever helps you keep other workers from setting their IDs at the same time.

4. When that's done, pull all tasks assigned to your worker, execute them, clear the lock, and mark them completed.

5. If your worker crashes, another worker will be able to pick its tasks up after the lock expires.

No Redis, no additional libraries, still distributed.


👤 robertandrewp
One approach that sidesteps the whole problem: design for fully synchronous, stateless requests from the start so there's nothing to queue.

I did this for a financial calculator API — every request is pure computation, inputs in, result out, nothing persisted. No Redis, no workers, no task table, no locking. The response is ready before a user would notice a queue anyway (sub-50ms).

Obviously this only works when tasks complete in milliseconds. But the pattern figassis describes, where something starts simple and then incrementally grows into a small job system anyway, often happens because the initial scope could have been fully synchronous: the async complexity creeps in before it's actually needed.

Worth asking first: does this task genuinely have to be async, or is it just easier to model it that way?