HACKER Q&A
📣 dhruvagga

How to benchmark different LLM models in parallel?


I was recently trying Langflow for some experiments in our open source project (https://github.com/middlewarehq/middleware) to build a RAG pipeline over DORA metrics.

On my machine, Langflow makes everything painfully slow, so testing each model's output one at a time is a pain. Is there a way I can get outputs from different models in parallel to compare them?


  👤 verdverm Accepted Answer ✓
Running models locally on your development machine will be slow. You need beefy GPUs to get good tokens/sec throughput.

Run the models in the cloud, each one on a separate machine, and invoke them remotely. Or skip the setup time and hardware cost entirely and call third-party hosted APIs directly.
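
For the "compare in parallel" part: if your providers expose OpenAI-compatible chat endpoints, you can fan the same prompt out to all of them concurrently and print the answers side by side. A minimal sketch below, assuming Python with httpx installed; the endpoints, model names, and API-key env vars are placeholders you'd swap for whatever you actually use:

    # Sketch: send one prompt to several hosted models concurrently
    # and collect their answers for side-by-side comparison.
    # All endpoints, model names, and env var names are hypothetical.
    import asyncio
    import os

    import httpx

    PROMPT = "Summarize the four DORA metrics in two sentences."

    # (label, base_url, model, api_key_env_var) -- placeholder examples
    TARGETS = [
        ("gpt-4o-mini", "https://api.openai.com/v1",
         "gpt-4o-mini", "OPENAI_API_KEY"),
        ("llama-3-70b", "https://api.together.xyz/v1",
         "meta-llama/Llama-3-70b-chat-hf", "TOGETHER_API_KEY"),
        ("mistral-large", "https://api.mistral.ai/v1",
         "mistral-large-latest", "MISTRAL_API_KEY"),
    ]

    async def query(client, label, base_url, model, key_env):
        # One chat-completion request; OpenAI-compatible schema assumed.
        resp = await client.post(
            f"{base_url}/chat/completions",
            headers={"Authorization": f"Bearer {os.environ[key_env]}"},
            json={"model": model,
                  "messages": [{"role": "user", "content": PROMPT}]},
            timeout=60,
        )
        resp.raise_for_status()
        return label, resp.json()["choices"][0]["message"]["content"]

    async def main():
        async with httpx.AsyncClient() as client:
            results = await asyncio.gather(
                *(query(client, *target) for target in TARGETS)
            )
        for label, answer in results:
            print(f"=== {label} ===\n{answer}\n")

    if __name__ == "__main__":
        asyncio.run(main())

Because the requests run concurrently, total wall time is roughly that of the slowest provider rather than the sum of all of them, which is usually what makes this kind of comparison tolerable.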