HACKER Q&A
📣 ArshDilbagi

What are you using to iterate and test your prompts for LLMs?


In 2023, I spent a lot of time moving between Notion, Google Docs, and Jupyter Notebooks, editing paragraphs of text and then running custom scripts to test the outputs of my iterations. I'm wondering whether people who have LLMs deployed in production use any tools to iterate on and evaluate their prompt templates.
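For concreteness, here's a minimal sketch of the kind of custom script I mean: compare two prompt template variants over a few test inputs and eyeball the results. The templates, test cases, and model name are placeholders, and it assumes the openai Python package with an OPENAI_API_KEY set in the environment.

    # Sketch of a prompt-iteration test script.
    # Templates, test cases, and model name below are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    TEMPLATES = {
        "v1": "Summarize the following text in one sentence:\n\n{text}",
        "v2": "You are a concise editor. Summarize in one sentence, no preamble:\n\n{text}",
    }

    TEST_CASES = [
        "Prompt templates in production tend to drift as the product changes.",
        "Most teams still iterate on prompts by hand in docs and notebooks.",
    ]

    def complete(prompt: str) -> str:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # Print each template's output for every test case, side by side.
    for name, template in TEMPLATES.items():
        print(f"=== {name} ===")
        for text in TEST_CASES:
            print(complete(template.format(text=text)))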

PS: I'm working on content (blog and video) outlining the tools people use in this workflow. If you are interested in contributing/getting mentioned, please shoot me a note.


  👤 muzani Accepted Answer ✓
I just copy and paste existing scripts and wire them up to the command line. Something like this: https://github.com/smuzani/openai-samples/blob/main/node_exa...

There are instructions in the README on how to connect it to the command line, but that's not the best way to do it.
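Roughly the same idea as a Python sketch, if that's easier to read (this is not the linked Node sample; the model name is a placeholder):

    # Minimal script wired to the command line: pass the prompt as an
    # argument or pipe it in on stdin, print the model's reply.
    import sys
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def main() -> None:
        prompt = sys.argv[1] if len(sys.argv) > 1 else sys.stdin.read()
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(response.choices[0].message.content)

    if __name__ == "__main__":
        main()

Then iterating is just `python ask.py "your prompt here"` from the shell, swapping the prompt text between runs.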


👤 shouche
OpenAI published its own guidelines on prompting, so I created a GPT assistant that improves prompts based on them: https://chat.openai.com/g/g-haH111AXX-prompt-optimizer

The results are good and improve output quality significantly; a bit of manual QC and tweaking improves the responses further.
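If you want the same loop outside the ChatGPT UI, here's a sketch of the idea via the API. The system message is my paraphrase of the guidelines, not the linked GPT's actual instructions, and the model name is a placeholder.

    # Sketch of a "prompt optimizer" call: ask the model to rewrite a
    # draft prompt according to prompt-engineering guidelines.
    from openai import OpenAI

    client = OpenAI()

    OPTIMIZER_SYSTEM_MESSAGE = (
        "You improve prompts. Rewrite the user's prompt to follow prompt "
        "engineering guidelines: give clear instructions, provide context, "
        "specify the output format, and include examples where useful."
    )

    def optimize_prompt(draft_prompt: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4-turbo",  # placeholder model name
            messages=[
                {"role": "system", "content": OPTIMIZER_SYSTEM_MESSAGE},
                {"role": "user", "content": draft_prompt},
            ],
        )
        return response.choices[0].message.content

    print(optimize_prompt("write summary of article"))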


👤 hoerzu
The prompt tool from: https://rungalileo.io