HACKER Q&A
📣 boppo1

Why doesn't Stable Diffusion have some sort of feedback mechanism?


It seems to me that if users could rate the results of a prompt, that could build a dataset of (seed, settings, prompt, rating) records that could be used to rapidly refine 'holes' in the model.
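
For concreteness, here's a minimal sketch of what one such feedback record might look like, in Python. The field names (cfg_scale, sampler, etc.) are illustrative assumptions, not any actual Stable Diffusion schema:

    from dataclasses import dataclass

    # Hypothetical feedback record; field names are illustrative,
    # not an official Stable Diffusion API.
    @dataclass
    class PromptFeedback:
        seed: int          # RNG seed used for the generation
        prompt: str        # the full text prompt
        cfg_scale: float   # guidance scale setting
        steps: int         # number of sampling steps
        sampler: str       # e.g. "euler_a" or "ddim"
        rating: int        # user rating, e.g. 1-5

    record = PromptFeedback(
        seed=42,
        prompt="a watercolor fox in a snowy forest",
        cfg_scale=7.5,
        steps=30,
        sampler="euler_a",
        rating=4,
    )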


  👤 O__________O Accepted Answer ✓
Think you’re underestimating the scope of possible queries.

For example, say there were only 5000 keywords and only two were allowed at a time, in any order: that’s roughly 25 million possible queries.
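
Quick sanity check on that arithmetic (assuming ordered pairs of distinct keywords):

    keywords = 5000

    # Ordered pairs of two distinct keywords: 5000 * 4999
    ordered_pairs = keywords * (keywords - 1)
    print(ordered_pairs)  # 24995000 -- roughly 25 million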

Next, who would “fill” the gaps, and with what? Obviously, the current dataset comes from an existing collection of images that is then automatically processed. Even in that limited set, there are billions of images and tens of billions of words.

Each time the model is trained, it costs hundreds of thousands of dollars.

Not against your idea, but just throwing out ideas without understanding the context likely won’t get far, especially if you’re not contributing to actually getting it done; by that I mean working through the logistics, implementation, politics, etc., not simply flagging a prompt as bad.

Most of the prompts I see users writing seem to me like they don’t understand the prompts they’re using; aka they randomly find something that works, but it was actually just luck. Point being: even if your system were up, how would you know the issue is not the end user just spamming random words without understanding how to build valid prompts, rather than a real hole in the model?