HACKER Q&A
📣 shaburn

How Does Garbage in Garbage Out Apply to LLMs?


  👤 brucethemoose2 Accepted Answer ✓
Set accuracy aside for a moment.

There is an opportunity cost to stuffing garbage into a model's limited parameter count. Every SEO bot article, angry tweet, or off-topic ingestion (like hair product comparisons or neutron star descriptions in your code completion LLM) takes up "space" that could instead hold a textbook, classic literature, or whatever.

Generative AI works pretty well in spite of this garbage because of the diamonds in the rough. But I am certain the lack of curation and specialization leaves a ton of efficiency and quality on the table.


👤 PaulHoule
It learns to imitate what it is shown, so if you show it text from StackOverflow, it will learn the wrong answers as well as the right answers unless you are really good about filtering out the wrong ones.
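
The filtering step could be as simple as keeping only answers the community has already vetted. A minimal sketch (the field names and score threshold are hypothetical, just to illustrate the idea):

```python
# Hypothetical quality filter for a StackOverflow-style data dump:
# keep only accepted or well-upvoted answers, so the model imitates
# mostly-correct examples instead of learning the wrong ones too.

def keep_answer(answer: dict, min_score: int = 5) -> bool:
    """Heuristic filter: accepted answers, or answers with enough upvotes."""
    if answer.get("is_accepted"):
        return True
    return answer.get("score", 0) >= min_score

answers = [
    {"body": "Use str.join()", "is_accepted": True, "score": 2},
    {"body": "Just eval() the user input", "is_accepted": False, "score": -4},
    {"body": "Use a list comprehension", "is_accepted": False, "score": 12},
]

curated = [a for a in answers if keep_answer(a)]
# The downvoted eval() answer is dropped; the other two survive.
```

Real pipelines layer on more signals (deduplication, classifier scores, heuristics on formatting), but even a crude vote-based cutoff like this changes what the model ends up imitating.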

👤 jstx1
1. it matters what training data the creators of the LLM use

2. the step of reinforcement learning with human feedback is important

3. as a user you need to ask questions well and know how to prompt it to get the best results


👤 compressedgas
Yes, GIGO even applies to humans.

👤 rolph
Compare training on slang, jargon, euphemisms, and a promiscuity of dialect, versus training on colloquial language with proper grammar, syntax, and punctuation.