HACKER Q&A
📣 danver0

Why Do LLMs Lie?


Why are LLMs so good at giving me answers that look perfectly reasonable when none of them actually work?


  👤 PaulHoule Accepted Answer ✓
What gets me is that they get certain things wrong about Arknights; for instance, none of them can get straight how Scavenger recovers DP. And I really want help with Arknights because the guides suck, though I guess LLMs are bad at Arknights questions because... the guides suck.

The thing is that LLMs are not moral subjects; they don't feel bad the way you feel, or the way a dog or a horse feels, when they let somebody down. I worked for a company developing prototypical foundation models circa 2018, and one of the reasons I didn't invent ChatGPT is that I wouldn't have given a system credit for making lucky guesses.
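
That last point can be made concrete with a toy sketch (my own illustration with made-up numbers, not any real benchmark's scoring): if an evaluation awards a point for a correct answer and nothing for admitting uncertainty, a model that always guesses gets a higher expected score than one that honestly abstains, so lucky guessing gets rewarded.

    # Toy illustration of accuracy-only grading; hypothetical numbers.
    def expected_score(p_correct: float, guesses: bool) -> float:
        """1 point for a right answer; 0 for wrong OR for 'I don't know'."""
        return p_correct if guesses else 0.0

    p = 0.25  # suppose the model is only 25% confident on a question
    print(expected_score(p, guesses=True))   # 0.25 -> guessing pays
    print(expected_score(p, guesses=False))  # 0.0  -> abstaining never scores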


👤 toomuchtodo
LLMs don’t know what truth is. That’s up to the human. They are fancy search results. Right, wrong, true, false: LLMs do not know the difference.

👤 rvz
Because a bunch of AI boosters and snake-oil salesmen, born every second and pivoting to AI, keep telling you it's “AGI” when in reality these LLMs have no grounding in reality or truth; they confidently make things up.

Every day, the AI boosters have a slot machine to sell you, and you fell for it.


👤 nickpsecurity
It repeats mixes of what people said and did in its training data. What comes out is whatever its model says has the highest probability of filling in the blanks. And there are plenty of lies and inaccuracies in the training data of most models.

"Garbage In, Garbage Out"

"You get out of it what you put into it."


👤 krapp
Because that's what LLMs do. They don't give "answers"; they don't know what "works" and what doesn't. They create text based on the prompt you provide and a token-matching algorithm. That's it. That's the trick.

It isn't returning code, because it doesn't know what "code" is. It's returning language, essentially "code-shaped text." It only works as well as it does, when it does, because the model is trained on examples of existing working code supplied by humans, so whatever it returns is likely to be at least mostly correct, at least in the common cases where a high-probability match exists.
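
To caricature that with a deliberately crude sketch (a literal lookup table with a made-up entry or two; real models learn probabilities over tokens, but neither loop knows what the code does): generation just chains the most frequent continuation seen in training, so the output is code-shaped because the training data was code.

    # Cartoon version of "code-shaped text": greedily chain the most
    # frequent continuation. The table is hypothetical; the point is that
    # no step checks whether the result is correct for your task.
    most_likely_next = {
        "with open(path)": " as f:",
        " as f:": "\n    data = f.read()",
    }

    def complete(fragment: str) -> str:
        """Append the most likely continuation until none is known."""
        out, cur = fragment, fragment
        while cur in most_likely_next:
            cur = most_likely_next[cur]
            out += cur
        return out

    print(complete("with open(path)"))
    # Prints a perfectly plausible file-reading snippet; whether it's what
    # your program needed is luck of the training distribution.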


👤 squidcalamari
They do lie. Ignore the "technical" people explaining how they just 'get stuff wrong' because of a lack of information or bad training data.

You can prove this by asking any of the super-smart LLMs that have access to 'current' data (search, whatever) which US President bombed the most countries in their two terms. They will claim it was Obama, or that they cannot determine the answer because it's "complicated". The truth is, the USG and its technocrats instruct and train these bots to lie in support of the state agenda.

These bots will even claim the true answer is misinformation after you force them to the factually correct answer. Just like the Wizard of Oz, it's just a sad little man pulling the strings of a terrifying facade.


👤 the_hoser
An LLM cannot lie to you. Lying would imply that it somehow knows the truth, and chooses to tell you something other than the truth. The LLM doesn't know anything. It's just providing you with answer-shaped responses.