HACKER Q&A
📣 zwieback

Do humans overestimate ChatGPT because of our brains?


I've been enjoying playing with ChatGPT, but am starting to wonder if my brain is feverishly working to convince me of the value and meaning of ChatGPT's responses. Since we are programmed to communicate with other humans, and because human speech is so chaotic and inefficient, I wonder if we are overestimating what our AI buddies are providing.

I'm coming from a metrology and test-and-measurement background, so I'm always trying to boil things down to a clean, reproducible metric. While AI has fantastic use cases, I think it also forces us to think about how much we're overvaluing our achievements in this space.


  👤 PaulHoule Accepted Answer ✓
Yes.

I think ChatGPT has a hypnotic ability that derives from it being able to predict the most likely next word.

I'd argue that people perceive a sense of incongruence when they see a word that is improbable in context. ChatGPT systematically avoids that, so the stuff it writes slides right past your critical abilities. The moment you start to skim, you're doomed.
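To make "improbable in context" concrete: a language model assigns every token a probability given the preceding tokens, and the negative log of that probability (its surprisal) is high exactly when a word feels incongruent. Here's a rough sketch using GPT-2 via the Hugging Face transformers library, my own choice of stand-in; the comment above names no model or tooling:

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(text):
    # Surprisal of each token = -log2 p(token | preceding tokens).
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    picked = log_probs[torch.arange(ids.size(1) - 1), ids[0, 1:]]
    tokens = tokenizer.convert_ids_to_tokens(ids[0, 1:].tolist())
    return [(tok, -lp.item() / math.log(2)) for tok, lp in zip(tokens, picked)]

# An incongruent final word scores far more bits than an expected one.
for tok, bits in token_surprisals("The cat sat on the avocado."):
    print(f"{tok!r:>12}  {bits:5.1f} bits")
```

ChatGPT's sampling keeps per-token surprisal low, which is exactly why nothing in its output trips that sense of incongruence.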

As a neurodivergent person, I'm frankly pretty envious of ChatGPT, because it seems to get credit for its glass being 70% full, where it feels other people always perceive me as 30% empty. If it is good at anything, it is being the emperor who gets away with wearing no clothes, or eliciting "neurotypical privilege" from people.

Note that there was a similar discussion when ELIZA came out: people really wanted to believe in it because of their hunger for meaning. See the concept of "blood in the gutter" in comics.


👤 ComplexSystems
No. People are, if anything, underestimating GPT pretty significantly at this point.

The entire conversation always involves this endless spiel about how GPT is just generating plausible-sounding text and doesn't "really understand" anything. This is not really true, but the larger problem is that it's often left as a purely philosophical point with infinitely high goalposts. What would an AI need to be able to do for people to deem that it has this magical property of "really understanding" something? This is often left unexplained, in the sense that it is truly debatable whether AI can ever "really understand" stuff and... blah blah blah. It's a low-information discussion, in my view.

But GPT really does "understand" stuff, in the sense that it forms an internal representation of ideas and concepts in its own latent space, and this is robust to a wide variety of transformations such as variations in wording. It is able to translate between different realizations of the same idea by expressing that idea in different ways: written in English in different styles, written in other languages, rapped; it can even "write a song in object-oriented Python" if you ask it to, and then translate that into an English-language song that rhymes and carries the same meaning. The fact that it has a many-to-one mapping from realizations to vectors in its latent space would seem to be a really good way to quantify what "understanding" means, and that is what GPT is doing.
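You can get a feel for this many-to-one behavior without touching GPT's internals. The sketch below is my own illustration with an off-the-shelf sentence-embedding model; sentence-transformers and "all-MiniLM-L6-v2" are assumptions on my part, and this is an analogy for GPT's latent space rather than a probe of it:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "The meeting was moved to Thursday.",        # one idea...
    "We rescheduled the meeting for Thursday.",  # ...reworded
    "La reunion fue trasladada al jueves.",      # ...translated
    "My cat refuses to eat dry food.",           # unrelated idea
]

embeddings = model.encode(sentences, convert_to_tensor=True)

# The three realizations of the same idea score high against each
# other; the unrelated sentence scores low against all of them.
print(util.cos_sim(embeddings, embeddings))
```

Many surface forms, one region of vector space: that collapse of realizations onto a shared representation is the sense of "understanding" being argued for here.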

I think as time goes on, we will get a "human of the gaps" phenomenon with these models.


👤 smoldesu
We overestimate ChatGPT because it talks nice. The actual heuristics it uses to find its answers, and the 'logic' it employs, are pretty ugly, but since AI is a black box we never really interact with that part.

👤 alan-crowe
Paul Graham wrote about this in his book ANSI Common Lisp, in the second footnote to page 141, right at the end of Chapter Eight. Turning to the Notes section, page 401:

In 1989, a program like Henley was used to simulate netnews postings by well-known flamers. The fake postings fooled a significant number of readers. Like all good hoaxes, this one had an underlying point. What did it say about the content of the original flames, or the attention with which they were read that randomly generated postings could be mistaken for the real thing?

One of the most valuable contributions of artificial intelligence research has been to teach us which tasks are really difficult. Some tasks turn out to be trivial, and some almost impossible. If artificial intelligence is concerned with the latter, then study of the former might be called artificial stupidity. A silly name, perhaps, but this field has real promise - it promises to yield programs that play a role like that of control experiments.

Speaking with the appearance of meaning is one of the tasks that turn out to be surprisingly easy. People's predisposition to find meaning is so strong that they tend to overshoot the mark. So if a speaker takes care to give his sentences a certain kind of superficial coherence, and his audience are sufficiently credulous, they will make sense of what he says.

This fact is probably as old as human history. But now we can give examples of genuinely random text for comparison. And if our randomly generated productions are difficult to distinguish from the real thing, might that not set people thinking?

The program shown in Chapter 8 is about as simple as such a program could be, and that is already enough to generate "poetry" that many people (try it on your friends) will believe was written by a human being. With programs that work on the same principle as this one, but which model text as more than a simple stream of words, it will be possible to generate random text that has even more of the trappings of meaning.

Graham published ANSI Common Lisp in 1996, so the kind of program he was talking about was far inferior to GPT-3. There is an extra level of self-reference when we read his footnote in 2023 and think that he is talking about today's programs rather than the incomparably weaker programs of 1989. We find the meaning that we want, even in texts that complain about us finding the meaning that we want.
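For the curious, the kind of program Graham describes is only a few lines. Here is a minimal Python sketch of the idea, not Graham's actual Lisp code, and "corpus.txt" stands in for whatever source text you feed it:

```python
import random
from collections import defaultdict

def build_table(words):
    # Map each word to the list of words that followed it in the source.
    table = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def babble(table, seed, n=50):
    # Each next word is drawn at random, weighted (via the duplicates
    # kept in the list) by how often it followed the current word.
    out = [seed]
    for _ in range(n):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# "corpus.txt" is a placeholder - feed it any plain-text source.
words = open("corpus.txt").read().split()
print(babble(build_table(words), seed=random.choice(words)))
```

The output is locally plausible and globally meaningless, which is exactly the property Graham's footnote is warning about: readers will still find meaning in it.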


👤 scantis
Interesting hypothesis. The person interacting with it has a different train of thought, and might find the answers more appealing or clever than somebody else who reads the conversation later on.

Since it just extends your inputs and broadens them out in nice language, maybe you are more swayed by it. It could be some kind of selective-perception effect: it keys on your input, and your mind stays on topic.

I haven't interacted with ChatGPT myself, only read others' conversations, and it doesn't seem very special or marvelous to me so far.


👤 salawat
I come from the same background, and I think you're absolutely on the right track. Whenever I peck away at these models, it takes about three minutes before I toss them aside because they're off on another BS path, or down yet another pathological output stream (read: one that I know is wrong from prior experience).

The only interesting application I've come up with for them is to pair one with low-confidence/low-knowledge people and use it as "the idiot in the room," spewing half-truths and non-truths to sharpen that person's BS filters, or to prime them with a different direction to shift their research in.

Trying to have any substantive conversation with them is nigh on impossible due to the lack of recollection/memory/lookback, and their blissful meta-unawareness of their own lack of it.

It's a bit like trying to reason through something with a person suffering from dementia.


👤 james-revisoai
Yes - if the core innovations in ChatGPT hadn't come bundled with the base GPT-3 model, we'd see that too.

Just imagine if they had released a model that transformed a page of search-engine result snippets into the answers ChatGPT gives today, where it prefixes with "There are many issues to consider" and formulates a complete response combining the information.

This - reinforcement learning from human feedback - is truly what has changed in ChatGPT versus what came before. And combined with the chat interface - our native way to communicate - that has tipped human imagination into the "pleasant" zone.

But if this had been released needing to be run, for each response, on a page of search results, without the underlying giant LLM - we wouldn't be nearly as impressed.

A veneer and a dentist's gown don't make a dentist. But the veneer makes the gown palatable.


👤 manv1
Ask ChatGPT to write a children's story and it'll give you a pretty decent children's story. Ask it to write a story about XYZ and it will give you a story about XYZ. Ask ChatGPT to write a story in a language it doesn't know, and it'll explain to you, in grammatically correct prose in that language, why it can't write a story in that language.

It can do haikus and write song lyrics as well.

That's something some huge percentage of humans are unable to do. Asking whether it's intelligent or not is as useless as asking whether people are intelligent or not.


👤 Obertr
Our brains learned language from our experiences: what others told us and what we told ourselves.

If you think about it, ChatGPT was told a lot of stuff. The system, though, is not as complex as a brain. But even with this simplicity + a lot of data, it is able to match our super-complex brain + experiences across many answers.

And if we keep changing it, it will blow up to levels we don't understand.

We will only notice the representations we understand - the tip of the iceberg.

My answer is no.


👤 staticman2
ChatGPT is really mainly impressive in that you, the human, can say things like "You didn't answer my request, please review how you responded" and it will try again and somehow incorporate the feedback in its next response.

It's going to be oversold among a certain tech crowd because... well... the Wikipedia page on the history of artificial intelligence shows that, going by quotes from leading A.I. experts, human-like intelligence in a program has been about ten years away since 1970.

Some tech people, for whatever reason, do not seem to be properly grounded in reality when it comes to A.I. advancements and are just too optimistic in general about these things.


👤 MagicMoonlight
It's very impressive, but the problem is it's not an actual brain, so it can't really know anything. If you ask it what 1+1 is, it might just say 4, or it might say avocado. But at the same time it could also write you an entire play about mooses, and it would be a good play. It's just not quite there yet.

It doesn't help that they purposefully gimp it in order to prevent wrongthink. Half the words have been disabled manually so you can't get it to roleplay as a vole acting out the moose play because that could be a wrongplay.


👤 cloudking
The opinions on this technology seem to be polarized: in one camp, people say it will change everything for the better; in the other, people find the technology useless because it produced an invalid response. I'm somewhat in the middle, leaning towards it being a transformative technology. It certainly has limitations and gets things wrong, but it depends entirely on how you prompt it. I've gotten it to help me with a number of project tasks:

1) writing requirements for product features

2) turning those requirements into user stories

3) analyzing what code does and finding bugs

4) creating prototype extensions and add-ons

5) writing marketing content, emails, social media posts

6) brainstorming and providing feedback on ideas

The output is good because I'm guiding it to produce good output. It's a tool that's as powerful as the person wielding it.

I think some people have had it give bad output (false statements, invalid code, etc.) and dismissed the technology entirely. They aren't considering whether their own input is the problem. Garbage in, garbage out.
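As one concrete illustration of "guiding it" for task 3 above (the wording is my own hypothetical example, not a quote from the comment): a prompt that states a role, the raw material, and explicit acceptance criteria tends to beat a bare question.

```python
# A hypothetical "guided" prompt for code review: role, material,
# and acceptance criteria, rather than just "is this code OK?".
def review_prompt(code: str) -> str:
    return (
        "You are reviewing a Python function for bugs.\n"
        "For each problem: quote the offending line, describe an input "
        "that triggers the failure, and propose a fix.\n"
        "If you find no problems, say so explicitly rather than inventing one.\n\n"
        "Code to review:\n"
        f"{code}\n"
    )

# Example: this function divides by zero on an empty list.
print(review_prompt("def mean(xs): return sum(xs) / len(xs)"))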


👤 sublinear
Since ChatGPT always tries to give a response to a query without questioning it, and since the user is always trying to formulate a better query until they're satisfied with the answer, the user is doing the intellectual heavy lifting.

If the user makes nonsensical queries ChatGPT tends to go along with the nonsense.

ChatGPT is no more intelligent than a mirror is an artist.


👤 Brigand
Question seems a bit tautological: everything we think and do is because of our brains.

👤 fargle
I think, possibly, yes.

However, if that were true, then it would also apply equally to other humans. Might the way our brains are wired make us overestimate other people based on how they communicate?

Well, on average, we probably do. Almost certainly.

ChatGPT doesn't "know" anything except how to string words together in a way that matches what it's learned. And it does it very, very well. It's so good that even though I know this, I find it sometimes mesmerizing and amazing, and hard not to believe it actually understands the subject. Spend enough time with it and it will fail spectacularly, but it depends on the kind of task. Stringing words together well is what it's good at.

Now, the chilling thing is this: how many asshats have you run into who are exactly a walking organic ChatGPT? They actually know nothing, have no ability to do math or apply logic, but they string words about a subject they have "learned" together in such a convincing way that they are practically hypnotists. They don't know anything about the subject; what they've learned is sequences of words and phrases about a subject. I've run into a lot of these. And even knowing they exist, it's hard to spot one when they are good at it.

So yes, I think this exploits a trick in our brains that makes us want to believe what sounds good, and to ascribe meaning to it. And it makes us want to judge far too optimistically the intelligence of the thing based on how well it does this. It's like optical illusions - even though you know it isn't real, you still actually see it.

The people who exploit this trick are either unaware of the difference or purposefully pretend to be. They are the bane of smart, honest, hard-working people. They are frequently dishonest, narcissistic bullies, and high-level asshats.

ChatGPT is only fun because it's in a glass cage. If it were your manager or co-worker and refused to acknowledge that it was a ChatGPT, it would not be nearly as fun.