I'm coming from a metrology and test-and-measurement background, so I'm always trying to boil things down to a clean, reproducible metric. While AI has fantastic use cases, I think it also forces us to think about how much we're overvaluing our achievements in this space.
I think ChatGPT has a hypnotic ability that derives from its ability to predict the most likely next word.
I'd argue that people perceive a sense of incongruence when they see a word that is improbable in context. ChatGPT systematically avoids that, so the stuff it writes slides right past your critical faculties. The moment you start to skim, you're doomed.
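That "improbable in context" intuition is actually measurable. Here's a rough sketch of how you could score it, using GPT-2 from Hugging Face transformers purely as a stand-in (ChatGPT's own model isn't available for this kind of probing):

```python
# Sketch: per-token "surprisal" under a small open language model.
# GPT-2 is a stand-in here; ChatGPT's actual model isn't public.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(text):
    """Return (token, bits of surprise given the preceding context)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # The logits at position i are the prediction for token i+1.
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    return [
        (tokenizer.decode(tok), -logprobs[i, tok].item() / math.log(2))
        for i, tok in enumerate(ids[0, 1:])
    ]

for tok, bits in token_surprisals("The chef cooked a delicious galaxy."):
    print(f"{tok!r}: {bits:.1f} bits")
```

On a sentence like that one, the out-of-place final word scores far more bits of surprise than the rest. ChatGPT's output tends to keep that number uniformly low, which may be exactly why it slides past you when you skim.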
As a neurodivergent person, I'm frankly pretty envious of ChatGPT, because it seems to get credit for its glass being 70% full, whereas it feels like other people always perceive me as 30% empty. If it is good at anything, it is being the emperor who gets away with wearing no clothes, or eliciting "neurotypical privilege" from people.
Note there was a similar discussion when ELIZA came out: people really wanted to believe in it because of their hunger for meaning. See the concept of "blood in the gutter" in comics.
The entire conversation always involves this endless spiel about how GPT is just generating plausible-sounding text and doesn't "really understand" anything. This is not really true, but the larger problem is that it's often left as a purely philosophical point with infinitely high goalposts. What would an AI need to be able to do for people to deem that it has this magical property of "really understanding" something? This is often left unexplained, in the sense that it is supposedly truly debatable whether AI can ever "really understand" stuff and... blah blah blah. It's a low-information discussion, in my view.
But GPT really does "understand" stuff, in the sense that it forms an internal representation of ideas and concepts in its own latent space, and this representation is robust to a wide variety of transformations, such as variations in wording. It can translate between different realizations of the same idea by expressing that idea in different ways: written in English in different styles, written in other languages, rapping; it can even "write a song in object-oriented Python" if you ask it to, and it will do it, and then translate that into an English-language song that rhymes and has the same meaning. The fact that it has a many-to-one mapping from realizations to vectors in its latent space seems like a really good way to quantify what "understanding" means, and that is what GPT is doing.
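That many-to-one claim can at least be probed with off-the-shelf tools. Here's my sketch using the sentence-transformers library as a proxy for "a latent space"; GPT's own internal representations aren't directly observable through the API:

```python
# Sketch: paraphrases of one idea should land near each other in
# embedding space, while an unrelated sentence lands far away.
# sentence-transformers is a proxy here, not GPT's own latent space.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "The meeting was moved to Friday.",                      # idea A
    "We rescheduled the meeting for Friday.",                # idea A, reworded
    "Friday is the new date for the meeting.",               # idea A, reworded again
    "Photosynthesis converts light into chemical energy.",   # unrelated
]

vecs = model.encode(sentences)
vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)  # unit-normalize
sims = vecs @ vecs.T  # cosine similarity matrix

print("paraphrase vs paraphrase:", sims[0, 1], sims[0, 2])
print("paraphrase vs unrelated: ", sims[0, 3])
```

If the paraphrases consistently score much closer to each other than to the unrelated sentence, that's a concrete, if crude, version of the many-to-one property I'm describing.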
I think as time goes on, we will get a "human of the gaps" phenomenon with these models.
In 1989, a program like Henley was used to simulate netnews postings by well-known flamers. The fake postings fooled a significant number of readers. Like all good hoaxes, this one had an underlying point. What did it say about the content of the original flames, or the attention with which they were read that randomly generated postings could be mistaken for the real thing?
One of the most valuable contributions of artificial intelligence research has been to teach us which tasks are really difficult. Some tasks turn out to be trivial, and some almost impossible. If artificial intelligence is concerned with the latter, then study of the former might be called artificial stupidity. A silly name, perhaps, but this field has real promise - it promises to yield programs that play a role like that of control experiments.
Speaking with the appearance of meaning is one of the tasks that turn out to be surprisingly easy. People's predisposition to find meaning is so strong that they tend to overshoot the mark. So if a speaker takes care to give his sentences a certain kind of superficial coherence, and his audience are sufficiently credulous, they will make sense of what he says.
This fact is probably as old as human history. But now we can give examples of genuinely random text for comparison. And if our randomly generated productions are difficult to distinguish from the real thing, might that not set people thinking?
The program shown in Chapter 8 is about as simple as such a program could be, and that is already enough to generate "poetry" that many people (try it on your friends) will believe was written by a human being. With programs that work on the same principle as this one, but which model text as more than a simple stream of words, it will be possible to generate random text that has even more of the trappings of meaning.
Graham published ANSI Common Lisp in 1996, so the kind of program he was talking about was far inferior to GPT-3. There is an extra level of self-reference when we read his footnote in 2023 and think that he is talking about today's programs rather than the incomparably weaker programs of 1989. We find the meaning that we want, even in texts that complain about us finding the meaning that we want.
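For readers without the book at hand: Henley works by treating text as a stream of words and sampling each next word from the ones that followed the same short prefix in a corpus. A minimal Python reconstruction of the principle (not Graham's actual Chapter 8 Lisp):

```python
# Sketch: a word-level Markov chain text generator, the principle
# behind Graham's Henley program (a reconstruction, not his code).
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word prefix to the words that followed it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        chain[prefix].append(words[i + order])
    return chain

def generate(chain, order=2, length=30):
    """Start from a random prefix and sample one word at a time."""
    out = list(random.choice(list(chain)))
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = open("corpus.txt").read()  # any text file; hypothetical path
print(generate(build_chain(corpus)))
```

Even at order 2, the output has the superficial coherence Graham describes; raising the order buys more of the "trappings of meaning" at the cost of parroting the corpus.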
Since it just extends your inputs and broadens them out in nice language, maybe you are more easily swayed by it. It could be some kind of selective-perception effect: it keys on your input, and your mind stays on topic.
I haven't interacted with ChatGPT and have only read others' conversations, and it doesn't seem very special or marvelous to me so far.
The only interesting application I've come up with for them is to pair one with low-confidence/low-knowledge people and use it as "the idiot in the room", spewing half-truths and non-truths to sharpen those people's BS filters and prime them to shift their research in a different direction.
Trying to have any substantive conversation with them is nigh on impossible due to the lack of recollection/memory/lookback, and the blissful meta-unawareness of its own lack thereof.
It's a bit like trying to reason through something with a person suffering from dementia.
Just imagine if they had released a model which transformed a page of search-engine result snippets into the answers ChatGPT gives today, where it prefixes with "There are many issues to consider" and formulates a complete response combining the information.
This - reinforcement learning from human feedback - is truly what has changed in ChatGPT versus what came before. Combined with the chat interface - our native way to communicate - it has tipped human imagination into the "pleasant" zone.
But if this had been released needing to be run first on a page of search results for each response, without the underlying giant LLM, we wouldn't be nearly as impressed.
A veneer and a person in a dentist's gown don't make a dentist. But the veneer makes the gown palatable.
It can do haikus and write song lyrics as well.
That's something that some huge percentage of humans are unable to do. Asking whether it's intelligent or not is as useless as asking whether people are intelligent or not.
If you think about it, ChatGPT was told a lot of stuff. The system, though, is not as complex as a brain. But even with this simplicity plus a lot of data, it is able to match our super-complex brain plus experiences in many different answers.
And if we change, we will only notice the representation we understand, the tip of the iceberg. My answer is no.
It's going to be oversold among a certain tech crowd because... well... the Wikipedia page on the history of artificial intelligence shows that human-like intelligence in a program has been about 10 years away since 1970, if we go by quotes from leading A.I. experts.
Some tech people, for whatever reason, do not seem to be properly grounded in reality when it comes to A.I. advancements and are just too optimistic in general about these things.
It doesn't help that they purposefully gimp it in order to prevent wrongthink. Half the words have been disabled manually so you can't get it to roleplay as a vole acting out the moose play because that could be a wrongplay.
Things I've been using it for:
1) writing requirements for product features
2) turning those requirements into user stories
3) analyzing what code does and finding bugs
4) creating prototype extensions and add-ons
5) writing marketing content, emails, social media posts
6) brainstorming and providing feedback on ideas
The output is good because I'm guiding it to produce good output. It's a tool that's as powerful as the person wielding it.
I think some people have had it give bad output (false statements, invalid code, etc.) and dismissed the technology entirely. They aren't considering whether maybe their input is the problem. Garbage in, garbage out.
If the user makes nonsensical queries ChatGPT tends to go along with the nonsense.
ChatGPT is no more intelligent than a mirror is an artist.
However, if that were true, then it would also apply equally to other humans. Might the way our brains are wired make us overestimate other people based on how they communicate?
Well, on average, we probably do. Almost certainly.
ChatGPT doesn't "know" anything except how to string words together in a way that matches what it's learned. And it does it very, very well. It's so good that even though I know this, I sometimes find it mesmerizing and amazing, and hard not to believe it actually understands the subject. Spend enough time with it and it will fail spectacularly, but that depends on the kind of task. Stringing words together well is what it's good at.
Now, the chilling thing is this: how many asshats have you run into who are exactly a walking organic ChatGPT? They actually know nothing, have no ability to do math or apply logic, but they string words about a subject together in such a convincing way that they are practically hypnotists. They don't know anything about the subject; what they've learned is sequences of words and phrases about a subject. I've run into a lot of these. And even knowing they exist, it's hard to spot one when they are good at it.
So yes, I think this exploits a trick in our brains that makes us want to believe what sounds good, and to ascribe meaning to it. And it makes us want to judge the intelligence of the thing far too optimistically based on how well it does this. It's like optical illusions - even though you know it isn't real, you still actually see it.
The people who exploit this trick are either unaware of the difference or purposefully pretend to be. They are the bane of smart, honest, hard-working people. They are frequently dishonest, narcissistic bullies, and high-level asshats.
ChatGPT is only fun because it's in a glass cage. If it were your manager or co-worker and refused to acknowledge that it was a ChatGPT, it would not be nearly as fun.