HACKER Q&A
📣 logicallee

To AI denialists, what do you think is happening here?


Some people showed that ChatGPT's associative style of thinking fails on a trick question:

https://old.reddit.com/r/ChatGPT/comments/10eddnj/bullying/

Interestingly, someone did get it to solve the question with some prodding:

https://preview.redd.it/4f6k1yg5voca1.jpeg?width=944&format=pjpg&auto=webp&v=enabled&s=f0d8d4bd4f3a1705275f626a4e1187efbe8c2672

For those of you who are deniers of AI intelligence, what do you think is happening in that last conversation exactly? For me it is clear that the AI fails to "put two and two together" but after a bit of prodding, is able to "figure out" the puzzle.

It doesn't seem like there is any way to explain that transcript in terms of simple text manipulation rather than "true understanding". What do you think is happening as the AI takes the hint?


  👤 red-iron-pine Accepted Answer ✓
> It doesn't seem like there is any way to explain that transcript in terms of simple text manipulation rather than "true understanding". What do you think is happening as the AI takes the hint?

I've worked with a lot of devs who knew how to copy-paste and throw around buzzwords without true understanding. Sentience == True Understanding


👤 smoldesu
"True understanding" here is just having the name "Mike" in its context. AI models like ChatGPT keep the tokenized representation of your question in memory - when asked to say a unique name that isn't among the three it has already said, the most likely next answer will be the last name that appears in memory.
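A toy sketch of that idea (nothing like ChatGPT's actual architecture; the scoring rule here is made up for illustration): if a model scores candidate names by where they appear in the context and excludes names it has already produced, the last remaining name in context wins by construction.

```python
from collections import Counter

def next_token(context_tokens, vocab, already_said):
    # Toy scorer: prefer names present in context, recency-weighted,
    # excluding names the model has "already said".
    scores = Counter()
    for pos, tok in enumerate(context_tokens):
        if tok in vocab and tok not in already_said:
            scores[tok] = max(scores[tok], pos)  # later mention -> higher score
    return scores.most_common(1)[0][0] if scores else None

# The puzzle shape: produce a name that isn't one of the three already given.
context = "Alice Bob Carol are here and Mike is the fourth person".split()
names = {"Alice", "Bob", "Carol", "Mike"}
print(next_token(context, names, already_said={"Alice", "Bob", "Carol"}))
# "Mike" is the only candidate name left in context, so it wins
```

No reasoning happens here; "Mike" falls out purely from what text sits in the context window.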

👤 Daishiman
What happens in every LLM: it is a token predictor that tries to predict the most likely next token. It does not have the capacity to reason logically; it tries to work through this problem with models that either lack sufficient training data (how many instances of similar problems could it really have been trained on?) or are underfit. Given enough samples or sufficient language understanding it may produce the right answers, but the "prodding" you describe in no way shows that semantic understanding is happening.

👤 monero-xmr
A five year old can learn to play the piano quite well, but there is very little utility or monetary value to this skill.

Similarly, the creative fields are already hyper-competitive, with an extremely high bar for success. Middling prose generated by an AI isn't going to move the needle except to assist humans.

I am very skeptical of the actual applicable utility of ChatGPT and the image-generation tools. I doubt many businesses of size will be created around these tools.