HACKER Q&A
📣 isahalisyucel

Are LLMs thinking, and does it matter for AGI?


not a researcher, just a developer sharing a thought experiment.

i think today's LLMs are genuinely thinking, in a human-like way. maybe slightly differently, but still thinking. what they do is verbal thinking, reasoning through language.

LeCun argues world models represent a fundamentally different kind of thinking, perceiving and reasoning about the physical world. i think he's right, but i also think there are even more distinct modes: abstract thinking, mathematical thinking, strategic/forward-looking thinking.

my intuition is that combining all these different types of thinking is what gets us to AGI, or close to it. curious if others see these as genuinely distinct reasoning modes or just variations of the same underlying process.


  👤 al_borland Accepted Answer ✓
Cal Newport covered this topic in his latest AI reality check. [0]

I think LLMs have done a passable job of mimicking what thinking looks like, without actually thinking. I'm still constantly having to correct them when their reasoning takes a left turn. I can't think of any human I'm still willing to speak to who needs to be corrected as much as AI.

There might be a Venn diagram of those various reasoning modes with AGI sitting in the middle, but I don't think the current technology is going to get us there.

[0] https://youtu.be/sS3C_i7gkI8?si=7yGNLVtOTnM6RMB7&t=61