The point at which AI transitions to AGI is not easy to define, so most people who say AGI mean "a point far past the transition, where it is clearly AGI". But historically, when people speak of future capabilities, those capabilities do not appear in the form and configuration expected.
With capabilities currently developing at an exponential rate, it's hard to predict future configurations clearly even a few weeks into the future.
My mom was diagnosed 3 years ago with early-onset Alzheimer's disease and has progressed rapidly. One thing I've noticed is how much projection is involved in intelligence appraisal: it's hard not to get worked up by someone with dementia, because in your mind it's a version of you and your capabilities looking back at you, while the actual state of the dementia brain is entirely ineffable to the perceiver.
I wonder whether a similar effect tends to occur when people appraise LLMs. LLMs can sound human enough for our automatic projection to occur, so actually evaluating the intelligence of the machine becomes extremely non-trivial, because the theory of mind of the observer is projected onto the LLM.
I'm largely with Yann LeCun here: I think that current LLMs are magical, but their key power is extremely effective fuzzy search. They will find a pragmatic way into software, but barring further innovations they will not be a revolution, though those innovations are on the horizon.
The traditional strong AI characteristics are theory of mind and consciousness, so ultimately defining those terms very narrowly and precisely is going to be important. I doubt we'll see a clear and unambiguous leap to AGI; it will be gradual, so agreeing on terms is going to become more important. Real metacognition does feel like a clear attribute of intelligence in my eyes.
What do you mean? It doesn't seem exponential to me; exponential would mean things happen faster and faster, but the capabilities seem to be developing slower and slower.
Or do you mean a negative exponential?