There is a segment of veteran software engineers who, for the last three or so years, have been consistently bearish on AI, insisting that not even in 15-30 years will LLMs and the like be more than a nifty tool that at best speeds up the work of human engineers and at worst produces superficially reasonable code that needs constant fixing and maintenance. The consensus among these people is that AI is just a massive deluge of hype and empty promises, and that human intelligence is special, deep, and unmatchable by AI for many decades at least.
Now, however, there is a growing pro-AI consensus among industry leaders and forecasters. The default prognostication seems to be that AGI will arrive sometime between 2025 and 2029.
What is your best argument that this won't happen?
TL;DR: I work in AI and am also a skeptic on AGI. I don't think the current approach to LLM training, even with lots of compute and the bundling of chain-of-thought, constitutes AGI, and while many tasks will be made easier or automated, I still think we need people at the wheel.
What I didn't get a chance to discuss is that this is all just digital. The physical world still needs plenty of people to make things work, and if AI ends up overtaking knowledge workers, we'll all go back to doing things like building houses and working in labs.
From Wikipedia:
Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks.[1] Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.
"Human cognitive capabilities" is often misunderstood to just mean "thoughts", but this definition is a bit limiting and there seems to be some incentive for pro-AGI people to make everyone think that this is all the brain does. The brain also receives information from an enormous number of sensors throughout the body, such as proprioceptors and other afferents. It also commands an enormous number of muscles, down to the muscle fiber level - this is what allows fine motor control, and is something robots still cannot replicate very well even after over half a century of development. Beyond that, there is so much more going on at other levels..."Society of Mind" is a great book by Marvin Minsky that tries to attack this subject.I also think AGI must be able to push knowledge in a way that the best humans have done as well. So by this definition, AGI would mean a computer which can not only think the advanced thoughts that humans think, but can also generate new, divergent insights of the sorts which visionaries have thought of (e.g. Shannon's theory of information, Einstein's relativity, Gandhi's application of ahimsa, Gautama's Buddhism etc...).
Put together, I would expect true AGI not merely to match an "average" human's capabilities, but to be capable of exceeding them in the way an Albert Einstein or a LeBron James can. I think we are still decades away from either of those things happening, if they happen at all.
Finally, this is by no means an anti-AI take. I use LLMs daily as part of my core workflow, and I rely on them for tasks I had no way of doing a few years ago. Rather, I define all these terms to cut through the marketing jargon and BS that AI companies are flooding us with right now. Let's keep an eye on the correct target, and not just settle for whatever definition of AGI pads some company's wallet best in the near term.
You cannot have a "consensus" in a sample size of "many" people. It's just a widely-held opinion at that point, not a consensus at all.