Most leaders in AI claim AGI will be developed before 2030. Some claim as early as 2027. However, a lot of people seem to think AI is a hype cycle and that deep learning has hit, or will hit, a wall significantly below AGI capabilities.
Do you believe that AGI is possible? If so, do you believe it will be achieved within the decade? If not, what are your best arguments why it is impossible or not going to happen in the foreseeable future?
It will probably require an enormous amount of compute power and data, which means it will only be available to groups with $100+B to spend.
Packers are people who memorize facts to cope with the world. My experience working with them is that they are very capable people, but when they are forced to deal with computers, their mental model of the world breaks... any deviation from the expected response throws them off. If you give them instructions that account for those variations, they then do just fine again. 8) (It took a while to figure out what was going on, and how to deal with them.)
I believe the small context window of LLMs will tend to force them into a packer style of thinking. (Once a model is no longer in training, it only has the "compiled" knowledge to fall back on, and it can't learn any new deep assumptions.)
I'm not sure how you would go about training a (mapper) style AGI.
One thing that is very clear to me is that there's a superhuman level of learning in progress while an LLM is being trained. Features present in the training data are compiled into the resulting weights. The fact that the resulting intelligence is even able to coherently communicate through the slightly randomized bag-of-words output system, with all of its inherent loss, surely hints at this.
Yes. A bunch of atoms orbiting around a star ended up in a configuration that can think. Our star dust human brains. The fact we exist proves atoms can be configured to have GI (no need to call it artificial).
I don't believe in souls created by divine intervention. So I believe the essence of life or intelligence is something that can be created by us. The same way gravitational and chemical forces led to the human brain.
What is possible has barely been explored at all.
Dominant models of economics don't work, we don't understand consciousness, and we don't understand human motivations with much clarity (e.g. what causes people to become entrepreneurs).
AI would be built on models, yet how do we model the complex biologically-derived survival instincts and biases that have evolved in us over hundreds of millions of years? It has to be impossible unless the AI has a biological component, because models aren't reality. Yet our emotions are the underlying - often subconscious - drivers of our actions.
Even your phrasing. There is no "median human", and how could there be? Calculating a median relies on choosing variables, yet the number of possible variables is practically infinite, leading to different "medians" depending on which variables you pick.
Toning down expectations - e.g. to avoid aiming for capability "in all tasks" - is likely to lead to great benefits, but without a biological component I just can't imagine AI ever reaching the levels you mention. It currently can't "learn", only infer, combine, replicate and predict patterns.
In general, for the last 200 years we've thrown out the importance of consciousness hoping to find mechanical explanations for the universe and complex phenomena such as financial markets and economics. But those models just don't work. Fortunately progress is being made with studies into biases, e.g. behavioural finance & economics. Yet their potential for leading to prediction is still questionable, at both individual and group levels.
So I think the goal of non-biological AI is fundamentally impossible since it's based on a flawed premise of mechanical humans interacting in a mechanical universe.
This implies:
1. a median human is somehow smart
2. a system that behaves like a human is desirable
For (1), I think it's highly debatable given the number of dumb errors/mistakes I (and the average developer/sysadmin) make daily, heck even hourly/minutely. Yesterday, I committed a format string instead of a formatted string (aka `"{foo}bar"` vs `f"{foo}bar"` in Python). That's not really smart.

For (2), do we want an autonomous system to mimic the weakest link in the security chain of an IT system, aka the human? Do we want AIs that put passwords next to the computer screen? Or use `password123` because it's easier to remember?
My point is, we don't want AGI as smart as us, we want AGI smarter than us, and far more efficient than us. I don't want an AGI that forgets the `f` in my example earlier.
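A minimal sketch of that slip in Python (the variable names are just illustrative); the only difference between the two assignments is the `f` prefix, and both are valid syntax, so nothing flags the mistake at commit time:

```python
foo = "hello "

greeting = f"{foo}bar"  # f-string: interpolated at runtime -> "hello bar"
broken = "{foo}bar"     # plain string literal: no interpolation -> "{foo}bar"

print(greeting)  # hello bar
print(broken)    # {foo}bar
```

The bug only surfaces when the un-interpolated string reaches a user or a log, which is exactly the kind of slip you'd hope a smarter-than-human system wouldn't make.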
This was the state of AI then: https://www.theverge.com/2019/11/7/20953040/openai-text-gene...
I think even the experts can get overexcited though, in line with how McAfee predicted Bitcoin would hit a million dollars.
But by 2040 I think we'll have the capabilities to simulate every neuron in a human brain, and that will get us undeniable AGI, unless intelligence comes from some other source that we haven't discovered yet.
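As a very rough back-of-envelope check (every constant here is my own ballpark assumption, not settled neuroscience): ~86 billion neurons with ~10,000 synapses each, updated ~100 times per second at ~10 FLOPs per synaptic event, lands just under an exaFLOP per second.

```python
# Rough estimate of compute for a naive neuron-level brain simulation.
# Every constant below is an assumption; real requirements could differ
# by orders of magnitude depending on how much biological detail matters.
NEURONS = 86e9             # ~86 billion neurons
SYNAPSES_PER_NEURON = 1e4  # ~10,000 synapses per neuron
UPDATE_HZ = 100            # synaptic updates per second
FLOPS_PER_UPDATE = 10      # FLOPs per synaptic event

total = NEURONS * SYNAPSES_PER_NEURON * UPDATE_HZ * FLOPS_PER_UPDATE
print(f"~{total:.1e} FLOP/s")  # ~8.6e17 FLOP/s, i.e. just under 1 exaFLOP/s
```

Today's largest supercomputers are already in that range, so the open question is less raw compute than whether a neuron-level abstraction captures what actually matters.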
Am I the only one seeing the naked king here?
Basic intuition and experience --- you can't build/create that which you do not understand.
That being said, I see no reason to believe it is possible to achieve AGI using digital binary silicon gate logic.
The only working examples we have of intelligence are analog and organic.
The only people who deny the possibility of AGI believe that there is some supernatural Jesus magic in our neurons.
As far as timeline goes, who knows? Sometime in the next billion years.