HACKER Q&A
📣 atleastoptimal

Do you believe AGI is (practically) possible?


With AGI (artificial general intelligence) meaning a computer system generally as smart and capable as a median human at all tasks, one that can be trivially scaled to learn any task at least as well as a human can.

Most leaders in AI claim it'll be developed before 2030. Some claim as early as 2027. However, a lot of people seem to think AI is a hype cycle and that deep learning has hit, or will hit, a wall significantly below AGI capabilities.

Do you believe that AGI is possible? If so, do you believe it will be achieved within the decade? If not, what are your best arguments why it is impossible or not going to happen in the foreseeable future?


  👤 dekhn Accepted Answer ✓
I believe by 2030 or so we may have models that can generate video, audio, and text that is indistinguishable from a normal human being ("The Zoom Call Turing Test"). But that doesn't mean anything: it's unclear whether such a system is intelligent, or merely a highly generalized mimic.

It will probably require an enormous amount of compute power and data, which means it will only be available to groups with $100+B to spend.


👤 mikewarot
I believe a (packer[1]) AGI is possible now thanks to the memory prosthesis that is part of MemGPT[2].

Packers are people who memorize facts to cope with the world. My experience working with them is that they are very capable people, but when they are forced to deal with computers, their mental model of the world breaks... any deviation from the expected response throws them off. If you give them instructions that account for those variations, they then do just fine again. 8) (It took a while to figure out what was going on, and how to deal with them.)

I believe the small context window of LLMs will tend to force it into a packer style of thinking. (Once it's no longer in training, it only has the "compiled" knowledge to fall back on, and it can't learn any new deep assumptions)

I'm not sure how you would go about training a (mapper) style AGI.

One thing that is very clear to me is that there's a superhuman level of learning in progress while an LLM is being trained. Features present in the training data are compiled into the resulting weights. The fact that the resulting intelligence is even able to communicate coherently through the slightly randomized bag-of-words output system, with all of its inherent loss, surely hints at this.
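The "slightly randomized bag-of-words output system" refers to token sampling: at each step the model turns its scores (logits) into a probability distribution and draws one token from it. A toy, model-free sketch of that step (the vocabulary and logits here are made up for illustration):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Draw one token index from logits via temperature-scaled softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: walk the cumulative distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# At low temperature the highest-scoring token dominates.
vocab = ["the", "cat", "sat"]
logits = [2.0, 1.0, 0.1]
rng = random.Random(0)
samples = [vocab[sample_token(logits, temperature=0.1, rng=rng)] for _ in range(100)]
print(samples.count("the"))  # close to 100
```

Raising the temperature flattens the distribution, which is where the "slightly randomized" character of the output comes from.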

[1] https://wiki.c2.com/?MappersVsPackers

[2] https://memgpt.ai/


👤 foobarbaz33
> Do you believe that AGI is possible?

Yes. A bunch of atoms orbiting a star ended up in a configuration that can think: our stardust human brains. The fact that we exist proves atoms can be configured to have GI (no need to call it artificial).

I don't believe in souls created by divine intervention. So I believe the essence of life or intelligence is something that can be created by us. The same way gravitational and chemical forces led to the human brain.

What is possible has barely been explored at all.


👤 nprateem
It's not possible at our current levels of understanding. Forget about AI, and step it back to things we've been collectively trying to understand for hundreds of years:

Dominant models of economics don't work; we don't understand consciousness, nor human motivations with much clarity (e.g. what causes people to become entrepreneurs).

AI would be built on models, yet how do we model the complex biologically derived survival instincts and biases that have evolved in us over hundreds of thousands of years? It has to be impossible unless the AI has a biological component, since models aren't reality. Yet our emotions are the underlying - often subconscious - drivers of our actions.

Even your phrasing. There is no "median human", and how could there be? To calculate a median relies on variables, yet the number of variables is practically infinite leading to different "medians" depending on the input variables.

Toning down expectations - e.g. to avoid aiming for capability "in all tasks" - is likely to lead to great benefits, but without a biological component I just can't imagine AI ever reaching the levels you mention. It currently can't "learn", only infer, combine, replicate and predict patterns.

In general, for the last 200 years we've thrown out the importance of consciousness, hoping to find mechanical explanations for the universe and for complex phenomena such as financial markets and economics. But those models just don't work. Fortunately progress is being made with studies into biases, e.g. behavioural finance & economics. Yet their potential for prediction is still questionable, at both individual and group levels.

So I think the goal of non-biological AI is fundamentally impossible since it's based on a flawed premise of mechanical humans interacting in a mechanical universe.


👤 linkdd
> a computer system generally as smart and capable as a median human

This implies:

  1. a median human is somehow smart
  2. a system that behaves like a human is desirable
For (1), I think it's highly debatable given the amount of dumb errors/mistakes I (and the average developer/sysadmin) make daily, heck even hourly/minutely. Yesterday, I committed a plain string instead of a formatted string (aka `"{foo}bar"` vs `f"{foo}bar"` in Python). That's not really smart.

For (2), do we want an autonomous system to mimic the weakest link in the security chain of an IT system? aka: the human. Do we want AIs that put passwords next to the computer screen? Or use `password123` because it's easier to remember?

My point is, we don't want AGI as smart as us; we want AGI smarter than us, and far more efficient than us. I don't want an AGI that forgets the `f` in my example earlier.
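The `f`-prefix slip above is easy to reproduce; in this sketch `foo` is a hypothetical variable standing in for whatever was meant to be interpolated:

```python
foo = "2024-"

# Without the f prefix, the braces are kept literally - no interpolation.
plain = "{foo}bar"

# With the f prefix, the expression inside the braces is evaluated.
formatted = f"{foo}bar"

print(plain)      # {foo}bar
print(formatted)  # 2024-bar
```

Both versions are syntactically valid Python, which is why the mistake typically survives a default lint pass and only shows up at runtime.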


👤 muzani
Sam Altman's target was 2025: https://twitter.com/sama/status/1081584255510155264?lang=en

This was the state of AI then: https://www.theverge.com/2019/11/7/20953040/openai-text-gene...

I think even the experts can get overexcited though, in line with how McAfee predicted Bitcoin would hit a million dollars.


👤 joegibbs
I think it's definitely going to be done by 2040; 2030 is a maybe, though. I'm a bit skeptical about using LLMs as the basis for AGI - you would call a human conscious even if they had never learned a language in their life. Maybe some kind of model trained on audio/video with a video-based "internal monologue" would be more likely to achieve it.

But by 2040 I think we'll have the capabilities to simulate every neuron in a human brain, and that will get us undeniable AGI, unless intelligence comes from some other source that we haven't discovered yet.


👤 aristofun
It's silly to assume we can reach something so sophisticated before we're even able to define exactly what it is we're trying to achieve.

Am I the only one seeing the naked king here?


👤 jqpabc123
I believe it will not happen until we have a firm definition and understanding of what intelligence is.

Basic intuition and experience --- you can't build/create that which you do not understand.

That being said, I see no reason to believe it is possible to achieve AGI using digital binary silicon gate logic.

The only working examples we have of intelligence are analog and organic.


👤 fragmede
Thing is, we don't need AGI for the singularity to occur. We're already at a place where these things are useful. AGI would make them more useful, yeah, but focusing on AGI is the wrong question. Look at what models are able to enable along the way. We don't need full AGI to get there.

👤 __lbracket__
Don't know about AGI, but humanity has convincingly achieved NGS (Natural General Stupidity).

👤 caeril
Yes of course it's possible.

The only people who deny the possibility of AGI believe that there is some supernatural Jesus magic in our neurons.

As far as timeline goes, who knows? Sometime in the next billion years.


👤 codethatwerks
Possible, but maybe 2060-2100. The big rub with LLMs is training time and methodology. RAG exists because, despite the name, these AI systems cannot really learn - at least not human-level stuff.

👤 zzo38computer
I think it is probably possible, but probably not before 2030; it will take much longer than that.

👤 cvalka
Not possible within this century.

👤 wahnfrieden
It will not happen with LLMs, so it's a complete unknown what could be next and when.

👤 swman
Maybe, maybe not. What is possible is investor returns lol