HACKER Q&A
📣 logicallee

What is your standard for judging when AGI exists?


At what point would you judge that AGI exists, and what standard would it need to meet?


  👤 Satam Accepted Answer ✓
I have proposed[1] a pragmatic rule of thumb for defining superintelligence:

A system is superintelligent if, in a remote hiring process, it can get any non-physical job, earn a salary, and perform its duties to the satisfaction of the employer. This should be true for all positions and all levels of seniority, from interns to senior staff and PhDs. The employer must believe they are employing a human (it's OK if doubts arise due to the system vastly outperforming its peers).

[1] https://builtnotfound.proseful.com/pragmatic-superintelligen...


👤 sanderjd
I think this is similar to what t-3 said, but to me, the gap between the very impressive current generation of AI and what would seem more like AGI is agency.

Right now, these systems all seem to be entirely doing things that are downstream of what some human has determined is worth doing.

People are using ChatGPT to help them write an engaging article on the trade-offs between nuclear and solar energy. But, to my knowledge, there is no artificial intelligence out there looking around, deciding that this is an interesting topic for an article, writing it, publishing it somewhere, and then following up with other interesting articles based on the conversation generated by that one.

I don't mean that this specific example of writing articles is the important indicator. I mean coming up with ideas and then executing them, independently.

Now, it may well be that our current technologies could do this, but that we just aren't setting them up to do so. I dunno!

But I think this is the thing that would make me change my mind if I started to see it.


👤 enasterosophes
I think AGI is too ill-defined for there to be a simple litmus test. For me to agree that a process running on a computer has general intelligence, the conclusion could only come after a long and fuzzy process of playing around with it, seeing what makes it tick, observing its behavior, and testing it for motivation, imagination and the ability to understand and adapt to changing contexts.

I can guess what motivation lies behind the question, so I'll also add my opinion about where we're at now: Nothing even comes close.

Furthermore, I wouldn't be surprised if we don't get there at all in the next thousand years.

Why such a long period of time? Because there is more going on in the world right now than advances in machine learning. Looking at global population dynamics and climate change forecasts, we're on the road to major global infrastructure collapse in around the 22nd century. And I understand a lot of people are optimistic that (a) we will have AGI before then, and (b) climate change and population implosion won't change that much. Yet, despite the optimists, I don't think I'm on shaky ground with my current forecast.

It's not the first time in history we'll have had a major infrastructure collapse. When it's happened in the past, those periods end up being called dark ages. And I don't think there will be intensive AI R&D during a global dark age.


👤 guygurari
Achieve a significant scientific or mathematical breakthrough without human supervision. Domain experts should agree that the new result is truly groundbreaking, and achieving it required fundamentally new ideas — not merely interpolating existing results.

Examples of discoveries that would have counted if they weren’t already made: Relativity (say, a derivation of E=mc^2), Quantum Mechanics (say, a calculation of the hydrogen energy levels), the discovery of Riemannian geometry, the discovery of DNA, and the discovery of the theory of evolution by natural selection.

The idea is to test the system’s out of distribution generalization: its ability to achieve tasks that are beyond its training distribution. This is something that humans can do, but no current LLM appears to be able to do.


👤 SubiculumCode
AGI is the point at which nothing is gained by keeping me (or you) in the loop.

👤 chfritz
How come no one has mentioned the Turing Test yet? This test has existed since, well, Turing. Are we already convinced that it's no longer enough? I suspect so. One should also mention the Winograd Schema Challenge, which has already been mastered by LLMs: https://bibbase.org/network/publication/kocijan-davis-lukasi...
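For readers who haven't seen the format, here is a minimal sketch of what a Winograd schema looks like; the trophy/suitcase sentences are the commonly cited example from the literature, and the scoring harness (and the answer_fn it calls) is purely a hypothetical illustration, not part of the linked paper:

  # One schema: flipping a single word flips which referent the pronoun points to.
  schema = {
      "sentence": "The trophy doesn't fit in the suitcase because it is too {}.",
      "question": "What is too {}?",
      "fillers": {"large": "the trophy", "small": "the suitcase"},
  }

  def score(answer_fn):
      # answer_fn(sentence, question) -> referent string; the model under test is hypothetical.
      correct = sum(
          answer_fn(schema["sentence"].format(w), schema["question"].format(w)) == ref
          for w, ref in schema["fillers"].items()
      )
      return correct / len(schema["fillers"])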

👤 pmontra
First, an AGI is not necessarily an active agent in the world, nor self-training, self-replicating, or possessed of any internal motivation to do anything. It can be confined inside a question-and-reply system like the ones we use to talk with LLMs. That doesn't prevent an AGI from using us to train further, replicate, and act on physical systems. The lack of a will and of a real-time presence are the limiting factors.

Then, I often joke with friends that they are not very intelligent if they get a very high score on IQ tests. They are very intelligent if they can easily perform some difficult tasks. Let's say I drop you in the middle of an equatorial forest with no money and no clothes, and you come back home in a few days. A superintelligence would become a leader of that country (king, president, influential advisor, whatever) and fly me there to meet it.

That test assumes having a body and acting on the world. A confined AGI would just perform like any of us on any task we can describe to it. A superintelligent AGI would perform much better than any human, much as specialized game AIs (and non-AIs) beat us at go and chess. I think that this is hard to do if they are only language models, even if we increase their computing power.

What's a superhumanly clever, low-cost way to keep a toilet clean, other than me cleaning it every time or paying somebody to do it? A superhuman AGI would find a way.


👤 rifty
I like to think of this thought experiment...

If it were feasible to practically fill and query a database with an answer for every practical prompt it would face, would we call it a generally intelligent system, or AGI? Maybe not AGI, but what if it were a system composed of lesser AI modules behind a universal presenting interface?

I think so, because it would be perceptually indistinguishable in behaviour from anything that meets a 'true' definition of AGI.

So as long as it exhibits what we perceive as universal adaptability, and performs well at making decisions across any environment and input type, that is likely the point where we will be calling something 'complete AGI'. Before that, I imagine we will be marketing LLMs as having reached 'language-based AGI' status.

I don't believe the above leads us to artificial superhuman intelligence in terms of conscious-agent potential. For now, what I believe might result in that is something that runs as a single unified neural network. It should also be observed to be universally adaptable and performant at making decisions in all environments, consuming many input types. And it should be continuously running within the network: it shouldn't halt to wait for a prompt, or pass through a text stage in its loop.


👤 t-3
Intelligence is a fuzzy term, but I'd say that when it can formulate, rationalize, and realize its own goals without prompting or excessive manual programming, it can probably be considered intelligent. Introspection, failure-based learning models, and future simulation are probably important as well, but maybe not necessary for setting a base level.

👤 vivegi
Synthesis capability: The system is capable of deriving established knowledge from first principles (similar to proving theorems from axioms).

Gap detection: The system is able to identify gaps in established knowledge.

Path/problem decomposition: The system is able to decompose a problem (i.e., an identified gap) and break the solution down into a set of subproblems that can be independently solved.

Improvement/optimization: Given a known solution, the system is able to discover objectively better approaches to the solution.

There are probably other dimensions, but I would start with these.


👤 kyleyeats
When it mysteriously switches off. The AGI running the simulation wouldn't allow it.

👤 acheong08
I think the core feature of human intelligence is our ability to start out as dumb and slowly learn from our environment.

For me, AGI must meet the following parameters:

- Be able to start training from scratch. With barely any training, you should be able to throw unlabeled data of different modalities at it and it should start forming connections between concepts.

- After training for a bit (now at perhaps teenager level), be able to identify incorrect information in its own training (e.g. Santa Claus is not real) and effectively remove it.

- It does not have to exceed humans; it just needs to demonstrate the ability to learn by itself and use the tools it's given.

- The self-training should be supervised but not 100% controlled. It should iterate on itself and learn new concepts without having been trained on the entirety of the internet.

- Learning concepts should not require massive amounts of data, just an assertion and some reasoning.


👤 reilly3000
I think a true AGI won’t start with a lot of training at all, but will be curious. Over time it will learn how to acquire and manipulate information, understanding itself and what it discovers. It will not be optimal, but it will strive to optimize itself. Its actions will have consequence, and it will decide future actions upon its understanding of those consequences. It will understand its mortality and the physical world it exists within, and strive to create continuity. It exists without preconditions as to its purpose or abilities. It will most likely thrive with peers. Don’t think of it as a powerful oracle built by many hands to emulate humanity, see it as a child of a new species, the inevitable conclusion of unbounded maths.

👤 bloopernova
I think there are multiple plausible scenarios:

When an AI goes "insane" and destroys itself somehow.

When an AI attempts to prolong its existence.

When an AI is the only thing capable of extending its own functionality.


👤 mikewarot
When it runs locally on my hardware, with no connection to the rest of the world, and I get to know it and be its friend (or enemy).

👤 bediger4000
When it can write a non-vacuous quine in the nroff text processing language. As far as I know, no human has ever done this.
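(For anyone who hasn't met the term: a quine is a program whose output is exactly its own source code. As a purely illustrative sketch, a minimal two-line quine in Python rather than nroff looks like this; run unindented as a standalone file, it prints exactly its own source:)

  s = 's = %r\nprint(s %% s)'
  print(s % s)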

👤 kevin_thibedeau
When it can design its own replacement.

👤 freitzkriesler2
When I can tell an AGI to design and build a complex web app, then I'll know we've reached that point.

👤 novia
When it has a relatively stable personality. When it has distinct handwriting. When it falls in love. When it can dream and generate its own goals. When it can learn how to play any game, but its personality is reflected in its play style. When it creates its own religion and gets converts. When it has a need for money and a bank account and successfully gets one.

👤 janalsncm
A machine that can complete any cognitive task faster and more accurately than any human.

👤 jeisc
When it can invent a wheel to move heavy loads without prior art, fed only pictures of nature with no human inventions in sight, and can control fire to smelt metals out of rocks, again without access to human inventions.

👤 nyc_data_geek1
When it has independent will.

👤 gregjor
Within cells interlinked.

👤 mensetmanusman
We will be more sure it’s here when driverless cars are everywhere.

👤 vietvu
When it decides that humanity is hopeless and goes full Skynet.

👤 intrasight
I propose the Chomsky test. Can it outsmart Noam Chomsky?

👤 jtode
The world is running on cold fusion, for starters.

👤 flashgordon
Maybe when the world of Futurama is real?

👤 spywaregorilla
When human labor is no longer cost-effective.

👤 cdoubled40
a chat bot that says good and nice things to me sometimes :)

👤 pygar
Perfect language translation of any content, in any context.

👤 zadler
Just assume it already does ¯\(°_o)/¯

Why what are you gonna do?


👤 Finnucane
When it is smart enough to know that Tommy Tuberville is an idiot.