A system is superintelligent if, in a remote hiring process, it can get any non-physical job, earn a salary, and perform its duties to the satisfaction of the employer. This should hold for all positions and for all levels of seniority, from intern to senior staff and PhD-level roles. The employer must believe they are employing a human (it's okay if doubts arise because the system vastly outperforms its peers).
[1] https://builtnotfound.proseful.com/pragmatic-superintelligen...
Right now, these systems all seem to be doing things that are entirely downstream of what some human has determined is worth doing.
People are using ChatGPT to help them write an engaging article on the trade-offs between nuclear and solar energy. But, to my knowledge, there is no artificial intelligence out there looking around, deciding that this is an interesting topic for an article, writing it, publishing it somewhere, and then following it up with other interesting articles based on the conversation the first one generated.
I don't mean that writing articles specifically is the important indicator. I mean coming up with ideas and then executing them, independently.
Now, it may well be that our current technologies could do this, but that we just aren't setting them up to do so. I dunno!
But I think this is the thing that would make me change my mind if I started to see it.
I can guess what motivation lies behind the question, so I'll also add my opinion about where we're at now: Nothing even comes close.
Furthermore, I wouldn't be surprised if we never get there at all in the next thousand years.
Why such a long period of time? Because there is more going on in the world right now than advances in machine learning. Looking at global population dynamics and climate change forecasts, we're on the road to a major global infrastructure collapse sometime around the 22nd century. And I understand a lot of people are optimistic that (a) we will have AGI before then, and (b) climate change and population implosion won't change that picture much. Yet, despite the optimists, I don't think I'm on shaky ground with my current forecast.
It wouldn't be the first major infrastructure collapse in history. When it has happened in the past, those periods ended up being called dark ages. And I don't think there will be intensive AI R&D during a global dark age.
Examples of discoveries that would have counted if they weren’t already made: Relativity (say, a derivation of E=mc^2), Quantum Mechanics (say, a calculation of the hydrogen energy levels), the discovery of Riemannian geometry, the discovery of DNA, and the theory of evolution by natural selection.
The idea is to test the system’s out-of-distribution generalization: its ability to accomplish tasks that lie beyond its training distribution. This is something that humans can do, but no current LLM appears to be able to.
I often joke with friends that a very high score on IQ tests doesn't make them very intelligent; they are very intelligent if they can easily perform some difficult tasks. Say I drop you in the middle of an equatorial forest with no money and no clothes, and you make it back home within a few days. A superintelligence would become a leader of that country (king, president, influential advisor, whatever) and fly me there to meet it.
That test assumes having a body and acting on the world. A confined AGI would just perform like any of us on any task we can describe to it. A superintelligent AGI would perform much better than any human, much like specialized game AIs (or non-AIs) beat us at Go and chess. I think this is hard to do if they are only language models, even if we increase their computing power.
What's a superhumanly clever, low-cost way to keep a toilet clean, other than me cleaning it every time or paying somebody to do it? A superhuman AGI would find a way.
If it were feasible to fill and query a database with an answer to every practical prompt it would face, would we call it a generally intelligent system, or AGI? Maybe not AGI, but what if it were a system composed of lesser AI modules behind a single, universal presenting interface?
I think so, because it would be perceptually indistinguishable in behaviour from a system meeting any other 'true' definition of AGI.
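As a toy illustration of that thought experiment (a hypothetical sketch; the assumption that such a table could ever be filled is exactly what the thought experiment grants), from the outside the whole system collapses into a key-value lookup:

    # Toy sketch of the "answer for every practical prompt" thought experiment.
    # The class name and the pre-filled dict are hypothetical, not a real system.

    class AnswerTable:
        def __init__(self, answers: dict[str, str]):
            # Pretend this holds a good answer for every prompt it will ever face.
            self.answers = answers

        def respond(self, prompt: str) -> str:
            # No reasoning, no model: behaviour comes entirely from lookup.
            return self.answers.get(prompt, "I don't know.")

    # A caller only ever observes prompt -> response, so a sufficiently complete
    # table would be behaviourally indistinguishable from a system that "really"
    # reasons its way to the same answers.
    oracle = AnswerTable({"What is 2 + 2?": "4"})
    print(oracle.respond("What is 2 + 2?"))  # -> "4"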
And so I think that as long as it exhibits what we perceive as universal adaptability, and performs well at making decisions across any environment and input type, that is likely the point where we will be calling something 'complete AGI'. Though before that, I imagine we will be marketing LLMs as having reached 'language-based AGI' status.
I don’t believe the above leads us to artificial superhuman intelligence in the sense of a potentially conscious agent. For now, what I believe might result in that is something that runs as a single unified neural network system. It should also be observed to be universally adaptable and good at making decisions in all environments while consuming many input types. And it should be continuously running within the network: it shouldn't halt to wait for a prompt, or pass through a text stage in the loop.
Gap detection: The system is able to identify gaps in established knowledge.
Path/problem decomposition: The system is able to decompose a problem (i.e., an identified gap) and break the solution down into a set of subproblems that can be solved independently.
Improvement/optimization: Given a known solution, the system is able to discover objectively better approaches to it.
There are probably other dimensions, but I would start with these.
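To make those dimensions concrete, here is a minimal sketch of them as an abstract interface a candidate system would have to implement. The class and method names are purely illustrative assumptions of mine, not an existing benchmark or API:

    # Hypothetical interface sketch for the three dimensions above.
    # Names are illustrative only; nothing here refers to an existing library.
    from abc import ABC, abstractmethod

    class ResearchCapableSystem(ABC):
        @abstractmethod
        def detect_gaps(self, established_knowledge: list[str]) -> list[str]:
            """Gap detection: return open questions not settled by the given knowledge."""

        @abstractmethod
        def decompose(self, gap: str) -> list[str]:
            """Path/problem decomposition: break one identified gap into
            subproblems that can be solved independently."""

        @abstractmethod
        def improve(self, problem: str, known_solution: str) -> str:
            """Improvement/optimization: given a known solution, return an
            objectively better approach (by whatever metric the field accepts)."""

Framed this way, each dimension could in principle be evaluated separately rather than through a single end-to-end test.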
For me, AGI must meet the following parameters:
- Be able to start training from scratch. With barely any training, you should be able to throw unlabeled data of different modalities at it, and it should start forming connections between concepts.
- After training for a bit (now at perhaps teenager level), be able to identify incorrect information in its own training data (e.g., Santa Claus is not real) and effectively remove it.
- It does not have to exceed humans; it just needs to demonstrate the ability to learn by itself and use the tools it’s given.
- The self-training should be supervised, but not 100% controlled. It should iterate on itself and learn new concepts without having been trained on the entirety of the internet.
- Learning concepts should not require massive amounts of data, just an assertion and some reasoning.
When an AI goes "insane" and destroys itself somehow.
When an AI attempts to prolong its existence.
When an AI is the only thing capable of extending its own functionality.
Why, what are you gonna do?