HACKER Q&A
📣 barking_biscuit

How do you personally define 'AGI'?


From reading a lot of discussions online and watching a lot of long-form interviews on YouTube, I've noticed that there is quite a wide variety of working definitions that individuals use for the term 'AGI'.

We're not likely to all get on exactly the same page after one round of discussion, but I think comparing definitions would help accelerate the process, challenge our own assumptions, and update our own mental models.


  👤 mindcrime Accepted Answer ✓
In my personal lexicon, "AGI" means computer/machine-based intelligence that is approximately as flexible and capable as an average adult human. So an "AGI", to me, should be able to drive a car, play chess, do math, discuss literature, etc. What my definition does not require is for the AGI to be "human like" at all. So the Turing Test is meaningless to me, as I don't need an AGI to be able to lie effectively or to describe subjective experiences (like drinking too much wine, getting drunk, and pissing itself) that it never had.

I also don't require it to do things that require embodiment, like "play a game of baseball" or whatever, although I do see a time coming when robotics and AI will align to the extent that an AI-powered robotic player will be able to "play a game of baseball" or suchlike.

To expand on this a bit: I don't over-emphasize the "general" part like some people do. That is, some people argue against AGI on the basis that "even humans aren't a general intelligence". That, to me, is mere pedantry and goalpost-moving. I don't think anybody involved in AGI ever expected it to mean "the most general possible problem solver that can exist in the state space of all possible general problem solvers" or whatever. Disclosure: in that previous sentence I'm partially paraphrasing Ben Goertzel from a recent interview[1] I saw.

[1]: https://www.youtube.com/watch?v=MVWzwIg4Adw


👤 jasonjmcghee
Nothing to do with being sentient.

I think it has to do with being able to teach itself arbitrary information, including how to learn better, and, importantly, to recognize what it does and doesn't know.

LLMs feel like a massive step towards that. An LLM feels similar to a single "thought process".

A "good / useful AGI" might have aspects such as: - Ability to coherently communicate with humans (hard to prove it's working without this) and other agents. - Ability to make use of arbitrary tools

This sounds very similar to AutoGPT (what people poke fun at as "an LLM in a while loop"), and if the "brain" in that loop were AGI, I think it'd work very well.
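
To make that concrete, here's a minimal sketch of the "LLM in a while loop" pattern. The llm() function and the "DONE:" convention are hypothetical placeholders, not any real API:

```python
# Minimal sketch of the "LLM in a while loop" agent pattern.
# llm() is a hypothetical stand-in, not a real API.

def llm(prompt: str) -> str:
    """Placeholder for a call to any large language model."""
    raise NotImplementedError

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # Ask the model for its next step given the transcript so far.
        reply = llm("\n".join(history) + "\nNext step (or 'DONE: <answer>'):")
        history.append(reply)
        if reply.startswith("DONE:"):
            return reply[len("DONE:"):].strip()
        # A real AutoGPT-style agent would parse tool calls here,
        # execute them, and append the results to the history.
    return "Stopped after max_steps without finishing."
```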

I think there's a critical difference between LLMs and AGI, which is metacognition.

If an LLM had proper metacognition, maybe it would still hallucinate, but then it would realize it and say, "Actually, I'm not sure; I just hallucinated that answer. I think I need to learn more about xyz." And then (ideally) it could go ahead and do that (or ask whether it should).

Another piece I've thought about is subjective experience.

Inserting experiences into a vector store and recalling them in triggering situations.
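
As a toy sketch of that idea (embed() is a hypothetical placeholder for any embedding model; recall is just cosine similarity over stored vectors):

```python
import math

def embed(text: str) -> list[float]:
    """Hypothetical placeholder for any embedding model."""
    raise NotImplementedError

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class ExperienceStore:
    """Toy memory: insert "experiences", recall the most similar ones."""

    def __init__(self) -> None:
        self.items: list[tuple[list[float], str]] = []

    def insert(self, experience: str) -> None:
        self.items.append((embed(experience), experience))

    def recall(self, situation: str, k: int = 3) -> list[str]:
        # Rank stored experiences by similarity to the current situation.
        query = embed(situation)
        ranked = sorted(self.items, key=lambda item: cosine(query, item[0]),
                        reverse=True)
        return [text for _, text in ranked[:k]]
```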


👤 version_five
Something that can escape its programming, not just master what it's been given. I expect there are mental gymnastics to explain why ChatGPT already fits this in some technical sense; hopefully people see what I'm getting at.

👤 aristofun
1. We went way too far with the computer metaphor for the human mind and intelligence, up to the point where we demote human intelligence to fit some calculated standard. It is nonsense that many people overlook.

2. A key cornerstone of human intelligence is the ability to create something completely new that cannot be predicted or calculated in advance; another is will. Neither is even touched by current neural networks, though DALL-E makes a nice imitation of the first.


👤 tikkun
OpenAI's definition:

"By AGI, we mean highly autonomous systems that outperform humans at most economically valuable work."

https://openai.com/blog/how-should-ai-systems-behave#fn-A


👤 binarymax
Apologies for the LinkedIn post, but I have been proposing this structure (similar to the 5 levels of autonomous driving) to frame discourse and set some definitions:

https://www.linkedin.com/posts/maxirwin_as-discourse-continu...

“As discourse continues on the impact and potential of Artificial General Intelligence, I propose these 5 levels of AGI to use as a measurement of capability:

Level 1: “Write this marketing copy”, “Translate this passage” - The model handles this task alone based on the prompt

Level 2: “Research this topic and write a paper with citations” - 3rd party sources as input – reads and responds to the task

Level 3: “Order a pizza”, “Book my vacation” - 3rd party sources as input and output - generally complex with multiple variables

Level 4: “Buy this house”, “Negotiate this contract” - Specific long-term goal, multiple 3rd party interactions with abstract motivation and feedback

Level 5: “Maximize the company profit”, “Reduce global warming” - Purpose driven, unbounded 3rd party interactions, agency and complex reasoning required

I had the pleasure of presenting this yesterday afternoon on WXXI Connections (https://lnkd.in/gA9CugQR), and again in the evening during a panel discussion on ChatGPT hosted by TechRochester and RocDev (https://lnkd.in/gjYDEkBE).

This year, we will see products capable of levels 1, 2, and 3. Those three levels are purely digital, and aside from the operator, all integration points are handled through APIs and content. For example, level 1 is ChatGPT, level 2 is Bing, and level 3 is AutoGPT.

Levels 4 and 5 are what I call "hard AGI", as they require working with people aside from the operator, and doing so on a longer timeline with an overall purpose. We will likely see attempts at this technology this year, but they will not be successful.

For technology to reach a given level, it must perform as well as a person who is an expert at the task. A broken buggy approach that produces a poor result does not qualify.

Thanks for reading, and if you would like to discuss these topics or work towards a solution for your business, contact me to discuss!”


👤 lesserknowndan
My definition: Artificial General Intelligence (AGI) is what most people _think_ ChatGPT can currently do, but cannot.

👤 mirkodrummer
Just joking but also serious: it can sort a deck of cards efficiently without knowing what the algorithm is, like I do.

👤 laratied
Whatever humans can do that computers currently cannot. Once computers can do it, we will just call it an algorithm.

👤 admissionsguy
It can read admission requirements for a university degree programme and put them into a CSV file.

👤 hackernoteng
They used to call it "strong AI" but I guess that fell out of favor.

👤 asherah
It's probably a bit of a bad definition, but I'd say "something I can have a long conversation with and not figure out it's an AI".

👤 jstx1
I don't - I really think that it's a completely pointless distinction to make. Labelling some version of AI as general doesn't make it more or less useful.

👤 ftxbro
we aren't even going to agree on 'intelligence'

👤 ano88888
There will always be some religious people who require an AI to have a soul to qualify as AGI. So there will always be people who deny the existence of AGI in the future.