We're unlikely to all get on the same page after one round of discussion, but I think it would accelerate the process and help us challenge our assumptions and update our mental models.
I also don't require it to do things that require embodiment, like "play a game of baseball" or the like. That said, I do see a time coming when robotics and AI will converge to the point that an AI-powered robotic player will be able to play a game of baseball.
To expand on this a bit: I don't over-emphasize the "general" part like some people do. That is, some people argue against AGI on the basis that "even humans aren't a general intelligence". That, to me, is mere pedantry and goal-post moving. I don't think anybody involved in AGI ever expected it to mean "the most general possible problem solver that can exist in the state space of all possible general problem solvers" or anything like that. Disclosure: in that previous sentence I'm partially paraphrasing Ben Goertzel from a recent interview[1] I saw.
I think it has to do with being able to teach itself arbitrary information, including how to learn better, and, importantly, to recognize what it does and doesn't know.
LLMs feel like a massive step towards that. It feels similar to a single "thought process".
A "good / useful AGI" might have aspects such as: - Ability to coherently communicate with humans (hard to prove it's working without this) and other agents. - Ability to make use of arbitrary tools
This sounds very similar to AutoGPT (which people poke fun at as "an LLM in a while loop"), and if the underlying "brain" were AGI, I think it'd work very well.
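To make the "LLM in a while loop" idea concrete, here is a minimal, hedged sketch of that pattern. `call_model` is a hypothetical stand-in (a canned stub, so the loop is runnable); a real AutoGPT-style agent would send the goal and history to an actual LLM and execute real tools.

```python
# Minimal sketch of the "LLM in a while loop" agent pattern.
# `call_model` is a hypothetical stub standing in for a real LLM call.

def call_model(goal, history):
    # Pretend model: act twice, then declare the goal finished.
    # A real agent would prompt an LLM with `goal` and `history`.
    if len(history) < 2:
        return {"action": "search", "arg": goal}
    return {"action": "finish", "arg": "done"}

def run_agent(goal, max_steps=10):
    history = []
    for _ in range(max_steps):
        step = call_model(goal, history)
        if step["action"] == "finish":
            return step["arg"]
        # Execute the chosen tool and feed the observation back in.
        observation = f"result of {step['action']}({step['arg']})"
        history.append((step, observation))
    return "step budget exhausted"

print(run_agent("order a pizza"))  # → done
```

The loop itself is trivial; the argument in the comment above is that all the capability lives in the quality of the "brain" being called each iteration.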
I think there's a critical difference between LLMs and AGI, which is metacognition.
If an LLM had proper metacognition, it might still hallucinate, but then it would catch itself and say, "Actually, I'm not sure; I just started hallucinating that answer. I think I need to learn more about xyz." And then (ideally) it could go ahead and do that (or ask whether it should).
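The behaviour described above can be sketched as a confidence gate: answer when self-assessed confidence is high, otherwise name the knowledge gap. This is a toy illustration, not a real metacognition mechanism; `ask_model` is a hypothetical stub, and in practice the confidence signal would have to come from the model itself (e.g. token probabilities or self-critique).

```python
# Toy sketch of confidence-gated answering ("metacognition").
# `ask_model` is a hypothetical stub returning (answer, confidence).

def ask_model(question):
    known = {"capital of France": ("Paris", 0.98)}
    return known.get(question, ("(guess)", 0.2))

def answer_with_metacognition(question, threshold=0.8):
    answer, confidence = ask_model(question)
    if confidence >= threshold:
        return answer
    # Low confidence: admit uncertainty instead of emitting the guess.
    return f"I'm not sure; I think I need to learn more about {question!r}."

print(answer_with_metacognition("capital of France"))  # → Paris
print(answer_with_metacognition("GDP of Atlantis"))    # admits uncertainty
```

The hard part, of course, is getting a trustworthy confidence signal; the gate itself is the easy bit.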
Another piece I've thought about is subjective experience.
Inserting experiences into a vector store and recalling them in triggering situations.
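As a minimal sketch of that idea: store each "experience" as an embedding vector and recall the most similar one when a new situation arrives. The `embed` function here is a hypothetical stand-in (a crude character-frequency vector); a real system would use a learned embedding model, with cosine similarity doing the recall.

```python
# Toy "experience memory": embed, store, and recall by similarity.
import math

def embed(text):
    # Hypothetical embedding: character-frequency vector over a-z.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

store = []  # list of (experience_text, vector)

def remember(text):
    store.append((text, embed(text)))

def recall(situation):
    # Return the stored experience most similar to the trigger.
    query = embed(situation)
    return max(store, key=lambda item: cosine(query, item[1]))[0]

remember("the stove burned my hand")
remember("the beach was relaxing")
print(recall("I touched a hot stove"))  # → the stove burned my hand
```

Real vector stores (FAISS, pgvector, etc.) do the same similarity lookup at scale; the interesting open question is the "triggering" policy, i.e. when the agent decides to consult its memory.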
2. A key cornerstone of human intelligence is the ability to create something completely new that cannot be predicted or calculated in advance; another is will. Neither is even touched by current neural networks, though DALL-E makes a nice imitation of the first.
"By AGI, we mean highly autonomous systems that outperform humans at most economically valuable work."
https://www.linkedin.com/posts/maxirwin_as-discourse-continu...
“As discourse continues on the impact and potential of Artificial General Intelligence, I propose these 5 levels of AGI to use as a measurement of capability:
Level 1: “Write this marketing copy”, “Translate this passage” - The model handles this task alone based on the prompt
Level 2: “Research this topic and write a paper with citations” - 3rd party sources as input – reads and responds to the task
Level 3: “Order a pizza”, “Book my vacation” - 3rd party sources as input and output - generally complex with multiple variables
Level 4: “Buy this house”, “Negotiate this contract” - Specific long-term goal, multiple 3rd party interactions with abstract motivation and feedback
Level 5: “Maximize the company profit”, “Reduce global warming” - Purpose driven, unbounded 3rd party interactions, agency and complex reasoning required
I had the pleasure to present this yesterday afternoon on WXXI Connections (https://lnkd.in/gA9CugQR), and again in the evening during a panel discussion on ChatGPT hosted by TechRochester and RocDev (https://lnkd.in/gjYDEkBE).
This year, we will see products capable of levels 1, 2, and 3. Those three levels are purely digital, and aside from the operator all integration points are done through APIs and content. For some examples, level 1 is ChatGPT, level 2 is Bing, and level 3 is AutoGPT.
Levels 4 and 5 are what I call "hard AGI" - as they require working with people aside from the operator, and doing so on a longer timeline with an overall purpose. We will likely see attempts at this technology this year, but it will not be successful.
For technology to reach a given level, it must perform as well as a person who is an expert at the task. A broken buggy approach that produces a poor result does not qualify.
Thanks for reading, and if you would like to discuss these topics or work towards a solution for your business, contact me to discuss!”