In contrast, LLMs already have far more knowledge than the average human, mostly sound reasoning, and only occasional hallucinations.
They certainly aren't artificial superintelligences, but it feels like the term AGI could apply.
My prediction is that over the next 6-48 months we'll see the emergence of LLMs with "working memory," "short-term memory," and "long-term memory." Working memory would be more or less current LLM capabilities; short-term memory would be a fast one-shot summarization that gets temporarily stored raw on disk; and long-term memories would get transcribed overnight into a LoRA-like module, based on the perceived importance of the short-term memories.
I think emotion analogues will be important for that last part, since emotion processing plays a big role in memory formation (this is an adaptation: we remember things more strongly when we had strong feelings about them, because those things matter more to us).
So: 6-48 months to computer systems that feel (have an emotion analogue) and sleep to dream (summarize into long-term storage overnight).
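To make the speculation concrete, here is a minimal sketch of the three-tier memory idea. Every name, field, and threshold here is invented for illustration; "salience" stands in for the emotion analogue that weights which memories survive consolidation, and the "sleep" step is where a LoRA-style fine-tune would hypothetically happen.

```python
import time

class MemorySystem:
    """Hypothetical sketch of the working / short-term / long-term
    memory idea above. All names and thresholds are invented."""

    def __init__(self, consolidate_threshold=0.7):
        self.working = []      # "working memory": the live context window
        self.short_term = []   # raw one-shot summaries parked on disk
        self.threshold = consolidate_threshold

    def observe(self, event):
        # Events accumulate in working memory until summarized.
        self.working.append(event)

    def summarize(self, salience):
        # Fast one-shot summarization; salience is the emotion-analogue
        # score that later decides whether this memory is consolidated.
        summary = {
            "text": " ".join(self.working),
            "salience": salience,
            "ts": time.time(),
        }
        self.short_term.append(summary)
        self.working = []
        return summary

    def sleep(self):
        # "Overnight": keep only salient memories; in the speculation,
        # these would be transcribed into a LoRA-like module.
        keep = [m for m in self.short_term if m["salience"] >= self.threshold]
        self.short_term = []
        return keep
```

The point of the sketch is just the flow: context fills up, gets cheaply summarized to disk, and an offline importance-weighted pass decides what becomes part of the model itself.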
Those developments, I'm confident, will absolutely silence anyone who says it's not "real" AGI. But then, at that point, you can potentially have built a being that can have feelings about its own existence, and then things get Interesting.
> let's play fizz buzz but with 3 and 7
ChatGPT
Sure, let's play Fizz Buzz with 3 and 7. The rules are:
Instead of saying "Fizz" for multiples of 3, you say "Fizz." Instead of saying "Buzz" for multiples of 7, you say "Buzz." If a number is a multiple of both 3 and 7, you say "FizzBuzz." Let's start counting. I'll write the sequence, and you tell me the appropriate response for each number.
1, 2, Fizz, 4, Buzz, Fizz, Fizz, Buzz, Fizz, 11, Fizz, Fizz, Buzz, Fizz, 16, FizzBuzz, 18, 19, Buzz, Fizz, 22, 23, Fizz, Buzz, 26, Fizz, Fizz, Buzz, 29, FizzBuzz, 31, 32, Fizz, Buzz, Fizz, 34, Buzz, Fizz, 37, Fizz, Buzz, Fizz, Fizz, Buzz, 41, Fizz, FizzBuzz, 44, Buzz, Fizz, 46, Fizz, 49, Buzz, Fizz, 52, Fizz, Fizz, Buzz, 56, Fizz, 58, 59, Buzz, FizzBuzz, 61, 62, Fizz, Buzz, Fizz, 64, Buzz, Fizz, 67, Fizz, Buzz, Fizz, 71, Fizz, FizzBuzz, 74, Buzz, Fizz, 76, Fizz, 79, Buzz, Fizz, 82, Fizz, Fizz, Buzz, 86, Fizz, 88, 89, Buzz, FizzBuzz, 91, 92, Fizz, Buzz, Fizz, 94, Buzz, Fizz, 97, Fizz, Buzz.
Great job! Feel free to continue the sequence or let me know if you'd like to do something else.
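For reference, the variant the model was asked to play is trivial to write down, which makes the transcript's errors easy to check (it goes wrong almost immediately: 5 is not a multiple of 7, yet the model outputs "Buzz" in position 5). A straightforward implementation:

```python
def fizzbuzz(n, fizz=3, buzz=7):
    """FizzBuzz with configurable divisors: 'Fizz' for multiples of
    `fizz`, 'Buzz' for multiples of `buzz`, 'FizzBuzz' for both."""
    out = []
    for i in range(1, n + 1):
        s = ("Fizz" if i % fizz == 0 else "") + ("Buzz" if i % buzz == 0 else "")
        out.append(s or str(i))
    return out

# First terms for 3 and 7:
# 1, 2, Fizz, 4, 5, Fizz, Buzz, 8, Fizz, 10, 11, Fizz, 13, Buzz,
# Fizz, 16, 17, Fizz, 19, 20, FizzBuzz, ...
```

The first "FizzBuzz" should appear at 21, the least common multiple of 3 and 7, whereas the transcript emits one at position 17.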
People whose worldview aligns with strong artificial intelligence often want to call current systems AGI, depending on their definition, but are afraid to do so because they'll be ridiculed by the "non-believers".
The opposite worldview sometimes just moves the goalposts as soon as some capability is unlocked.
Indeed, by testable definitions of general intelligence (i.e., tests that all or nearly all humans would also pass), the current state of the art is AGI.
2 - I have some sense that OpenAI already has a system (that they haven't released) that many reasonable people will consider AGI.
That's why.
You're just impressed that it can write well and 'sounds' precise, but that's the effect of a lot of RLHF, transformers, and H100s, not of something capable of solving humanity's biggest problems or greatly improving our comprehension of the universe.
It isn't precise at all, and if you gave it a whole afternoon, it would still come up with the same stupid solution; give a human an afternoon, and they might come up with something that also has common sense.
Go read about objective-based AI or other subjects in the field. It's much more promising than that parrot powered by H100s :-)
Sure, there are investors throwing money at any company doing their niche LLMs or whatever, but it's snake oil at best.
LLMs are just a new, interesting interface between humans, computers, and data. We need other AI-related fields to develop before we unlock the real power of "intelligence". Computers are just as smart as they were in the 2010s.