How long do you think it will take until very large AI models show general intelligence without any apparent failures?
Humans can't agree on very basic facts; ergo, humans will never agree with AI. Just as fact-checking is weaponized, AI is weaponized to disregard the truth.
The internet is BARELY a coherent reflection of humanity as is, with all kinds of input biases to the datasets, including political and social.
ChatGPT now means the internet has crossed the Rubicon, and will reflect humans and life and reality less and less and less as the feedback loops intensify and the datasets increasingly become infected with AI data parsed from flawed human data sets.
I'm just not sure any general intelligence AI can be created when it simply doesn't stand a chance of having a neutral dataset to train on.
The pre-AI Internet is best. But even this is a falsehood. The pre-AI Internet has relics of censorship and state-sponsored propaganda, and it is skewed toward young people, who are inherently not world-wise or intelligent.
General purpose AI promises to be better than us, but if we can't program it with honest datasets, it will always reflect our flaws.
And I don't believe there is such a thing as an objective dataset.
We will probably run out of available electrical power before we build a ginormous fail-proof AI.
No one knows. People do not know how their minds work. Some parts of it were discovered but we do not know how much remains unknown.
It is the story of AI. Researchers get an idea, they play with it, they find its limits, and they start looking for the next idea. For example, I do love the story of "expert systems": AI researchers tried to model experts' decision-making processes with logic rules and utterly failed. The positive result of all these efforts is that we now know a) logic is not the basis of the human mind, and b) experts themselves do not know how they reach decisions.
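For a flavour of what those systems looked like, here is a minimal, purely illustrative sketch of hand-written if-then rules with naive forward chaining; the rules and facts are made up for the example, not taken from any real system:

    # Toy "expert system": hand-written if-then rules plus naive forward chaining.
    # The rules and facts below are invented for illustration only.
    rules = [
        ({"fever", "cough"}, "flu_suspected"),
        ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
    ]
    facts = {"fever", "cough", "short_of_breath"}

    changed = True
    while changed:  # keep firing rules until nothing new can be concluded
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # now also contains 'flu_suspected' and 'refer_to_doctor'

The hard part, it turned out, was not the chaining but getting experts to hand you rules that actually capture how they decide.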
Not all AI adventures ended so badly, but most of them yielded some more wisdom along the lines of "intelligence is more difficult than we thought, so we need 30 more years to build AGI".
It is a search problem: we seek a path to AGI, we do not know the path in advance, and we cannot measure its length. So it is impossible to predict the future. We can draw more or less credible predictions about the future of ChatGPT and similar tech, but not about what the next Thing will be or how close it will be to AGI.
Most criticism of LLMs is really criticism of the language-modelling training task. The underlying technology can be used in other ways which better match our intuitive understanding of “general intelligence”.
Unfortunately I think major research labs will be increasingly secretive as they gain traction in training large transformers as general agents.
My personal shorter-term prediction, based on my experience and extensive reading of papers in the field, is that we have a clear path forward for the next two years or so, and we'll see significant progress. However, progress will slow down significantly later on unless we get some major research breakthroughs, greater than advancements such as diffusion or transformers. But I also anticipate that major labs will publish much less in order to get a commercial advantage, slowing down the pace of research.
"It is difficult to predict when AI models will achieve general intelligence, as this is a highly debated topic in the AI research community and the concept of "general Intelligence" itself is not well-defined and understood. Currently, AI models excel in specific tasks but struggle to perform tasks that humans find simple and straightforward.
It is possible that with advancements in AI research and technology, AI models will continue to improve and potentially reach human-level general intelligence in the future, but there is no consensus on a specific timeline. Additionally, it's important to consider the ethical and societal implications of creating such advanced AI systems."
If you don't have competent experts, then the tendency will be towards BS that can fool the non-expert.
My personal opinion is that intelligence is a mirage.
Gödel, Escher, Bach (https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach) makes it clear that "truth" is not so simple as to be a module added to the LLM.
Logical and mathematical reasoning is rather specialized, but it is a useful feature. Understanding text, though, frequently involves setting up and solving logic problems to be sure your interpretation of the arguments is correct; all the more so if you expect the system to read about a system and then apply that system to another text. So you at least run into the NP-complete world of SMT solving, and the system must meet the requirements of 1980s symbolic AI no matter what technology is under the hood.
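As a toy illustration of the consistency checks hiding inside "understanding text", here is a sketch using the z3 SMT solver's Python bindings (the z3-solver package); the two claims and the background rule are invented for the example:

    from z3 import Bool, Solver, Implies, Not, unsat  # pip install z3-solver

    # Two claims extracted from a hypothetical text, plus one piece of
    # background knowledge saying they cannot both hold (all invented).
    a = Bool("budget_increased")
    b = Bool("spending_was_cut")

    s = Solver()
    s.add(Implies(a, Not(b)))  # background: these readings conflict
    s.add(a, b)                # the text appears to assert both

    # unsat means this interpretation of the text is internally inconsistent,
    # so the reader (human or LLM) has to revise it.
    print(s.check() == unsat)  # True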
It's much worse than that because it is reasoning with uncertainty. If I were uncharitable I'd say ChatGPT was wrong if it said the universe is composed like this
https://en.wikipedia.org/wiki/Lambda-CDM_model
and a year later we heard otherwise. At a very high level it had best be able to explain that there are alternate views on the subject, but it has to know when to stop. If I ask "How to get the length of a string in Python?" it is not helpful to "teach the controversy" that some of us use "sum(1 for c in s)". It has to handle contradictions such as person A and person B believing different things, or the same person believing different things at different times, plus problems that are impossible or practically impossible to solve, so the logic is further complicated.
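To make that concrete, here is a trivial sketch in plain Python; both lines give the same answer, and the second is exactly the kind of "controversy" nobody asked about:

    s = "hello world"

    print(len(s))             # 11 -- the obvious answer
    print(sum(1 for c in s))  # 11 -- the needlessly clever alternative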
One route links the LLM up with an "old AI" system like the way AlphaGo links up neural and search based players.
The straight route for LLMs is to enlarge the attention capacity. Right now ChatGPT has a 4096 sub-word-token attention window. If the task is "write two pages summarizing topic T with citations to the literature", it has to read all of the papers it cites. That could be 400,000 to 4,000,000 tokens, so presenting it all to the LLM would require a window 100-1000x bigger. Maybe it can swap inputs in and out and otherwise conserve space, but I think the big weakness of LLMs in practice is that they run out of steam when the text is larger than what they were built for.
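A quick back-of-envelope check of those numbers, with assumed figures; the paper count, words per paper, and tokens per word are guesses, not measurements:

    # Assumptions: 40-400 cited papers, ~8,000 words each,
    # ~1.25 sub-word tokens per English word. All three numbers are guesses.
    window = 4096  # ChatGPT's attention window at the time of writing

    for papers in (40, 400):
        tokens = int(papers * 8000 * 1.25)
        print(f"{papers} papers -> ~{tokens:,} tokens, ~{tokens // window}x the window")

    # 40 papers  -> ~400,000 tokens, ~97x the window
    # 400 papers -> ~4,000,000 tokens, ~976x the window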
Do you consider yourself a successful AGI? I guess you do.
Does that make you always right? No.
Does that make you at least capable of staying rational in all situations? No.
Well maybe the logical framework that you (and I) are basing all of rationality on is at least coherent? What do you mean it is not[1]? Who's that Gödel guy anyway?
The issue at the heart of this definition of AGI is that it is undecidable. That is why, rather than proving that an intelligence is unambiguously successful at being general, tests are used that look at whether it succeeds "often enough" at a given task under a particular set of conditions. The Turing test is such a test, but it is not the first one ever devised: the Jewish golem was not able to talk and did not get a name, so it was considered "unfinished" and as such failed the "being human-like" test. I don't want to bring unrelated parallels into this post, but the key takeaway, I think, is this one: it is ultimately fruitless to define intelligence in terms of "success", as if it were a physical quantity one could measure.
So now, to get back to your original question: chat AIs have had a large amount of success in the past year; one Google researcher who worked with them even believes some of them are sentient [2] and makes some strong arguments in that direction. AI can drive cars, play videogames, hold conversations... it's less than what humans can do, but they can also fold proteins [3] and find new patterns in humongous amounts of data [4], something that was impossible to do in a reasonable amount of time before they were created. So are they successful? I would say yes. Are they perfect? Oh no, far from it, and my point is that they will never be perfect and will always fail in some regards.
[1] https://en.m.wikipedia.org/wiki/G%C3%B6del%27s_incompletenes....
[2] https://www.scientificamerican.com/article/google-engineer-c...
[3] https://www.science.org/doi/10.1126/science.370.6521.1144
[4] https://theconversation.com/seti-alien-hunters-get-a-boost-a...
To answer the question, my bet would be "not in any of our lifetimes".