HACKER Q&A
📣 logicallee

How long until AI shows general intelligence without failures?


Large AI models sometimes exhibit extraordinary ability (e.g. see my comment history), but also obvious failures.

How long do you think it will take until very large AI models show general intelligence without any apparent failures?


  👤 aquarium87 Accepted Answer ✓
Judging by the human-programmed biases inherently designed into AI, maybe never.

Humans can't agree on very basic facts; ergo, humans will never agree with AI. Just as fact-checking is weaponized, AI is weaponized to disregard the truth.

The internet is BARELY a coherent reflection of humanity as is, with all kinds of input biases to the datasets, including political and social.

ChatGPT means the internet has now crossed the Rubicon, and it will reflect humans, life, and reality less and less as the feedback loops intensify and the datasets become increasingly infected with AI output parsed from flawed human datasets.

I'm just not sure any general intelligence AI can be created when it simply doesn't stand a chance of having a neutral dataset to train on.

The pre-AI Internet is best. But even this is a falsehood: the pre-AI Internet has relics of censorship and state-sponsored propaganda, and is skewed toward young people who are inherently not world-wise or intelligent.

General purpose AI promises to be better than us, but if we can't program it with honest datasets, it will always reflect our flaws.

And I don't believe there is such a thing as an objective dataset.


👤 GianFabien
The state of the art you refer to is based on statistical analysis of text found on the internet. AFAIK no AI system actually performs logical processing of the submitted materials. When it says that 2+2=4, it's simply because that is more commonly found than 2+2=7. If you were to replace all sources that say 2+2=4 with ones that say 2+2=7, then that's how it would answer in a conversation.
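
A toy sketch of that point, with a made-up four-line corpus standing in for what the statistics of a real training set do at vastly larger scale:

    # Answer "2+2=" purely by counting which completion is most common
    # in the training text. Swap the corpus for one full of "2+2=7" and
    # the "model" will happily answer 7.
    from collections import Counter

    corpus = ["2+2=4", "2+2=4", "2+2=4", "2+2=7"]

    def answer(prompt):
        completions = Counter(line[len(prompt):] for line in corpus
                              if line.startswith(prompt))
        return completions.most_common(1)[0][0]

    print(answer("2+2="))  # "4", only because it outnumbers "7" above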

We will probably run out of available electrical power before we build a ginormous fail-proof AI.


👤 ordu
> How long do you think it will take until very large AI models show general intelligence without any apparent failures?

No one knows. People do not know how their minds work. Some parts have been discovered, but we do not know how much remains unknown.

It is the story of AI: researchers get an idea, they play with it, they find its limits, and they start looking for another idea. For example, I do love the story of "expert systems". AI researchers tried to model experts' decision-making processes with logic rules and utterly failed. The positive result of all these efforts is that we now know that a) logic is not the basis of the human mind, and b) experts themselves do not know how they reach decisions.

Not all AI adventures ended so badly, but most of them yielded some more wisdom along the lines of "intelligence is more difficult than we thought, so we need 30 more years to build AGI".

It is a search problem: we seek a path to AGI, we do not know the path in advance, and we cannot measure its length. So it is impossible to predict the future. We can make more or less credible predictions about the future of ChatGPT and similar tech, but not about what the next Thing will be or how close it will be to AGI.


👤 Jack000
When an LLM hallucinates, it’s not a failure; it’s working perfectly in its context as a language model.

Most criticism of LLMs is really criticism of the language-modelling training task. The underlying technology can be used in other ways that better match our intuitive understanding of “general intelligence”.

Unfortunately I think major research labs will be increasingly secretive as they gain traction in training large transformers as general agents.


👤 zone411
You might be interested in the crowdsourced prediction which currently stands at 2039: https://www.metaculus.com/questions/5121/date-of-artificial-....

My personal shorter-term prediction, based on my experience and extensive reading of papers in the field, is that we have a clear path forward for the next two years or so, and we'll see significant progress. However, progress will slow down significantly later on unless we get some major research breakthroughs, greater than advancements such as diffusion or transformers. But I also anticipate that major labs will publish much less in order to get a commercial advantage, slowing down the pace of research.


👤 antman
This discussion about AGI comes up again and again, but I see no discussion of when AI will be good enough to absorb most jobs. That could create societal failures, hurting even the very institutions that are developing it. So I say it might be never, not for technical reasons but because of failing institutions.

👤 therafman
Well, I thought it would be fun to ask ChatGPT the same question; it appears it knows its limitations so far:

"It is difficult to predict when AI models will achieve general intelligence, as this is a highly debated topic in the AI research community and the concept of "general Intelligence" itself is not well-defined and understood. Currently, AI models excel in specific tasks but struggle to perform tasks that humans find simple and straightforward.

It is possible that with advancements in AI research and technology, AI models will continue to improve and potentially reach human-level general intelligence in the future, but there is no consensus on a specific timeline. Additionally, it's important to consider the ethical and societal implications of creating such advanced AI systems."


👤 navjack27
Large language models do not an intelligence make. We have a very, very long time to wait.

👤 mikewarot
The problem is training. To train an AI assistant, you have to use its LLM (Large Language Model, previously trained on terabytes of text) to generate a LOT of output, which is then ranked/labeled/evaluated by humans as part of the training loop. If you want competence in a given area, I guesstimate you'll need a century or more of evaluation time from humans qualified in that area, because of the inefficiencies of current training algorithms.
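
Here's a minimal toy sketch of that loop; every name in it (toy_generate, toy_human_rank, preferences) is made up for illustration, and a real pipeline would train a reward model and fine-tune the LLM rather than bump scores in a dict:

    import random

    CANDIDATES = {"What is 2+2?": ["4", "7", "four"]}

    # Scores nudged upward whenever the "human" prefers an answer.
    preferences = {p: {a: 0.0 for a in answers}
                   for p, answers in CANDIDATES.items()}

    def toy_generate(prompt, k=2):
        """Sample k candidate outputs, biased toward preferred ones."""
        answers = CANDIDATES[prompt]
        weights = [1.0 + preferences[prompt][a] for a in answers]
        return random.choices(answers, weights=weights, k=k)

    def toy_human_rank(candidates):
        """Stand-in for the human labeller: pretend shorter answers win."""
        return sorted(candidates, key=len)

    def update(prompt, ranked):
        """Reward the top-ranked candidate; this is the human-gated step."""
        preferences[prompt][ranked[0]] += 1.0

    for step in range(100):  # each iteration costs scarce human time
        prompt = "What is 2+2?"
        ranked = toy_human_rank(toy_generate(prompt))
        update(prompt, ranked)

    print(preferences)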

If you don't have competent experts, then the tendency will be towards BS that can fool the non-expert.


👤 janalsncm
It all depends on what exactly you mean by general intelligence and how you measure it; until you pin that down, it is a poorly defined question. No offense meant to this post, since a lot of people are asking the same thing, but the most likely scenario is the most annoying one: we will continue to see milestones achieved by machines, and people will say that’s not “real” intelligence because it can’t do X.

My personal opinion is that intelligence is a mirage.


👤 PaulHoule
This book talks about that problem

https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach

and makes it clear that "truth" is not so simple that it could be added to an LLM as a module.

Logical and mathematical reasoning is rather specialized, but it is a useful feature. Understanding text, though, frequently involves setting up and solving logic problems to be sure your interpretation of the arguments is correct, all the more so if you expect the model to read about a system and then apply that system to another text. So you at least run into the NP-complete world of SMT solving, and the system must meet the requirements of 1980s symbolic AI no matter what technology is under the hood.

It's much worse than that because it is reasoning with uncertainty. If I were uncharitable I'd say ChatGPT was wrong if it said the universe is composed like this

https://en.wikipedia.org/wiki/Lambda-CDM_model

and a year later we heard otherwise. At a very high level it had best be able to explain that there are alternate views on the subject, but it has to know when to stop. If I ask "How do I get the length of a string in Python?", it is not helpful to "teach the controversy" that some of us use "sum(1 for c in s)". It has to handle contradictions, such as person A and person B believing different things, or the same person believing different things at different times, plus problems that are impossible or practically impossible to solve, so the logic is further complicated.
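
For that example question the two answers even agree on the result, which is exactly why "teaching the controversy" adds nothing:

    s = "hello"
    print(len(s))             # 5, the answer the user actually wants
    print(sum(1 for c in s))  # also 5, just needlessly roundabout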

One route links the LLM up with an "old AI" system, the way AlphaGo links up neural and search-based players.

The straight route for LLMs is to enlarge the attention capacity. Right now ChatGPT has a 4096-token (sub-word) attention window. If the task is "write two pages summarizing topic T with citations to the literature", it has to read all of the papers it cites. That could be 400,000 to 4,000,000 tokens, which one could only present to an LLM whose window was 100-1000x bigger. Maybe it can swap inputs in and out and otherwise conserve space, but I think the big weakness of LLMs in practice is that they run out of steam if the text is larger than what they are built for.
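
Back-of-the-envelope check of those figures (none of these numbers are measurements, just the rough estimates above):

    window = 4096                        # ChatGPT's sub-word attention window
    for tokens in (400_000, 4_000_000):  # tokens in the papers a survey might cite
        print(f"{tokens:>9} source tokens ~ {tokens // window}x the window")
    # prints ~97x and ~976x, i.e. the 100-1000x figure above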


👤 alexfromapex
The size of the AI model has nothing to do with general intelligence. If it is realized, it will come from some semantic difference in the way an AI teaches its own model, or something similar. I’d guess it’s still about 10 years away.

👤 stuntkite
I don’t think there’s such a thing as a thing that doesn’t fail.

👤 VincentEvans
Go to the OpenAI playground and ask it for “the last business day of this month”. This is my litmus test of the impending machine uprising.

👤 hestefisk
Until binary logic systems can become emergent and autopoietic, I don’t think it will happen any time soon.

👤 marzetti
er.. how long till humans show general intelligence without any apparent failures..?

👤 bpanon
I think by 2024 it will be arguably AI, and by 2029 it will be unarguable.

👤 nuker
It will not be ML models. And I’d bet on 100 years.

👤 Leo_Germond
I think everyone is using a different definition of "failure" here, because it is impossible to formally, unambiguously, and objectively define what a successful AGI is.

Do you consider yourself a successful AGI? I guess you do.

Does that make you always right? No.

Does that make you at least capable of staying rational in all situations? No.

Well maybe the logical framework that you (and I) are basing all of rationality on is at least coherent? What do you mean it is not[1]? Who's that Gödel guy anyway?

The issue at the heart of this definition of AGI is that it is undecidable. That is why, rather than proving that an intelligence is unambiguously successful at being general, tests are used which look at whether it succeeds "often enough" at a given task under a particular set of conditions. The Turing test is one such test, but it is not the first ever devised: the Jewish golem was not able to talk and did not get a name, so it was considered "unfinished" and as such failed the "being human-like" test. I don't want to bring unrelated parallels into this post, but the key takeaway, I think, is this one: it is ultimately fruitless to define intelligence in terms of "success", as if it were a physical quantity one could measure.

So now, to get back to your original question: chat AIs have had a large amount of success in the past year; one Google researcher who worked with them even believes some of them are sentient [2] and makes some strong arguments in that direction. AI can drive cars, play video games, hold conversations... it's less than what humans can do, but they can also fold proteins [3] and find new patterns in humongous amounts of data [4], something that was impossible to do in a reasonable amount of time before they were created. So are they successful? I would say yes. Are they perfect? Oh no, far from it, and my point is that they will never be perfect and will always fail in some regard.

[1] https://en.m.wikipedia.org/wiki/G%C3%B6del%27s_incompletenes.... [2] https://www.scientificamerican.com/article/google-engineer-c... [3] https://www.science.org/doi/10.1126/science.370.6521.1144 [4] https://theconversation.com/seti-alien-hunters-get-a-boost-a...


👤 user_named
Never

👤 blackflame7000
How long until each of us can say the same?

👤 solumunus
We are no closer to AGI than we were 20 years ago. We haven't even started on the path towards AGI yet; we don't even know what the path would look like! I'm not sure why so many people fail to see this. The only way this is wrong is if AGI is somehow an emergent property of sufficiently trained LLMs. Most people would agree that's extremely unlikely (to the point that it's not even worth considering), yet a lot of the same people seem to think that somehow "we're close!" How? Show me where.

To answer the question, my bet would be "not in any of our lifetimes".