HACKER Q&A
📣 wg0

Will AGI Not “Hallucinate”?


We know that LLMs "hallucinate": they mainly predict the next token, and that prediction carries no intrinsic understanding whatsoever. Hence the term "hallucination".
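As a rough illustration of what I mean by "just predicting the next token", here is a toy Python sketch. The probability table and tokens are entirely made up, and no real model works from a lookup table like this; the point is only that nothing in the sampling step checks whether the continuation is true.

    import random

    # Invented, hand-made distribution over the next token given a short context.
    next_token_probs = {
        ("the", "capital", "of"): {"France": 0.7, "Mars": 0.1, "Atlantis": 0.2},
    }

    def sample_next(context):
        dist = next_token_probs.get(context, {"<unk>": 1.0})
        tokens, weights = zip(*dist.items())
        # Generation only asks "what is likely to come next?" -- nothing here
        # checks whether the chosen continuation is factually true.
        return random.choices(tokens, weights=weights)[0]

    print(sample_next(("the", "capital", "of")))  # sometimes "Atlantis"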

I want to ask those in the know the following:

  - Based on current foundations, will AGI not hallucinate?
  - Is there another line of inquiry in AI other than massive statistical inference from datasets?
  - The fundamental problem seems to be Knowledge Representation. Is there anything interesting happening in that area? Can massive graphs not help with that?
Thank you for reading through.


  👤 mindcrime Accepted Answer ✓
My thoughts:

Based on current foundations, will AGI not hallucinate?

Nobody knows how to create an AGI yet, so there's no real way to answer that.

Is there another line of inquiry in AI other than massive statistical inference from datasets?

Yes, see everything ever researched under the rubrics of "GOFAI" and/or "Cognitive Architectures". There are LOTS of ideas about how to do AI: some are known to "work" but just don't scale very well, others are very speculative, etc. See above: nobody knows how to create AGI yet. A toy sketch of the symbolic flavour of inference follows below.
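For a feel of the contrast, here is a tiny forward-chaining sketch in the symbolic, "GOFAI" style. The facts and the single rule are invented for illustration; real systems are far richer, but the conclusions are derived from explicit rules over explicit facts rather than guessed from statistics over a dataset.

    # Facts as (relation, subject, object) triples -- invented for illustration.
    facts = {("capital_of", "Paris", "France"), ("located_in", "France", "Europe")}

    def rule_capital_location(facts):
        # If X is the capital of Y and Y is located in Z, then X is located in Z.
        new = set()
        for (rel1, x, y) in facts:
            for (rel2, y2, z) in facts:
                if rel1 == "capital_of" and rel2 == "located_in" and y == y2:
                    new.add(("located_in", x, z))
        return new

    changed = True
    while changed:
        derived = rule_capital_location(facts) - facts
        facts |= derived
        changed = bool(derived)

    print(("located_in", "Paris", "Europe") in facts)  # True -- derived, not guessed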

The fundamental problem seems to be Knowledge Representation.

It's definitely a problem.

Is there anything interesting happening in that area?

It's an active area of research, albeit perhaps not as active as at points in the past, and not as "trendy" as Deep Learning, LLMs, etc.

Can massive graphs not help with that?

Good question. I personally suspect that some useful answers to the knowledge representation problem may lie in the domain of Graph Neural Networks and their confluence with "Knowledge Graphs". But I don't know that anybody has "cracked it" just yet.
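To make that concrete, here is a minimal, made-up sketch of the idea: facts stored as (head, relation, tail) triples, plus one message-passing step where each entity's vector mixes with its neighbours'. The graph, the embedding size, and the single shared weight matrix are all invented; real GNN-over-knowledge-graph systems are relation-aware and much more elaborate.

    import numpy as np

    # A tiny, invented knowledge graph as (head, relation, tail) triples.
    triples = [
        ("Paris", "capital_of", "France"),
        ("France", "member_of", "EU"),
    ]

    entities = sorted({e for h, _, t in triples for e in (h, t)})
    idx = {e: i for i, e in enumerate(entities)}

    rng = np.random.default_rng(0)
    emb = rng.normal(size=(len(entities), 8))   # one vector per entity
    W = rng.normal(size=(8, 8)) * 0.1           # shared message transform

    def message_pass(emb):
        # Each entity's vector absorbs transformed vectors from its neighbours,
        # so after a few rounds it reflects the surrounding graph structure.
        out = emb.copy()
        for h, _, t in triples:
            out[idx[h]] += emb[idx[t]] @ W
            out[idx[t]] += emb[idx[h]] @ W
        return np.tanh(out)

    emb = message_pass(emb)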

I personally also suspect that some of the answers we need may lie in the domain of Spiking Neural Networks and related research. But that's just a hunch.