HACKER Q&A
📣 jakartaInd

What's next for AI after LLM?


LLMs are cool, and you can make them work for specific subject areas and minimize hallucinations with RAG. But I think we have hit a local optimum (and perhaps a global optimum) as far as LLMs as a technology go. You can only add more data sources and make them perform certain operations (agents) such as parsing data, understanding context, and Q&A.
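(For context on the RAG approach mentioned above, here is a minimal sketch of the idea: retrieve the most relevant documents for a query and ground the model's answer in them. The `embed` function, the in-memory document list, and the prompt wording are illustrative placeholders, not any particular library's API.)

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: in practice this would call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

# Tiny in-memory "knowledge base" for a specific subject area.
documents = [
    "The warranty covers manufacturing defects for 24 months.",
    "Returns are accepted within 30 days with a receipt.",
]
doc_vectors = [embed(d) for d in documents]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by cosine similarity to the query and return the top k."""
    q = embed(query)
    scores = [
        float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
        for v in doc_vectors
    ]
    ranked = sorted(zip(scores, documents), reverse=True)
    return [doc for _, doc in ranked[:k]]

def build_prompt(query: str) -> str:
    """Ground the LLM's answer in retrieved text to reduce hallucination."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long is the warranty?"))
```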

My question to all the smart people here is: what comes after LLMs? Where is AI headed? Agents are cool but overhyped relative to their actual capability.

What is the next frontier we can expect to see with AI?


  👤 isaoh Accepted Answer ✓
I believe several key frontiers will emerge beyond current LLMs:

Multimodal AI Integration
- Deeper integration of vision, audio, and language understanding
- More sophisticated reasoning across different types of inputs
- Better real-world physical interaction capabilities

Neuromorphic Computing
- AI systems that more closely mimic human brain architecture
- More energy-efficient AI processing
- Better handling of uncertainty and novel situations

Hybrid AI Systems
- Combination of symbolic reasoning with neural networks
- Better causal reasoning capabilities
- More transparent decision-making processes

Embodied AI
- AI systems that learn through physical interaction
- Better understanding of physical world constraints
- More practical robotics applications

Quantum AI
- Quantum computing applications for AI
- New types of algorithms leveraging quantum properties
- Potential breakthroughs in optimization problems

FROM CURSOR :)


👤 TheBruceHimself
I'm going to guess there's just a long stagnation.

LLMs worked way better than people thought they would. Most of the experts in AI were floored by how well they worked. I think of LLMs as more of a discovery than an invention. We didn't really know what we were doing, and some people had some cute ideas about predicting responses based on modeling language, and this really useful thing kind of just fell out of that endeavor. From that initial shock, we just kept adding more to it to see how far it could go.

What's important to appreciate is that we have an extremely poor understanding of how intelligence appears to emerge from these models. Much like our own intelligence, we can explain parts and have ideas of roughly how it fits together, but on the whole, it's a bit of a mystery. We're like early bridge builders who have discovered that an arch in the design stops bridges from collapsing. We've gained a useful piece of knowledge which we can share and use to make things better, but we're missing the value of knowing why it all works as it does, and without that, it's harder to know what else we can do to improve things. We just throw more LLM tech at the problem to make it better, just like the bridge builders used to throw more arches on a bridge to make it safer.

The other thing is, LLMs just appear to function so much better than almost anything else we've come up with. Nothing really comes close. There don't appear to be fruitful new tangential things to jump onto, incorporate, or do anything with. So, basically, if LLMs peak, we're probably just going to go back and try random things in a semi-directionless manner and hope we can discover what the next step up from this is. It's not completely blind, as we can make educated guesses, but it's a slow process.

Alternatively, though, since we don't understand LLMs, we also don't really know what their limits are. There could be a good bit more growth left in them. It could be that putting 10x more resources into these models will end up with an AI that quite simply is human-level. There's nothing really to say that's not possible. There are some distinguished thinkers who are beginning to argue our own intelligence may be just a biological LLM.


👤 p1esk
Bigger LLMs, mainly for video frame prediction - that’s where we have unlimited training data.
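(To make the frame-prediction objective concrete, here is a minimal PyTorch-style sketch of training a model to predict the next video frame from previous ones. The model architecture, sizes, and random stand-in data are illustrative assumptions, not a specific codebase or the commenter's proposal.)

```python
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    """Toy model mapping a stack of past frames to a predicted next frame.
    In practice this would be a large model over frame tokens."""
    def __init__(self, frames_in: int = 4, h: int = 64, w: int = 64):
        super().__init__()
        self.h, self.w = h, w
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(frames_in * h * w, 1024),
            nn.ReLU(),
            nn.Linear(1024, h * w),
        )

    def forward(self, past_frames: torch.Tensor) -> torch.Tensor:
        # past_frames: (batch, frames_in, h, w) -> predicted frame: (batch, h, w)
        return self.net(past_frames).view(-1, self.h, self.w)

model = NextFramePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Self-supervised objective: any video clip yields (past frames, next frame) pairs,
# which is why raw video amounts to effectively unlimited training data.
for step in range(100):
    clip = torch.rand(8, 5, 64, 64)          # stand-in for a batch of real video clips
    past, target = clip[:, :4], clip[:, 4]   # first 4 frames predict the 5th
    loss = loss_fn(model(past), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```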