I would argue that intelligence is nothing more than the ability to display behavior that would be considered intelligent in context. In that regard, we've had "artificial intelligence" for decades, and it's simply slowly improving in terms of its capabilities and fidelity with "actual human" intelligence over time.
Now if by "real AI" you mean "AI like in the sci-fi movies, AI that is fully equal to humans in every regard and probably exceeds humans in some regards"... then clearly we're not there yet, but I still see no reason in principle to think that it won't happen in time. That is, is there any particular reason to think that LLMs are the last advancement in AI that will ever take place?
I think these systems will be built over time, but will only have a marginal impact on productivity at first due to inertia. Then suddenly there will be a billion-dollar company run by 5 people with graphics cards, and the old Fortune 500 will shit the bed.
From what I've heard, the "AI" folks have yet to produce anything even close to a common crow's social savvy or ability to handle the real world.
Confusing terminology and decades of science fiction have brainwashed people into expecting it to be perfect in order to qualify as real.