It is AGI.
There is no point in ranting that "oh, but there is no long-term memory, no goal-setting, no planning, and so on" — it is really easy to augment it with all of that; see Auto-GPT and similar projects. Just loop it through itself and give it an interface to the external world, and it could do anything. Incrementally improve the model, and you'll reach superhuman performance on every task.
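The "loop it through itself" idea can be sketched in a few lines. This is a toy illustration, not Auto-GPT's actual code: `fake_llm` is a hypothetical stand-in for a real model call, and the single `calculator` tool stands in for the "interface to the external world". The point is only the shape of the loop: model output is parsed, tools are run, and results are fed back into the model's context until it declares itself done.

```python
# Toy sketch of an Auto-GPT-style agent loop. All names here are
# hypothetical: `fake_llm` stands in for a real LLM API call.

def fake_llm(prompt: str) -> str:
    """Stub model: a real agent would call an LLM here."""
    if "->" in prompt:                 # a tool result is already in memory
        return "DONE:4"
    return "TOOL:calculator:2 + 2"     # otherwise, ask for a tool call

def calculator(expression: str) -> str:
    # Deliberately restricted toy evaluator (addition only).
    a, op, b = expression.split()
    return str(int(a) + int(b)) if op == "+" else "unsupported"

TOOLS = {"calculator": calculator}     # the "external world" interface

def agent_loop(goal: str, max_steps: int = 5) -> str:
    """Feed the model's own output (plus tool results) back into its prompt."""
    memory = [f"Goal: {goal}"]
    for _ in range(max_steps):
        reply = fake_llm("\n".join(memory))
        if reply.startswith("DONE:"):
            return reply.removeprefix("DONE:")
        _, tool_name, arg = reply.split(":", 2)
        result = TOOLS[tool_name](arg)                 # act on the world
        memory.append(f"{tool_name}({arg}) -> {result}")  # loop it back in
    return "gave up"

print(agent_loop("What is 2 + 2?"))  # → 4
```

Long-term memory, goal decomposition, and planning all slot into the same skeleton: they are just more structure around what goes into `memory` and how the reply is parsed, which is the post's point about how cheap these augmentations are.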
The fact that LLMs appear so intelligent to humans is really a reflection of our inability to imagine the effects of scale. We can recognize a simple linear prediction as a trivial calculation, but when language-based pattern discovery runs many layers deep and those patterns combine in nontrivial (but non-intelligent) ways, we project intelligence onto the result.
But in some scenarios they can give the appearance of being close, which might be enough to be useful for some AGI purposes, whatever those might be.