With that established...
I was talking to a friend a couple of weeks ago, and recall saying something like
ChatGPT is the first thing I've encountered in a while that gives me that "we are living in the future" feeling.
Do I find it impressive? Yeah, definitely. But does that statement come with some caveats? Also "yeah, definitely."
What caveats? Well... while I find it impressive in a general sense, I'd stop short of some of the more hyperbole-filled assessments I've seen out there. I don't, for example, think "ChatGPT is AI" or "ChatGPT is going to take all of our jobs" or even "ChatGPT is going to kill Google". Regarding AGI, I'm not even sure ChatGPT represents much at all in the way of progress towards that, although I lean towards a belief that it does represent some progress towards AGI. Maybe just not as much as some commentators would have you believe.
For people who are really interested in this whole discussion, I would refer them to, among other things, the paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?[1] by Emily Bender, et al.
All of that aside, clearly it's a powerful and useful tool, and it also clearly has limitations.
It's Mark V. Shaney good. It's a blast. But it's narrowly impressive, not generally impressive.
We're only seeing the start of what these concepts can do. They're tools, but they're nowhere near being an AGI.