It's much more useful than Googling and gives helpful answers most of the time, although sometimes you have to refine the answers through follow-up questions. There are hallucinations, but I think it's more often right than wrong, and very often it can answer questions at an expert level.
When you search the web on your own, though, you encounter a lot of incorrect, misleading, or outdated info on many technical topics.
So how is it that ChatGPT overall seems to generate more correct and helpful answers than misleading info?
The goal of the initial pretraining phase is to make it good at predicting the next word. The rest of the training process is aimed at making it (1) helpful and (2) as correct as possible.
I think some people oversimplify things by calling LLMs "next token predictors"; that framing leaves out the tuning toward helpfulness and correctness.
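To make the "next token prediction" part concrete, here's a toy sketch (my own illustration, not how any real model is implemented): a bigram model that, given a token, predicts the token that most often followed it in its training text. Real LLMs use neural networks over enormous corpora, but the pretraining objective is the same in spirit.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" for the toy model.
text = "the cat sat on the mat and the cat slept"
tokens = text.split()

# Count how often each token follows each other token.
follows = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    # Return the token most frequently seen after `token` during "pretraining".
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" ("cat" followed "the" twice, "mat" once)
```

The later tuning stages (instruction tuning, RLHF) then shape *which* continuations the model prefers, which is where the helpfulness and correctness come from.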