This produces different results from just asking about the topic itself. When ChatGPT says that you answered the question correctly, you know you've got it. Sometimes I'll get answers like, "Almost, except this one part you're describing incorrectly," which is very helpful.
It's a true test of your own knowledge to ensure that you can explain it to someone else correctly, even if that someone is an LLM.
It's largely replaced Google for me as a general-purpose answer finder, for everything from basic coding and tech stuff to historical events, veterinary medicine/pet advice, homework help, tribal law, translations and cultural differences, GIS, biology, mental health, radio licensing, finance...
If an answer seems dubious, I'll double-check it the traditional ways. But that doesn't really happen all that often for me.
I also use it a lot for its natural language abilities, like asking "what do you call it when ____" or "what's that thing/company/software that does _____", which are traditionally really hard queries for keyword-based engines like Google. The LLM is so much better at this than keyword stemming.
Overall I find it a crazy helpful tool, like having a super smart personal assistant that knows a little bit about a heck of a lot. Sure, it's wrong sometimes, but it's way, way less spammy than Google (for now), and is generally good enough to provide a Wikipedia-style summary in readable English.
I don't think of it as some sort of magic. "Glorified autocomplete" is a perfectly useful tool for common enough datasets, where the training data is statistically likely to be accurate anyway. I mean, that's a lot of what human knowledge is to begin with. For every true expert who actually does their own research and replicates their results for validation, there are thousands more who will just blindly parrot a good-enough answer.
In my conversations with employers, doctors, friends, peers, and more, the LLMs are really helpful for getting up to speed and learning the basics beforehand. From there, it's a matter of validating the specifics the old-fashioned way, but saving time on the initial background is a huuuuge thing for me that would've been otherwise really hard in the Google SEO wasteland.
Put it this way: I wouldn't trust an LLM over a human expert. But I would trust it more than your typical online marketer, who has misaligned incentives and often lacks domain expertise.
Something I've found over the years is that the vast majority of stuff is easy to do and easy to learn. The hard part is finding the correct language to describe the problem or the solution you're after. ChatGPT is great for this. I can describe something roughly in layman's terms and get back the technical terms that let me research a solution 10x quicker than if I just googled my way around the interwebs blindly.
I also use it for writing the general bulk of an email. I'll run through and correct it/rewrite it a bit in my own words, but it gets the ball rolling and is great for giving ideas on structure rather than content.