When using these models, I find the generated content frequently must be edited to be useful. Presumably, if it doesn't wander off topic or repeat itself too much, there was a human in the loop guiding the generation. Take it as seriously as you would any anonymous post on the Internet.
Fact-check important claims by seeking reputable sources, and treat the rest as entertainment. Even if detection were possible, the mere fact that text is generated does not mean it's incorrect.
I imagine that, given the way text is generated from token probabilities, it can in theory be detected when long stretches of text use high-probability words in sequence (i.e., unusually low perplexity). I'm not aware of any reliable tools that do this at the moment, though.
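To make the idea concrete, here is a toy sketch. Real detectors would score text with an actual language model's token probabilities (perplexity); the character-bigram model below is purely illustrative, and the corpus and thresholds are made up for the example:

```python
import math
from collections import Counter

def bigram_model(corpus):
    # Count character bigrams and unigrams in a reference corpus
    # (a toy stand-in for a real language model).
    pairs = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus)
    return pairs, unigrams

def avg_neg_log_prob(text, model, vocab_size=128):
    # Average negative log-probability per character transition,
    # with add-one smoothing so unseen bigrams get a small nonzero probability.
    # Lower score = more "predictable" under the reference model.
    pairs, unigrams = model
    total = 0.0
    for a, b in zip(text, text[1:]):
        p = (pairs[(a, b)] + 1) / (unigrams[a] + vocab_size)
        total += -math.log(p)
    return total / max(len(text) - 1, 1)

# Hypothetical reference corpus, repeated to give the counts some weight.
corpus = "the quick brown fox jumps over the lazy dog " * 50
model = bigram_model(corpus)

# Text built from high-probability sequences scores lower (more predictable)
# than character gibberish the model has never seen.
predictable = avg_neg_log_prob("the quick brown fox", model)
surprising = avg_neg_log_prob("qzxj vkwp mfjq", model)
```

The detection intuition is just this comparison scaled up: if a whole document sits consistently at the "predictable" end under a strong language model, that is (weak) evidence it was machine-generated.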
Over time this can help you build a reliable collection of sources, but for casual browsing or listening you may have to develop a qualitative filter set--in other words, in what ways are you at risk by consuming AI-generated content, and what does that imply about possible filtering steps?
So far the best solution has been to add 'reddit' to my Google searches.
I don't trust random articles and always verify. Of course this is very unfriendly and time-consuming... sort of the opposite of what the internet was supposed to be?