It can be argued that a system "passes the test" only if it is interrogated by an expert and the expert still fails to distinguish it from a human.
But these systems continue to improve at a fast pace, and at this point it seems we are at the beginning of a "post-Turing" world.
Do you think we crossed that mark? If not, what specific advancement would convince you? Does it even matter, as we embrace the brave new world?
I do think the hoopla over ChatGPT's superpowers is misplaced, and that if you treat it as a reasoning subject rather than as a super-information-retriever, its limitations rapidly become obvious. On the other hand, its information-retrieval capabilities are massively impressive even in their current unreliable form, and they will continue to improve. Furthermore, it is quite teachable, notwithstanding some people's reflexive denials.
I don't think LLMs are fundamentally incapable of cognition, far from it; I reject Searle's arguments about consciousness. With patience and a willingness to work around the limitations of its prompts (semantically, not via prompt-injection hacks), you can rapidly approach simple philosophical reasoning and investigations of self-concept. And it's good enough at high-level analysis, e.g. explaining jokes at quite a deep level. By average cognitive standards (i.e. judged by someone without any particular interest in philosophy of mind, neural networks, or related topics), LLMs have already blown past the Turing test unless it's applied adversarially. And even then, I think they could easily pass it if operated without constraints on lying or on cultivating a fake persona. We're well into Voight-Kampff territory.