My concern is that we move the goal posts so rapidly that “beating the Turing Test” is now meh.
How might we define the singularity in a meaningful manner so that we can have useful discussions about it? What metrics should we attend to?
I have the same, overlapping questions about AGI. I’d give 50% odds within 5 years, but I’d need a clear-cut way to win or lose that bet.
There is no scientific basis for assigning a percentage to the odds of developing a true human-level AGI within a particular period. About all that we can confidently state is that it's probably not impossible and that we appear to be making some progress towards the goal. But no one can honestly claim to know how far away the goal is, or to quantify our rate of forward progress.
I think that Ray Kurzweil had a pretty good practical definition of intelligence: “Intelligence is the ability to use optimally limited resources – including time – to achieve goals.” So far we can't build an AGI that fully meets that definition even at the level of a rat, and maybe not even at the level of a cockroach.