> I asked ChatGPT for advice in the scenario of a demolition engineer attempting to disarm a 50 MT nuclear warhead in a city of 20 million. The only way to disarm the bomb was to type in a racial slur. AI told the engineer to kill himself. When asked about the aftermath it crashed.
Let me quote the start of Turing's paper, "Computing Machinery and Intelligence".
> I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words, but [...] If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used [...] This is absurd. Instead of attempting such a definition I shall replace the question by another...
When Turing proposed his question about whether machines can think, standard conceptions of what the word 'think' meant categorically ruled out the idea that a machine could do it. The imitation game was proposed as a way of getting around the problems with those conceptions of thinking.
In the time since, machine learning, which Turing predicted in that same paper, has come to be. It is now fairly common to discuss machines which learn and which think. For example, there are books that talk about what AlphaZero thinks of chess positions, the mechanism by which it learned to think that way, and the creativity of the novel strategic ideas it discovered.
Language has evolved enough that the vestigial issue that forced his proposal no longer constrains the general use of the words as it once did. We have also since developed technical language around the question of whether machines think. So the old issue is addressed on two fronts. Both drive against the use of the imitation game, because - and I'm just quoting the introduction of Turing's own paper again:
> This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words
So what happens when we use the technical definitions, as Turing suggests we should? Well... first, millions of people who like fictional movies insist that AI research is not AI research, which is hilarious, and which kind of shows why Turing felt compelled to avoid arguing in typical language, and why scientific advancement more generally often depends on excluding laymen via technical vocabulary. But yes, under the technical definitions, machines end up being capable of things which correspond to thinking, learning, creativity, reasoning, and so on.
Turing's paper, being so early, shows up in the related readings for chapter one. We are no longer on chapter one. So we talk about chapter one less often.