Is there still life in the Turing Test?
Call for papers
Submissions open!

Part of the 2026 convention of the UK Society for the Study of Artificial Intelligence and the Simulation of Behaviour (AISB), 1-2 July at the University of Sussex, UK.
Amidst widespread reports of generative AI chatbots “sailing past” the Turing test – what Alan Turing (1950) himself called the Imitation Game – it is worth noting that no chatbot has yet succeeded under the original rules of the game: every claimed “pass” has loosened those rules in one way or another. In particular, Turing clearly intended both players – the man and the woman in the first version of the game he presents, the human and the software program in the second – to have access to each other’s conversations with the human judge, to which they could then respond in making their case to that judge. One can imagine that this arrangement might help the human player far more than the chatbot. He also allowed for voice communication through a third party as an alternative to typing, presumably in case this was needed to keep the conversation moving quickly enough.
Eugene Goostman is often cited as a successful “pass” (Warwick & Shah, 2016a, 2016b), and yet the two players did not have access to each other’s conversations, and participants in those trials have confirmed that there was time (within Turing’s specified five-minute window) for only a few questions, all of which had to be typed. Turing describes a judge who knows what weaknesses to look for and probes them; it is not clear that the judges at the Royal Society in London had any such training. It might be added that Goostman was portrayed as a 13-year-old Ukrainian boy with limited English, which, while not explicitly disallowed by the rules of the Imitation Game, might seem against their spirit: the human players in the first version of the game are described as an adult man and woman, apparently native speakers of English, one of whom was then to be replaced by a software program.
Turing made two specific predictions in his 1950 paper: first, that within fifty years it would be commonplace for people to talk of computers thinking; second, that within fifty years a software program would play the Imitation Game well enough that an average judge would have no more than a 70% chance of identifying it correctly after five minutes’ conversation. It has now been 75 years, and still no system has achieved that standard while staying strictly within the rules of the game. Of course, it could be claimed that the research field has largely lost interest in the Turing test (which is, after all, limited to a brief language exchange) as a way of shedding light on the nature of human or artefactual intelligence. And yet the lack of any unambiguous artefactual success at the game is striking.
This symposium invites extended abstracts and full papers that address the relevance – or irrelevance – of the Imitation Game to contemporary AI research. Why, for all that generative AI-based chatbots have convinced an increasing number of users of their intelligence or even consciousness, are there still no clear successes in the Imitation Game? Did Turing miss something critical about the nature of human intelligence or linguistic competence? Is it possible – as many researchers have claimed – that generative AI-based chatbots are more flawed in their output (in ways that a well-trained sceptic might identify) than they are commonly understood to be? What lessons might the 75-year-old Imitation Game have for a global society struggling to come to terms with the present AI boom and with how to understand that technology? What insights can Alan Turing offer to contemporary debates, and what might he – the author of the Imitation Game – make of ChatGPT 5.1 and kin?
References
Turing, A. (1950) Computing machinery and intelligence, Mind, 59, 433-460.
Warwick, K. & Shah, H. (2016a) Can machines think? A report on Turing test experiments at the Royal Society, Journal of Experimental and Theoretical Artificial Intelligence, 28(6), 989-1007.
Warwick, K. & Shah, H. (2016b) Passing the Turing test does not mean the end of humanity, Cognitive Computation, 8, 409-419.
Submissions should be in the form of extended abstracts or full papers (with preference given to full papers), formatted according to one of the following templates: [MS Word (recent)] [MS Word (older versions)] [LaTeX].
Submissions are open on OpenReview.
Deadlines
30 Dec: submissions open
15 Mar: submissions close
15 Apr: notification to authors
15 May: camera-ready copies of final abstracts/papers due, along with completed copyright forms
1-2 Jul: 2026 convention of the UK Society for the Study of Artificial Intelligence and the Simulation of Behaviour