You probably heard the news: a supercomputer has become sentient and has passed the Turing test (i.e., has managed to fool a human being into thinking he was talking to another human being)! Surely the Singularity is around the corner and humanity is either doomed or will soon become god-like.
Except, of course, that little of the above is true, and it matters even less. First, let’s get the facts straight: what actually happened was that a chatterbot (i.e., a computer script), not a computer, passed the Turing test at a competition organized at the Royal Society in London. Second, there is no reason whatsoever to think that the chatterbot in question, named “Eugene Goostman” and designed by Vladimir Veselov, is sentient, or even particularly intelligent. It’s little more than a (clever) parlor trick. Third, this was actually the second time a chatterbot passed the Turing test; the first was Cleverbot, back in 2011. Fourth, Eugene only squeaked by, technically convincing “at least 30% of the judges” (a pretty low bar) for a mere five minutes. Fifth, Veselov cheated somewhat by giving Eugene the “personality” of a 13-year-old Ukrainian boy, which insulated the chatterbot from problems caused by its poor English or its inept handling of some questions. As you can see, the whole thing was definitely hyped in the press.
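To see why a chatterbot can succeed as a “parlor trick” without anything resembling thought, consider a minimal ELIZA-style sketch in Python. The patterns and canned deflections below are my own invention for illustration; Eugene Goostman’s actual rules are not public, but scripts of this family work on the same principle: pattern matching plus deflection, with no model of meaning anywhere.

```python
import re

# An ELIZA-style chatterbot is a lookup table of patterns and canned
# deflections -- it matches surface text, it does not understand it.
# (state, symbol) pairs, templates, and deflections here are invented.
RULES = [
    (re.compile(r"\bare you (a |an )?(\w+)", re.I),
     "Why do you ask if I am {1}{2}?"),
    (re.compile(r"\bwhere\b", re.I),
     "I live in Odessa. Have you ever been to Ukraine?"),
    (re.compile(r"\?$"),
     "That is a difficult question. What do you think?"),
]
# Playing a non-native speaker gives a built-in excuse for any failure.
DEFAULT = "My English is not so good, please say it another way."

def reply(utterance: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            groups = [g or "" for g in m.groups()]
            # Echo captured words back, ELIZA-style.
            return template.format("", *groups)
    return DEFAULT
```

Five minutes of this, delivered with a plausible persona, is enough to sway a lenient judge; none of it requires intelligence in any interesting sense.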
Competitions to pass the Turing test have become fashionable entertainment for the AI crowd, and Brian Christian — who participated in one such competition as a human decoy — wrote a fascinating book about it, which provides interesting insights into why and how people do these things. But the very idea of the Turing test is becoming more and more obviously irrelevant, ironically in part precisely because of the “successes” of computer scripts like Cleverbot and Eugene.
Turing proposed his famous test back in 1950, calling it “the imitation game.” The idea stemmed from his famous work on what is now known as the Church-Turing thesis, the claim that anything that can be computed by an algorithm can be carried out by a “computer” in the very broad sense of a Turing machine. Turing was interested in the question of whether machines can think, and he was likely influenced by the then cutting-edge research approach in psychology, behaviorism, whose rejection of internal mental states as either fictional or scientifically inaccessible led psychologists for a while to study human behavior from a strictly externalist standpoint. Since the question of machine thought seemed even more daunting than the issue of how to study human thought, Turing’s choice made perfect sense at the time. This, of course, was well before many of the modern developments in computer science, philosophy of mind, neurobiology and cognitive science.
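The “very broadly defined” computer of the Church-Turing thesis really is an almost absurdly simple device: a tape, a read/write head, and a finite transition table. As a sketch, here is a toy Turing machine in Python whose transition table (my own illustration, not from Turing’s work) increments a binary number:

```python
# A Turing machine: a tape of symbols, a head position, and a finite
# transition table mapping (state, symbol) -> (new_symbol, move, new_state).
# This toy table increments a binary number, head starting on the last bit.
TABLE = {
    ("carry", "1"): ("0", -1, "carry"),  # 1 + carry = 0, propagate carry
    ("carry", "0"): ("1", -1, "done"),   # absorb the carry
    ("carry", " "): ("1", -1, "done"),   # ran off the left edge: new digit
}

def run(tape_str: str) -> str:
    tape = dict(enumerate(" " + tape_str))  # pad a blank cell on the left
    head, state = len(tape_str), "carry"    # start on least significant bit
    while state != "done":
        symbol = tape.get(head, " ")
        new_symbol, move, state = TABLE[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip()

print(run("1011"))  # binary 11; incrementing yields 1100 (decimal 12)
```

The point of the thesis is that nothing more elaborate than this is needed, in principle, to carry out any algorithmic task; whether that machinery can *think* is exactly the question the imitation game was meant to sidestep.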
Here are a number of things we should test for in order to answer Turing’s original question: can machines think? Each entry is accompanied by a standard dictionary definition, just to take a first stab at clarifying the issue:
- Intelligence: The ability to acquire and apply knowledge and skills.
- Computing power: The power to calculate.
- Self-awareness: The conscious knowledge of one’s own character, feelings, motives, and desires.
- Sentience: The ability to perceive or feel things.
- Memory: The faculty of storing and retrieving information.
So, when we talk about “AI,” do we mean intelligence (as the “I” deceptively seems to stand for), computation, self-awareness, or all of the above? Without first agreeing at least on what it is we are trying to do, we cannot possibly even conceive of a test to see whether we’ve gotten there.
When we talk about entirely artificial entities, such as computers (or computer programs), much of the commonsense information on the basis of which we can reasonably infer other minds — biological kinship, known functional complexity of specific areas of the brain, etc. — obviously doesn’t apply. This is a serious problem, and it requires an approach a lot more sophisticated than the Turing test. Indeed, it is dumbfounding that anyone can still think the Turing test is even remotely informative on the matter. We first need to clear up quite a bit of conceptual confusion, and then some really smart (in the all-of-the-above sense) human being needs to come up with a new proposal. Anyone wish to give it a shot?