Now, 65 years after Alan Turing proposed it, some AI scientists say it's time to rethink the Turing Test and design better measures to track progress in AI.
The Turing Test tasks a human evaluator with determining whether they are conversing with a human or a machine. If the machine can pass for human, it passes the test.
Last summer, a computer program with the persona of a teenage Ukrainian boy won the Loebner Prize, a competition awarding $200,000 to any person who can create a machine that passes the Turing Test, Science Magazine reported.
But Gary Marcus, a cognitive scientist at New York University, told Science that competitions like the Loebner Prize reward AI that is more akin to "parlor tricks" than to a "program [that] is genuinely intelligent."
"Almost nobody in AI is working on passing the Turing Test, except maybe as a hobby," Stuart Russell, an AI researcher at University of California, Berkeley, told Tech Insider in an interview. "The people who do work on passing the Turing Test in these various competitions, I wouldn't describe them as mainstream AI researchers."
Detractors like Marcus and Russell argue that the Turing Test measures just one aspect of intelligence. A single test for conversation neglects the vast number of tasks AI researchers have been working to improve separately, including vision, common-sense reasoning, and even physical manipulation and locomotion, according to Science Magazine.
Russell, who is also a co-author of the standard textbook "Artificial Intelligence: A Modern Approach," told Tech Insider that the Turing Test was never meant to be taken literally. It's a thought experiment meant to show that a machine's intelligence should be judged by its behavior rather than by whether it is self-aware.
"It wasn't designed as the goal of AI, it wasn't designed to create a research agenda to work towards," he said. "It was designed as a thought experiment to explain to people who were very skeptical at the time that the possibility of intelligent machines did not depend on achieving consciousness, that you could have a machine that would behave intelligently ... because it was behaving indistinguishably from a human being."
Russell isn't alone in his opinion.
Marvin Minsky, one of the founding fathers of AI science, condemned the Loebner Prize as a farce, according to Salon. Minsky called the competition "obnoxious and stupid" and offered his own money to anyone who could convince Hugh Loebner, the competition's namesake who put up his own money for the prize, to cancel it altogether.
When asked what researchers are actually working on, Russell mentioned improving AI's "reasoning, learning, decision making" capabilities.
Luckily, NYU researcher Marcus is designing a series of tests that focus on just those capabilities, according to Science. Marcus hopes the new competitions will "motivate researchers to develop machines with a deeper understanding of the world."