Flawed Turing test and fooling humans 33% of the time

In 2013, Stuart Armstrong discussed the flaws of the Turing test:

There is a problem with the Turing test, practically and philosophically, and I would be willing to bet that the first entity to pass the test will not be conscious, or intelligent, or have whatever spark or quality the test is supposed to measure. And I hold this position while fully embracing materialism, and rejecting p-zombies or epiphenomenalism.

Imagine no-one had heard of the [Turing] test, and someone created a putative AI, designing it to, say, track rats efficiently across the city. You sit this anti-rat-AI down and give it a Turing test – and, to your astonishment, it passes. You could now conclude that it was (very likely) a genuinely conscious or intelligent entity.

But this is not the case: nearly everyone’s heard of the Turing test. So the first machines to pass will be dedicated systems, specifically designed to get through the test. Their whole setup will be constructed to maximise “passing the test”, not to “being intelligent” or whatever we want the test to measure (the fact we have difficulty stating what exactly the test should be measuring shows the difficulty here).

How the Turing test is being passed
* Slightly better chatbots
* Better chatbot character definitions, designed around how to trick people rather than how to converse
* Dedicated chatbots with minimal real intelligence (a minimal sketch of this approach follows this list)
* Slightly more gullible judges: the best chatbot fooled judges 30% of the time in 2008, versus 33% fooled in 2013
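
To make concrete how little intelligence a dedicated test-passing system needs, here is a minimal sketch, in Python, of the ELIZA-style approach such chatbots are built on: keyword pattern-matching plus canned deflections, with a persona (contest entrants have famously used a teenage non-native English speaker) to excuse odd answers. The rules, responses, and the name "Alex" below are hypothetical illustrations, not any actual contest entry.

```python
import random
import re

# Hypothetical sketch of a "dedicated chatbot with minimal real intelligence".
# It pattern-matches a few keywords and otherwise deflects (the classic
# ELIZA strategy), with a persona excuse for whatever it cannot handle.

# (regex pattern, canned responses) pairs, checked in order.
RULES = [
    (r"\b(hi|hello|hey)\b", ["Hello! What would you like to talk about?"]),
    (r"\byour name\b", ["I'm Alex. Why do you ask?"]),
    (r"\bwhy\b", ["Why do you think?", "That's a strange thing to ask me."]),
    (r"\?$", ["I'd rather hear what you think about that.",
              "Ha! Let's talk about something more interesting."]),
]

# Generic deflections used when nothing matches: evasion, not understanding.
DEFLECTIONS = [
    "Tell me more about that.",
    "Interesting. What makes you say so?",
    "Sorry, my English is not so good. Could you say it another way?",
]

def reply(message: str) -> str:
    """Return a canned response; nothing here models meaning."""
    text = message.lower().strip()
    for pattern, responses in RULES:
        if re.search(pattern, text):
            return random.choice(responses)
    return random.choice(DEFLECTIONS)

if __name__ == "__main__":
    # A short typed conversation with this loop is all a
    # Turing-style contest actually samples.
    while True:
        user = input("> ")
        if user.lower().strip() in {"quit", "exit"}:
            break
        print(reply(user))
```

A system like this "maximises passing the test" in exactly Armstrong's sense: every line exists to dodge exposure, and nothing in it understands anything.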
