Supercomputer passes the Turing Test, laying the foundations for Skynet

Tests performed today at the Royal Society in London have resulted in a computer program convincing human judges, for the first time ever, that it is a living human, making it the first standalone computer program to pass the famous Turing Test. Five machines were tested to see if they could persuade users in general conversation that they were human, and one program, named 'Eugene Goostman', successfully passed.

Devised by computing pioneer Alan Turing, the test is designed to identify machines capable of exhibiting levels of intelligence currently found solely in humans.

Turing argued that if a human interrogator believes they are in communication with another human whilst actually speaking to a computer, then that computer can be said to be thinking and capable of performing as well as a human. In other words, the behaviour of the computer must be indistinguishable from that of a human completing the same task.

Since its conception in 1950, the rules of the test have remained the same. The computer simply has to persuade the human user that it is human by engaging in a text-based conversation. If 30% of the judges present agree that they feel they are talking to a human, the machine passes.

Today, the program Eugene Goostman achieved a 33% success rate and so has been catapulted into the history books as the first ever software package to pass the Turing Test. Previously, only bots embedded in games, rather than standalone programs, had passed.
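For context on the numbers, here is a minimal Python sketch of the pass criterion described above. Only the 30% threshold and the 33% result come from the article; the panel of 30 judges with 10 'human' verdicts is a hypothetical figure used purely for illustration.

# Minimal sketch of the Turing Test pass criterion described above.
# The 30% threshold is from the article; the judge counts are hypothetical.
PASS_THRESHOLD = 0.30

def turing_test_passed(human_verdicts: int, total_judges: int) -> bool:
    # Pass if the share of judges who believed they spoke to a human meets the threshold.
    return human_verdicts / total_judges >= PASS_THRESHOLD

# Hypothetical panel: 10 of 30 judges convinced, roughly the 33% reported today.
print(turing_test_passed(10, 30))   # True  (10/30 = 33% >= 30%)

Under this criterion a program can fail to convince two thirds of its judges and still pass, which is part of why the threshold attracts criticism.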

As reported by the Telegraph, the program was written by Russian Vladimir Veselov, who now lives in the US, and Ukrainian Eugene Demchenko, who now lives in Russia. The program simulates a 13-year-old schoolboy, and you can actually talk to 'Eugene' online at 'his' website. Veselov responded to the news by saying:

"It's a remarkable achievement for us and we hope it boosts interest in artificial intelligence and chatbots."

Many are skeptical of the possible consequences of machines gaining human levels of intelligence, however, citing the fantasies of films such as the Terminator series, in which machines plot to gain control of us, as possible outcomes. Experts argue over whether this is actually possible, but the worry is that anything with such intelligence might come to think the way we do, and so want dominion.

Alan Turing died in 1954 in an apparent suicide resulting from cyanide poisoning. He had been chemically castrated after his 1952 conviction for homosexual acts, then condemned almost universally.

After a lengthy campaign supported by many, the UK awarded Turing a posthumous Royal Pardon last December as a tribute to the great contributions he made to world computer science, and to his efforts during World War II at Bletchley Park to decode messages sent from German Enigma machines, aided by the electromechanical Bombe he helped design.

Today's events are likely to significantly shake up the world of artificial intelligence. By anyone's books, knowing that another entity can now exhibit intelligent behaviour indistinguishable from that of a human is certainly something interesting to consider. At least that entity can still be switched off - for now...

Source: The Telegraph | Image via Eugene Goostman


33 Comments


How would Susan Calvin rate Eugene?

Fooling 1/3 seems reasonable; fooling 2/3 would be a much bigger hurdle to overcome.

Web site down, guess it is a software program. :-)

The people who thought they were talking to another human during these tests must've been very naïve or, well, dumb... After playing with this for 2 minutes it's dead obvious it's just a computer program...

I asked several questions before the website/Eugene response time slowed to a crawl. :p

when it does come back with an answer, you get something like this:

"You think if you repeat it twice, I'll understand it better? :-) Well, let's go on though."

:p

Only 30%? That seems ridiculously low. I'd think 80-85% would be proper. Perhaps in 10 years we can achieve that.

From article:
"He had been chemically castrated after his 1952 conviction of practicing homosexuality".
Well, I'm 100% sure , it's not an artificial intelligence that did this.

after his 1952 conviction of practicing homosexuality, then condoned almost universally.

I'm pretty sure you meant "condemned". Condoned means the exact opposite of what I think you were trying to convey.

Now I understand why Skynet was so unbalanced: all its AI logic was based on that of a teenager. Skynet thought it knew everything, but really had horrible judgment. It thought it was indestructible, yet felt alone because nobody understood what it was going through. Plus, with all those hormonal issues it was natural for it to become emotionally unstable. And of course, all teenagers rebel against their parents. It is so obvious now: Skynet was a pimple-faced teenager. Well, maybe when Skynet does rise up we can use Canada's failsafe weapon, Bieber, to defuse it. :)

In a sort-of Mars Attacks manner... we start blasting Bieber at ungodly volume and every robot's head immediately explodes. Pray that our own heads don't, though.

The Turing test was developed in a different era, when there was a different perspective on computing. Convincing 33% of the judges that the computer was a 13-year-old human may satisfy the Turing test, but it is far from good enough, as the test presents too low a bar. When it gets closer to 100%, the most likely unintended result will be a loss of more human jobs to machines. These days a well-written program on powerful hardware can be made to appear human-like, sort of like IBM's Watson on Jeopardy. All they are doing is accessing a database more powerful, deeper, and wider than Turing could envision. Until machines have true intellectual capacity accessed by means other than brute force, rather than being programmed to appear to use AI, Skynet will not become self-aware. Programming tech has a very long way to go.


Have you seen this boy? Ahhhhhhhh! Run, it's a T-1000! J/k... That's a pretty cool article! Skynet gets more and more real every day, it seems. As long as it doesn't become self-aware, hopefully we should be okay.

If I'm not mistaken, the article is written by Neowin staff based on news from the originating source. Since it is an original article, they can create their own headline. Same thing when you see different newspapers use different headlines for the same subject matter.

That's pretty cool. But I don't think much has actually changed just because of that 33% border, we still have a long way to go..

Cøi said,
That's pretty cool. But I don't think much has actually changed just because of that 33% border, we still have a long way to go..

Yeah, wake me up when they break 50%. Although, I don't know what the percentage mark is for misclassification of real humans.

Perhaps Bayesian inference makes the result more important than we might intuitively believe without running the numbers.
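One way to actually run those numbers is a minimal Bayes' theorem sketch in Python. Only the 33% figure comes from the article; the even machine/human prior and the 80% rate at which real humans are believed to be human are hypothetical assumptions for illustration.

# Bayes' theorem sketch of the commenter's point; figures other than 33% are assumptions.
p_machine = 0.5                        # prior: assume half the hidden partners are machines
p_human_verdict_given_machine = 0.33   # Eugene's reported success rate
p_human_verdict_given_human = 0.80     # assumed rate at which real humans are judged human

# Law of total probability: overall chance of a "human" verdict
p_human_verdict = (p_human_verdict_given_machine * p_machine
                   + p_human_verdict_given_human * (1 - p_machine))

# Bayes' theorem: chance the partner is actually a machine given a "human" verdict
p_machine_given_human_verdict = (p_human_verdict_given_machine * p_machine) / p_human_verdict

print(round(p_machine_given_human_verdict, 2))   # ~0.29

Under these assumed rates, nearly three in ten "human" verdicts would actually be machines, so the misclassification rate for real humans matters as much as the headline 33%.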