The 'Turing test'

Discussion of anything and everything relating to chess playing software and machines.

Moderator: Ras

Pressie

The 'Turing test'

Post by Pressie »

From the link:
Can machines think? That was the question posed by the great mathematician Alan Turing. Half a century later six computers are about to converse with human interrogators in an experiment that will attempt to prove that the answer is yes.

http://www.guardian.co.uk/technology/20 ... lligenceai

Any thoughts?
james uselton

Re: The 'Turing test'

Post by james uselton »

What if someone set up this test: a $100,000 entry fee. If you lose, you forfeit your money; if you win, you collect the $100,000.

You sit at a table playing someone in another room, and you must determine whether you are playing a human or a machine. How many players would enter such a contest, and what would be the result?

Would this be considered a Turing test?
Ovyron
Posts: 4562
Joined: Tue Jul 03, 2007 4:30 am

Re: The 'Turing test'

Post by Ovyron »

After reading most of the conversation logs from the Loebner Prize and other chatterbot competitions, I can say that there's still a long way to go. I think Jabberwacky has the most potential, since it will hit on the most human-like conversation by luck once in a while.
Rolf
Posts: 6081
Joined: Fri Mar 10, 2006 11:14 pm
Location: Munster, Nuremberg, Princeton

Re: The 'Turing test'

Post by Rolf »

Pressie wrote:From the link:
Can machines think? That was the question posed by the great mathematician Alan Turing. Half a century later six computers are about to converse with human interrogators in an experiment that will attempt to prove that the answer is yes.
http://www.guardian.co.uk/technology/20 ... lligenceai
Any thoughts?
Sure, although I don't know if all thoughts will be appreciated by CC believers. I'm tempted to compare this show with the ugly DB/IBM show back in 1997, but only because we are in a computer chess forum; I could just as well discuss examples from biology, with "speaking" or "thinking" apes like Koko. You always find the same misconceptions and delusions: something is put on display and interpreted so that it corroborates your theories, yet the details of the setting quickly let you refute the claim that the setting could prove what it allegedly should prove.

Here is what a critic of the Turing show has to say:

One such philosopher is Professor AC Grayling of Birkbeck College, University of London. 'The test is misguided. Everyone thinks it's you pitting yourself against a computer and a human, but it's you pitting yourself against a computer and computer programmer. AI is an exciting subject, but the Turing test is pretty crude.'

Unfortunately the article in the Guardian only describes the setting of the show but doesn't discuss it. The same thing happened in 1997, when the IBM-paid DB team used a setting that is in itself already proof of an anti-human design, one in which a human with his best qualities is domesticated like a machine. Kasparov was of course incapable of dominating the 'entity X' in three games with White and three with Black, because he didn't know its possible depth; he knew nothing about its typical chess strengths (apart from raw calculating power as such, but again, not to what depths, which is crucial for mistakes in chess) or its typical blunders. Of course he had assumptions from history, but you can't apply your experience if you have only three shots to find things out and must then execute according to the decade-long trained routine of a super GM. By definition you can't defeat a dumb computer if you don't know where the holes are, the places where a certain depth doesn't prevent the machine from making false evaluations and decisions. Once, in the second game, Kasparov tried it, in a worse position; then there was a break of an hour, after which the machine allegedly made exactly NOT the move Kasparov had been waiting for. The often-raised question of cheating is irrelevant if the limits of the setting already prevent the usual adaptation by the superior human player, speaking in terms of 1997. In 2008 the Milov games have shown that it still depends on the length of the experiment, which is practically the obstacle if you need a top GM who is then supposed to play a different kind of chess, adapted for machines.

Everything from the 1997 show is also present in the Turing show of 2008. Here are the conditions:


The test will be carried out by human 'interrogators', each sitting at a computer with a split screen: one half will be operated by an unseen human, the other by a program. The interrogators will then begin separate, simultaneous text-based conversations with both of them on any subjects they choose. After five minutes they will be asked to judge which is which. If they get it wrong, or are not sure, the program will have fooled them. According to Warwick, a program needs only to make 30 per cent or more of the interrogators unsure of its identity to be deemed as having passed the test, based on Turing's own criteria.

Warwick said: 'You can be flippant, you can flirt, it can be on anything. I'm sure there will be philosophers who say, "OK, it's passed the test, but it doesn't understand what it's doing".'

The main point of the setting is, first of all, that the machine has a 30% win criterion. That alone says it all, namely how low the expectation is set in advance, based on experience with this kind of setting. The next big hoax is the five minutes in which you are supposed to talk simultaneously to two entities. So, even without knowing the details, one can conclude that it must be very difficult for the human interrogators to solve the problem.

But I see another hoax in the setting. Turing may have been a brilliant mathematician, but was he experienced in psychology? His test is really crude, because the human interrogators are reduced to evaluating a displayed form of communication, basically to the question of whether a PC user, in a virtual conversation consisting only of displayed written words, can determine whether his correspondent is a machine or not. And he has on average 2.5 minutes for the potential machine. And on top of that, the machine "wins" with a performance of 30%.
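
To make that criterion concrete, here is a minimal Python sketch (my own reading of the rules quoted above, not an official scoring script) of how the 30% threshold would be applied to a set of interrogator verdicts:

# Toy scoring of the pass criterion described in the article.
# A verdict is True if the interrogator was fooled (wrong or unsure
# about which side was the machine), False if the machine was identified.
def passes_turing_threshold(verdicts, threshold=0.30):
    if not verdicts:
        return False
    fooled_fraction = sum(verdicts) / len(verdicts)
    return fooled_fraction >= threshold

# Example: 12 interrogators, 4 fooled -> 33% >= 30%, so the program "passes"
# even though 8 of the 12 identified it correctly.
print(passes_turing_threshold([True] * 4 + [False] * 8))  # True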

Grayling made the main objection: you are really pitting yourself against the operators and designers of the machines, and against the setting as such, which also includes the way the human alternative partly simulated typical machine-like "speech" with a permanent recursion on the previously sent words. But that is only a small sample of human communication, and it is very difficult to discriminate in such a short time and with the unfair odds that the interrogator must always perform better than 70%. Altogether you have an almost impossible deal.
-Popper and Lakatos are good but I'm stuck on Leibowitz
gerold
Posts: 10121
Joined: Thu Mar 09, 2006 12:57 am
Location: van buren,missouri

Re: The 'Turing test'

Post by gerold »

Pressie wrote:From the link:
Can machines think? That was the question posed by the great mathematician Alan Turing. Half a century later six computers are about to converse with human interrogators in an experiment that will attempt to prove that the answer is yes.

http://www.guardian.co.uk/technology/20 ... lligenceai

Any thoughts?
No.
Pressie

Re: The 'Turing test'

Post by Pressie »

Compared to the Matrix, or even the fears of Sarah Connor, the Machines have a long way to go. But for the generation ahead, which has the capability of time travel, this may be the appropriate point to rain on technology's parade. :)

Of course the truth is, you can't put the Genie back in the bottle, whether it's a nuclear or computer concoction.
plattyaj

Re: The 'Turing test'

Post by plattyaj »

I think that people always raise the bar too high for computers. We assume that the other end will be a rational, intelligent being able to converse logically, illogically too ;), with humor, etc., etc.

But what if we changed it? What if we said the person in the other room was either a computer or Sarah Palin and you had to guess which? Hmm, not so difficult now, is it? ;)

Andy.
Pressie

Re: The 'Turing test'

Post by Pressie »

plattyaj wrote:I think that people always raise the bar too high for computers. We assume that the other end will be a rational, intelligent being able to converse logically, illogically too ;), with humor, etc., etc.

But what if we changed it? What if we said the person in the other room was either a computer or Sarah Palin and you had to guess which? Hmm, not so difficult now, is it? ;)

Andy.
I met some friends of hers on a recent trip to Tahoe. Actually she's too much of a flesh-and-blood human being, so her answers would be far more representative of a real person than the cold, deliberative, lawyeresque speak of the other side.

But... oops... that is turning the post political, which is nowhere near its original intent, right? :wink:
Bill Rogers
Posts: 3562
Joined: Thu Mar 09, 2006 3:54 am
Location: San Jose, California

Re: The 'Turing test'

Post by Bill Rogers »

Last year Ultra Hal won the Loebner Prize. I have owned a copy of this program for over 6 years now and I can attest that it sounds more human than any of the other programs out there. For one thing, almost every one of the other programs uses 'canned' answers to the questions put to them. Ultra Hal, on the other hand, takes your sentence apart and then tries to formulate an answer based on what little information it might contain on the subject. Ultra Hal also does something that most other programs don't: it learns from all your input. It learns about your likes and dislikes and slowly conforms to your preferences in all things. So far no one has ever reached a limit on what it can learn and talk about; some private owners' memory banks have exceeded 54 megabytes.
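
For illustration only, here is a tiny Python sketch of the general idea (a toy of my own, not Ultra Hal's actual method): the bot stores what you tell it, keyed by the words you used, and brings that material back up when a topic recurs.

# Toy "learning" chatterbot: remembers user sentences by keyword and
# reuses them when a topic comes up again. Purely illustrative.
import random

class ToyLearningBot:
    def __init__(self):
        self.memory = {}  # keyword -> sentences the user said containing it

    def respond(self, sentence):
        words = [w.strip(".,!?").lower() for w in sentence.split()]
        for w in words:
            if len(w) > 3:  # skip very short words
                self.memory.setdefault(w, []).append(sentence)
        # If a keyword has been seen before, answer with remembered material.
        known = [w for w in words if len(self.memory.get(w, [])) > 1]
        if known:
            topic = random.choice(known)
            return "You mentioned %s before: %r" % (topic, self.memory[topic][0])
        return "Tell me more."

bot = ToyLearningBot()
print(bot.respond("I like chess engines"))                # Tell me more.
print(bot.respond("Chess engines are getting stronger"))  # recalls the first line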
In any event the site for Ultra Hal is 'www.zabaware.com'
Bill
guyhaw

Re: The 'Turing test'

Post by guyhaw »

Well, it's going to be an interesting competition, and as I'm at the University of Reading, where it is being hosted, I'll be looking in on what happens.

Sounds like 'Ultra Hal' has an edge, but the Ultra Hal conversation was easily seen as non-human.

Of course, picking out a conversation with a computer from one with a human is harder if the human tries to behave like today's computer-conversationalists. There's a TV program where one tries to tell whether celebrities are lying or not: the trick is to sound like you are lying when you are in fact telling the truth.

g