Activity 20
Conversations with computers—The Turing test

Age group: Middle elementary and up.
Abilities assumed: Answering general questions.
Time: About 20 minutes.
Size of group: Can be played with as few as three people, but is also suitable for the whole class.
Focus: Interviewing. Reasoning.

Summary
This activity aims to stimulate discussion on the question of whether computers can exhibit "intelligence," or are ever likely to do so in the future. Based on a pioneering computer scientist's view of how one might recognize artificial intelligence if it ever appeared, it conveys something of what is currently feasible and how easy it is to be misled by carefully-selected demonstrations of "intelligence."

From "Computer Science Unplugged" © Bell, Witten, and Fellows, 1998

Technical terms
Artificial intelligence; Turing test; natural language analysis; robot programs; story generation.

Materials
A copy of the questions in the blackline master on page 225 that each child can see (either one for each pair of children, or a copy on an overhead projector transparency), and one copy of the answers in the blackline master on page 226.

What to do
This activity takes the form of a game in which the children must try to distinguish between a human and a computer by asking questions and analyzing the answers. The game is played as follows.

There are four actors: we will call them Gina, George, Herb and Connie (the first letters of the names will help you remember their roles). The teacher coordinates proceedings. The rest of the class forms the audience. Gina and George are go-betweens; Herb and Connie will be answering questions. Herb will give a human's answers, while Connie is going to pretend to be a computer. The class's goal is to find out which of the two is pretending to be a computer and which is human. Gina and George are there to ensure fair play: they relay questions to Herb and Connie but don't let anyone else know which is which.
Herb and Connie are in separate rooms from each other and from the audience.

What happens is this. Gina takes a question from the class to Herb, and George takes the same question to Connie (although the class doesn't know who is taking messages to whom). Gina and George return with the answers. The reason for having go-betweens is to ensure that the audience doesn't see how Herb and Connie answer the questions.

Before the class begins this activity, select people to play these roles and brief them on what they should do. Gina and George must take questions from the class to Herb and Connie respectively, and return their answers to the class. It is important that they don't identify who they are dealing with, for example, by saying "She said the answer is..." Herb must give his own short, accurate, and honest answers to the questions he is asked. Connie answers the questions by looking them up on a copy of the blackline master on page 226. Where the instructions are given in italics, Connie will need to work out an answer. Gina and George should have pencil and paper, because some of the answers will be hard to remember.

1. Before playing the game, get the children's opinions on whether computers are intelligent, or if the children think that they might be one day. Ask for ideas on how you would decide whether a computer was intelligent.

2. Introduce the children to the test for intelligence in which you try to tell the difference between a human and a computer by asking questions. The computer passes the test if the class can't tell the difference reliably. Explain that Gina and George will communicate their questions to two people, one of whom will give their own (human) answers, while the other will give answers that a computer might give. Their job is to work out who is giving the computer's answers.

3. Show them the list of possible questions in the blackline master on page 225.
This can either be copied and handed out, or placed on an overhead projector.

Have them choose which question they would like to ask first. Once a question has been chosen, get them to explain why they think it will be a good question to distinguish the computer from the human. This reasoning is the most important part of the exercise, because it will force the children to think about what an intelligent person could answer that a computer could not.

Gina and George then relay the question, and return with an answer. The class should then discuss which answer is likely to be from a computer.

Repeat this for a few questions, preferably until the class is sure that they have discovered who is the computer. If they discover who is the computer quickly, the game can be continued by having Gina and George secretly toss a coin to determine if they will swap roles.

The answers that Connie is reading from are not unlike the ones that some "intelligent" computer programs can generate. Some of the answers are likely to give the computer away quickly. For example, no-one is likely to recite the square root of two to 20 decimal places, and most people (including, perhaps, the children in the class) would not be able to answer that question at all. Some questions will reveal the computer when their answers are combined. For example, the "Do you like..." answers sound plausible on their own, but when you encounter more than one it becomes apparent that a simple formula is being used to generate the answers from the questions. Some of the answers indicate that the question was misinterpreted, although the class might reason that the person could have made the mistake.

Many of the answers are very bland, but safe, and a follow-up question would probably reveal that the computer doesn't really understand the subject.
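The kind of "simple formula" the class may spot can be illustrated with a short sketch. This is a hypothetical illustration (the function name and templates are invented here, not taken from the blackline master): any "Do you like X?" question is answered by echoing X back into a fixed template, which sounds fine once but gives the game away on repetition.

```python
import re

def formula_answer(question: str) -> str:
    """Answer "Do you like X?" questions by echoing X into a fixed template."""
    match = re.match(r"do you like (.+)\?", question.strip(), re.IGNORECASE)
    if match:
        topic = match.group(1)
        return f"Yes, I like {topic}."
    return "I don't know."  # bland but safe fallback for everything else

# Each answer is plausible on its own; together they reveal the pattern.
for q in ["Do you like ice cream?", "Do you like school?", "Do you like rainy days?"]:
    print(formula_answer(q))
```

A single answer of this shape passes; three in a row make the template obvious, which is exactly the combining effect described above.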
Answering "I don't know" is reasonably safe for the computer, and might even make it seem more human—we would expect a child to answer "I don't know" to some of the questions too, such as the request for the square root of two. However, if a computer gives this answer too often, or for a very simple question, then again it would reveal its identity.

Since the goal of the computer is to make the questioners think that they are dealing with a person, some of the answers are deliberately misleading—such as the delayed and incorrect answers to the arithmetic problem. The questions and answers should provide plenty of fuel for discussion.

Figure 20.1: Are the answers from a person or a computer?

Question: Please write me a sonnet on the subject of the Forth Bridge.
Answer: Count me out on this one. I never could write poetry.

Question: Add 34957 to 70764.
Answer: (pause for about 30 seconds) 105621.

Question: Do you play chess?
Answer: Yes.

Question: My King is on the K1 square, and I have no other pieces. You have only your King on the K6 square and a Rook on the R1 square. Your move.
Answer: (after a pause of about 15 seconds) Rook to R8, checkmate.

Figure 20.2: These answers are probably from a person!

Question: In the first line of the sonnet which reads "Shall I compare thee to a summer's day," would not "a spring day" do as well or better?
Answer: It wouldn't scan.

Question: How about "a winter's day"? That would scan all right.
Answer: Yes, but nobody wants to be compared to a winter's day.

Question: Would you say Mr. Pickwick reminded you of Christmas?
Answer: In a way.

Question: Yet Christmas is a winter's day, and I don't think Mr. Pickwick would mind the comparison.
Answer: I don't think you're serious. By a winter's day one means a typical winter's day, rather than a special one like Christmas.

Variations and extensions
The game can be played with as few as three people, if Gina also takes the role of George and Connie.
Gina takes the question to Herb, notes his answer, and also notes the answer from the blackline master on page 226. She returns the two answers, using the letters A and B to identify who each answer came from.

In order to consider whether a computer could emulate a human in the interrogation, consider with the class what knowledge would be needed to answer each of the questions on page 226. The children could suggest other questions that they would have liked to ask, and should discuss the kind of answers they might expect. This will require some imagination, since it is impossible to predict how the conversation might go. By way of illustration, Figures 20.1 and 20.2 show sample conversations. The former illustrates "factual" questions that a computer might be able to answer correctly, while the latter shows just how wide-ranging the discussion might become, and demonstrates the kind of broad knowledge that one might need to call upon.

There is a computer program called "Eliza" (or sometimes "Doctor") that is widely available in several implementations in the public domain. It simulates a session with a psychotherapist, and can generate remarkably intelligent conversation using some simple rules. If you can get hold of this program, have the children use it and evaluate how "intelligent" it really is. Some sample sessions with Eliza are discussed below (see Figures 20.3 and 20.4).

What's it all about?
For centuries philosophers have argued about whether a machine could simulate human intelligence, and, conversely, whether the human brain is no more than a machine running a glorified computer program. This issue has sharply divided people. Some find the idea preposterous, insane, or even blasphemous, while others believe that artificial intelligence is inevitable and that eventually we will develop machines that are just as intelligent as us.
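Eliza's "simple rules" can themselves be sketched in a few lines. The following is a hypothetical toy version (the keywords, templates, and function names are invented here, not taken from any real Eliza implementation): it scans the user's statement for a keyword, reflects first-person words into second-person ones, and drops the remainder into a canned therapist template.

```python
# A toy Eliza-style responder: keyword rules plus pronoun reflection.
# Illustrative sketch only; real Eliza implementations use richer rule sets.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    ("i feel", "Why do you feel {rest}?"),
    ("my", "Tell me more about your {rest}."),
    ("because", "Is that the real reason?"),
]

def reflect(text: str) -> str:
    """Swap first-person words for second-person ones, word by word."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.split())

def respond(statement: str) -> str:
    """Return a canned response triggered by the first matching keyword."""
    s = statement.lower().strip(".!?")
    for keyword, template in RULES:
        if keyword in s:
            rest = s.split(keyword, 1)[1].strip()
            return template.format(rest=reflect(rest))
    return "Please go on."  # default keeps the conversation moving

print(respond("I feel sad about my exams."))
```

Even this crude version can sustain a surface impression of understanding for a few exchanges, which is the point the activity invites children to probe.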
As countless science fiction authors have pointed out, if machines do eventually surpass our own intelligence, they will themselves be able to construct even cleverer machines. Artificial Intelligence (AI) researchers have been criticized for using their lofty goals as a means of attracting research funding from governments who seek to build autonomous war machines, while the researchers themselves decry the protests as a Luddite backlash and point to the manifest benefits to society if only there was a bit more intelligence around. A more balanced view is that artificial intelligence is neither preposterous nor inevitable: while no present computer programs exhibit "intelligence" in any broad sense, the question of whether they are capable of doing so is an experimental one that has not yet been answered either way.

The AI debate hinges on a definition of intelligence. Many definitions have been proposed and debated. An interesting approach to establishing intelligence was proposed in the late 1940s by Alan Turing, an eminent British mathematician, wartime codebreaker and long-distance runner, as a kind of "thought experiment." Turing's approach was operational—rather than define intelligence, he described a situation in which a computer could demonstrate it. His scenario was similar to the activity described above, the essence being to have an interrogator interacting with both a person and a computer through a teletypewriter link (the very latest in 1940s technology!). If the interrogator could not reliably distinguish one from the other, the computer would have passed Turing's test for intelligence. The use of a teletypewriter avoided the problem of the computer being given away by physical characteristics or tone of voice. One can imagine extending the exercise so that the machine had to imitate a person in looks, sound, touch, maybe even smell too—but these physical attributes seem hardly relevant to intelligence.

Turing's original test was a little different from ours.
He proposed, as a preliminary exercise, a scenario where a man and a woman were being interrogated, and the questioner had to determine their genders. The man's goal was to convince the questioner that he was the woman, and the woman's was to convince the questioner that she was herself. Then Turing imagined—for this was only proposed as a thought experiment—a computer being substituted for one of the parties, to see if it could be just as successful at this "imitation game" as a person. We altered the setup for this classroom activity, because the kind of questions that children might ask to determine gender would probably not be appropriate, and besides, the exercise promotes sexual stereotyping—not to mention deception.

Imitating intelligence is a difficult job. If the roles were reversed and a person was trying to pass themselves off as a computer, they would certainly not be able to do so: they would be given away by their slow (and likely inaccurate) response to questions like "What is 123456 × …