Wednesday, September 21, 2022

2b. Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence

Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence. In: Epstein, Robert & Peters, Grace (Eds.) Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer


This is Turing's classical paper with every passage quote/commented to highlight what Turing said, might have meant, or should have meant. The paper was equivocal about whether the full robotic test was intended, or only the email/penpal test; whether all candidates are eligible, or only computers; and whether the criterion for passing is really total, lifelong equivalence and indistinguishability, or merely fooling enough people enough of the time. Once these uncertainties are resolved, Turing's Test remains cognitive science's rightful (and sole) empirical criterion today.

111 comments:

  1. IMPORTANT:

    Please post early in the week, not just before the next lecture, or I won't have time to reply.

    And please read all the preceding commentaries and replies before posting yours, so you don't repeat what has already been said.

  2. I found this reading very helpful to critically analyze Turing’s paper, which I initially read with little second thought. It is useful to remind us that even giants like Turing can be “dead wrong” about certain things, such as his misdescription of solipsism. Turing explains solipsism as the inability to be sure that a machine thinks without being the machine and feeling oneself thinking. However, solipsism is instead the belief that only I exist and all else in the world is merely my dream, which takes this idea further. The confusion here is with the other-minds problem, which states that we can’t know if anyone but ourselves has a mind. More generally, this walkthrough of Turing’s paper really clarified his points, especially the objections to the Turing test for intelligence.

    Replies
    1. Good points. Do you think Turing was a computationalist?

    2. My initial intuition is that Turing's work inspired computationalists, but I'm unsure whether he himself subscribed to this view. Indeed, it seems that he is concerned about whether computing machines can be intelligent, not about whether intelligence is attributable to computation. In other words, he wonders whether computation could create intelligence, not whether intelligence IS nothing more than computation.

    3. Yes, but in cogsci "intelligence" means cognitive capacity, and passing TT means indistinguishability from human performance capacity. But I think no one knows what Turing really believed, and I tend to agree with you.

    4. I definitely agree with you, Amelie, about the 2b reading being helpful for thinking critically about Turing. The example at the end, about how “if psychokinesis were genuinely possible then ordinary matter/energy would not be enough to generate a thinking mind,” is really good. I was glad to read this because it helped me resolve this part of Turing’s paper. I reread section (9), The Argument from Extrasensory Perception, a few times, and I was struggling to grasp why Turing wouldn’t just say “This is a silly idea, let’s move on,” because if a TT-passing computer just needs to do what an average person can do…I wasn’t seeing the relevance. I was glad to see that my skepticism about this concept wasn’t misplaced.

    5. I agree; it did help me understand 2a. But it also made me think about the concept of "giants" we talked about in class. Do we consider some people "giants" only as long as we agree with what they're saying, and until a better idea is suggested? Have there been people we viewed as giants in the past but whose ideas we now remember as small? Or do we have the capacity to look at their ideas within the context they lived in?

    6. Teegan, about “psychokinesis,” please read other replies in the 2a and 2b skywriting.

      Melis, what I say about Lilliputians and Giants (which I think I will shorten to “Putes” and “Brogs” (for “Brobdingnagians”)) is just “Stevan Says” and bad-proffery -- but, yes, both Putes and Brogs sometimes change their ideas, both big ideas and small ideas. Both solipsism and telepathy were small ideas of Turing’s. Alchemy was a small idea of Newton’s. But what makes Brogs Brogs (at least in science and maths) is that their big ideas turn out to be right – or at least right enough to produce some important, lasting new results (permanently reducing real uncertainty).

  3. To me, this reading most importantly clarified the goal of the Turing test as an empirical test of performance capacity. It addressed some of the questions I had about Turing’s suggestion that, for a machine to pass the Turing Test successfully, it meant that the interrogator made the wrong decision as often as when the game is played trying to differentiate between a man and a woman. Understanding real performance capacity as the ability for a system to be totally indistinguishable from that of a real human being, for a lifetime if needed, makes me better appreciate the complexity of the Turing Test and its ability to test for intelligence.
    The reading also indicated that Turing intended to, or should have intended to, make success in the Turing Test available solely to computers at the T3 level. I was wondering why the T2 level wasn’t enough to test for the ability of a machine “to do everything that we can do”.

    Replies
    1. The Turing Test is not a game, and it’s not about an interrogator. We are all Turing-testing one another every day, lifelong. If Kayla passes, then we have no better or worse reason to doubt that she is thinking and feeling than we have for one another.

      The goal of the TT is to reverse-engineer our capacity to do anything and everything a human can do. Kayla is there to fix your intuitions of what it takes to pass the TT.

      But Kayla is a T3 robot, not just a T2 computer. Searle’s argument shows that a T2 computer would not understand. He uses “Searle’s Periscope” on the normally impenetrable “other-minds” barrier. (Why does that work?)

      But a computer is not the only possible candidate for passing T2. For example, Kayla (a T3 robot) could pass T2 too. And T3 is not penetrable by Searle’s Periscope. (Why not?)

      So perhaps the fault is not with T2 as the test, but with computation (symbol-manipulation) as the means of passing T2. Maybe it would be impossible to pass T2 without being at least a T3 robot.

      So, do you think Turing was a computationalist?

    2. "Why does that work?"
      Searle's Periscope works on T2 because it allows us to put ourselves in the shoes of computers. In the Chinese room, I will be able to answer questions in Chinese correctly with the help of the rule book (program), making the Chinese speaker outside believe that I speak Chinese. But I'm merely executing algorithms according to the program that's given. And obviously, I don't feel that I understand Chinese, because I'm just following instructions. By the same logic, a machine that has passed T2 can converse like a human, but it doesn't mean that it understands languages.
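
      (A minimal illustrative sketch, not from the reading: the "rule book" below is a toy Python lookup table, nothing remotely close to what passing T2 would actually require. It just makes the point concrete that executing the rules is pure symbol manipulation; no step requires knowing what the symbols mean.)

      ```python
      # Toy "rule book": Chinese input strings mapped to canned Chinese replies.
      # Whoever (or whatever) executes this only matches and copies symbol strings.
      RULE_BOOK = {
          "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
          "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
      }

      def searle_in_the_room(symbols: str) -> str:
          """Return whatever output the rule book dictates for the input symbols."""
          return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

      print(searle_in_the_room("你好吗？"))  # fluent-looking reply, no understanding
      ```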

      "And T3 is not penetrable by Searle's Periscope. (Why not?)"
      Thinking is much more than delivering indistinguishable verbal performance. Searle's Periscope works for T2 because the only criterion under evaluation is the response given by the person/computer in the Chinese room. A T3-passer has sensorimotor capacities, meaning it must be a physical robot (hardware). And the software (i.e., the rule book in the Chinese room) is independent of the hardware, in the same sense that you can download and play Candy Crush on smartphones powered by either Android or iOS. So, if the T3-passer, Kayla, is in the Chinese room, Searle's Periscope would fail, because now I'm no longer putting myself in the shoes of a program (rule book) as per the Chinese room. And if I want to know whether Kayla actually understands Chinese, I would have to "become" Kayla, which is impossible.

      "Do you think Turing was a computationalist?"
      From what I understand, deciding if Turing was a computationalist depends on whether or not you believe that intelligence is reducible to empirically observable performance. If yes, then Turing was a computationalist. If not, that is to say that intelligence is much more than just being indistinguishable from real humans. If the latter is what I prefer, it inevitably implies that there are things about what intelligence is that we just can't know.

    3. Yucen, I wanted to ask about your last point. I am a little confused when you say that deciding whether Turing was a computationalist depends on whether "you" believe that intelligence is reducible to empirically observable performance. Does it not depend on whether Turing thought this, rather than on our own opinion on the subject? Also, I may have misunderstood because I thought that reducing intelligence to empirically observable performance was the point of Turing's test. Thanks for letting me know what I missed!

    4. Hi Amélie! Thanks for the question. Here is what I think but take it with many grains of salt lol

      A machine that passes TT is said to be intelligent because its performance capacities are indistinguishable from those of humans. But the indistinguishability here is what intelligence does, not what it is. Computationalism is the view that cognition and consciousness "are" a form of computation. It can very likely be the case that cognition involves computation, but cognition cannot be all just computation (e.g., the T3-passer, Kayla). If a TT-passer can be called "intelligent," then we are just inferring a machine is intelligent by what it does. But we are still just circling the question, "what is intelligence?" I think reducing intelligence to empirically observable performance was not the point but the premise of TT (of course, I know I can be very wrong on this lol). In other words, Turing wonders whether computation could create what we "think" intelligence is.

      So, going back to the question (was Turing a computationalist?), if you subscribe to the view of computationalism that cognition is all about computation and that it is empirically observable (as performance capacities), then what Turing did (TT) was computationalist in nature. However, if you think computationalism does not fully address what cognition is and that there is more to it than just computation, then calling Turing a computationalist is a bit misleading. Let me elaborate: there is no way to know whether Kayla (T3) is a robot unless you cut her open. She is indistinguishable from humans for life. And as mentioned in Harnad's paper, any engineered device able to pass the Turing Test must be able to deliver T3 performance, not just T2 (13). And T3 means intelligence is more than just computation (since it involves sensorimotor capacities). By this logic, do you think maybe Turing might've contradicted himself?

    5. From what I understand, Searle's Periscope can breach the OMP because the thought experiment shows the ability of computers to produce answers solely by symbol manipulation (computation), as we know computers do, according to certain formal rules, without actually understanding what they are doing (using Chinese in this example).

      I believe that a T3 robot is not concerned by Searle's Periscope because the Periscope is applicable only in cases where the implementation of the system can be done by the person using the system: it is the only case where the person becomes the system and can confirm or deny whether there is any understanding of the meaning of what it is doing. This isn’t the case with a T3 robot, as it entails more than just computation (it also involves indistinguishability in performance capacity). There is no way to become the T3 robot, therefore no way to infer its capacity to understand what it is doing.

      Concerning Turing and computationalism, I would tend to say that he was a computationalist (or what we would call one nowadays). I believe that this is shown particularly well by the shortcomings of his paper highlighted in the 2b reading. His criterion that a T2 computer could pass the Turing Test, and could therefore be considered as having intelligence, seems to show that he considered computation/symbol manipulation the fundamental (and sufficient) component of intelligence. He therefore seemed to believe that what the brain is doing is computation, and only that (but I might be missing something here, please correct me if you see this question from another angle!)

    6. Yucen, T3 cannot be passed by computation alone because robotic capacity (sensorimotor capacity, seeing and recognizing and manipulating things) cannot be done by just computation. The T3 tests more of our capacities than just the verbal ones. But the verbal ones depend on T3 capacities too (which is why “Stevan Says” that not even T2 can be passed by just computation).

      Searle’s Periscope is the hardware-independence of computation; that’s why Searle can mind-read T2 by himself becoming the hardware that executes the T2-passing algorithm (and being able to see that he does not understand Chinese). He thereby also shows that feeling is part of understanding words, and that he does not have that understanding when he manipulates the symbols. Missing also is grounding, the connection between words and the things they refer to. For that you need (at least) sensorimotor robotic capacity. But Searle cannot “become” Kayla the way he can become the TT-passing computer. Sensorimotor function is not implementation-independent (although Helen Keller shows that grounding does not depend on any particular sense: it can be done even if you are blind and deaf – but not with no sensorimotor capacity at all).

      Introspection cannot tell us much, but it can tell us what it feels like to think. And Searle shows that executing a TT-passing program does not produce thinking (cognition: forget the weasel-word “intelligence”). Ask Kayla whether she thinks and what it feels like. And ask anyone else. Any difference? (PS: T3 is not, and cannot be, just computation.)

      Turing did not contradict himself (or the TT) if he was not a computationalist. (Aside: there’s also T4: what’s that?)

    7. Amélie and Mathilda, you are both right. T3 is a TT too. According to computationalism, computation is enough to produce cognition. But Searle shows it cannot even produce T2. If Turing is not a computationalist, then he would accept T3, which is a TT, but not just a computational one.

      But Mathilda, although Searle refutes Turing if Turing was a computationalist who believed computation alone could pass T2, did Turing believe that? He used T2 as his example of a TT, just as he used the he/she “imitation game” to first broach the TT, but any TT calls for complete indistinguishability in doing-capacity. And we have far more doing-capacities than just verbal ones. (That’s why “Stevan Says” T3 capacity is part of what’s needed to pass T2. And a Pute like me cannot believe that was obvious to a Brog like Turing!) [In mathematics it’s a common complaint that the Brogs leave gaps in their proofs because the missing parts are obvious to them.]

  4. This reading helped remind me to question and analyze the content that I read, rather than taking it at face value. It emphasized to me that no person should be exempt from criticism or feedback, regardless of their level of influence or innovation. Correcting and building off of previous work is an essential part of the process of reaching new ideas.

    The clarification of the levels of hierarchy of Turing tests and their comparison with Turing’s writing helped me to better understand the exact questions that Turing was attempting to answer in his paper. I had originally interpreted that Turing was only intending to achieve T2, but it is argued that he also meant to include T3. This had confused me when I read Turing’s paper, as he heavily emphasized verbal indistinguishability. This is clarified by Harnad: “He merely used verbal performance as his intuition-priming example, without meaning to imply that all "thinking" is verbal and only verbal performance capacity is relevant.”

    Replies
    1. So do you think Turing was a computationalist? Why? Or Why not?

    2. I totally agree! Reading the first text and then the contrast with this one not only helped me understand more but also made me think about how these giants become icons when their own work isn't perfect or immune to criticism either.

      I really enjoyed the criticism of the terminology of the "Imitation Game," which not only addresses its social implications but also clarifies that the test is really about reverse bioengineering.

    3. Thanks Helen, but skywritings are supposed to show you’ve understood the reading rather than that you enjoyed it! I have to evaluate the skywritings to rate how your understanding is progressing. Please give me something to go on!

    4. It seems to me that Turing is not evaluating whether cognition is computation, but rather trying to determine if computation can be used to create something indistinguishable from human cognition. Many aspects of his work and theories seem to align with computationalism, but I do not think this is the idea that he was ultimately trying to convey.

    5. “It seems to me that Turing is not evaluating whether cognition is computation, but rather trying to determine if computation can be used to create something indistinguishable from human cognition.”

      But why would he want to do that?

  5. I am confused about the concept of T5 - indistinguishability down to the last molecule. It seems like it almost wouldn’t be a Turing Test anymore. It is not looking at a non-human machine and comparing it to a human; it’s comparing a human machine (one that has come into existence through a non-typical process, like in a lab rather than being born to human parents) to another human machine. It seems like T5 is just asking, “if we make a human, will it act like a human” which isn’t asking anything.

    Perhaps I am misunderstanding T5. By my current understanding, though, it also seems like the concept of T5 isn’t useful because, to make a T5 machine, it seems we would already need the knowledge we are trying to acquire.

    Replies
    1. I think you pose an interesting question. I was thinking about this as well. If in a hypothetical world, one were able to create an organism that could pass T5, then it could be worth considering (possibly?). Though indeed if identical down to the molecule, then behaviour, function, physiology should be identical to a human.
      Turing does reject it from possibility (rightly so) and also wishes, in his test, to “exclude from the machines men born in the usual manner.”

      T5, I feel, to a certain extent shows that there cannot be an infinity of Turing tests, demonstrating a finite limit to the thought experiment.

      I have a small question: is there a T1? I am just curious why we go from t0 to T2.

    2. I also thought a lot about Turing's hierarchy, and about what he believed most plausible and implausible for a synthesized product or machine to “imitate.” A machine could easily perform tasks, perform speech, and potentially perform sensorimotor actions indistinguishable from humans. However, the two higher levels, T4 and T5, start to refer to the deeper, more unattainable qualia of human consciousness: internal structure/function and physical or exterior structure/function. I’d contend that the physical structure is more basic and more accessible than the internal structure. As I see it, the internal structure is what we call the "hard problem" of explaining how and why we feel, a phenomenon we to this day cannot explain. I agree with the statement that “knowing cannot be just computation,” since human brains are not finite and are not bound to any rules. A computer can manipulate symbols and show results all you want, but none of them are capable of proving that their behavior springs from emotions felt. My favorite argument, though, and one that deserves more credit, is Ada Lovelace’s: that machines can only do what we program them to do, and cannot create. I feel this is central to distinguishing between a machine and a brain. It may be human-centric of me to think, but I do believe humans are able to originate new things whereas machines don’t think outside the box. Can anyone clarify why this might be wrong?

    3. Yes, T5 is of no special interest. That’s why I only talk about T2, T3 and T4. T5 is only mentioned to stress that T4 could (in principle) still be synthetic rather than biological. (Cloning a T5 would not be Turing-testing though, as Turing notes, because T5 would not be reverse-engineered, hence we could not explain how and why it can do all we can do, any more than we could with a real brain.)

      (Who knows, though: maybe there is a way to reverse-engineer T5 first, through computational simulations, and once those work, the real T5 body and brain could be built by a “3-D printer.” But this is more sci-fi than cog-sci, so forget about T5. -- But do think about which of T2 – T4 is the right level for cogsci, and why.)

      There is no T1: anything below a T2 is just a toy, t1, producing only an arbitrary part of what humans can do, hence not indistinguishable.

      Tess:

      Please read the other comments and especially the replies in all four readings so far. The Turing hierarchy is an observability hierarchy: what neurons inside are doing is observable, so it’s something that can be tested too, in T4. But will it help? (Week 4.)

      But inside the brain does not mean inside the mind. So T4 is not necessarily closer to feeling, the other-minds problem, or the hard problem.

      If you think thinking cannot be just computation, you need reasons. Searle has one (the Chinese Room Argument). And the Symbol Grounding Problem is another, suggesting that you need at least T3 grounding (for which a computer, computing, is not enough: why/how not?).

      You like Ada Lovelace’s objection, but what about Turing’s reply? What was it?

    4. Turing's reply is that even we, as causal systems, are governed by rules, but that doesn't mean that we cannot create anything novel. In the revision in bold after this argument, it says that mathematicians and philosophers tend to assume that as soon as a fact is presented to a mind, all of its consequences spring into view along with it. I do agree that this is never true, since there is always uncertainty and unpredictability. However, what we do have is our reactions and emotions facing these facts and the consequences of these facts. This allows us to construct and originate new ideas spontaneously, to search for new ideas that are useful to the context. Can computer systems really be programmed to be unpredictable? What kinds of thoughtful, relevant and/or useful creations could such a computer come up with?

    5. This is discussed in other replies (q.v.)

  6. The "Argument from Continuity in the Nervous System" was particularly interesting to me in view of the nature of some 'top-down' psychosomatic pathologies. While I agree that

    "Any dynamical causal system is eligible [for modelling consciousness], as long as it delivers the peformance capacity"

    in testing our cognitive abilities, I find it very difficult to imagine that a machine could produce the qualitative states that are generally corequisite with, or possibly causal of, physical symptoms of something like hypochondria. As a pathology often caused by social and environmental factors (among many others), it seems to back this argument up quite well; it indicates continuity of the nervous system not just with the subject's surroundings, but equally with a particular subjective qualitative mental state (i.e. anxiety) that radiates downwards into physiology such as the nervous system.

    Even if abnormal mental states are

    "hardly the one on which to build one's attempts to generate positive performance capacity"

    psychosomatic illnesses seem to be a common response to environmental pressures and threats – an otherwise healthy mental and physical system's response to situational factors. This makes me wonder: should a cognitive modelling system that seeks true accuracy not try to replicate pathological responses?

    Replies
    1. There’s no continuity within the nervous system: neurons, action potentials, molecules, and even electric charge are discrete, not to mention quantum mechanics.

      And apart from eating and voiding, there’s no continuity between organisms and their environments, not even with jellyfish or amoeba.

      So does Kayla really need to get sick to prove she’s thinking and feeling?

    2. Asking if machines need to have pathological responses to prove they are thinking is very interesting. A T3 or T4 machine with sensorimotor abilities needs to learn to respond to its environment as humans do, so would keeping it in a lab 24/7 generate a response similar to the anxiety, claustrophobia or restlessness a human would most likely feel from being cooped up? Or can the T-machine be selective about which sensorimotor stimuli it observes, to strictly deliver the required performance capacity? I believe that Kayla must be able to mimic fluctuations in mental and physical well-being to perpetually “fool” the interrogator, and as Turing says, this should be possible with enough computing power.

    3. We Turing-test people every day. Do they not pass until we put them in a lab 24/7 and stress or damage them? The task for the class was to decide whether Kayla passes T3, and if so, would we kick her. Are you saying kicking her is part of the TT itself?

  7. The annotation game helped clarify some of Turing’s views from his original paper, including that the Turing Test is also about discovering which type of machine we, as humans, are, by generating our performance capacity in ways that are understandable to us (or the engineers) who designed the machine. The fatal flaw mentioned in T2, associated with the ability to perceive and comment on images inserted in an email exchange, was quite enlightening for distinguishing between T2 and T3; however, did Turing himself think of this difference, or did he believe a T2 Turing machine was enough to pass the Turing Test? I also appreciated the view on the Other-Minds problem, which stated that we do not worry about questioning whether others have minds since their behaviour is relatively the same as ours (from a functional perspective). Does this suggest that asking if computers think is meaningless so long as it does not directly appear to threaten us?

    Replies
    1. Cogsci is not asking whether computers can think; it’s asking how humans can think: It’s trying to reverse-engineer all the things humans can do. And computationalists are testing whether computation alone can do all that. And the TT just says: if you can’t tell them apart, that’s the best you can do.

  8. The described difference between artificial intelligence and cognitive modelling made me think about the future of cognitive science as a field. As Dr. Harnad says, "the goal of AI is merely to generate a useful performance tool while cognitive modelling is to explain how human cognition is generated." It is interesting to compare AI programs and the brain on the basis of their goals and objectives. When a computer is programmed, the creator has a specific set of goals in mind that they then code into instructions. It is difficult to define the general goal of the brain, except in the specific tasks we attempt to complete. On a large scale, our brains are evidently programmed just to survive life on earth. But when we consider things like emotions or sublime experiences, there does not seem to be a clear evolutionary reason for their existence. In fact, they often create obstacles to our life-preserving functions. Once again, we are asking ourselves "how and why do we think and feel?"

    Replies
    1. Cogsci is just trying to solve the “easy problem” first: What’s that?

  9. I thoroughly enjoyed this reading; it gave me a better understanding of what Turing was getting at. In particular, I found the part about other minds most insightful. “Stevan says” there is no other-minds problem with T2: T2 candidates have no ability to process sensorimotor information, and are just simulating it in symbolic code. Therefore, there is no reason to believe they have a mind like ours. However, a T3 candidate is able to actually process real sensorimotor information, like humans do, hence raising the question of its ‘mind’, which we of course cannot see. We are not here to ponder the mind or feelings of a robot, but I think the important part here is that this is actually an argument against computationalism. If a computer is just dealing with symbols, and humans deal with real sensorimotor information, then there is no reason to assume that computationalism can ever be fully equivalent to cognition, hence no other-minds problem.
    Stevan raised the question in a few other replies of “Was Turing a computationalist?” If we take him as advocating T3 (as the paper argues he might have intended), then no, I don't think he was. Rather, I think his goal in the 1950 paper was just to show what computers are capable of, not to argue that they can fully explain cognition. But of course, that is just what “Sophie Says”; I have no way of knowing what Turing actually thought!

    Replies
    1. What is "Searle's Periscope" and how is it related to the other-minds problem, computationalism, and the difference between T2 and T3?

  10. As a brief aside (I will post my main comment later), I'd like to make a suggestion concerning the definition of "machine", which, I think, improves upon Harnad's definition, and thus may prove useful for future discussion. My proposal, in short, is to take 'machine' as a synonym for 'constructor', as defined by constructor theory (https://www.constructortheory.org/):

    "A constructor is an object that can perform the task and stays the same after doing it. It does not have any net change at the end of the process in its ability to cause the transformation: it ends up back capable of doing the task over and over again. That’s what performing the task reliably means."

    This definition does a better job at capturing our intuition of what a machine is, while also doing away with the inherent vagueness in the notion of "causal systems": anything that is governed by the laws of physics can be said to be a "causal system", but only a tiny subset of these are constructors. A computer is a constructor, and so are people, steam engines, cells, and catalysts. But free-floating molecules are not, nor are waterfalls, the universe itself, and, more broadly, the substrates upon which a constructor operates – excluding the case where a constructor operates upon another constructor, that is.

    Replies
      Thoughtful suggestion (related to attempts to define “autonomous systems”), but I’m not sure it’s necessary, or even that it makes a difference. A molecule fails the TT anyway. And the objective of reverse-engineering (which is broader than Turing-testing) is to figure out how things work, causally: molecules and constellations and gravity can be computer-simulated too; that’s the Strong Church-Turing Thesis [what’s that?].

      Please do the weekly skyreadings, but keep your skywriting per reading to around 100 words each. There are over 50 of you and only one of me, and only 7 days in a week…

      The midterm and final exam will give you a chance to put it all together and show you’ve understood. (But there are length limits there too!)

  11. Perhaps I should save this point for when we discuss Searle's Chinese Room thought experiment, but I could not help wondering if the "Turing hierarchy" described in reading 2b was influenced by, and perhaps intended as a reply to, Searle's critique of the Turing Test.

    If I understood correctly, Harnad believes that computation alone is not enough to pass T2, let alone T3; moreover, he states that "T3 is the level of test that Turing intended (or should have)," meaning that a machine that passes T3 would be capable of generating human-like performance capacity (intelligence?).

    But would such a machine possess intentionality in Searle's sense of the term? Would it be capable of understanding, and of having other cognitive states with semantic content? I tend to think it would not. I don't see how adding analog, sensorimotor mechanisms to a computer (to build, say, a robot) would get us any closer to closing the gap between syntax and semantics. None of the sensorimotor add-ons 'understand' anything, nor (if Searle is right) does the computer to begin with; thus, the "dynamical system" resulting from their composition cannot 'understand' either. This is puzzling, because I can envisage the same argument being applied even to a machine that passes T5: none of the neurons in the brain can be said to 'understand'; they are just machines meaninglessly operating on electrical inputs/outputs according to genetic instruction. How can a brain, which is simply a collection of neurons, understand what the electrical-chemical activity going on inside it is about? The issue, as I see it, is even deeper than Searle imagined: it's not only that computation appears to be insufficient to explain intentionality, but that no explanation whatsoever (or at least, no mode of explanation we're currently aware of) appears to be satisfactory.

    Replies
    1. The “Turing Hierarchy” is a hierarchy of properties you can observe. It places T-Testing in the general framework of reverse-engineering, empirical science, and causal explanation.

      “Intentionality” is a weasel-word. “Understanding” is clear enough. If I say “The cat is on the mat” you understand me if you have the ability to recognize and interact with cats and mats in the world [T3 sensorimotor grounding], as any of us can, including describing and discussing them in words [T2 verbal capacity], AND if you have the feeling that monolingual English speakers have when they hear or read “The cat is on the mat” but not when they hear or read “猫在垫子上”.

      If you ever find yourself in a state of perplexity or scepticism that seems to imply that nothing can be explained or understood or reverse-engineered, you are probably the victim of a philosophical pseudo-problem – like Zeno’s paradox, according to which it’s impossible to walk across the room – and you are better off shaking your head, blinking, walking across the room, and forgetting about it.

      Cogsci is not philosophy, even if it sometimes sounds like it.

    2. I agree with the professor's argument on whether a T3-passer understands what it's doing. For humans, we know someone understands us if they react to us in the way we intended. It could be a wave of goodbye, a helping hand when hearing the word "help," or more complex activities such as learning. A T3-passer should be able to perform all the above activities and more with its human-comparable sensorimotor abilities, so we cannot say it doesn't understand what it's doing.

      What got me thinking from here is the feelings we convey in our daily activities. If a T3-passer is indistinguishable from us, it should react as humans do. For example, a T3-passer would cry at a funeral and share the feeling of sorrow with the crowd. Although we can't say it is genuinely sad, it's interesting that we cannot tell whether it's only showing signs of sorrow, not feeling it. Again it falls into the old Other-Minds Problem, and the role feeling plays in our cognition. As said in other comments, feeling does look superfluous in this case, since we do not need to understand it to solve the EP.

  12. Initially, when I did the reading I was very confused about what the distinguishing factors between the T3 and T2 models would be, but after reading through the comments I do have a grasp on the subject. I am still somewhat confused about how to distinguish between the T4 and T3 models; the reading says the boundary between T4 and T3 is “fuzzy”. I really enjoyed this reading because it provided more clarity about Turing’s work.

    I found the explanation of the different goals of artificial intelligence (AI) and cognitive modelling (CM) really fascinating: “[A]rtificial intelligence (AI), whose goal is merely to generate a useful performance tool” while cognitive modelling’s goal “is to explain how human cognition is generated”. This explanation helped me better understand the design of the Turing Test and what limits it from being considered a T4 model. If I understood this correctly, the T4 model is supposed to achieve cognitive modelling.

    Replies
    1. T2 calls for verbal capacity indistinguishable from that of any normal human’s.

      T3 calls for verbal AND sensorimotor (robotic) capacity indistinguishable from any normal human’s. Kayla can not only talk to you about cats on mats (or anything that can be talked about), but she can recognize cats and mats, and do with them what needs to be done (love and care for the cat, and keep the mat nice and clean).

      T4 calls for verbal AND sensorimotor (robotic) capacity indistinguishable from any normal human’s -- AND, inside their heads, neural activity indistinguishable from any normal human’s.

      [Kayla might not be a T4 -- or what she has in her head might be a synthetic version of what we have in our heads (in which case she would pass only T4 and not T5).]

      One of the questions to think about in this course is whether T4 is necessary, or T3 is enough. Often an exam question (such as “was Turing a computationalist”) is “which is the right T-test level, and why?” There are interesting things to reply that would justify any answer and demonstrate that you have understood the course.

    2. In response to Professor Harnad's question above ("whether T4 is necessary, or T3 is enough"), I believe we can only solve the easy problem (how and why humans do what we do) with T4. With any lower-level version of the Turing Test, we cannot explain the causality behind how and why we do what we do since the internal structure of any T3- or T2-passing machine differs from that of a normal human. Therefore, I cannot see any possible way of extrapolating causal explanations of cognition in a T2-passing robot to the human mind.

      Answering this question made me wonder: is it possible to solve the easy problem without simultaneously solving the hard problem (how and why organisms feel)? I ask this because surely emotion is what explains so much of how and why we do what we do? Are the two problems' solutions not inextricable?

    3. It’s clear that passing T4 is required to explain the brain. But is that a reason T3 is not enough to reverse-engineer the brain’s cognitive capacities? Why?

      To solve the easy problem (EP) is to reverse-engineer how and why organisms can DO what they can do. To solve the hard problem (HP) is to reverse-engineer how and why organisms can FEEL. If you could explain how and why the EP cannot be solved without also solving the HP, you would have a solution to the HP. Without that, FEELING looks superfluous for solving the EP (which is also why the HP is hard). What do you think the causal role of FEELING is?

      (And even if T3 or T4 FEEL, how would you know it? That’s not the HP but the Other-Minds Problem [OMP]…)

    4. @Stevan Harnad: This reminds me of a point that you mentioned in your paper: what is the line between the physical and the intellectual capacities of humans? Obviously, this concerns T3 and T4. Is accurate modelling of human behaviour possible without a human body? The thinking faculty of humans is mainly the brain, a physical organ, and it and other organs influence each other constantly: the nausea one feels when one sees something uncomfortable, the 'gut feeling' when 'there is a butterfly in one's stomach,' and the dizziness one feels when one spins. The interplay of vegetative functions and cognitive functions plays a role here too. At the same time, humans' thinking induces physical behaviour in many places, and there are complex mechanisms in that. If the modelling of the body is neglected, so that there is no model of the human body, then when a doctor sees that there is pain in an incorrect area when, for example, the AI 'thinks too much' (as humans do), or when a doctor sees that the AI blushes (of course, described in verbal expression) in a wrong place on the face when the AI 'meets someone attractive,' the doctor will obviously regard the AI as non-human. This must be relevant, as 'indistinguishability' in T2, T3, T4, and T5 is defined as indistinguishability at absolutely all times, without exception. So, I think that in order to model human thinking properly, so that the machine can pass even T2, a model of the human body has to be developed when modelling the human thinking faculty with the highest precision.

    5. Cognition needs a body, but not because of that. Doctors are doing clinical medicine, not Turing-testing or cogsci.

    6. I think T3 is the ideal level because a T3 candidate has sensorimotor as well as verbal capacities and is intended to do everything a human can do, physically and cognitively. The Turing Test asks whether human capacities can be reverse-engineered. The standard practice uses the T2 model, but T2 is limited to verbal capacities alone, and humans are more than their verbal capacities. Hence T2 is unable to reverse-engineer every human capacity, and T4 would likely be excessive, because Turing establishes that appearance is not necessary to pass the Turing Test.

  13. From my initial understanding, distinguishing between T3 and T4 was clear, but there could be points of contention on whether certain behaviours need to be passed at the T3 level or whether they are a T4 problem. Likewise, in our discussion in class on whether something can be classified as cognitive or vegetative, we found that many of the examples we gave as a class were either very clearly cognitive or very clearly vegetative. Attraction and its bodily reaction, the example given by Harnad among other phenomena, simply did not cross our minds. This might just be a wording issue for me, but my confusion lies in the difference between the “robotic capacity” of T3, that is, being able to act as humans do, and the “function” aspect of T4. At first I thought it would be easier to just conflate function and robotic capacity into one, but my understanding now is that the function aspect refers to those bodily functions we're not fully aware of or don't deliberately initiate?

    Replies
    1. Introspection shows that we have no idea of how we do anything – so all the things going on in our brains while we do anything, or think of anything, are invisible to us: we are waiting for cogsci to reverse-engineer and explain them to us.

      You are right that there is some similarity – in fact overlap – between the fuzziness of the cognitive/vegetative distinction and the T3/T4 distinction. Both are about what is relevant to solving the “easy problem” of cogsci: how and why can organisms do the (“cognitive,” “intelligent”) things they can do. Cogsci is not about reverse-engineering and explaining digestion, or growth, or immune response. But as some of the comments about mood and psychopathology have suggested, there may be some overlap, interaction, or dependency between vegetative and cognitive functions.

      Perhaps T-testing could serve as a filter for determining which vegetative functions are relevant to reverse-engineering cognitive functions. T3, in particular, might serve as a filter as to how much of T4 is relevant to cogsci:

      Although it’s pretty vacuous, when we are still so far from having solved the easy problem, might it be useful to say that the only parts of T4 that are relevant to cogsci are the ones without which it proves to be impossible to pass T3?

      The rest of vegetative function would still be relevant to reverse-engineering life but not to reverse-engineering thinking (cognition).

      There are criticisms, however, that could be raised against this “modular” notion of function. It may just be a symptom of our ignorance [uncertainty] at this still very early stage along the path of reverse-engineering.
      Barrett, H. C., & Kurzban, R. (2006). Modularity in cognition: framing the debate. Psychological Review, 113(3), 628.

  14. I think that it is interesting that Turing rejects T5 by saying that “no engineer or chemist claims to be able to produce a material which is indistinguishable from the human skin”. If I understand correctly, it would not even be relevant to create such a machine since, as is said at the beginning of the paper, when it comes to the Turing test we are looking at performance capacity and not at physical appearance. The comparison between the man and the machine should be unbiased by appearance.

    Replies
    1. Where the organ’s mechanism can almost be read off from its structure, as with the heart, you hardly need the TT. But the brain does not wear its function on its sleeve.

  15. Could you please provide me with clarity on the difference between an outcome that would be a measure of performance capacity and not just something that is formally equivalent to it?
    In the text, he says that the candidate machine must really have the generic performance capacity of a real human being, one that is totally indistinguishable from that of a real human being to any real human being. To me this implies that all humans perform cognitive functions the same way, and so in order for a machine to be considered indistinguishable it too would have to conduct the cognitive function in that same way. However, even within humans there is a great deal of variability in how we produce outputs or responses (for example, there are many methods of doing a long-division question, none of which is more right than another, provided they come to the same output). So why would a machine that produces a formally equivalent output to a human in the test not meet the performance-capacity criterion?

    Replies
    1. Kayla passes T3, but to pass T3 you don’t have to be identical to Kayla – just to be able to DO what she can do.

  16. The concept of T5 was especially interesting to me. Assuming that a machine passes T5, it “must be indistinguishable from other human beings right down to the last molecule.” It is also stated that a cloned human being would not be eligible for the Turing test “because we didn't build it and hence don't know how it works.” Would the existence of a machine that passes T5 simply imply that both the easy problem and the hard problem of cogsci have been solved? In my mind, a machine that passes T5 would be functionally identical to a clone. Would the difference just lie in the fact that it was built from scratch instead of grown inside a womb from DNA that already exists? It seems to me that the implication that all parts of a human could be synthesized to create a machine, and that we would understand at that point how it works, would mean that the problems of cognitive science have been effectively solved. Is this an accurate interpretation of what a machine passing T5 would mean, or am I completely overthinking it?

    Replies
    1. We are all T5-indistinguishable from one another, but that does not mean that we know how and why we can do what we can do. That’s why you have to know how to build it, not just to clone it, or procreate it.

  17. I found this reading very helpful regarding the discussion of the difference between AI and Cognitive Modeling (CM): in CM, we must build the system and know how it works, whereas AI is useful for getting something done.

    I thought this was an interesting distinction as it related to your reply to a discussion post from 2a. The discussion focused on the notion of computers being “programmed” and how this does not necessarily rule out the factor of the computer producing something of “surprise” (this connects to the Lady Lovelace objection you address in this paper: “We know from complexity theory as well as statistical mechanics that the fact that a system’s performance is governed by rules does not mean we can predict everything it does”). However, one thing that I am still confused by is the difference between AI and CM when it comes to the notion of “surprise”: can “surprise” exist even when we know how things work?
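
    (A minimal illustrative sketch of that complexity-theory point, not from the paper: the logistic map below is a fully rule-governed one-line "program", yet two starting states that agree to six decimal places soon yield unrelated trajectories, so knowing the rules is not the same as being able to predict everything the system does.)

    ```python
    # Illustrative only: a deterministic, rule-governed system whose long-run
    # behaviour still defies practical prediction. The logistic map
    # x -> r*x*(1-x) amplifies tiny differences in the starting state.
    def logistic_trajectory(x: float, r: float = 3.9, steps: int = 50) -> float:
        for _ in range(steps):
            x = r * x * (1.0 - x)
        return x

    a = logistic_trajectory(0.200000)
    b = logistic_trajectory(0.200001)  # differs only in the 6th decimal place
    print(a, b)  # after 50 steps the two values bear no resemblance
    ```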

    Replies
    1. The MIT cognitive roboticists who built Kayla know how Kayla can do all the things she can do, but they can’t predict what she will be doing at 3pm next Tuesday.

  18. After reading this second text, my point of view on the Turing paper has changed a little bit. Yes, the test could be seen as trying to develop an AI that can deceive and lie, but I think Turing wrote this as a foundational block for future learning-machine ideas. One of the points he addresses is the “black-box” problem in AI: is it worth it to create an intelligent machine if we don’t know what makes it intelligent? This problem is at the source of my studies as a Cognitive Science major. I started being interested in AI in the hope of achieving brain-inspired artificial intelligence: machine-learning models strongly inspired by human neural networks. I think this should come from a cooperation, a symbiosis, between neuroscience and computer science. “Here is the beginning of the difference between the field of artificial intelligence (AI), whose goal is merely to generate a useful performance tool, and cognitive modeling (CM), whose goal is to explain how human cognition is generated.” I think we need to solve cognitive modeling, work further and deeper into neuroscience, understand our own intelligence, and then create AI models inspired by the architecture of our brain. I think this would eventually go beyond the Turing test.

    Replies
    1. For me, it’s truly unbelievable how much foresight Turing had when he published this paper and how it laid the groundwork for our understanding of AI today. In 1950, most of his ideas concerning AI and machine-learning models involving neural networks were purely theoretical, with no computer sophisticated enough to even come close to passing the Turing test. It has been only in the past decade, with the invention of Siri, Alexa and the like, that the Turing test, in its classical setting of an interrogator and two participants, has become truly necessary for testing these AI systems; Turing really was 60 years ahead of his time.

    2. Yes, Turing was prescient, and yes, T4 and T5 are more than T3, but does cogsci need more than T3 to solve the easy problem?

  19. This paper was a great review for me, helping me understand what Turing is trying to say and what's wrong with it, because I had some doubts and questions while reading it. What is simulation? I got an idea from the airplane example: a simulation of an event might be said to be one whose actions have no consequences in reality. Sometimes the internal thinking we do has hardly any real causal effect on reality - e.g. wandering off with your own thoughts. Can we argue that a T3 (or T4) could be simulated in such a way that we cannot tell it apart from us via its performance capacity, while it does have internal thinking (that just doesn't lead to an output action)?

    Replies
    1. A curious observation: the development of technologies for online virtual worlds has been skyrocketing in recent years. E.g., in a team-based online game, you have a figure A representing you in an online setting, which you "control" as if you were in the setting, and you can set demands for how the figure on the screen should act and then let it go free. In such a gaming setting, with the player on the opposite team also controlling a figure B, it is sometimes indistinguishable whether you are playing with a real human player or a program-generated player. If there's a chance, what level of the Turing hierarchy should we put the player figure A on? It is tricky for me because such a simulation has causal effects on the human manipulator sitting in front of the screen; at the same time, it passes the TT in this particular gaming setting.

    2. You still have not quite grasped the difference between a simulated ice cube and a real ice cube (what is it?).

      Game-playing is not TT-passing (why?).

    3. The simulated ice cube is not the real ice cube. Game-playing is not TT-passing because it does not attempt to reverse-engineer how our brains do the things we can do cognitively.

  20. A quote from this reading helped clarify the motivation behind asking “Can machines think?”: the goal is not merely to duplicate the ability to think, but rather to explain thinking capacity. And it’s not a meaningless question, but rather one with an undecidable answer. Prior to this reading, it was still unclear to me why we were trying to address this via the Turing test in the first place, when my intuitive answer would always be “no” – at least at the T2 verbal level. I feel as though this relates to the critique of the Sapir-Whorf hypothesis, which is based on the notion that the language we use directly shapes our perception and how we think. But this idea is inherently flawed, seeing as thinking is not observable, and language is. It is unrealistic to say that every single thought, and the articulation of each thought, and how that thought was constructed, and the construction behind the construction of that thought, etc., can be explained solely through verbal output.

    Replies
    1. What is and is not a machine?

      That the meaning of words can just be more words is easily enough seen. But the relation of that to the Sapir-Whorf hypothesis is not that clear: what do you mean?

    2. From my understanding, the overall criticism of the Sapir-Whorf hypothesis was that it was based on the assumption that if a human’s language has no word for a particular concept, then that human would not be able to understand that concept, which isn’t true. I think what I was trying to get at was that the reading helped clarify what our goal is when we ask the question “Can machines think?” I was trying to draw a link between the Sapir-Whorf hypothesis’ flawed assumption and this question, and how it does not help us explain thinking capacity.

  21. This reading was really interesting and I think it allowed me to understand some problems with the original Turing test and Turing's defense of computers' ability to replicate human behavior. Additionally, it quickly discussed the difference between the goals of cognitive modeling and artificial intelligence, which I was completely unaware of and was quite interesting. However, I don't really understand how the Turing test would be based on anything other than getting a certain level of consensus on whether the computer being tested functions like a human being. Unless we know what exact dimensions and amounts of variation are allowed, how can we say whether the testing candidate is within those bounds? If we were to compare the machine against a 'typical' or 'generic' human, who would that human be?

    Replies
    1. I was also confused regarding what qualifies as passing the TT. In a 2a thread, Dr. Harnad mentioned that “perfection” has nothing to do with passing the TT, and the candidate must only have “indistinguishable, ordinary capacities”. A candidate would not have to pass for someone with extraordinary capacities, like Einstein. However, I am confused about what the threshold criterion is for passing as a human behaviourally; what constitutes an “ordinary” human?

      Delete
    2. Cogsci is far from producing anything close to passing the TT. But if we could produce Kayla, we’d soon know we had a winner, right here, in Birks 203. Just us.

      Delete
  22. In the critique of Turing’s paper, a feature “more important” than a random element is presented: autonomy is the candidate to take its place, and I would agree. The frame in which Turing structured the “imitation game,” however (even if unintentionally), is one of comparing purely verbal capacity. If we take the structure of the Turing test as the critique has reworked it, I think autonomy answers some of the skeptics. Notably, a reply to Lovelace’s objection could start from the presence of autonomy in the constructed machine’s “orders”.

    ReplyDelete
  23. In my opinion, this reading is as essential as the paper it criticizes. It gave me a better sense of Turing's ideas from a more modern point of view.

    I especially liked how the Turing Tests were separated by their levels (I don't recall ever having seen them on the internet).

    I wanted to comment briefly on the T3 and T4 tests. I agree with Professor Harnad that T3 seems the most important level to pass, since at that level the candidate physically does everything a human can do cognitively, without having to look like one. Going above the third level is useless for answering whether a machine can pass the Turing Test: knowing that a machine can move the same way a human does does not bring anything new toward solving the initial question posed by Turing. However, this is not to say that T4 and T5 are utterly useless; they are only meaningless for the present question at hand.

    ReplyDelete
    Replies
    1. I agree with this statement. I also questioned the purpose of T4 and T5 if, in the hierarchy of Turing tests, T3 is the embodiment of the Turing test. I wonder as well where the mind/body problem ties into this, as I was wondering the same during the last reading. How come the mind/body problem is still an issue if T3 still supposedly does everything that a human can do? I understand that T3 replicates a human's cognitive abilities, but to what extent do cognitive abilities mirror the generation of feelings?

      Delete
    2. I could be mistaken, but to answer where the mind/body problem ties in, we can look at the definitions made in class and then connect them to the readings. From my understanding, the easy problem is "how and why people CAN do what they do," whereas the hard problem is "explaining HOW and WHY animals FEEL."

      Moving forward, the readings convey the impression that humans are regarded as equivalent to machines, specifically via the similar means of "T3 - verbal performance capacity and T4 - robotic (sensorimotor) performance capacity". However, referring back to what you mentioned (i.e., whether cognitive abilities mirror the generation of feelings, and the idea that T3 does everything a human can do), I think the mind/body problem comes into play when we refer back to the theme of the 2a reading, "can a machine think". In other words, thinking can include abilities categorized under T3 and T4; however, we don't know "how and why", where "how" can be read as "how it is to think and how it may feel to think", also known as the hard problem.

      However, I'm not entirely sure, as I still find myself trying to grasp these concepts. But this is what I've taken away from the readings and lectures.

      [apologies for the removed comment; for some reason, when I was copying and pasting my response from Google Docs, it kept missing the last portion]

      Delete
    3. Maira, I think you meant T2 and T3, not T3 and T4.

      Delete
  24. While this reading made me think further about how Turing thought about machines and intelligence, I have concluded the following: apparently, for Turing, being a computer is not a requirement for intelligence; otherwise, he would think that human beings are computers. Rather, computing is one way to be intelligent, but he is agnostic about whether human beings do their computing the same way. Furthermore, he makes a point of distinguishing digital computers from brains and believes that our brains cannot be Turing machines. So I believe that he does not actually have a theory of what makes something intelligent or of what intelligence is; he just knows that it applies to human beings, and he probably believes that intelligence is the computation, not the behavior. This would also imply that Turing is using humans as a baseline and behavior as a measuring stick to determine whether machines are intelligent or not.

    ReplyDelete
    Replies
    1. Alara, please let’s get rid of the obsolete weasel-words. “Intelligence” just means the ability to do what thinking organisms can do. Cogsci wants to explain how and why they can do it. Computation can do some of it, but far from all or even most of it. What is computation? And what can’t it do? And why?

      Delete
  25. One part of the reading I somewhat disagreed with was the notion that calling the Turing test “the imitation game” implied that it was for fun, or involved some sort of deception, and thus cheapened the enterprise. Turing likely called it the imitation game because of what the Turing test looks like in practice: the very nature of the test makes it game-like between the interrogator and the participants. I think it is actually clever to present the Turing test to the interrogator as a game of 20 questions, because this makes the setting most applicable to the real world; the interrogator will not feel pressured into asking certain questions to obtain scientific results, making the process more similar to encountering a sophisticated A.I. outside of the test (if that is the goal, that is).

    ReplyDelete
    Replies
    1. It’s interesting, because on the one hand I agree with your points: I also think calling it the Imitation Game has popularized this scientific test in a way no other scientist (especially no other cognitive scientist) has been able to do. However, I think the biggest downside is that it has caused a great deal of misinformation about the Turing Test. Before these readings, I had misunderstood the purpose of Turing’s test. I always believed that his test, and the T2 robots used for trials of the game, had very high success rates, but I’m now realizing Turing’s true goal: a way of “reverse-engineering” human cognitive capacity.

      Delete
    2. It is more helpful to think of the TT as our daily interactions with Kayla (and with anyone else), not as either a Q&A Game or a way of tricking people. (Popularizing the TT is not the goal: solving it is.)

      Delete
  26. I found this reading very interesting, as it separates and redefines Turing’s article. It specifies the terms and links the different concepts that Turing described. The part that really struck me was the one distinguishing the goal of AI from the goal of Cognitive Modeling. With the concept of computation and the link between machine and human, I was a bit confused as to what Turing was aiming at and where the two fields intersect. Artificial Intelligence focuses on building devices that are highly performant, aiming to resemble human intelligence. Cognitive Modeling is centered on the origin of human cognition: it revolves around the question of how we do the things we do, which in this case is thinking. I am glad I got that clarification.

    ReplyDelete
    Replies
    1. Ines, AI just produces useful tools; these need not resemble human cognition, just do the job.

      CogMod is the one that wants to model human cognition.

      Delete
  27. Is it fair to say that modern computational modelling research is doing much of the reverse-engineering work, toward understanding cognition functionally, that the Turing Test announced? In this paper, the utility of fooling most people most of the time was called into question; by fitting model predictions to empirical data sets, computational modelling seems in part to address this issue by using actual human behaviours (at least those captured by laboratory experiments) as the metric of success, rather than whether one can mistake a behaviour for actual human behaviour. I imagine there are loads of limitations to using experimental data sets, and, granted, the insights of present-day cognitive modelling seem far less potent and comprehensive than would be required to make a machine pass T3+, but is it a viable avenue toward the type of understanding through reverse-engineering that the TT demands?

    ReplyDelete
    Replies
    1. Kid-sib found what you said a little complicated: What are you saying/asking?

      Delete
    2. I was asking whether current computational modeling in cogsci is doing, piecemeal, what the TT set out to do. The idea is that fitting a model's predictions to data from experiments is looking for similarities between computer output and human behaviour, just like the TT when it asks people whether they are interacting with a human or a machine.
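
      (To make this concrete, here is a minimal, hypothetical sketch of what "fitting a model's predictions to data" can look like in Python; the reaction-time data and the power-law model are invented purely for illustration, not drawn from the reading.)

          import numpy as np
          from scipy.optimize import curve_fit

          # Hypothetical human reaction times (seconds) over practice trials.
          trials = np.arange(1, 11)
          human_rt = np.array([1.90, 1.45, 1.22, 1.10, 1.02,
                               0.97, 0.93, 0.90, 0.88, 0.86])

          # Assumed model: a power law of practice, RT = a * trial**(-b) + c.
          def model(trial, a, b, c):
              return a * trial ** (-b) + c

          # Adjust the model's free parameters until its output matches the data.
          params, _ = curve_fit(model, trials, human_rt, p0=(1.0, 0.5, 0.5))

          # Success here is quantitative fit to measured human behaviour,
          # not whether anyone mistakes the output for a human's.
          rmse = np.sqrt(np.mean((model(trials, *params) - human_rt) ** 2))
          print(f"fitted parameters: {params}, RMSE: {rmse:.3f}")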

      Delete
  28. This reading helped me understand what a machine is, thanks to the line: "A reasonable definition of 'machine,' rather than 'Turing Machine,' might be any dynamical, causal system." In the past I could only see what it does, and I avoided how it does it. At the emailing performance level it can simulate a conversation, but it is not actually conversing. If we consider how it does what it does, it is indeed not doing what we do when we are thinking, since we are not computers. In conclusion, the TT is testing whether a machine can simulate a dynamical system, or any dynamical system for the computationalists; by that I mean generating outputs using simulation. (I am not quite sure what you mean by a dynamical system.)

    ReplyDelete
    Replies
    1. Please read the rest of the comments and especially my replies on what is meant by “simulation,” “cognition,” “computation” and “computationalism.”

      Delete
  29. Turing states that "The new problem has the advantage of drawing a fairly sharp line between the physical and the intellectual capacities of a man." However, it would make more sense, and be more correct, if he had used a term like "verbal performance capacity" instead of "intellectual capacities," because intellectual capacities should depend on both verbal and non-verbal performance. If he actually wanted to test intellectual capacities, shouldn't the Turing machine be tested at T3 instead of T2? Or did Turing assume that having intellectual capacities is merely a matter of having verbal performance capacity?

    ReplyDelete
    Replies
    1. From my understanding of the reading and the discussion threads, Turing did not intend to imply that having intellectual capacities is solely about verbal performance capacity. Rather, he used the verbal performance capacity test (T2) as an example of a Turing test. However, he did exclude T3 by making digital computers the only candidates for the TT. What I got from this is that Turing assumed that simulations of behavioural performance equate to the real thing and can then be verbally described during T2. Contradicting this, the reading pointed out that a simulation may superficially appear the same as the real thing, but it is not (e.g., a simulated kettle cannot boil water as an actual kettle does). As such, a digital computer with simulated experiences cannot verbally describe those experiences with complete accuracy during T2.

      This made me wonder, is T3 even necessary if a machine cannot pass T2 without being able to pass T3?

      Delete
    2. You both make good points.

      But even if T2 is enough for testing whether you have successfully reverse-engineered cognition, if you need T3 capacity to pass T2, it seems hard to imagine you could reverse-engineer it without testing for it directly:

      If all and only working hearts have a BP of 72, how would you learn to reverse-engineer how the heart pumps blood by only measuring BP?

      Delete
  30. In light of my previous comment (for 2a), I was very interested in the distinction you draw between artificial intelligence (AI) and cognitive modeling (CM), the first being more tolerant of a "black box" whose inner workings remain, to a significant extent, unexplainable even by those who engineered it. I have found myself a bit frustrated by the limited scope the term AI denotes in current discourse, referring almost solely to machine-learning methods, which then unfortunately get conflated with older and broader meanings of the term.

    One might hypothesize (or just wonder idly) that, with such ML methods on the one hand and computational cognitive models plus early efforts to map the brain on the other, the two might eventually grow so as to coincide or unite, and that that event might give rise to some leap in cognitive science bearing in some way on the "hard" problem or on "artificial general intelligence"; but for the moment they seem so distant that it is difficult to imagine how or whether that could ever occur.

    ReplyDelete
    Replies
    1. Zahur: Both AI and CM are forms of (software) engineering. Both need to know how their software works. To design a useful software tool (AI) you still have to figure out how to do it: you can’t just do it by sleep-walking.

      Delete
    2. I thought the distinction made between artificial intelligence (AI) and cognitive modeling (CM) was interesting as well. As a computer science student (who has worked in developing software incorporating machine learning), I’d like to expand on Zahur’s hypothesis further and add to Prof. Harnad’s comments.

      Modern ML methodology requires one to specify a list of parameters to take into account, so the AI knows how to react in a given situation. When learning from feedback, these parameters get adjusted (given more or less emphasis in a specific situation; see the toy sketch at the end of this comment).

      In order to do what Zahur proposed about using ML and CM to map the brain for our understanding, we first need to be aware of every piece of information the brain uses. If one were to build a simulation of the brain and left out key, as-yet-undiscovered information, it wouldn’t matter how well the simulation works, because it would not accurately represent the real system we want to understand.

      I do think it is possible, however, to develop this type of tool without fully understanding how the brain’s “software” works. If every part of the brain and every possible “input” were labeled as a parameter within an ML model (without necessarily understanding how these parts and inputs interact with each other), one could train the model to interact like a brain by comparing its responses to a real brain’s responses. It’s also possible that in the future AI will be advanced enough to take in unknown inputs and incorporate them in decision-making. After this theoretical ML model is trained, a researcher could then backtrack and investigate the why behind certain actions and their relationships with other parts of the model.

      This is just what I think though and I am far from an expert in this subject.
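
      (Here is a toy Python sketch of the “parameters adjusted by feedback” idea mentioned above; the data, the linear rule, and the learning rate are all invented for illustration and stand in for no one’s actual model of the brain.)

          import numpy as np

          rng = np.random.default_rng(0)

          # Toy "behaviour to match": outputs produced by an unknown target rule.
          inputs = rng.normal(size=(100, 3))
          target_weights = np.array([0.8, -1.5, 0.3])   # hidden from the learner
          target_output = inputs @ target_weights

          # The model starts with arbitrary emphasis (weight) on each input.
          weights = np.zeros(3)
          lr = 0.1  # learning rate: how strongly feedback adjusts the weights

          for _ in range(200):
              prediction = inputs @ weights
              error = prediction - target_output          # the feedback signal
              gradient = inputs.T @ error / len(inputs)   # direction of adjustment
              weights -= lr * gradient                    # raise/lower each emphasis

          print(weights)  # converges toward the hidden target rule's weights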

      Delete
  31. I was most interested in the question posed, "is blushing T3 or T4?", not because I have a special interest in the physiology or psychology of blushing, but because it illustrates well the blurred line between the two levels, and the importance of what is and is not a "sensorimotor" performance. If T3 cannot blush (or cry, for that matter), then is T3 really the level that Turing had intended to refer to? Or would it be T4? On the other hand, a functionally equivalent nervous system in T4 seems quite a bit further than what Turing was talking about.

    ReplyDelete
    Replies
    1. Questions about what’s vegetative and what’s cognitive, and what T4 capacity is necessary to pass T3, are not trivial questions. We’re as far from being able to answer them as from being able to pass any TT. But there are good reasons for thinking that even T2 cannot be passed by computation alone.

      Delete
  32. I find that a lot of the confusion lies in the derivations we make when initially handed the question "can machines think?". Often we are concerned with questions about the soul or consciousness, yet those problems exist even when considering the validity of another human's 'experience' paralleled to our own. This is why the question of "whether or not machines can do what thinkers like us can do -- and if so, how" grows in significance as we discuss the issue further. Just because a cow cannot articulate itself the way a human can does not mean it holds no motives that fuel its view of the world. Comparably, a baby cannot articulate itself the way a typical adult can, yet we know from the end product that at those stages a baby does think in ways that contribute to the mental development observable once it is an adult: newborns lack self-awareness, yet they develop it as they grow, even if their internal awareness or introspective capacities don't match an adult's. All of these concerns are irrelevant to the TT, as it is not concerned with what is happening internally but rather with the resulting output: performance capacity. This is what it means for a machine to "do what thinkers like us can do," which goes back to the idea that the TT is testing whether machines can do what thinkers like us can do, not necessarily whether they follow the same mechanism. This point comes up later in the paper in discussing the different considerations for a virtual versus a real robot, and it makes it difficult to decide what is real performance capacity versus mere imitation.

    ReplyDelete
    Replies
    1. Turing’s point is that DOING is the only evidence available for reverse-engineering thinking; you can’t observe FEELING directly. Turing indistinguishability from someone who feels is the best you can hope for. (And learning is only learning to DO, even if feeling somehow piggy-backs on it, inexplicably.)

      Delete
  33. The paper discussed Turing's points regarding determinism, calling them red herrings, a judgment I agree with. This reminds me of the recently popular movie "Everything Everywhere All at Once". In the movie there is a seemingly infinite (in fact finite) set of possible states you can enter (reminiscent of your parallel lives) if you perform a very unlikely action. Despite not having 'free will' as we know it, and despite in a way manipulating the state they inhabit to suit the situation they are in, the characters do not discount their autonomy, their capacities as beings, or what we would in this context call performance capacity. Thus, in agreement with the paper, I find the point regarding determinism quite irrelevant to what we are really measuring here, especially because we do not place that causal limitation upon ourselves when we consider whether or not we have the property we are searching for in the TT.

    ReplyDelete
    Replies
    1. You lost kid-sib in some of this -- and I would encourage you to set aside introspection as well as sci-fi cinema as you start to think about how to reverse-engineer a way to pass cogsci’s (and Turing’s) Test.

      Delete
  34. Like many of my classmates, this annotation helped my understanding of the Turing Test and the flaws it had originally when named "The Imitation Game". I learned that merely "imitating" humans so as to appear indistinguishable more often than not is not the right way to think about this experiment. For the machine to be totally indistinguishable, it must be indistinguishable to every human being, forever; we must never know it is not human unless it tells us. I think excluding the robot's appearance as a relevant factor is correct, because performance capacity does not depend on how indistinguishable the candidate is to the eye, but rather on whether, and how well, it can do everything that we can do.

    ReplyDelete
    Replies
    1. Alexander: But passing TT may depend on having some sort of body, and testing that requires seeing Kayla, in the world, not just texting with her.

      Delete
  35. When you say "'thinking' is verbal", how do we know for sure that it is purely verbal? In Cognition I learned that there is a phonological loop, in which we do think by way of language, but also a visuospatial sketchpad, where we temporarily hold visual and spatial information when thinking. How would this factor into the Turing Test, and how might the test change if we had to include visual components of thinking rather than only verbal ones?

    ReplyDelete
    Replies
    1. I misunderstood your annotation. I thought that Turing was implying that all thinking is verbal, but you meant that you don't believe him to be implying that. Regardless, how can non-verbal "thinking" be tested? Is this impossible because of the other-minds problem?

      Delete
  36. I really liked this article; despite already having some refutations of Turing’s paper in mind, this reading expanded my thoughts and helped me better analyze some of the things Turing said. At the very beginning of the article, Harnad quotes Turing and states that he is equivocal about his definition of thinking. I agree with that: in my opinion his experiment (the imitation game) is far from representing what thinking really is. Another criticism made by Harnad, which probably makes me feel a little better about myself, concerns the confusing definition Turing gave of ‘machine’. When I first read Turing’s paper I was confused about that, and this paper clarified that Turing did in fact contradict himself and perhaps made some irrelevant comments when explaining the definition of ‘machine’. To conclude, I would say this was a very clear reading that improved my understanding of the different Turing tests.

    ReplyDelete
    Replies
    1. Did you remove your comment, or did you write to remove my comment? I am not sure.

      Delete
    2. Turing would probably have said "cognition" today, rather than "intelligence," but his definition of "thinking" (in the form of reverse-engineering and the T-test -- i.e., the "easy problem" of explaining DOING capacity) seems fine: do you have other suggestions?

      On the distinction between "machine" and "Turing Machine," see the other comments and replies in the Week 2 threads. Distinguish ice-cubes (which are also machines) from computer simulations of ice-cubes (which are Turing Machines, i.e., symbol-manipulators).
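
      (A toy Python sketch, invented for illustration, of why the simulated ice cube is not an ice cube: the program only manipulates symbols according to rules, as any Turing Machine does; nothing in it is cold or wet.)

          # A "simulated ice cube": the variables are symbols standing for
          # physical magnitudes; the loop applies a made-up warming rule.
          temp_c = -5.0    # symbol for the cube's temperature
          room_c = 20.0    # symbol for the room's temperature

          for minute in range(30):
              temp_c += 0.2 * (room_c - temp_c)  # warm toward room temperature

          print(f"simulated temperature after 30 minutes: {temp_c:.1f} C")
          # A real ice cube cools your drink; this one cannot cool anything.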

      Delete
