Thursday, September 22, 2022

2a. Turing, A.M. (1950) Computing Machinery and Intelligence

Turing, A.M. (1950) Computing Machinery and Intelligence. Mind 59(236): 433-460

I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. The new form of the problem can be described in terms of a game which we call the "imitation game." It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B. We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"




1. Video about Turing's work: Alan Turing: Codebreaker and AI Pioneer
2. Two-part video about his life: The Strange Life of Alan Turing: BBC Horizon Documentary
3. Le modèle Turing (video, in French)

114 comments:

  1. Please post early in the week, not just before the next lecture, or I won't have time to reply.

    And please read all the preceding commentaries and replies before posting yours, so you don't repeat what has already been said.

    ReplyDelete
  2. I always assumed that when we try to program a machine to ‘think’ like humans, we are necessarily programming its finished version. Instead, the reading suggests that we might program a digital computer to match the “initial state” of our human mind at birth, and subsequently expose this computer to an education similar to the one that modifies it into our adult human mind. It was especially interesting to consider this learning process both at an individual and an evolutionary level. Indeed, each human's education accounts for their adult state of mind after modification of their child state of mind. However, subjecting a digital computer to a similar education might allow us to select more generally “advantageous” qualities that may allow it to think, and this learning process more closely resembles evolution than lifelong learning. I found it particularly enlightening to explore the parallel between the machine and humans in describing this survival-of-the-fittest selection (see the toy sketch after this list):
    - Structure of the child machine = hereditary material
    - Changes of the child machine = mutations
    - Natural selection = judgment of the experimenter
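    To make the parallel concrete, here is a toy sketch in Python (purely illustrative, with made-up numbers; nothing like this appears in Turing's paper) of that selection loop:

    ```python
    import random

    # Toy sketch of Turing's evolution analogy -- purely illustrative.
    # "Child machine" = a bit string (hereditary material); random bit flips = mutations;
    # the experimenter's judgment (scoring against a target behaviour) = natural selection.

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # stands in for the desired "adult" behaviour

    def judge(machine):
        """Experimenter's judgment: how much of the target behaviour is matched."""
        return sum(bit == want for bit, want in zip(machine, TARGET))

    def mutate(machine, rate=0.1):
        """Changes of the child machine: occasional random bit flips."""
        return [1 - bit if random.random() < rate else bit for bit in machine]

    child = [0] * len(TARGET)           # structure of the child machine
    for generation in range(1, 1001):
        variant = mutate(child)
        if judge(variant) >= judge(child):   # selection: keep the fitter variant
            child = variant
        if judge(child) == len(TARGET):
            print(f"'adult' behaviour reached after {generation} generations")
            break
    ```

    Of course, a real child machine and a real education would be vastly more complex; the point is only the shape of the loop.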

    ReplyDelete
    Replies
    1. A T2 computer could learn “lifelong,” but so could a T3 robot that was not just a computer, computing. Sensorimotor learning can also ground symbols. (And there’s no reason Turing Testing could not go beyond a single lifelong individual, if it also reverse-engineered the genetic mechanism; but although evolution can produce and alter cognitive capacities, in itself evolution is surely a vegetative function rather than a cognitive one, isn’t it? Cogsci has a tall order, but it doesn’t have to do all of biology!)

      Delete
    2. I had very similar thoughts while reading the paper, although it made me wonder whether a child's mind is truly more "limited" compared to the adult mind, or whether this is simply a comparison of knowledge rather than learning potential.

      Delete
    3. This is what I found the most interesting about this piece of work as well. Turing suggested that a machine could start from an initial state and gradually learn to simulate an adult's mind, just as a child grows. By learning and education, Turing mostly meant a process of rewarding and punishing. This kind of feedback mechanism was chosen as the means of tuition because Turing thought the machine would be made fun of if put into a school. Moreover, he argues that legs and eyes might not even be important, because Ms. Helen Keller managed to communicate with her teacher. However, it seems to me that sensorimotor learning and physical reactions are still inseparable from the process by which a machine learns. Also, it was touching the water that made Helen Keller realize what "water" is. Hence, building a robotic machine able to react physically to the outer world seems really necessary, and now I see why we need a robot "Kayla" in our class.

      Delete
    4. It’s interesting that whereas computationalism at the time Searle challenged it was more concerned with encoding “knowledge,” Turing himself was from the very beginning also one of the founders of algorithms that can learn. These are today called neural nets (although they only resemble neurons very superficially). These too are just doing computations, but computations that could prove to be important for learning sensorimotor categories (Chapter 6) and grounding the meanings of words by connecting them to the things in the world that words refer to.
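      To see concretely why the resemblance to neurons is superficial, here is a minimal sketch (my own illustration, with made-up numbers, not anything from Turing): a "neuron" in such a net is just a weighted sum squashed by a function, and "learning" is just nudging the weights after each error.

      ```python
      import math, random

      def unit(weights, bias, inputs):
          """One artificial 'neuron': a weighted sum squashed into (0, 1). Just arithmetic."""
          return 1 / (1 + math.exp(-(sum(w * x for w, x in zip(weights, inputs)) + bias)))

      # Error-driven weight updates (a gradient-descent-style rule -- one simple
      # choice among many; nothing here is specific to Turing's own proposals).
      weights, bias, rate = [0.0, 0.0], 0.0, 0.5
      examples = [([0.9, 0.1], 1), ([0.8, 0.2], 1),   # members of a category
                  ([0.1, 0.9], 0), ([0.2, 0.7], 0)]   # non-members

      for _ in range(500):
          x, target = random.choice(examples)
          error = target - unit(weights, bias, x)     # feedback: how wrong was the guess?
          weights = [w + rate * error * xi for w, xi in zip(weights, x)]
          bias += rate * error

      print(round(unit(weights, bias, [0.85, 0.15])))  # ~1: a new member is recognized
      ```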

      Delete
    5. I found this query interesting too, as I had never thought of the idea of programming a machine at the level of a human child rather than an adult. My question here is: since as human children we take in information and knowledge as we grow, along with emotional experiences and feelings, eventually developing an emotional intelligence based on our upbringing, to what extent could this machine replicate the human in the feeling/emotional-intelligence respect?

      Delete
    6. This section of the reading also intrigued me. Saying that “we need not be too concerned about the legs, eyes, etc.” is presumptuous, in my opinion. If the goal is to produce a computer that simulates a child’s mind so that it can develop into an adult mind, shouldn’t we attempt to simulate human life/learning as closely as possible? Drawing from the passive/active kitten experiment described in lecture, legs and movement are important in learning depth perception; depriving an organism of self-actuated movement inhibits its ability to learn. The same can be said for a “child mind” computer, which needs to learn as a human child would.

      Delete
    7. Ariane, please always read the comments and (especially) my replies that have already been posted before posting your own. Otherwise you might just repeat things that others have already said, and have been replied to.

      Josie, Turing stressed that his T-test is not about what the candidate looks like, its appearance, but what it can DO, its performance capacity. That’s what cogsci needs to reverse-engineer. In Turing’s day, robots hardly existed and would immediately have failed the TT because of suspicion about anything that looks like a machine. That’s one of the reasons Turing described a T2 computer, out of sight, rather than a T3 robot, like Kayla. Today we’re used to movie robots and that bias is gone. If anything, we are too ready to assume that robots think, even when they can’t really do much of anything. But to understand this course we need to imagine someone like Kayla. And our capacity to do what we can do in the world, rather than just with words, is certainly part of being able to DO everything we can do.

      You are right, though, that being able to DO requires being able to move (even though the capacity to move is itself vegetative (motility) rather than cognitive). DOING is not just moving. This is another reason T3 is probably the right level of TT. (Why?)

      The goal of cogsci is not to “simulate” anything. (What does “simulate” mean? See discussion on this and other threads.)

      The goal of cogsci is to reverse-engineer and explain how and why organisms can DO all the things they can DO. (Why the emphasis on doing, and capacity to do?)

      Delete
  3. I found Turing’s discussion of the “Argument from Consciousness” intriguing as it addresses the very doubts we have discussed in class when speaking of the Turing Test and computationalism. The quote from Professor Jefferson’s Lister Oration articulates these doubts very well: how can we consider a machine to be intelligent, to be capable of thought, when the sentences it generates are by the “chance fall of symbols” rather than emotions and thoughts? This also reminded me of our discussion on the difference between the words we speak as humans and their shadows that are outputted by the computer. They have the same form, the same outline, but not the same substance: they cannot be considered the same.

    ReplyDelete
    Replies
    1. Good points.

      When Turing speaks of the argument from consciousness he is actually talking about the other-minds problem (OMP), not the “hard problem” (HP). He frankly admits that the TT is just a test of DOING capacity. It cannot test directly for FEELING. But in doing so it tests indirectly for feeling. And his crucial methodological point is that this is the best we can do, even when T-testing one another for the OMP!

      What he says about “solipsism,” however, is silly, and irrelevant. (What is solipsism, and what does it have to do with the HP, the OMP, and the TT?)

      Delete
    2. I found the argument from Extrasensory Perception particularly interesting for similar reasons. Turing claims that an overwhelming amount of evidence supports telepathy, so that it cannot be denied as an existing human force. As a solution for the TT, he proposes a telepathy-proof room, so that a telepathy-capable human cannot use this advantage against a machine that is incapable of telepathy. In section 1, it was asked whether Turing would agree with computationalism. This supports the claim that Turing doesn’t agree with computationalism, because the human is capable of outputs based on perceptions that the machine can’t have. Then again, Turing proposes that the acceptance of ESP opens doors to machines with all kinds of capabilities.

      Delete
    3. After reading the 2b paper explaining many of the points made by Turing in this paper, I can definitely see why the mention of solipsism as a reply to Jefferson's discussion of the OMP is not an appropriate one: Jefferson is not implying that everything except his own existence is part of a dream, but rather expresses his doubts on the intentionality of a machine that uses symbol manipulation well enough to pass the TT. But then again, what the TT tests for is intelligence, or how well a machine imitates what humans can do, so a critique based on a lack of intentionality doesn't seem relevant either from my understanding.

      Delete
    4. Telepathy is fun, but it’s nonsense. Turing was a giant, yet he believed in telepathy; so too, Newton, also a giant, believed in alchemy. But for neither of them did their real, monumental contributions have anything to do with their respective offbeat beliefs (although in Turing’s case, there are some psychodynamic theories about a connection: the early loss, and attempt to retrieve, someone he loved who had died: see the videos if interested). [“Stevan Says” Turing did the enormous things he did despite his offbeat beliefs, not because of them…]

      Delete
    5. Why should we consider the TT an indirect test for feeling? I understand that it is the best we have, but Turing says himself that the question of whether a machine can think is the wrong one to ask, and I think it reasonable to extend this to “feeling”. What a successful TT most strictly establishes is that a machine can display human conversational abilities well enough to fool a human ("DOING"). He says: “It might be urged that when playing the ‘imitation game’ the best strategy for the machine may possibly be something other than imitation of the behaviour of a man. This may be, but I think it is unlikely that there is any great effect of this kind,” and he does not elaborate further on why he believes this. If we actually consider the TT as an indirect test for feeling rather than mimicking ability, we say by definition that a successful TT implies, if only weakly, that machines might feel. But unless we justify the assumption that convincing conversational ability requires feeling, isn't a more convincing implication of a successful TT simply that feeling isn’t necessary for convincing conversational ability? I don’t have a clear grasp on what Turing himself thought the implications of a successful TT were beyond, as you say, "DOING"; as he grants us, “I have no very convincing arguments of a positive nature to support my views.”

      Delete
    6. You can’t observe either thinking or feeling (let alone the capacity to think or feel). You can only observe and T-test doing (by the body, or, inside, by the brain, T4). Cogsci has to infer both thinking and feeling from doing.

      With T2, this is based on (lifelong) capacity to interact verbally with us in a way that makes sense and that cannot be distinguished from interacting with anyone else who is speaking meaningfully and understanding. There are also things that we can observe people DOING (behaving as if they were angry, frightened, tired, nervous, sympathetic, understanding, puzzled (T3), secreting adrenalin (T4)) from which we infer, indirectly, what they are thinking and feeling -- things we cannot distinguish from when any one of us is acting as if they are thinking and feeling them. (Even with verbal communication, we cannot observe meaning or understanding directly; only words and gestures that we cannot distinguish from those of our fellow-humans when they (or we ourselves) are meaning or understanding something.)

      Kayla is there to give us a realistic sense of what this sort of capacity, and indistinguishability, would be like if cogsci succeeded in reverse-engineering T2 and T3 capacity.

      Delete
  4. Response to: (8) The Argument from Informality of Behavior from Reading 2a
    This counterargument made me think of the ethical dilemmas surrounding self-driving cars.
    Some people feel fully self-driving cars could be realized very soon, but others have pointed out a difficult dilemma. If a self-driving car were to encounter a situation where there are two distinct groups of pedestrians, and it is inevitable that one group must die and the other could be spared, how does the car choose? The argument is that computers cannot make moral decisions, and even if a computer could be “taught” morality (as described by Turing in “7. Learning Machines” and also by Amélie above), there is no universal moral code. It varies between cultures, genders, religions, etc. So, it would be impossible to program or “teach” the computer of a self-driving car a version of morality that adheres to every moral code and effectively chooses correctly in every perilous situation.
    I don’t think this argument threatens the Turing test any more than the example about the red and green light given in the 2a text, but I do wonder what Turing would have to say about the morality of a human vs. the morality of a computer.

    ReplyDelete
    Replies
    1. How do humans learn to make ethical choices? A Tesla can’t pass the TT, but if a candidate could pass the TT, we could not distinguish it from a human in any way, for a lifetime: So what would it be missing, and how and why?

      Delete
    2. I would say it's a combination of nature + nurture. I believe we have an innate sense of good and bad, but some things need to be learned or at least reinforced. And if we believe Turing, then a computer can achieve this. The "nature" would be the initial program and the "nurture" would be machine learning. Thus I think ethics are well within a computer's abilities.
      It's the variation in ethics that fascinates me. A computer that is ethical by Canadian standards may not pass the TT in another part of the world.

      Delete
    3. Teegan, I still don't understand (and neither does kid-sib). Could you explain a little of what you learned from the Turing paper to kid-sib?

      Delete
    4. Right, so I’m attempting (maybe unsuccessfully) to ponder Turing’s “Learning Machines” argument alongside counterargument number 8, about the informality of behavior.
      Some things that we learn about don’t have much room for variation. For example, the acceleration due to gravity is 9.8 m/s^2, and this doesn’t fluctuate from person to person or day to day. But other things, like ethics, morals, or even the example of the simultaneous red and green light from Turing’s paper, do not have such formal answers and DO fluctuate from person to person or day to day.
      So, if we are to “teach” a computer how to be human, I’m wondering/thinking aloud about whether this poses a problem for Learning Machines. But my instinct is that it does not, because with enough storage a computer could learn many (if not all) ethical concepts and choose the “best answer” based on the given scenario and the particular interrogator it is trying to convince.

      Delete
    5. The TT is about reverse-engineering human cognitive capacity – perceptual, rational, linguistic and emotional – lifelong. Consider Kayla, and all she can do and say. That’s the goal. But it’s a generic goal. Kayla has the cognitive capacity of a university student, but a high-school dropout could also be a T3 robot. And these individual differences would also be possible in ethical sense and judgment. (Don’t forget that even Putin would pass T3, though a psychopath is not a good place to start explaining cognition!)

      But learning does not mean verbal instruction, and certainly not initially. Many things are learned by nonhuman animals, and by human babies, before they even have enough language to be formally instructed in words. And nonverbal learning continues even once language is fully functional. We will be talking about three kinds of learning: unsupervised, supervised, and, eventually, verbal: Weeks 5 to 9.

      Delete
  5. If our brain is comparable to a program, whether or not a "machine" can think like us depends on how its input data is stored and structured. That is, the question "can machines think" asks whether the way a machine's input data is organized, processed, and stored is identical (or at least similar) to how it is treated in human brains. Regardless of the data-modelling methods utilized in a machine, it is possible that various methods can all provide satisfactory answers in a Turing test. But does that mean all these machines can "think"? As mentioned in the class discussion, computation is ungrounded. A big part of thinking happens at an experiential level, and there is an impenetrable realm of privacy to that. That said, asking "can a machine think" is analogous to asking someone to prove that they can see the colour green. I wonder if using the Turing test as a standard to define thinking is overly metaphorical.

    ReplyDelete
    Replies
    1. 1. What is and is not a “machine”?

      2. Turing does not mention how the candidate passes the TT. If it’s T2, perhaps a computer could pass it (but see Searle). But if it’s a T3 robot, it can’t be just a computer. And a T3 robot’s symbols are grounded (how? what is grounding?)

      3. Turing invented computation (Turing Machine), the computer, and the Turing Test (which is an empirical test of reverse-engineering, not metaphorical). But do you think Turing was a computationalist? (I sometimes use this as an exam question; there are good reasons for replying yes as well as for replying no. If you understand both sides, then you’re understanding the course.)

      Delete
    2. (1) From what I understand, a machine is, at least by Turing's definition, a rule-based symbol-manipulating device. (2 & 3) On one side, I guess Turing can be an anti-computationalist in the sense that if a device can pass T3, it becomes indistinguishable from us (at least verbally) and thus is capable of "thinking" (at a performance level). But the indistinguishability here is merely a byproduct of a machine's symbol-manipulation process. In other words, verbal performance results from thinking (it is a stretch to infer that an entity can think simply by observing what it does). Further, it is unclear if the machine is aware of what these symbols represent in the real world (i.e., the Chinese room), nor do we know for certain that it is mindful of the process of symbol manipulation itself. But as you mentioned in your paper (21), this unknowableness is of little importance; what matters is what a device can do (operational). So I guess if a device can pass T3, as per Turing, we can reasonably assume that it can "think" the same way we assume other human beings can think (intuitively). And maybe by this logic, Turing can be said to be an anti-computationalist (?). On the other hand, I suppose that one premise of the Turing test is that "humans can think"; otherwise, Turing wouldn't simultaneously place a machine and a person behind the curtain. If the machine's performance equates to that of the person, it is said to be intelligent, though this merely reduces intelligence/thinking to an operational level. But still, I'm guessing that inferring intelligence from verbal performance, and juxtaposing machines that can deliver such a performance with humans, makes Turing a computationalist?

      Delete
    3. A T3 cannot be just a computer (why not?), and it is not just indistinguishable from us verbally, but in everything it can do. (Consider Kayla.) And a T3 is grounded through its sensorimotor capacities.

      (I think you need to read not just 2a, but 2b, 3a, and 3b to sort all this out.)

      Delete
  6. This was an engaging text to read. I had never thought of Turing as someone who legitimately dabbled in the philosophy of mind.

    What intrigued me the most was his idea of how a machine could beat the interrogator during the "Imitation Game." Instead of developing a hyper-complex machine that would require an infinite amount of storage for the data, Turing proposed to develop a machine that could learn what type of information is useful for it and what is not. To further clarify this idea, Turing made an analogy of the machine as a "child" that needs to be taught. Realistically, we cannot make a child learn everything available in the world; thus we would need to teach the child the skills to be able to learn by themselves. The same principle could be applied to the machine. If we could create a child-like machine, then we could teach it the skills needed to self-educate.

    Indeed, this still does not answer whether a machine can think, but it is a very clever way of looking at the whole problem in general.

    ReplyDelete
    Replies
    1. This idea was particularly interesting to me as well. I had previously understood that the aim of programming was to create an intelligent, final product, as mentioned by Amelie above. The idea that the goal can be to create a ‘child’ and allow the machine to undergo learning is an exciting prospect. This will allow for a man-made evolutionary process in which the experimenter is able to act as a tool to accelerate natural selection. As Turing points out, this process will not involve random mutations, and will instead allow the experimenter to select mutations that will improve efficiency and functioning. This concept mirrors genetic engineering in humans.

      Delete
    2. The idea (in computationalism) is not to outdo what humans can do but to show that computation alone can do everything they can do. And learning (nonverbally as well as verbally) is one of the most important things we can do. So it has been at the core of the TT all along.

      But no need for evolution and mutations, if the TT can reverse-engineer the cognitive capacities of the child.

      And remember that computationalism is just one of the possible ways to try to pass the TT. (And that the Strong Church-Turing Thesis is not the TT!)

      Delete
  7. This reading helped address the objections that naturally arise when presented with the question, "Can machines think?" I was particularly interested in Lady Lovelace’s objection, as I had similar hesitations. She argues that "The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform." An aspect of thought that I considered central to its definition is that it can occur independently and without direction. Lady Lovelace echoes a similar sentiment, emphasizing that origination is central to her understanding of thought.

    Turing and Hartree offer interesting rebuttals. Hartree emphasizes that simply because Lovelace has never witnessed a machine ‘thinking for itself’ does not mean that it is not possible in principle. Turing objects to the argument that machines "never do anything really new” by reflecting on the idea that humans themselves cannot be confident that they are doing anything new. The thoughts and actions of humans may be influenced by their environment and teachings that they have received. This perspective was particularly striking, and made me rethink my initial objections. If humans are incapable of creating completely new thoughts, this argument cannot be used to distinguish humans from machines.

    ReplyDelete
    Replies
    1. I, too, was particularly interested in Lady Lovelace's objection. The ability to "originate," to create something new, is something I assumed passing T2 would require, as it seems to be a behaviour that humans display. But rather, is it simply pridefulness and ignorance that leads to the human illusion that we create and innovate, rather than transforming or echoing something already existing?
      Therefore, as it can be questioned whether humans do it, I understand why Turing closes this line of objection to his argument.

      However, the question of "surprise" is an interesting one to me. As a T3 robot, I am regularly surprised as I do not know all outcomes before or as I encounter a situation in my day-to-day life. Turing states that the view that machines cannot give rise to surprise is because of "the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it," which, in my experience, is not true.

      Delete
    2. Hi! I too found the Lady Lovelace argument the most convincing one against Turing's concept. However, I did not find Turing and Hartree's rebuttal fully convincing. Sure, humans take in information just as computers may learn how to, but ultimately the computer must be told, in the form of its programming, that it is to take in new information. Even a computer that appears to be learning on its own is only capable of doing so because humans have told it to. Humans do not need this; they are able to absorb information without anyone telling them to. This can be paraphrased as: humans are capable of unsupervised learning, but machines are not. This may not be so much of an issue for T2 (which is what Turing and Hartree are arguing for…), but I do not think a machine would be able to pass T3 without first being capable of unsupervised learning. So maybe Turing's rebuttal is sufficient for the purpose of this paper, but it is not evidence enough for a full computational view of cognition.

      Delete
    3. Lady Lovelace, born Ada Byron, daughter of the poet Lord Byron, was a mathematician and worked with Charles Babbage, who designed the “analytical engine,” the precursor of the computer.

      Kayla, can you relate “surprise” to the definition of “information”?

      Sophie, but computers are capable of unsupervised (as well as supervised) learning! What are they?

      (And everyone should revise their notion of computers being “programmed.” It is now evident that even though a computer is executing an algorithm (program, symbol-manipulation rules), if it is executing the right algorithm it can do things that even surprise the “programmer.” For example, theorem-proving programs can come up with proofs that even the inventor of the algorithm did not know about, or expect. And they can even surprise Kayla, who is actually executing them. If part of an algorithm is that it produces the capacity to learn, all bets are off about what it can eventually end up doing. Much the same is true of dynamical, noncomputational properties, sensorimotor capacities: who knows in advance where they may lead?)

      In a similar sense, if you have a set of axioms in mathematics, a set of formal assumptions that you assume to be true, all the true theorems that follow logically (i.e., formally) from the symbol-manipulation rules are already true: We just don’t know it until and unless we discover them, and a way to prove they are true. (This all heads toward the puzzle of “free will,” which we will eventually discuss – not as a metaphysical question about “determinism,” but about information, uncertainty, and predictability.)
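      Here is a toy illustration of that point (my example, a pared-down version of Hofstadter's MIU string-rewriting system, not anything in Turing's paper): every "theorem" the program prints was already determined the moment the axiom and rules were fixed; the search merely discovers them.

      ```python
      from collections import deque

      AXIOM = "MI"   # pared-down version of Hofstadter's MIU system, for illustration

      def rules(s):
          """Symbol-manipulation rules: each yields a new 'theorem' from an old one."""
          if s.endswith("I"):
              yield s + "U"                    # rule 1: xI -> xIU
          yield s + s[1:]                      # rule 2: Mx -> Mxx
          i = s.find("III")
          if i != -1:
              yield s[:i] + "U" + s[i + 3:]    # rule 3: III -> U

      # Every string printed below was already derivable the moment the axiom and
      # rules were fixed; breadth-first search just discovers them -- which is why
      # even the programmer can be surprised by what comes out.
      seen, queue = {AXIOM}, deque([AXIOM])
      for _ in range(8):
          s = queue.popleft()
          for t in rules(s):
              if t not in seen and len(t) <= 10:
                  seen.add(t)
                  queue.append(t)
                  print(t)
      ```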

      Delete
    4. I was also quite interested in Lady Lovelace’s objection, as I was having similar thoughts myself. (In general I thought Turing’s counters were pretty convincing, as he seemed to address every argument I was formulating as I thought of them). I especially think the argument Turing makes for a learning mechanism for a machine, akin to that of a child’s mind, works in tandem with his argument for the theological objection. If humans can procreate and “make” children and that is not considered “usurping His power of creating souls,” as Turing says, then machines that think and “have souls” may be created in the same way. And his proposal that a learning mechanism for a machine would imitate the learning process of a child’s mind reinforces his argument against the theological objection very nicely. I was pleasantly surprised that even though Turing didn’t take the theological objection particularly seriously, he still provided an argument against it consistently throughout.

      Delete
    5. @Stevan, I see your point about unsupervised learning. I read this article (https://www.ibm.com/cloud/learn/unsupervised-learning), which cleared some misunderstanding up: AI and ML algorithms do seem capable of this. However, I am still unclear as to why we can fully disregard the programmer. Even if the computer can solve problems that the programmer didn't anticipate or cannot solve themselves, it was still ultimately given the capability to do so by the programmer. Kayla is still given the capacity to learn, regardless of what she ends up learning. I think we get into the murky waters of determinism here, and I struggle to see a situation where a program is not fully deterministic. Would love to discuss this more!

      Delete
    6. Sophie, where do you think you got your capacity to learn? It was coded in your genes (a genetic algorithm, evolved through natural selection). The “programmer” was natural selection: the “Blind Watchmaker.” What difference does that make, if the outcome is TT capacity? Cogsci wants to reverse-engineer the mechanism, not its biological origins. (That would be a task for evolutionary and developmental biology.)

      Delete
    7. Hello Sophie, I found your comment helped me make a distinction. We humans seem to have certain capacities that computers don't have. However, we did not create these capacities ourselves; they were given to us by nature at birth. So, why should a machine into which such capacities were programmed be any different from humans? Importantly, if a machine can do what we can do, why care about the differences in how they were programmed?

      Delete
  8. Turing ends his paper by wondering about two possible sets of approaches that might be tried to help 'teach' machines so that they can sufficiently advance as to pass the test he has set out, or even to compete in all sorts of intellectual enterprises:

    "Many people think that a very abstract activity, like the playing of chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child. Things would be pointed out and named, etc."

    Turing has certainly "set the agenda" as you say insofar as both of these pathways have had significant and successful (to a point) work on them in the past decades. Given that computer chess, computer vision (whether with physical "sense organs" like cameras or simply with uploaded images) and natural language processing often operate these days with neural networks and other machine learning (to use the modern narrow definition) approaches, Turing's note that "An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside, although he may still be able to some extent to predict his pupil's behavior" is quite interesting in light of the "black box" effect of much current machine learning, where we indeed have very little idea what is really "going on" for the learning to be able to proceed.

    ReplyDelete
    Replies
    1. I also found myself wondering how Turing and his constructed critics might respond to computational text (like GPT-3) and image (like DALL-E mini) generation software. "Write me a sonnet on the subject of the Forth Bridge" might now be a fruitful directive. While our recent popular culture has reacted much to both AI chess players in high-level tournaments and to image generators on the internet, we seem to have a different sort of reaction to the apparent artistic capability of the latter that touches on the difference in the two approaches outlined by Turing, and the sorts of objections brought forth relating to apparent disabilities of the machine. This all ultimately stems from human-derived text-image associations but nonetheless creates new images that seem to go against the variant of Lady Lovelace's objection that a machine could "never do anything really new."

      Delete
    2. Excellent observations. And they’re just about today’s ordinary AI accomplishments, via computation, which are still far from T2, let alone the computational/dynamic potential of a hybrid symbolic/sensorimotor T3 robot (Kayla).

      Delete
    3. Zahur’s comment was very interesting to me and probes the question: what is considered “new”? One could argue that a simple computation randomly generating a string of digits counts as new and original work by a machine, although it may not be particularly interesting work to a human interpreter.

      Turing himself acknowledges this point when saying:

      “This may be parried for a moment with the saw, "There is nothing new under the sun." Who can be certain that "original work" that he has done was not simply the growth of the seed planted in him by teaching, or the effect of following well-known general principles.”

      Expanding on this further: innovative work created by humans in art, writing, architecture, engineering, etc., is rarely completely original and is often another variation of something existing, or a combination of multiple existing things. Things that are considered "new" or "innovative" by us are often just concepts in rare combinations. Even the most influential artists and writers in history have had many influences and inspirations they pulled from in order to create their work. Technology companies are often pointed to as world-changing innovations, but they have clear roots in already existing operations. There is no doubt that Amazon has changed the world by introducing an accessible online marketplace for anything one could imagine, but can it really be considered “new” or “innovative” when it is simply combining the concept of buying and selling goods with being online?

      If the most “innovative” human feats are simply combinations of existing concepts, there is no reason a machine would not be able to do the same.

      Delete
  9. As far as I understand, in the argument from consciousness (4), the goal of reverse-engineering is to produce a machine with the same performance capabilities as a human. Not necessarily a machine that is the same as a human (i.e., the goal is for the machine to be indistinguishable from a human, not identical to one). So for Turing, whether or not a machine has consciousness is irrelevant (or, at any rate, not testable); we care about whether this supposed difference (having consciousness or not) produces a noticeable change in performance.

    This made me think: would it be okay to kick a T3? My initial response is that I would not be able to justify kicking her. I say this because I would not kick another human, and since she is indistinguishable from us in sensorimotor performance capacity I have no way to differentiate. Further, if she is indistinguishable, then I should apply the same conventions to her as to anyone else. On the other hand, this is not just how we "reason" about T3: the thought of kicking her feels just as unthinkable as with any other person. Turing's point about "intelligence" leads to a moral point about empathy…

    ReplyDelete
    Replies
    1. Thoughtful observations, not just about ethics and empathy (which will come up again in the last week, 11) but about why the “hard problem” (of explaining how and why organisms FEEL) is so hard (Week 10).

      Feeling is the only thing that matters in the universe. Yet cogsci, if it succeeds in solving the “easy problem” (of explaining how organisms can DO everything they can do) -- as there is no reason to doubt cogsci should be able to succeed in explaining -- also seems to show that feeling is causally superfluous. Not only can T-tests not observe it, but it does not seem to be causally necessary for all the things organisms can do. (Not a bad time to start thinking about this too, since Weeks 4 to 9 will now only be about the easy problem.)

      Delete

    2. I also wondered if we should treat machines as humans if they are indistinguishable from us. After all, this is how we tell other humans are humans. But it also got me thinking about how we understand other species are sentient just like us. For example, we won't kick a cat, even though we know it's not human, because we know it has feelings.

      Delete
    3. The TT would apply to all species, but our human mind-reading (i.e., Turing-Testing) is much more perceptive about what humans can or can’t do. Would we know whether a T3 pig was distinguishable from a real pig?

      [There are people who kick cats – and dogs and pigs and chickens and cows and horses and fish and mice. Isn’t it odd that we wonder about whether we would kick a robot, like Kayla, that does not exist, and yet all those other thinking, feeling creatures are being massacred by and for us every day, with most of us not giving it a second thought? Yes, some of us do it to Ukrainians too. But most of us wouldn’t. Yet almost all of us do it to other species, or pay others to.]

      Delete
  10. 2a. Reading “Computing machinery and intelligence” was interesting because it was the first time I read about the different objections to the opinion that machines can think. One objection that I found particularly interesting to think about is the “Argument from extrasensory perception”. It is said that it is becoming harder to refute the idea of telepathy, for example, since the statistical evidence is striking. Because telepathy and extrasensory perception remain mysterious phenomena that are not studied enough, we don't really know if every human is capable of them, or how predictable they are. As is said, it is possible that “psychokinesis might cause the machine to guess right more often than would be expected on a probability calculation” during the imitation game, but these phenomena are hard to predict since we are still far from understanding how extrasensory phenomena would work. Therefore, one question I am asking myself is whether saying that machines are not capable of extrasensory phenomena, as humans supposedly are, is a relevant argument against the idea that machines can think. Since we don’t even know if every human brain is capable of extrasensory perception in the way every human brain is capable of intelligence, can we even consider that extrasensory perception counts as a category of intelligence?

    ReplyDelete
    Replies
    1. This is a fascinating argument; it would be hard to create analogues of human ability if we don't understand exactly what kinds of ability we have. This seems to be one of the few 'easy' problems that Turing Machines still cannot account for, practically as well as theoretically. However, one objection I can think of for that particular example is that the burden of proof here would be on the proponent of ESP and not on the cognitivist. After all, ESP seems to be unverifiable if it is not in some way reducible to perceptual or cognitive function.

      Delete
    2. I think this is not just “Stevan Says”: You can save yourself a lot of time and trouble if you assume that telepathy and telekinesis and clairvoyance do not exist. Not only is there no credible evidence for the “paranormal,” but the “hard problem” (of explaining feeling, which really does exist, as Descartes pointed out) is already hard enough without our inventing things that don’t exist.

      (Please always read the comments and replies in a reading’s thread before posting: telepathy has already been discussed.)

      Delete
    3. Jacob: In a way, I think your point relates to what Turing says in his response to the mathematical objection (3). In this response, Turing argues: “In short, then, there might be men cleverer than any given machine, but then again there might be other machines cleverer again, and so on.” While computers may not currently be able to model all capacities of human behavior (including the behavior that we ourselves don’t know about), there will still be computers in the future that will be able to model this behavior (once we figure things out ourselves as well, and then this cycle will continue again). So I think in this response Turing almost refutes the notion that Turing Machines cannot account for “unknown” human ability.

      Delete
    4. But this is not just about the power of computation (the Strong C-TT). Kayla is not just a computer.

      Delete
    5. Thank you for all your answers, they helped me nuance the argument about ESP.

      Delete
  11. This reading caused me to raise a few questions, along with the general question of whether a machine can think. My main confusion was when Turing described the conditions on the machine and included that the engineers who created it may be unable to describe the manner in which it works, unless the explanation is arrived at experimentally. Why is this a requirement of a machine? To my (limited) understanding, wouldn’t understanding the mechanisms behind the functioning of a machine be required to create it, and wouldn’t this involve solving the easy problem such that we can get closer to solving the hard problem? The only answer I can think of is in relation to the concept of a learning machine, as we cannot predict or fully understand how a child will develop, even if we control the circumstances surrounding their development. However, I’m unsure if these points are related to one another.

    ReplyDelete
    Replies
    1. Good point, but see above, about theorem-proving algorithms that go on to prove theorems that the designer of the algorithms did not know were true, nor how to prove them. This is especially true for algorithms that learn.

      Delete
  12. I was particularly intrigued by the “Heads in the Sand” objection, which the author mentions is not substantial enough to require refutation. Granted, it might be simplistic to think that Man is superior and cannot fathom another system commanding him. The only truth I see in this argument is that if humans create machines that can think and that can ultimately one-up us, then it dramatically forces us to question our own intelligence, as something that came out of our own intelligence could eventually overtake us. If the idea behind computationalism is that computation can do the same things humans can do, then can computers really build on that foundation to form a consciousness and thoughts that could ultimately trump human thought? To me, that idea is scary. Consolation is unnecessary and too easy of a solution considering this worry is of our own doing; contrition would be more fitting.

    ReplyDelete
    Replies
    1. I was also taken by the "Heads in the Sand" objection. It seems like such a self-righteous and naive stance to take. Reading about it from a third-person perspective is almost comical; however, I too find myself using the same excuse when pondering this question. We are quick to write off the possibility that machines can be like us because, as you have mentioned, it is quite terrifying. As a species, we tend to think that we are vastly distinguishable from all other organisms as a way to feed our own egos and pacify such daunting thoughts. The "argument from various disabilities" evokes a similar train of thought. Perhaps the subjective thoughts, feelings, and emotions that comprise our human experience are not purely specific to us, and it is only a matter of time until these experiences can be coded and executed to produce the same experiences in machines.

      Delete
  13. Upon reflecting on the essential question behind the Turing test as an assessment of intelligence, I wondered: will this test also be able to capture how we feel? That is, will it actually generate what it feels like to be a system that understands any form of human experience? Nagel wrote an essay where he asks, hypothetically, “What is it like to be a bat?”, and proposes that the closest we can get to understanding this phenomenon would stem from a human’s perspective, because there is no way of knowing it from the bat’s perspective. There is a fundamental, subjective aspect pertaining to what it is truly like for an organism to be that organism. Coming back to our topic of Computing Machinery and Intelligence, the mind/body problem remains an unanswered question within our field of study, and attempting to tackle the question of whether a Turing machine can generate any form of subjective experience is improbable from a computationalist perspective. How is one supposed to use objective measures to tackle something that is inherently subjective? My comment might be more of a philosophical take on this question, but I felt it was worth sharing!

    ReplyDelete
    Replies
    1. Please read previous comments and replies.

      Delete
    2. Ah, I hadn’t realised that when Turing is addressing the argument of consciousness, he is referring to the other-minds problem rather than the hard problem. Thank you for the clarification in your earlier reply!

      Delete
  14. In this reading Turing discussed dividing the problem of building a machine that could imitate how the human mind works into two parts: the child program and the education process. He also mentions a third component: “Other experience, not to be described as education, to which it has been subjected”. This assumes there is a distinction between the “education process” and “other experience”, but where does the boundary lie? My first assumption was that “other experience” would be exploration, but I assume there would be some limitations to that. Would the child program be able to explore its environment in some fashion, the way children are encouraged to explore to develop and strengthen their curiosity and critical thinking?

    ReplyDelete
    Replies
    1. In this course the two kinds of learning will be called (1) induction and (2) instruction (Week 6). Induction includes (1a) unsupervised learning (exposure without feedback) and (1b) supervised learning (trial and error, with corrective feedback). (2) Instruction is verbal instruction.

      Since T2 is purely verbal, unsupervised learning is mere passive exposure to words, and supervised learning is verbal feedback to words.

      It is in T3 that sensorimotor induction (category learning) plays the important role of grounding word meaning (through learning the features that distinguish the members from the nonmembers of categories: categorization is “doing the correct thing with the correct kind of thing”: eating edible mushrooms; avoiding poisonous toadstools).

      (Since symbol grounding must precede verbal learning, this is another reason why T3 is likely to be the right TT, and why computationalism is wrong.)
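      Here is a toy sketch of the difference (my illustration; the single "feature" and all the numbers are invented): unsupervised induction finds the two clumps in mushroom-feature data from mere exposure, but nothing in it says which clump is safe to eat; supervised induction learns where to draw the line from corrective feedback after each error.

      ```python
      import random

      # Hypothetical one-feature "mushrooms": cap darkness in [0, 1]; darker = poisonous.
      samples = [(random.uniform(0.0, 0.4), "edible") for _ in range(50)] + \
                [(random.uniform(0.6, 1.0), "poisonous") for _ in range(50)]

      # (1a) Unsupervised induction: exposure without feedback. Two-mean clustering
      # finds the two clumps, but nothing says which clump is the edible one.
      centers = [0.1, 0.9]
      for _ in range(20):
          groups = [[], []]
          for x, _label in samples:
              groups[0 if abs(x - centers[0]) <= abs(x - centers[1]) else 1].append(x)
          centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]

      # (1b) Supervised induction: trial and error with corrective feedback.
      # "Do the thing" (eat if the cap is light enough); get corrected when wrong.
      threshold = 0.0
      for _epoch in range(5):
          for x, label in samples:
              guess = "edible" if x <= threshold else "poisonous"
              if guess != label:                  # corrective feedback after an error
                  threshold += 0.05 if label == "edible" else -0.05

      print(f"clumps near {centers[0]:.2f} and {centers[1]:.2f}; "
            f"learned to eat only below {threshold:.2f}")
      ```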

      Delete
    2. There was an earlier thread where another student named Josie mentioned that perhaps features like eyes and legs are needed in order for the robot to be at the right level of TT. However, after reading this thread and many of the others, I’m stuck on my understanding of T3 and how it accomplishes category learning… Though Turing stressed that appearance should not matter for his Turing Test, I’d argue that T3 would be the right level of TT only because it can perform both symbolic and non-symbolic interactions. These non-symbolic interactions deal with sensorimotor aspects of the world, rather than just the symbol-dependent discourse that T2 can accomplish. Because these non-symbolic interactions deal greatly with the sensorimotor parts of the world, I’m forced to return to Josie’s argument… Wouldn’t it be essential for a robot to have something that can behave/mimic like a human’s eyes or legs? If symbolic and non-symbolic interactions are what make T3 an ideal candidate for the Turing Test, then it seems the only way to get there is with the help of ‘eyes’ or ‘legs’…

      Delete
    3. Anaïs, yes T3 needs sensorimotor capacities (e.g. eyes, feet). T3 Robots have those; computers don’t. As to the role sensorimotor capacity (and neural nets) could play in grounding symbols (words)… stay tuned (Week 6).

      (No secrets: it’s to learn to connect actions (especially words) to the things in the world they apply to or refer to; most words are the names of categories. To categorize is to do the correct thing with the correct kind of thing (eat it, run from it, name it, and eventually, Week 8/9, describe it) – by learning to detect and abstract the features that distinguish the members of one category from the members of another.)

      Delete
  15. It's interesting to see this adaptation of a Turing machine. Whereas we previously saw it as a simple machine with just a handful of elements, here certain restrictions are placed on it, and it is better known as a digital computer. As the article progresses, this idea of a digital computer and its abilities grows in complexity. Additionally, the adaptation of the question "Can machines think?" to instead consider a machine's capability of succeeding at the imitation game is, at first, a difficult parallel to swallow. However, this article provides a compelling approach to adopting this point of view. This helps me better understand how this problem should be approached/defined. This has been something I struggled with, as it always confused me how passing the imitation game would indicate thinking; but reading this article, I better grasp what it means to pass the imitation game and what it means to do something that is like thinking.

    This also helps ground the question "Can machines think?" in a way that is digestible to anyone familiar with a digital computer.

    ReplyDelete
    Replies
    1. The “imitation game” is the TT. See skyreading 2b.

      Delete
  16. In Turing's paper, "Computing Machinery and Intelligence", I find the "Imitation Game" quite interesting in that it raises the problem of other minds. Even if one cannot distinguish a robot from a human, and we assume the robot in a given experiment to be human, we still could never tell whether or not it can feel or think for itself. I take issue with this game in general: regardless of whether the machine can trick us into thinking it is human more often than not, there would still be a portion of interrogators who would notice the small inconsistencies in its behaviour. In that case, the experiment wouldn't work, because it requires being indistinguishable 100% of the time. This is a unique experiment in that it cannot rely on statistical significance; the machine must be actually indistinguishable to all humans. However, how could a machine ever be indistinguishable to people when there is such variability in personality? Computers can randomize their behaviour according to a pre-programmed list of personalities, but they would always be simulating these behaviours rather than subjectively experiencing them. I believe that one day computers could be indistinguishable from humans, once we've added and simulated all of the possible layers of human variability, but I don't think they would ever be conscious unless we figure out what consciousness is and how it is represented in the brain.

    ReplyDelete
    Replies
    1. I think the point you raise about computers not being able to "subjectively experience" personalities is exactly what Turing addresses in the argument of consciousness. There is no way for me to know whether you, a man, subjectively experience any of the things that I do. You could say I'm being polite :) The only thing that would matter here to answer the question is that the machine acts in a manner indistinguishable from a human, not so much whether they are conscious. I think this will be interesting when we look into defining consciousness and what it means to be conscious.

      Delete
    2. The “imitation game” is not a trick or a game; it is the TT. See skyreading 2b.

      If Kayla passes the TT (T3, or, by email, T2) then the TT is as good a test of the other-minds problem as the one we use with one another every day. No statistics needed. And individual differences are irrelevant, just as they are in reverse-engineering hearts.

      T4 tests even more than T3, but is it necessary?

      T-testing cannot penetrate the other-minds barrier (except in the special case of T2, if passed by just a computer: why?)

      Robots are not computers.

      What is the difference between an object and the computer simulation of an object?

      T-testing addresses only the easy problem of cogsci (what is that?). The hard problem (what is that?) is hard because solving the easy problem seems to show that feeling, although it is real, is superfluous.

      The assumption (or hope) is that once cogsci has reverse-engineered how and why thinking organisms can DO what they can do, consciousness (FEELING) comes with the territory, even if we cannot explain how or why.

      Delete
    3. The difference between an object and its computer simulation is that the object is grounded and the computer simulation is ungrounded. We cannot ground the meaning of symbols in other symbols.

      Delete
    4. The difference is much bigger than that. What’s the difference between a real ice cube and a computer simulation of an ice cube? This has nothing to do with cognition, computationalism, or the grounding of the meanings of symbols. What is it?

      Delete
    5. The difference is in the experience felt or elicited by the real object. A simulation cannot feel or be subject to an experience of any kind. An ice cube, although it can't feel cold, is still inherently cold. The simulation isn't.

      Delete
    6. The difference is much bigger than that. What’s the difference between a real ice cube and a computer simulation of an ice cube? This has nothing to do with cognition, computationalism, or the grounding of the meanings of symbols. What is it?

      Still too cognitive. Forget about cognition when you define simulation. It’s not about whether an ice cube (simulated or real) can “feel”!

      A simulated ice cube cannot melt either: Why not?

      BTW, an ice cube, melting, can be simulated too: no real ice cube, no real melting. Just symbols being manipulated.

      That simulation can be very accurate (because of the Strong Church-Turing Thesis: computation can model or simulate just about anything, as accurately as you like). It can be so exact that it can be piped to a Virtual Reality (VR) device (which is not a computer, and includes goggles and gloves you wear) that makes it look and feel as if you are seeing and holding an ice cube, melting. There is still no ice cube melting: just symbols being manipulated. (The VR device and the gloves and goggles receiving the signals from the computer are real too. Take them off and you see there is no ice cube there at all, just wires from the computer to the VR device’s gloves and goggles.)

      (So, everyone, please forget sci-fi ideas, if you have them, about the possibility that you are just simulations in the Matrix: This is cog-sci, not sci-fi!)
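
      To make "just symbols being manipulated" concrete, here is a minimal sketch (hypothetical, in Python; the "physics" and all parameters are invented for illustration, not a real thermodynamic model) of a simulated melting ice cube. The computer only updates numerals according to rules; nothing in it is cold, wet, or melting.

      # A toy "melting ice cube": pure symbol manipulation, no real ice.
      def simulate_melting(mass_g=10.0, temp_c=25.0, dt_s=1.0, melt_rate=0.01):
          # Invented rule: mass shrinks in proportion to ambient temperature.
          t = 0.0
          while mass_g > 0:
              mass_g = max(0.0, mass_g - melt_rate * temp_c * dt_s)
              t += dt_s
          return t  # "seconds" until the symbolic cube is "gone"

      print(simulate_melting())  # prints 40.0; nothing anywhere has melted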

      Delete
    7. I guess the difference would be that a simulation cannot act, or do whatever it is simulating, in the real world? The simulation is just symbol manipulation, as you said.

      Delete
    8. There is only one world, Alex. The Matrix is sci-fi.

      Delete
  17. At the end of his paper, Turing ponders how a machine could be tested so as to compete on the same level as a man in various intellectual fields. Throughout the objections that were brought up and refuted in section 6, I wondered the same thing. I understand the response to the argument from consciousness, that there would be no way to “be the machine” and confirm whether it feels, but in the case of the argument from disabilities I am still unsure how far testing could go. Who decides the standard of what it is acceptable to be lacking in and what is not, if we can't even be sure of our own abilities (from human to human)? How would it be decided at what point the machine would “deliberately introduce mistakes in a manner calculated to confuse the interrogator”? Essentially I ask this because the point is not to create a perfect machine, but rather one that would succeed and fail in intellectual fields the same way a man would.

    ReplyDelete
    Replies
    1. I think you’re understanding the TT. We do it every day, and Kayla passes, and gender has nothing to do with it. (Turing just used the he/she game to set the stage for the TT.) Neither does “perfection.” Just indistinguishable, ordinary capacities (not an Einstein’s).

      Delete
  18. In section six, where Turing addressed the contrary views to his own, I was challenged to confront many of the "Grandma and Grandpa assumptions" I held. The Lady Lovelace objection was one I had previously thought quite strong. However, upon further contemplation, I began to question human originality and its origins. We, like a computer, have a type of hard drive from which our software develops through learning. As I see it, it is through our differences in experiences and interactions, and the gradual changes to our base "programming," that our individual originality develops. So, taking Turing's suggestion of a learning machine, originality of a sort would be plausible in a machine.

    ReplyDelete
    Replies
    1. And don’t forget that computationalism is not the only option. There are other possibilities, such as sensorimotor, physiological and biochemical processes. All of these have evolved initial states and structures.

      Delete
  19. This reading, especially the “Argument from Consciousness” section, made me consider what Turing really meant by “thinking,” since thinking seems to require consciousness; so, in order to conclude that a machine can “think,” I believe it would have to be conscious. That is, even if the machine manages to deceive the interrogator, would that mean it can think? All the machine is doing is some form of computation, which involves symbol manipulation. To be able to say that the machine is “conscious” would probably require it to be “aware” of the fact that it has deceived the interrogator. Only then could we conclude that the machine is intelligent.

    ReplyDelete
    Replies
    1. I think Turing is primarily concerned with one unique ability of the mind, namely the ability to be intelligent. And by thinking I think he is referring to logical thinking, the kind of thinking that is logically guided. As for the rest of the mind's properties, such as random ideas and thoughts, feeling, or self-awareness, I think he is less concerned. I believe that for Turing, the intellectual property of the mind IS its most prominent component.

      Delete
    2. I think the intellectual property of the mind was particularly appealing to Turing. But I don't mean to say Turing was unconcerned that such a machine would be cold and calculating, as some might worry. Being intellectual means being reasonable, and being reasonable is desirable. So it is desirable for a machine to think intelligently, even without self-awareness.

      Delete
    3. The point you raise, Alara, about awareness reminds me of an earlier comment chain in 2a where Professor Harnad wrote about how a T3-passing robot can surprise the "programmer" and itself. I'm convinced that a T3-passing robot could surprise its "programmer", but curious about how a T3-passing robot could surprise itself... Since surprise occurs upon encountering something unexpected, does this mean that a T3-passing robot must be (or pretend to be) ignorant of certain abilities that it has in order to pass T3? If the robot is ignorant in this way, does this enhance its ability to pass T3 or increase its susceptibility to exposing itself as non-human because the robot doesn't truly understand the differences between the way it operates and the way a human does? Perhaps I'm going down a bit of a rabbit hole here, but I'd love to hear others' thoughts about the question I've raised.

      Delete
    4. Alara, the TT is not a trick; it’s a test. (Test of what?)
      Yes, we are conscious (the Cogito) and that has to be explained too. But if you knew how or why we have to be conscious to be able to pass the TT, you would have solved the hard problem (What is that?)

      Delete
    5. Yumeng, what is the TT testing? What is the easy problem? And what does “intelligent” mean? What can Kayla DO to pass (T2), via texting alone? And what can she DO via T3, out in the world?

      Delete
    6. Polly: Ask Kayla how she remembered who her 3rd grade teacher was. Does she know how? If she’s not surprised, it’s because she’s used to her brain delivering everything on a platter. But ask her if she was surprised that she did not know how she did it…

      Delete
    7. The easy problem is how the brain can do what it can do: in this case, being intelligent. The TT tests the easy problem because it is testing intellectual ability. By intelligent I mean computational ability, that is, manipulation of symbols based on rules. So, to pass T2 she has to simulate conversational reactions intelligently; for T3, she has to simulate all real-life events intelligently. Though we still have to judge whether they are intelligent or not.

      Delete
    8. Intelligence is the capacity to think (cognition).

      What is simulation?

      What is computation?

      Is cognition just computation?

      What is the TT? T2? T3?

      Delete
    9. Simulation is just squiggles and squoggles that your brain can interpret, but it's never the real thing. Computation is symbol manipulation. Cognition includes computation (e.g., mentally doing quick multiplication or division), but it's wrong to say cognition is just computation. The TT is an indirect way of inferring understanding, in the same way we infer that other human beings understand.
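
      Since "computation is symbol manipulation" keeps coming up, here is a minimal sketch (hypothetical, in Python; the rule table, state name and symbols are invented for illustration) of what that means: rules applied to the shapes of symbols, with no meanings anywhere in the system.

      # A tiny Turing-machine-style rule table: read a symbol, write a symbol, move.
      # The machine follows the shapes of the symbols, not their meanings.
      rules = {('flip', '0'): ('flip', '1', 1),   # see '0': write '1', move right
               ('flip', '1'): ('flip', '0', 1)}   # see '1': write '0', move right

      def run(tape, state='flip', head=0):
          tape = list(tape)
          while head < len(tape):
              state, tape[head], move = rules[(state, tape[head])]
              head += move
          return ''.join(tape)

      print(run('0110'))  # -> '1001': squiggles in, squiggles out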

      Delete
  20. When Turing talks about "universality," he is referring to what we in class called the "Weak CT thesis": the conjecture that a Turing machine can mimic the behavior of any other discrete-state machine. My question, then, is whether Turing was aware of the stronger proposition that a Turing machine can mimic the behavior of any physical system, i.e., what we called the Strong CT thesis. According to Wikipedia (not the most reliable source, I know), the first person to entertain this idea was Alan Turing's friend and student Robin Gandy, so it is reasonable to assume the two of them must have discussed it at some point. Be that as it may, do you think Turing would have agreed with it, and if so, how would it have impacted his work? (Say, would it have led him to defend computationalism, as some of my classmates have speculated?)
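
    As a concrete gloss on that "universality": a discrete-state machine is fully specified by its state-transition table, so a single general program can mimic any of them. A minimal sketch (hypothetical states and inputs, in Python, loosely in the spirit of Turing's three-position wheel example):

    # Any discrete-state machine is just a transition table;
    # one general program can mimic them all.
    transitions = {('q1', 'i0'): 'q2', ('q2', 'i0'): 'q3', ('q3', 'i0'): 'q1',
                   ('q1', 'i1'): 'q1', ('q2', 'i1'): 'q2', ('q3', 'i1'): 'q3'}

    def mimic(state, inputs):
        for i in inputs:                     # step through the input sequence
            state = transitions[(state, i)]  # consult the table, change state
        return state

    print(mimic('q1', ['i0', 'i0', 'i1', 'i0']))  # -> 'q1'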

    ReplyDelete
    Replies
    1. The Strong CTT is presumably called the Strong Church-Turing Thesis because Church and Turing knew it. But whether or not they believed it (“Stevan Says” they did), it has nothing to do with computationalism, reverse-engineering, Turing-testing or cognition. Nor with whether Church and Turing were computationalists. (Explain)

      Delete
  21. Reading #8, the argument from informality of behaviour, a few times got me thinking: if our body is fundamentally regulated by predictable physiological cycles, can we really say that we are not machines? To go further with this, the question of what a machine is should be answered first. I agree with what Turing has said: it is impossible to have rules predicting everything, all future events and scenarios. However, I am curious how the physiological rhythms and cycles we have discovered regulating our bodies (which seem to count as "laws of behaviour" in Turing's terms) would be categorized in his substitution of "laws of behaviour which regulate his life" for "laws of conduct by which he regulates his life."

    ReplyDelete
    Replies
    1. Every causal system from atoms to amoeba to asteroids is a “machine.” Cogsci is just trying to reverse-engineer what kind of machines WE (and other organisms) are: how and why they can do what they can do. The answer would be: the machines with the capacity to pass TT (T2, T3, or T4) for each organism.

      (The physiological and regulatory cycles so far discovered [homeostasis] do not explain how or why organisms can pass TT; they just explain some vegetative capacities, even if some physiologists, neurologists and philosophers get a little carried away with them…)

      Delete
    2. Okay, I see. Just from going over the notes, I now understand that the focus should be on reverse-engineering the capacity to pass the TT at any level; anything else is just vegetative.

      Delete
  22. To my understanding from the reading and previous comments, the Turing Test asks whether a computer is able to replicate the processes and interactions that a human would have at different levels of interaction, with T2 being text communication, T3 being physical action, and so on. Additionally, the computer does not need to replicate the processes of the human mind; it just needs to be able to reproduce their results. Is this understanding correct? Furthermore, is it possible for us to create a machine that is truly capable of doing whatever humans can do if we do not know the exact limits of our capabilities? Finally, in section 7, Turing writes that a program replicating a child's mind would contain "rather little mechanism, and lots of blank sheets." However, wouldn't a program built to learn require a fair number of pre-written guidelines, at least to help it filter out important information so that it could learn?

    ReplyDelete
    Replies
    1. TT tests an organism’s capacity to do what it can do by reverse-engineering and testing a causal model that can do it.

      T3 and higher is not and cannot be just a computer, computing.

      (What is computation? And some things are exceptional about T2: what things?)

      Would we need to know the limits of our capacities to test Kayla? Don’t we know well enough the limits of human capacities? (Einstein is for tomorrow’s cogsci assignment, after the easy problem has been solved.)

      Bottom-up learning from childhood does not have “blank sheets” but many more tomorrows of interaction with the world. And the start-up slate and chalk include the capacity to learn.

      Delete
    2. T2 is different from the higher levels of the Turing test because it is the only level that does not require the candidate to manipulate things physically. As such, it does not need to understand the physical referents of the symbols it is manipulating. Also, I ask about limits not because I think the goal of cognitive science is to recreate Einstein, but because it wouldn't be possible to create a T3 robot indistinguishable from a human without understanding exactly how we weigh information while making decisions, or even just how we pick out visual patterns and how much noise is required to 'cover up' a pattern.

      Delete
    3. T2 requires whatever it takes to pass T2 -- which might well require manipulating things physically (grounding). So if a candidate cannot do that, it can't pass T2.

      Delete
  23. I really enjoyed this article, as it proposed a very tangible and insightful approach to the question "can machines think?" Indeed, that seems like an overwhelmingly vague question, and hard to answer empirically. Turing simplified the problem to what he defines as the 'imitation game' by asking in what way the presence of a machine will affect this game. Turing puts the emphasis on the idea that the aim shouldn't be to build a complex, data-storing machine that has the capacities of an adult human mind. Instead, it would be more interesting to aim for a machine that is able to select what information is worth learning and retaining and what isn't. Once that basis for learning is acquired, the machine can then learn what is useful to it, as in Turing's comparison with a child-like mind. Indeed, we tend to forget that this capacity to select and categorize is the gateway to learning more skills.
    Then, he states a few opinions that would argue against his own. He mentions the argument from extrasensory perception. I am not sure I have quite understood this concept. Is it essentially that certain phenomena like telepathy go against the idea that thought can be done by machines?

    ReplyDelete
    Replies
    1. Please read the other threads before posting: Telepathy and ESP have been discussed.

      Delete
  24. Concerning the topic “Can machines think?” Alan Turing proposes the idea of the imitation game, where a machine tries to convince an interrogator that it is a human being. He imagined that in 50 years’ time, “an average interrogator will not have more than 70 percent chance of making the right identification after 5 minutes of questioning.” We have not attained this point yet, even though some very strong chatbots, such as LaMDA, might be considered by some to pass the Turing Test. Nonetheless, we do not know whether that route is a safe one for AI. Indeed, the test is fundamentally based on deception: the machine has to make the interrogator believe that it is a human. Do we really want to develop machines that are programmed to lie and deceive? Is that really what we consider intelligence? This could be seen as a major red flag, as an AI able to pass the Turing Test carries the danger of knowing how to deceive people.
    In addition, I think some aspects of AI should not be disregarded just because they do not imitate human intelligence. If a machine can compute 30234 * 34895 faster than a human being, that should be a good thing. We are happy that our calculators give us this answer in a matter of seconds. The AI should not be taught to delay its answer to fit into categorically human intelligence. We should build on the fact that machines can have better capabilities than us in certain domains.

    ReplyDelete
    Replies
    1. Please read the other threads. The TT is not a trick, or deception. What is reverse-engineering?

      The TT is not the “Loebner Prize” (google that). Turing was just predicting how quickly (slowly) progress would be made in Turing-testing.

      The TT is about reverse-engineering Kayla’s human capacities, not superhuman ones. Computing can also produce superhuman capacities, but that’s not what cogsci or the TT are about.

      Delete
  25. It seems to me that Turing is proposing a behaviour-based method of approaching the fiddly question “Can machines think?” He goes on to address conjectures that invoke words such as consciousness, souls, and feelings, which are more abstruse than the ones in the originally proposed question! It appears that those making the opposing arguments brought their own definitions of the words “machine” and “think,” or did not accept the “imitation game” as a placeholder. This is not completely unjustified... Turing does indeed tackle a question about “thought” (indistinct, and evidently attributed human traits) with a solution that merely tests behavioural performance capacity. I think more intriguing arguments would arise if we replaced “think” with something more befitting and distinct. It seems necessary to change the question to “Can machines solve the ‘easy problem’?”

    ReplyDelete
    Replies
    1. Turing’s method is: “Thinking is as thinking does (or can do). Reverse-engineer that and test it with the TT.”

      Cogsci is just trying to do that. That is the easy problem. Organisms are machines, and we are organisms. So it’s tautological that machines (us) have to reverse-engineer us.

      But although everything (hence every machine) can be modelled or simulated by a computer (Strong CTT), that does not mean everything is a computer. Explain.

      Delete
  26. The 'learning machine' that Turing proposes at the end of his paper is largely realized by neural nets in our times, with high precision on certain prediction tasks, sometimes even higher than humans'. In contrast, symbolic modelling ('reverse-engineering our thinking faculty'), though it investigates how the faculties of human thought work, is largely irrelevant to the project of artificial intelligence now, as far as I know. Are they truly two conflicting approaches? If the former can outperform the latter, is the latter still interesting in terms of artificial intelligence, given that mechanism is not important in a 'thinking machine'?

    ReplyDelete
    Replies
    1. Neural nets today are computational algorithms that can learn (some things, in some ways). They can also be implemented as parallel, distributed sets of units, wired to update their activations and connection strengths according to their inputs, outputs, and feedback from having produced a correct or incorrect output (“backpropagation”). As such, they do not have a symbol grounding problem.

      But they cannot (yet) pass T2, and they can never pass T3 (because they are not robots with sensorimotor capacities).

      When we get to Searle, think about whether Searle’s Periscope (what is that?) would work against T2 passed by a neural net. Against the parallel, distributed implementation it could not; but a purely computational implementation would be (“weakly”) equivalent to that parallel/distributed version. What difference would that make?

      Neither version of a neural net alone could pass T3 (why not?), but it could help (how?).

      Whether computational (symbolic) or analog, a neural net still faces the symbol grounding problem in T2. What is that? and why?
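
      For concreteness, here is a minimal sketch (hypothetical, in Python) of the feedback-driven weight adjustment described above, reduced to a single unit: a perceptron-style delta rule rather than full multi-layer backpropagation, but the principle, nudging connections in the direction that reduces output error, is the same.

      import random

      # One "unit" learning logical OR from feedback on right/wrong outputs.
      def train(samples, epochs=1000, lr=0.1):
          w = [random.uniform(-1, 1), random.uniform(-1, 1)]
          b = 0.0
          for _ in range(epochs):
              for x, target in samples:
                  out = 1.0 if x[0]*w[0] + x[1]*w[1] + b > 0 else 0.0
                  err = target - out                       # feedback signal
                  w = [wi + lr*err*xi for wi, xi in zip(w, x)]
                  b += lr * err                            # nudge toward correct
          return w, b

      samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
      print(train(samples))  # learned weights and bias that compute OR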

      Delete
  27. For this reading, I would like to gain a bit more understanding and clarification. The focus was on the question “Can machines think?” In this case, the imitation game is brought into play: “What will happen when a machine takes the part of A in this game?” However, are there not some instances that would reveal a distinctive limitation, showing that a machine, unlike a human, cannot think?

    Specifically, take the example of when we are asked on the internet to “check off all the fire hydrants to prove we are not a robot.” Is this feature not one that shows that, in a way, “machines cannot think”? In particular, a machine cannot check off all the fire hydrants the way we humans can, since we can think and see which images must be checked off. Thus, if we relate this to the imitation game, surely instances similar to the "fire hydrants" must exist that cause machines to struggle, demonstrating that they, in fact, cannot think?

    ReplyDelete
    Replies
    1. I was also wondering about this, and how mechanisms similar to “click all the pictures with fire hydrants” could be implemented to perform the opposite function: determining that the user is a machine and not a human, sort of like a simplified reverse Turing test. Although after thinking about this a bit more, I realized that it would actually be fairly simple: all that would be needed is to ask a complex math problem or something of that nature; the average person would not be able to solve it, while a computer likely would.

      Delete
    2. Deep learning programs
      can easily learn to beat Captcha robot-detectors,
      but they can’t pass T2.

      And although they’re called robots,
      the robots that Captcha captures
      are not robots
      just computer programs.

      Searle’s Periscope could capture a T2-passer,
      if it’s only a computer program (how?)
      but not if it’s a T3 robot (why not?).

      Kayla can beat them all.

      But the TT is not a game, a trick, imitation or 20-questions.
      What is it?

      Delete
    3. https://medium.com/@ageitgey/how-to-break-a-captcha-system-in-15-minutes-with-machine-learning-dbebb035a710

      Delete
  28. What I found most fascinating about this reading is Turing's own “critique of the new problem” with reference to the imitation game (now more commonly known as the Turing test), because he puts into perspective how reliant we, as humans, are on physical characteristics. In Turing’s time, this section might not have seemed as relevant, since it was difficult to fathom a scenario in which one might encounter a machine without knowing its physical characteristics; but today, with the internet and the creation of sophisticated AI, in the right setting a machine’s inability to give practical demonstrations is not as large a shortcoming. This confronts us with the notion that, whether we like it or not, physical characteristics are perhaps the most important defining features of humanity, especially as we shift further into a world that exists online.

    ReplyDelete
    Replies
    1. I would agree that in the ever-growing online world, even half-decent AI (which would not pass any Turing test) in the right setting would be convincing enough to pass as a human (even for extended periods of time) with a given user. This supports your claim that physical characteristics should not be overlooked as a measure for telling human from machine, and I don't think anyone would disagree with that. It is bold, though, to suggest they are the defining feature of humanity, as one day in our lifetimes there may exist machines that cannot feel and think as humans do but look and move identically.

      Delete
    2. IF only T3 can pass T2 then (some) non-computational capacities are essential (e.g., sensorimotor capacities) for passing TT, and for thinking.

      A thinker needs a body to interact with the world, at least at the beginning of its learning curve, no matter how much of its time it eventually decides to spend online.

      Delete
  29. The question of “free will” has two versions:

    1. Is everything thinkers DO determined by cause and effect (determinism), or do they have a “choice”?

    That is not a cogsci problem. It’s a problem in physics and metaphysics.

    2. Why and how do thinkers feel they have a choice? (That’s part of the hard problem, but a small part; the big part is why and how thinkers feel anything at all. Why aren’t they just intelligent (sic) T3 or T4 or T5 DOERS? (Week 10))

    ReplyDelete
  30. Before reading this article, I had already sat with the thought of “teaching a machine” to get to the same point as a particular human, perhaps by making it live all the experiences that that particular human had lived. Now I see the number of problems with my naive thought, particularly these two: (1) The initial state of a human has not been shown to be a blank slate. So how could we even start programming such a state into a machine if we cannot even find that state yet? (2) The second objection is one that Turing suggests is perhaps surmountable: making the machine live the same experiences while disregarding some small details, such as not having legs or eyes. Turing admits that one could not make a machine go through exactly the same things as a human, since “it will not be provided with legs, so that it could not be asked to go out and fill the coal scuttle”; however, he makes the point that such experiences can be disregarded and would not affect the learning needed for a machine to pass the test. I cannot make myself agree with this; the claim seems too strong. Because we have not been able to solve the easy problem yet, we do not know what is and what is not needed in order to achieve a certain cognitive level. That is, I do not agree with Turing's points, and I do not think we are even close to making machines think.

    ReplyDelete
    Replies
    1. Vitoria, what the computer is missing is the sensorimotor capacity to ground its symbols. Please read the replies to the other commentaries.

      Delete
