Friday, September 23, 2022

1b. Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20

Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20. In Dedrick, D. (Ed.), Cognition, Computation, and Pylyshyn. MIT Press.


Zenon Pylyshyn cast cognition's lot with computation, stretching the Church/Turing Thesis to its limit: We had no idea how the mind did anything, whereas we knew computation could do just about everything. Doing it with images would be like doing it with mirrors, and little men in mirrors. So why not do it all with symbols and rules instead? Everything worthy of the name "cognition," anyway; not what was too thick for cognition to penetrate. It might even solve the mind/body problem if the soul, like software, were independent of its physical incarnation. It looked like we had the architecture of cognition virtually licked. Even neural nets could be either simulated or subsumed. But then came Searle, with his sino-spoiler thought experiment, showing that cognition cannot be all computation (though not, as Searle thought, that it cannot be computation at all). So if cognition has to be hybrid sensorimotor/symbolic, it turns out we've all just been haggling over the price, instead of delivering the goods, as Turing had originally proposed 5 decades earlier.

110 comments:

  1. When reading the portion about introspection, all I could think about was how it describes the experience of those without aphantasia, or at least I presume. As described, "imagery theory leaves a lot of explanatory debts to discharge". By a lot of the imagery theory's 'rules', those with aphantasia would not be considered cognitive, or able to do computation to the standard of a typical person, if I understand the application correctly.

    ReplyDelete
    Replies
    1. Whether your imagery is rich or thin, it does not reveal how your brain does all the things you can do. So introspection is not the way to do cognitive science.

      Delete
  2. I found this reading quite useful to acknowledge our lack of answers when it comes to cognition. Indeed, it was interesting to read about the thought experiment performed in class where one must recall the name of their 3rd grade teacher. When asked to explain how we remembered this information, we are faced with the extent of our ignorance about our own mental capacities. Considering the scope of our cognitive abilities, meaning what we are able to do, it seems counterintuitive that we understand so little about how and why we are able to do these things. This is even more striking when we consider that understanding cognition (i.e., how and why organisms are able to do what they can do) is merely the easy problem. Can we hope, then, to uncover an answer to the hard problem, which asks how and why we are able to feel what we can feel?

    ReplyDelete
    Replies
    1. "Stevan Says": No. (But that's just "Stevan Says.")

      “Sentience,” the capacity to feel, just like the capacity to fly, or to forage, is a biological trait, coded in our genes; it evolved, through Darwinian variation and selection, like all other biological traits, because it conferred some adaptive advantage on our ancestors and us (Week 7), in survival and reproduction. (Not just humans: ALL sentient species -- which does not seem to include single cells, fungi, and plants: only those that have the organ of both doing and feeling, a nervous system.)

      So what makes the “hard problem” so hard is explaining how and why the capacity to FEEL conferred an adaptive advantage. It looks as if the solution to the easy problem (of explaining how and why organisms can DO all the things they can do) is enough: Feeling -- although it is the only thing that matters at all, the only thing that makes life (and thoughts and words) meaningful at all (Week 11) – looks to be superfluous. Another way to put it is this: “How and why are sentient organisms NOT just smart robots?” What does sentience add, causally? (Evolution is lazy: traits don’t just evolve for nothing.)

      But if what “Stevan Says” is wrong, and there IS a solution -- a causal explanation of how and why sentient organisms feel -- then the hard problem is not unsolvable, just hard.

      PS The TT is just a test of doing; it cannot test for feeling. But that’s not the hard problem. That’s the “other-minds problem.” The only way you can know for sure whether something feels is to BE that something. That problem is not solvable either, with certainty. But it is more like ordinary scientific problems, like whether apples will always keep falling down rather than up: You can’t be certain, as in maths and logic, but you can be almost certain; and that’s close enough! We can’t be sure other people feel, but being Turing-indistinguishable from one another is enough. The same is true for nonhuman species like mammals, birds, reptiles, and even fish and octopuses. The less like us they are, the greater the uncertainty, but the uncertainty is not very big, even with insects, especially because we know that they have an organ that could potentially produce feeling: a nervous system. All bets are off, though, with species lacking a nervous system, like amoebas and mushrooms and (as we vegans fervently hope) plants…

      Delete
    2. Thank you for your answer; as a vegan this makes me think of the gray area that is mussels and other bivalves, which don't have a brain and supposedly cannot feel pain. It's interesting to see sentience, i.e. 'feeling', across living beings as a spectrum rather than a clear-cut boundary that could, for example, be placed between animals and plants.

      Delete
    3. Dear fellow-vegan! There is a lot of variation in what different species feel. We, for example, cannot feel sonar, whereas bats can. Moles cannot see. But sentience itself -- which is whether they can feel anything at all -- is not a spectrum; it is 0/1: Either a species is capable of feeling (something, sometimes), or, like a rock, it is not.

      But because of the other-minds problem, it's important to get it right (especially for the victim, if I think it is insentient, but it's sentient).

      It’s safer (for their sake) if I adopt the "Precautionary Principle." So I don't eat mussels or other bivalves. (And, by the way, they do have a nervous system, just not a centralized one.)

      Sneddon, L. U. (2015). Pain in aquatic animals. The Journal of Experimental Biology, 218(7), 967-976.

      Delete
  3. This skyreading was particularly fascinating to me in the effort to understand what cognition is. The passage about anosognosia and being “cognitively blind” to the complete picture shows how our brain glosses over its own capacities, creating “just-so” narratives that omit functional justification. It seems like a lost cause of a mission to understand how we do everything that we can do. Sure, we can perform computation and we can mimic that same computation in a Turing machine or computer, but what this text really emphasizes is the brain’s ability to actually understand what is being computed. This subjective phenomenon is key in distinguishing the two systems. Searle’s experiment is a testament to this; the human in the Chinese room, just like a computer passing the Turing test, is externally passing the test and producing the desired output yet does not know the meaning of what he is manipulating or producing. This is ultimately why the Turing test cannot summarize cognition. Thus, the functional explanation of cognition can never be elucidated in my opinion, and the computational one isn’t founded enough, so where do we look for answers?

    ReplyDelete
    Replies
    1. Perhaps we could start by asking our robot, Kayla. If she can do everything we can do (let's say just verbally, i.e. T2, for right now), why would you say she doesn't understand? Searle has a reason, but we haven't gotten to him yet. We have to read Turing next week first (2a).

      Delete
    2. Is this simply because there is no way for Kayla to draw semantics from syntax? In other words, she could do everything we do and thus produce the appropriate output symbols etc, but without having a concept of the meaning of these symbols and their manipulation. Therefore, she can do everything we do yet she doesn't understand.

      Delete
    3. Kayla is a robot (T3), so she can’t be just a computer, manipulating symbols (T2). If you ask her what an apple is, she can not only describe one verbally, but she can point one out, and pick it up. That’s not yet semantics (meaning), but it’s grounding; and without grounding (hence reference) you can’t have semantics (meaning).

      There’s something else you need for meaning, though; what is it? (It’s the core of Searle’s refutation of computationalism: his “Periscope”.)

      Delete
  4. Thanks for your reply @Stevan Harnad. Q: "What is computation and what is computationalism?"
    Computation is any type of calculation that follows a well-defined model or program. Computationalism is the belief that cognition is computation.

    ReplyDelete
    Replies
    1. Well, defining "computationalism" is easy, as long as you've already defined "computation." But you haven't! "Calculation" sounds like just a synonym, so it doesn't help. And what is "a well-defined model or program"? (By next week I hope it will be clearer.) Hint: it's following a recipe based on the shape of the symbols, and rules on how to manipulate them: that's what a Turing Machine does.
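
      To make "a recipe based on the shape of the symbols" concrete, here is a toy sketch (just an illustration in Python, not anything from the reading): a miniature Turing machine that adds two unary numbers. Its rule table is keyed only on the current state and the shape of the symbol under the head; nothing in it "knows" that the strokes stand for numbers -- that interpretation is ours.

      # Toy Turing machine: adds two unary numbers, e.g. "111+11" -> "11111".
      RULES = {
          # (state, symbol read): (symbol to write, head move, next state)
          ("scan", "1"): ("1", +1, "scan"),          # skip over the 1s
          ("scan", "+"): ("1", +1, "seek_end"),      # rewrite '+' as '1'
          ("seek_end", "1"): ("1", +1, "seek_end"),
          ("seek_end", "_"): ("_", -1, "erase"),     # reached the blank at the end
          ("erase", "1"): ("_", 0, "halt"),          # erase one trailing '1', halt
      }

      def run(tape_string):
          tape = list(tape_string) + ["_"]           # '_' marks a blank cell
          head, state = 0, "scan"
          while state != "halt":
              write, move, state = RULES[(state, tape[head])]
              tape[head] = write
              head += move
          return "".join(tape).strip("_")

      print(run("111+11"))   # prints "11111" -- the machine never "knew" it was adding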

      Delete
    2. Let me try again :) Computation is the act of following an algorithm, containing a set of rules that serves as the instructions to manipulate symbols to get a desired outcome.

      Delete
  5. Putting this reading in conversation with the previous papers on computationalism we have read was quite interesting as it allowed me to nuance the different definitions and ideas we have read about. I thought that the ironic comparison between Skinner’s and Zenon’s ideas was helpful for understanding both their theories and their shortcomings. There always seems to be a “black box” that we avoid describing. Zenon’s relegation of the internal phenomena that escape a computationalist understanding to a “subcognitive” domain, while understandable, shows the inability of computationalism alone to provide a satisfactory model of cognition. I wonder how Turing would’ve addressed the symbol-grounding problem, or more generally the idea of the “black box”.
    I still have some trouble with understanding the idea of “homuncular explanation”. While I think I understand the concept of it, and its relation to the grounding problem, I don’t think I would be able to explain it clearly to Kid-Sib.

    ReplyDelete
    Replies
    1. The way I understood the homunculus explanation, or rather the fallacy, is that we are to imagine that there is a small person living inside our heads who is cognizing for us (like that Disney movie, Inside Out). But the immediate problem becomes, ok so what’s happening inside this little man’s head? Does he have a littler little man inside his head? So immediately we have an infinite loop. That’s why it doesn’t work.

      Delete
    2. Mathilda, yes, a "subcognitive" component would beg the question of whether computationalism is true ("cognition is just computation"); and, yes, that problem leads to the symbol grounding problem (Week 5): symbols and symbol manipulation alone cannot connect words (symbols) to the things in the world to which they refer.

      Teegan, the homuncular infinite loop is also connected to the symbol grounding problem (the dictionary-go-round). But we already see this when we try to find a causal mechanism that will pass the TT by consulting our introspection: all that gives us is a homunculus, not a mechanism.

      Delete
  6. This skyreading provides a clear route to learning the critical means of doing cognitive science. Introspection fails because we can't explain how we are able to do certain things. Behaviourism begs the question of how as well, and it has dismissed the importance of learning the brain's internal structure. Zenon tried to explain the functions that carry out actions by equating computation with cognition, which is not the case. He also disregarded an internal dynamic system as a part of cognition, and computationalism has other problems.

    Computation only manipulates shapes, and shapes are arbitrary. The word "arbitrary" immediately reminds me of a piece of work by Nietzsche. He was never a cognitive scientist, but what he says is really interesting. In "On Truth and Lie in an Extra-Moral Sense," he argues: "What is a word? The image of a nerve stimulus in sounds." "We believe that we know something about the things themselves when we speak of trees, color, snow, and flowers; and yet we possess nothing but metaphors for things - metaphors which correspond in no way to the original entities." The same question also arises in an introductory semantics course I've taken. I was told that the meaning of a sentence is acquired through its compositionality, and that our knowledge for evaluating the truth values of a truth condition represents our knowledge of meanings in languages. But it only works formally and mathematically. We still don't know how we acquire the meaning of the words that compose the entire sentence or, more precisely, the proposition. The article presented us with a way to do the symbol grounding by Turing-testing all of our behavioural capacities. By that time, we might be able to see how we cognize.

    ReplyDelete
    Replies
    1. Computation (Week 1), Turing Testing (Week 2, 3), Symbol Grounding (Week 5), categorization (Week 6), propositions and language (Week 7) should become clearer as the weeks go by.

      For now, just keep in mind that to ground the meaning of words in the things they refer to, connecting words to words is not enough.

      You need to have sensorimotor capacities that somehow connect words to the things they refer to – this involves capacities which are not just computational (i.e., not just symbols and symbol-manipulation).

      Just think of how to connect the word “apple” to apples; and all the things that you (and our robot Kayla) are able to do with apples. That can all be described by computation (as well as by language) but it cannot be done by computation (or language) alone. That’s why Kayla has to be a robot, not just a computer.

      Delete
    2. I had a similar question when I first read the article. It was proposed in the reading that the full robotic version of the TT could test the symbolic capacities grounded in sensorimotor capacities. "The internal processes of the robot itself can mediate the connection, directly and autonomously, between its internal symbols and the external things its symbols are interpretable as being about, without the need for mediation by the minds of external interpreters." This quote initially made me skeptical about why we can examine cognition through this measure. Or, more specifically, how we can tell that the dynamic component, the part of cognition outside of computation, truly exists. Nevertheless, if there does exist a robot that can pass this test and solve the symbol-grounding problem autonomously, who's to say it completed all this relying solely on computation?

      Delete
    3. Both a robot and a human being are physical systems, sensorimotor ones. The sensorimotor part (at least) cannot be just computation (symbol manipulation) -- and not just because of the independence of software from hardware.

      Explain how and why this is true to show you have understood what computation is and isn't.

      Delete
    4. Through his experiment, Searle showed that passing the TT did not amount to cognition and did not capture the actual experience of understanding the world. As mentioned above, sensorimotor capacities are needed to ground words in their meanings, and to connect words to a deeper understanding rather than simply symbol manipulation (computation).

      Could it be said, then, that because the sensorimotor part involves this grounding process, certain qualia, actual experience and understanding of the world, it cannot just be computation? And that, therefore, there could not exist a robot that solves the symbol-grounding problem autonomously?

      Delete
    5. You can't show that "qualia" (sentience, feeling) are necessary to pass T3 unless you have a way to determine whether a robot (or anyone else) has or lacks sentience. Do you? That’s not the easy problem, nor the hard problem; it’s the “other-minds problem.” What’s that?

      Turing said we could not do better than the easy problem. Why?

      What is the difference between T2 and T3? And how did Searle penetrate the other-minds barrier with T2? Is it possible with T3?

      Successfully passing T3 solves the symbol grounding problem, but you need to solve the hard problem to explain sentience.

      But all of this is getting ahead of ourselves. We have not yet gotten to the Turing (Week 2) Test, or to Searle’s Chinese Room (Week 3). The first Week was Week 0 and now we’re in Week 1 (What is computation?)

      More of you have to prove you understand what computation is first. It’s very easy, as long as you don’t conflate it with cognition and computationalism from the outset.

      Delete
    6. As I come back to review for the midterm, I think I have more understanding of the problems now. "Qualia," sentience or feeling, is something only the self knows and feels. So we cannot know if others have feelings, which is the "other-minds problem." The "hard problem" is how and why organisms feel what they feel, which requires us to solve the "other-minds problem." Otherwise, we wouldn't know what anyone else is feeling.

      T3 is the robotic version of T2, which could in principle be passed by computation alone. Computation is symbol manipulation according to a set of rules and is implementation-independent. Searle's periscope argues that if Searle memorizes all the rules and symbols for manipulating Chinese, he still doesn't have a feeling of understanding Chinese. Since implementation-independence means Searle would be in the same computational state as a computational T2 processing Chinese, and computationalism argues that mental states are computational states, Searle's lack of understanding applies equally to the T2 processing Chinese. Thus T2 doesn't understand Chinese either. However, this argument cannot be applied to T3, or to a T2 that is not purely computational, since the property of implementation-independence may not hold.

      Delete
  7. In my final year of my cognitive science degree, I feel I finally have an understanding of what questions the field is really trying to tackle. This paper touches on the many complicated facets of cognition and specifically one of the most jarring; categorization — how we string together previously encountered knowledge in order to make sense of something we have never encountered before. For example, as a child I saw many spoons and many red items. But, perhaps at the age of 21, I am first exposed to a red spoon. Though I have never seen such a thing, I am easily and instantaneously able to comprehend what is before me. For myself, this example really “begs the question” of how we perform such cognitive tasks.



    I also found the software/hardware machine analogy as a solution to the mind body problem very fascinating. As I read the theory I too, like Zenon, felt hopeful that this would solve the mind body problem. But as I reread the paragraph on Computation and Consciousness, I find myself questioning once again what computation is. In this instance, it seems like yet another umbrella term for something between the physical and immaterial — still indescribable and undefinable.

    ReplyDelete
    Replies
    1. I hope that by Friday it will be clearer what computation is (it's just rule-based symbol manipulation). Categorization will have to wait for Week 6 (but I can already say it's doing the right thing with the right kind of thing: eating edible mushrooms, avoiding poisonous toadstools).

      Forget the "physical" vs. the "immaterial": this is not a course in metaphysics. It's just cog-sci, and mostly the "easy problem": what's that?

      Delete
  8. 1b. I found the reading “Cohabitation: Computation at Seventy, Cognition at Twenty” interesting, especially the part about the differences between the approaches of behaviourism and Cognitive science.
    As someone who has never taken any course about cognitive science before, I realized that I never really thought about how the two approaches differed. I think that one of the quotes that illustrate the difference is: “What makes us able to do what we can do? The answer to this question has to be cognitive; it has to look into the black box and explain how it works—but not necessarily in the physiological sense.” If I understand well, behaviourists are really interested in what is concrete, what we can see, while cognitive scientists look at the processes such as association, categorization, in other words, processes that are more abstract perhaps. The example that is given is Hebb’s example where he asks a class what was the name of their 3rd grade teacher and then wonders what are the processes used to remember the name. One thing that I am not really sure about is whether cognitive science replaced behaviourism or if they kind of evolved and coexisted at the same time.

    ReplyDelete
    Replies
    1. Behaviorism tried to describe, predict and control WHAT we do (mostly by shaping it with rewards). Cogsci tries to explain (reverse-engineer) HOW we can do what we can do.

      Turing was certainly not a behaviorist (even though T2 and T3 are just behavioral); but was he even a computationalist? ("Stevan Says" no; what do you think? But read Turing (2a) first.)

      Delete
    2. I don't really think that Turing was a computationalist but it is not very clear. In my opinion, it looks like he was just wondering whether machines are capable of intelligence, but he doesn't seem to affirm that what the human brain does is computation.

      Delete
  9. The reading “Cohabitation: Computation at 70, Cognition at 20” encapsulated many of the concepts mentioned in class (where I inevitably fell into the same trap of trying to understand how it is I remember the name of my third-grade teacher). I agree with the notion of disregarding the homunculus theory, as it inevitably creates a never-ending cycle of little men in our head to blame our thoughts and behaviours on, as well as with introspection not being the answer to most questions in cognitive science. However, what I have a hard time understanding is why the homunculus theory should be fully replaced by a mindless, fully autonomous process. The reason for this difficulty is that for some cognition to occur, we must occasionally put in effort, such as when I tried remembering the name of my third-grade teacher. Wouldn’t this effort suggest that the process is not fully mindless? It also appears to me that a mindless, autonomous process does not account for errors we make while trying to make certain computations such as memory recall, where concepts such as false memories can occur, but I may be overcomplicating the point. A section that I enjoyed in the paper was when it was stated that a Turing Test-passing system would not be a cognitive being simply because it passed the Turing Test, as I agree that this does not ensure that the program has understanding. However, would the Turing Test-passing system be cognitive without the element of understanding, which (I hope) is only a human quality for the moment?

    ReplyDelete
    Replies
    1. The homunculus (which explains nothing) has to be replaced by a causal mechanism that can do everything we can do, including learning, reasoning and talking, indistinguishably from any of us (as Kayla can). Otherwise we’ve explained nothing.

      We are sentient. We put in efforts, but almost everything we do (except maybe long division) is handed to us on a platter by “slaves” in our head and we have no idea how they do it. If they have minds, then it’s their minds that cogsci will have to reverse-engineer, not ours. But why would they need to have minds? Surely one mind in a head is enough. More than enough, actually, because if cogsci ever reverse-engineers a system that can pass the TT (the easy problem) the hard problem will be to explain why that’s not enough! What does sentience, and voluntary effort, add, and how, and why?

      See earlier replies above about the other-minds problem too.

      Delete
  10. In this reading it is explained that a machine that passes the Turing Test can compute problems without understanding the symbols it may include. It is also said that "it is just a bunch of symbols that are systematically interpretable by us-by users with minds". This left me wondering what is meant by mind (in a kid-sibly definition). Without the definition, I interpret this sentence to mean that only humans with minds can systematically interpret symbols and that machines compute symbols without knowing what they represent. But in this case, what does it really mean to attach meaning, and can we separate the interpretation from the computation? If so, if one can attach meaning to symbols, are they automatically cognitive?

    ReplyDelete
    Replies
    1. Yes, only humans can understand what symbols mean (when the symbols mean anything at all) and their meaning is grounded in the human interpreter’s brain. But that does not mean that nothing can EXECUTE the symbol manipulations. You don’t have to have a mind to do that. A mindless machine can do it too.

      2 + 2 = 4 are just symbols.

      Given 2 + 2 as an input, an adding machine can produce 4, without knowing what it means (or knowing anything).

      An infant can be trained to do it (with, say, the first few digits) without having any idea what they mean.

      We too can do it, and we also know what each of the symbols, and the string of them, means.

      Same is true for the words of natural language:

      “The cat is on the mat.”

      And every symbol in this posting.
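
      To make the point about mindless execution concrete, here is a toy sketch (my own few lines of Python, purely illustrative): an "adding machine" that produces "4" from "2 + 2" by sheer shape-matching on a stored table, with no notion of quantity, much as an infant drilled on the first few digits might.

      # A mindless "adding machine": it maps pairs of digit-shapes to an answer-shape.
      LOOKUP = {
          ("0", "0"): "0", ("0", "1"): "1", ("1", "0"): "1",
          ("1", "1"): "2", ("1", "2"): "3", ("2", "1"): "3",
          ("2", "2"): "4", ("2", "3"): "5", ("3", "2"): "5",
      }

      def answer(query):
          left, plus, right = query.split()   # "2 + 2" -> "2", "+", "2"
          return LOOKUP[(left, right)]        # pure shape-matching, no arithmetic

      print("2 + 2 =", answer("2 + 2"))       # prints: 2 + 2 = 4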

      How the symbols get grounded for us we will get to in the next several weeks:

      PREVIEW:

      First, we have to learn how language is not just computation. Computation only has syntax (shapes, and shape-manipulation rules) (Weeks 1-3); language has semantics (Week 3, 5 and 7). And many of its words (“content words”) have referents.

      Then we have to learn what categories, category-learning, sensorimotor feature-abstraction and category-naming are (Week 6). The referents of the name of a content category are its members.

      And, finally, we have to learn what propositions (like “1 + 1 = 2,” “the cat is on the mat,” and “an apple is a round, red fruit”) are (Weeks 7 and 8): subject/predicate statements that state that the members of the subject category-name are members of the category having the features in the predicate category-name(s), and that such subject/predicate statements can be TRUE or FALSE.

      Delete
    2. Now that we have discussed the symbol grounding problem, I wanted to return to this reading and the confusions that it raised for me and many others.

      Melis’s question of “what it really means to attach meaning” to symbols can be answered by looking at natural language. As Prof. Harnad mentioned, computation is just syntax, while language is syntax and semantics– this is what we discussed most recently in Week 5.

      Searle said that if a computer can be given instructions to 'speak Chinese' it can do so without actually UNDERSTANDING, because that's what happened when he learned all the rules, script, etc. in the CRA. Computation does not need to know the meaning of symbols, but language does. The Symbol Grounding Problem pertains to MEANINGFUL words in a language… the “grounding” is how the word (aka the arbitrary symbol) in the head or in a T3 robot gets connected to the capacity of all that it can do with that word… GROUNDING is an essential condition for meaning, but meaning is NOT just grounding. It is a felt state and a sensorimotor capacity– what it feels like to do with all that information and that cognitive capacity.

      Delete
  11. One major takeaway of this reading is the need to think critically about the proposed answers to cognitive questions, especially the seemingly simple ones. It was also helpful in that it gave more context on the symbol-grounding problem. I think I needed a more concrete explanation in order to grasp the necessity of referents for creating meaningful thoughts. I was interested in Searle's thought experiment because I can't really imagine that any machine or human could fool native speakers of a language forever without at least a basic understanding of what the words they were saying meant or referred to. Surely they would be given away by obvious patterns in their responses. Also, if possible, Dr. Harnad, would you elaborate on your thoughts on the Mary's Room thought experiment?

    ReplyDelete
    Replies
    1. A. Turing Test:
      The TT is not about fooling anyone. It’s a test of whether the mechanism that you have reverse-engineered can really deliver the goods (i.e., do anything a normal thinking person can do).

      B. Color-blind Neuroscientist Mary:
      1. Mary is born color-blind.
      2. She grows up and becomes a neuroscientist, specializing in color vision.
      3. She learns to understand not only what colors she is told things are, but all of the processing of color in the brain, from the photons hitting the retina to the highest regions of the brain.
      4. She can discuss and explain every aspect of color and color perception with anyone, blind, color-blind or seeing.
      5. Then one day surgery repairs her vision and she is no longer color-blind.
      6. She looks for the first time at an apple, which, she knows, is red.
      7. Knowing all she already knew about which things are red and how red is processed by the brain, is she surprised at what red actually looks like, or did she know it already, from knowing what she knew about color processing in the brain?

      This tale is told to “functionalists” who think that what it feels like to see color can be fully explained functionally -- by explaining it psychophysically (wave-lengths), behaviorally and neurally. If functionalism is true, Mary will not be surprised. If she is surprised, functionalism is false (or insufficient).

      That’s the Neuroscientist Mary koan. It’s not very illuminating. Jim Simmons has been studying bats’ sensory capacity to “see” the shape of objects with sonar. He understands its neurophysiology fully. He has even tried out the echolocation that blind people use to locate walls (by tapping with a cane and listening to the echo). He still doesn’t understand what it feels like to “see” with sonar.

      Moreover, Colorblind Neuroscientist Mary could never have become a neuroscientist, nor even a speaking/understanding human being, if some of her words (which ones? how many?) had not been grounded using the sensory capacity she did have.

      Helen Keller (see Wikipedia), who was born blind and deaf, had even less sensory capacity (just touch and movement), but that was enough to ground all the rest. She could read (Braille) and speak (touch) and write and learn about anything any highly intelligent person could. But you can be sure that if she suddenly had her seeing and hearing repaired, her experiences would be new for her. Yet because there are cross-connections as well as similarities between the senses in the brain, especially in relation to vision, shape (which can be both haptic and optic) and space (unified by movement), some aspects of the new inter-sensory relations from her newfound vision would be familiar to her, and she would even be able to point them out to us. But not because she knew neuroscience (Helen Keller did not) but because of the similarities between the senses (brighter, louder, more intense vibration, faster, slower, closer, further). Congenitally blind people know what “look” and “see” mean, but they don’t do it with their eyes.

      So, as often happens with these philosophers’ “thought experiments,” they miss the point.

      (Ask yourself if this is true too in Searle’s thought experiment.)

      [Interesting side-remark though: A T3 robot does not have to be sentient for its words to be grounded. Sensory processing does not need to feel like something in order to be able to ground internal symbols. If we knew that neuroscientist Mary, or Helen Keller, had to have sentient sensory capacity – i.e., if it had to feel like something for them to see, and we could explain how and why it had to -- then that would be a solution to the “hard problem” (of explaining how it is necessary to FEEL in order to be able to DO all the things we can do). But no one has a clue of a clue as to whether or why that might be true.]

      Delete
  12. In this piece, it is argued that we must move away from modeling human thinking as computation (computationalism), as there lies a symbol-grounding problem in which the symbols that physical symbol systems (e.g., computers) represent and manipulate lack the proper ties to perceptual experience. At the end of the piece, there is the proposal to have a “full robotic version of the TT (T3), in which the symbolic capacities are grounded in sensorimotor capacities and the internal processes of the robot itself (Pylyshyn, 1987) can mediate the connection, directly and autonomously, between its internal symbols and the external things its symbols are interpretable as being about, without the need for mediation by the minds of external interpreters.”

    This proposal for a robotic version of the Turing Test (T3) that uses sensorimotor capacities (sensory input/motor output) to ground its symbolic capacities reminds me of a paper by Barsalou assigned in a computational psychology class I took last year. In this paper, Barsalou (1999) discusses the symbol-grounding problem and proposes the use of perceptual symbol systems whereby sensory input (symbols) can be stored directly in memory rather than transduced into a completely new representational system (arbitrary code to describe properties of represented and manipulated symbols). In this way, perceptual symbol systems would allow for computers, once activated by a probe, to generate an internal perceptual state through abstracting across repeated experiences– almost akin to neuroimaging studies where when asked to imagine using a hammer, participants show summed neural activity in corresponding sensorimotor areas (to what would be activated when actually using a hammer). Because I see that this piece (Harnad) was written in 2008 (post-Barsalou perceptual symbol systems), I am interested in hearing about where this paper and its proposal of perceptual symbol systems stand in terms of advancing (or not advancing) this discussion of how sensorimotor input can be integrated without the transduction of sensorimotor input by external programmers.

    ReplyDelete
    Replies
    1. The symbol grounding problem was formulated by me in 1990. Larry Barsalou’s 1999 paper appeared in the journal I edited, Behavioral and Brain Sciences, accompanied by 31 commentary articles plus Larry’s Response.

      I think that in that paper Larry mixed up (1) symbols, which are arbitrary shapes like 0 and 1, used by a computer in doing computation (symbol manipulation) and (2) sensory input signals. He did not ground symbols, he just used nonarbitrary sensory shapes in place of symbols, and not to do grounded computation, but to do sensorimotor (perceptual) processing.

      Since then there has been a lot of work on symbol grounding (close to 10,000 articles in google scholar), but (“Stevan Says”) most of them did not even understand the symbol grounding problem. Some progress has been made, though, thanks to “deep learning” by neural net models that find the sensory features in the sensory input that allow the net to categorize the input and name the category with an arbitrary symbol as a name.

      But this is still far from having created a grounded T3 robot. It can only do parts of the Turing Test (learning sensory categories by abstracting their features, giving them a grounded name, but not yet able to recombine the features’ names to define further categories).
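
      For a kid-sib picture of what those nets are doing, here is a toy sketch (a single artificial "neuron" in Python, with made-up features and names, nothing like a real deep-learning model): it learns, from labelled samples, which sensory features separate two kinds of input, and then attaches an arbitrary symbol to each kind as its name.

      # Hypothetical "sensory" samples: (roundness, redness) pairs, with category labels.
      samples = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.85, 0.75), 1),   # category 1
                 ((0.2, 0.1), 0), ((0.1, 0.3), 0), ((0.25, 0.2), 0)]    # category 0

      w, b = [0.0, 0.0], 0.0                      # weights and bias of one "neuron"
      for _ in range(50):                         # simple perceptron learning rule
          for (x1, x2), target in samples:
              out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
              err = target - out
              w[0] += 0.1 * err * x1
              w[1] += 0.1 * err * x2
              b += 0.1 * err

      NAMES = {1: "blik", 0: "zorg"}              # arbitrary names for the two categories
      def name(features):
          x1, x2 = features
          return NAMES[1 if w[0]*x1 + w[1]*x2 + b > 0 else 0]

      print(name((0.95, 0.9)))   # "blik" -- the name is grounded in the learned features
      print(name((0.15, 0.2)))   # "zorg"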

      Delete
  13. This reading was a bit abstract for me and I had some difficulty understanding it, but the Chinese Room experiment it mentioned was interesting to me, so I watched some videos on YouTube and I have a question. We have formed a language guidebook of our own in our minds through time and experience, and it is more informative than the Chinese guidebook in the room. In the Chinese Room, we can say that the computer just produces the correct output but does not understand Chinese. But why can we say that humans understand their own language? After all, our output is also based on a language guidebook, and we have no way to give a correct response when we encounter something that does not exist in our language guidebook (for example, some unintelligible sentences or sentences with some unknown words), just like a computer.

    ReplyDelete
    Replies
    1. Neither recognizing things nor naming them is learned from a rulebook. How is it actually learned?

      Delete
    2. I would argue here that the difference lies in the fact that our human "guidebook" is based on mutual communicative values. The understanding comes from what is commonly decided to be associated with an idea or an object and in our interactions those are reinforced. It is then through this way that human language is given meaning. I think that is what might also tie into the fact that different languages come to different understandings on meanings/concepts. But then a question that comes to me is where does this all start from, the initial structure of language? I believe Chomsky speaks to an innate language equipment. It'll be interesting to explore this further.

      Delete
    3. Stay tuned. Neither recognizing things nor naming them is learned from a rulebook. And Chomsky is not about meaning but grammar (syntax). So the question remains: how do we learn to recognize and name things?

      Delete
    4. If we can start recognizing objects and concepts and associating them with words as a baby, it must require innate rules, in a similar fashion to Chomsky's universal grammar. We all end up with a similar mysterious internal system that associates stimuli with contextual cues in our memory. We are constantly building a repertoire of symbol-meaning associations. For example, if a race car drives past us, our visual system records its shape and colour and our auditory system processes the sound waves of the engine, all the while we somehow recall and reinforce the concept of a car. We learn that the associative regions of the cortex are responsible for this, without knowing the underlying principles. One day, if science understands the function of every brain region and can pinpoint each firing neuron in these regions, it seems we might have a pretty good understanding of brain firing patterns. Would understanding the neurological/physiological process that occurs with each thought or external symbol correspond to understanding cognition?

      Delete
    5. Rosalie, this is not reading 1b; please identify what you are commenting on. And what does it have to do with computation? What is computation?

      Delete
  14. I found the point on neural networks in this paper very fascinating. The way I understood neural networks was that they were an attempt to assign meaning to “modules”, allowing the computer to have an actual understanding of the content it is computing. After reading this paper, I realized that according to Zenon, this is just a way of avoiding the symbol grounding problem. Rather than admitting the symbols are not necessarily grounded in this system, they are grounded in each other, thereby getting their meaning by association. For example, the computer understands that “apples” are “red” not because it knows what an “apple” or “red” is, but just because it has been taught that the modules “apples” and “red” are connected in a very specific way. Although this seems like an arbitrary connection, maybe it is as “grounded” as we can get. I wonder how we can connect this to our brains. Do we just know that “apples” are “red” because we have learned to associate these two things together? I think I know what an apple is, but I could not tell you how I know that without just describing the characteristics of an apple. I guess this is the whole “problem” of the “symbol grounding problem”, and an example of the failures of introspection. Or maybe cognition is just a complex system of association and learning, and that really is the only way that symbols are able to gain meaning. (If this is the case, it seems we should be able to program a computer to be fully “cognitive”…) Very cool and insightful paper!

    ReplyDelete
    Replies
    1. Neither association nor computation are enough to pass the Turing Test. The T3 robot has to learn to detect and abstract the sensorimotor features of things that allow the robot to categorize them (i.e., to do the right thing with the right kind of thing) and then name them (using an arbitrary name we all agree to use as its name). These names are then words, grounded symbols, that can be recombined into subject/predicate propositions to describe the features that define further categories that inherit their grounding from the already grounded words of which they are composed.

      Delete
  15. Reading Professor Stevan Harnad's Cohabitation: Computation at 70, Cognition at 20 reminds me of a philosophical concept proposed by Karl Jaspers. In his book The Way to Wisdom, he coined the term "Comprehensive." In short, the understanding of anything inevitably suggests a dichotomy — I, the agent who tries to understand things (the "observer"), and the things being understood (be it concepts, objects, imagery, anything, really). But the problem comes when we turn this dichotomy inwards, that is, when we try to understand how we understand. This can be problematic because we are trying to understand cognition by using our cognition. This paradox is analogous to slapping your right hand with your right hand. And the "Comprehensive," according to Jaspers, is a state of realization that englobes the dichotomy between the observer and the things being observed. But then again, after reading Professor Harnad's article, I can't help but ask what kind of entity (or intelligence?) houses the state of the Comprehensive. I guess Jaspers also couldn't escape the homuncular reality of how our mind works.

    ReplyDelete
    Replies
    1. It's much simpler than that. We just want to figure out how organisms can do all the (cognitive, not vegetative) things they can do. Think of it as a reverse-engineering problem, not a philosophical or psychological one.

      Delete
  16. As I went a bit over in terms of word count with my previous comment, with this one I am mostly just interested in asking the question: what exactly would you take to be the reasons why, if computationalism were true, it might lead to a conceivable solution to the mind/body problem, where our brain is in essence a particular sort of hardware? I understand the computational/dynamical distinction in the remainder of that paragraph not to be exactly identical with the hardware/software distinction, so I'm curious to see what your response/thinking might be in a few more words.

    ReplyDelete
    Replies
    1. The mind-body problem is the hard problem of cogsci: how and why do organisms feel. If computationalism could solve that problem, as well as the easy problem of how and why they can do all the (cognitive) things they can do, then cogsci would be finished. But computationalism cannot even solve the easy problem (because of the symbol-grounding problem). So cognition calls for something more than computation.

      Delete
  17. Even if this is not my first time encountering Searle’s Chinese Room argument, I still would like to raise some questions. Although the person in the room does not understand Chinese and the symbol-manipulation rules also have nothing to do with understanding Chinese, if we consider the room as a whole, can we say the room is now in a state of understanding Chinese? Analogously, even if a single neuron is not conscious, the brain as a collection of these neurons is conscious. Can we say this Chinese room, just like our brain working as a whole system, is in a certain mental state in which we can find understanding?

    ReplyDelete
    Replies
    1. Brains understand, and brains are made of neurons. But what makes you think walls or rooms understand?

      Delete
  18. The point made in the "Turing sets the agenda" section about a TT-passing program was particularly fascinating to me. It was not until I finished reading this part that I realized that perhaps humans' ability to adapt and interpret symbols is the key to such a passing program in Searle's Chinese room argument. It is only for users with minds that we are evaluating the passing of the TT, instead of having a clear boundary for what is and what is not a mind.

    ReplyDelete
    Replies
    1. An organism has a "mind" if it can feel. It feels like something to understand Chinese. Searle does not understand Chinese, even though he knows how to manipulate the symbols. Therefore symbol-manipulation (computation) does not explain language-understanding.

      Delete
    2. Yes, thank you. The concept is much clearer now.

      Delete
  19. This week’s reading provided critical insights and helped answer questions that in turn motivate us to further understand our “know-how” – how our minds work. It addresses the limitations of behaviourism (it explains the ‘what’ but could not address the ‘how’), of computation (it tells us ‘how’ but fails to address whether symbol manipulation through a mechanical process yields meaning and understanding), and finally of introspection, which may be too subjective for our own good. If our goal is to solve the easy problem, how can we mold our understanding and continue to feed the motivations that will further address how our mind actually works?

    ReplyDelete
    Replies
    1. To solve the easy problem you have to figure out a way to reverse-engineer our cognitive capacities. The TT is the test of whether you have succeeded.

      Delete
  20. "The link between the symbol and its referent is made by the brain of the user ". "Cohabitation: Computation at 70, Cognition at 20" explores how it is that we are able to create these links in our heads. Somehow, our brains are capable of generating an image of our third grade teacher in our mind and then, often promptly, this image allows us to remember this teacher's name and potentially other qualities about them as well. We are capable of both creating and identifying mental images.

    How could a Turing test possibly explain or show how we create these mental images? Is a Turing machine capable of this type of memory? Perhaps this question cannot be answered, as it is clear cognition cannot be explained solely by computation; it must incorporate sensorimotor and symbolic elements as well.

    ReplyDelete
    Replies
    1. A computer is certainly capable of such memories, and the Turing Test just requires us to ask, and for the answer to be correct (or as correct as most people's memory).

      Delete
    2. Would this not be one of the more accomplishable tasks for a Turing Test candidate, and could it not do so even more accurately than a human? Since human memory recall is an imprecise reconstruction of an event, a human would most probably perform worse than a computer, which would be perfectly matching the event with the information it needs to recall.

      Delete
    3. So the goal is not to try to surpass the capacities of humans in terms of computation, but instead to try to replicate those capacities, and how humans achieve them?

      Delete
  21. What was particularly striking to me about this reading is that it emphasized how little the average person knows about their cognitive processes. The examples about the third grade teacher, or the subtraction of 2 from 7, demonstrated that even the simplest cognitive functions exceed my realm of understanding, as I often internally leapt to the “easy” answers as I was reading. The sentence “the fact that our brains keep unfailingly delivering our answers to us on a platter tends to make us blind (neurologists would call it “anosognosic”) to the fact that there is something fundamental there that still needs to be accounted for” was especially profound, and highlighted a trap that many fall into when approaching problem-solving.

    ReplyDelete
  22. I found Pylyshyn’s take on discharging the homunculus to be rather intriguing. As was mentioned in a previous comment, when I was reading the section about the homuncular explanation, I thought of the film ‘Inside Out’, with the idea of the little man doing the cognition on our behalf, and it did help me understand why Zenon disregarded the homunculus theory: because it would create an infinite loop in our mind. It also leads me to think about how easy it is to fall into the homunculus fallacy in our daily lives. For example, it was brought up in the text when discussing how an individual came to a conclusion: people often say “just that a little man in my head (the homunculus) does it for me” (249). Homuncular explanations like these avoid answering the question about sentience. This leads me to think about how we achieve sentience, because I don’t really understand how computationalists like Pylyshyn would explain sentience.

    ReplyDelete
  23. I found this reading quite fascinating, and it touches on the questions I had from the last readings. I feel as though learning about the deconstruction of the endless mind-body debate as simply being computational has helped me understand more of what this class entails. I also liked the use of behaviourism as a lead-in to further discussion and as a tool for cognitive science; despite certain outdated beliefs, older schools of knowledge serve as somewhat of a stepping stone into more modern discussion. From what I’ve understood, grasping the idea of computation, or our ability to do what we can do, involves a complicated path to comprehending something quite straightforward.

    ReplyDelete
    Replies
    1. Computation alone is not the solution but it definitely helps. Is there an absolute solution at all?

      Delete
  24. I became a little discouraged in the process of this reading, if only because at the moment it doesn’t seem like there’s any promising solution to the question of cognition. With both the homunculus and the question of symbol grounding, any attempt to explain or answer the question of cognition only leads to a spiral of infinitely more questions. Particularly the issue of symbol grounding is intriguing. I don’t know that I’m so convinced that when asked to recall your third-grade teacher, an image of them arrives with their name attached. Could it be possible that symbols could be grounded in something more abstract than images or referents, or is that antithetical to the concept of symbol grounding entirely?

    ReplyDelete
  25. “On the day we will have been able to put together a system that can do everything a human being can do… we will have come up with at least one viable explanation of cognition”. This sounds like a compelling reason to create a robot capable of autonomously mediating the connection between symbolic and sensorimotor capacities. However, it seems to me that such a bridge can only be made by a mechanism that dynamically implements computations: computations which, from my understanding, require a grounding for the symbols from which the robot must derive meaning. A problem arises: does the creator of the robot mediate this connection, or is there an ungrounded symbol system? The former is no longer autonomous and the latter runs into a homunculus; both are dead ends. Having the robot itself ground the symbols must create an incomparable entity that navigates the environment out of its own volition. What would make a robot want to behave or compute like a human? Computations seem to me to be something that robots and humans would utilize different manipulation rules for when attempting to interpret semantics. Would they teach us about human cognition if they are doing robot computing?

    ReplyDelete
    Replies
    1. About what a robot can do, talk to Kayla (our M.I.T robot). And about whether computation alone can do it, stay tuned...

      Delete
  26. The reading Cohabitation: Computation at seventy, cognition at twenty opened my eyes to the extent of how difficult it has been to define cognition.

    In that sense, referring back to the third-grade example, it is said that “our brain has to do a computation, a computation that is invisible and impenetrable to introspection” and that this “computation is done by our heads implicitly”. This claim made me begin to question the concept of computation even more. That is, how exactly does computation carry through [using this same question as an exemplar] from a different perspective? For example, if one has a specific trauma to memory which in turn results in dissociative amnesia, surely the computation must be different, or would the computation even occur?

    Taking this a bit further, how would this concept be intertwined with the concept of
    memory errors and/or memory stability? For instance, when you asked me who my grade 3 teacher was, I made a slight error in which I thought of my 4th-grade teacher instead. Moreover, let’s take the example of flashbulb memories. These tend to become unstable over time. How, then, does this relate to computation?


    ReplyDelete
    Replies
    1. Read Turing's 2a paper for the coming week. A T3 robot can make the same errors we make. Talk to Kayla for a while. Or scrutinize my lectures. Even a payroll program can make mistakes. Mistakes are much easier to model than correct performance!

      Delete
  27. I needed to re-read some of the passages a couple of times for my brain to properly register what I was reading, due to my lack of background knowledge for this article.

    As some of the previous comments have already said, defining cognition is not a trivial task, especially since we do not understand why we do what we do. As seen in the article, the introspective approach fails to explain how we do what we do, and the computational approach partially succeeds in its task but fails when it is applied (i.e., Searle's Chinese room).

    However, I have an issue with Searle's conclusions from his thought experiment. Even though Searle did not understand the meaning of the words given to him, he was nevertheless able to properly compute the output with the help of the tools at his disposal. It might seem non-cognitive since the knowledge does not come from within, but he is still using cognition to apply the tools required to produce the desired output. Why does this mean that computation is non-cognitive?
    Thank you!

    ReplyDelete
    Replies
    1. Because Searle does not understand: he is like a child following the rules of addition and subtraction without understanding what they mean. We know that (among other things) people, and computers, can compute. But can computation also do all the other things Kayla can do?

      Delete
    2. Looking from the cogsci reverse-engineering point of view, if the computations are good enough, they may be able to do ALL the other things Kayla, as T3, can do. But again, I wonder if this relates to the other-minds problem: we just don't know all the things Kayla can do, so no matter how profound the computations are, it's still just symbol manipulation that can merely do the things Kayla can do.

      Delete
    3. A computer cannot walk. Walking is not symbol-manipulation. It can only model or simulate walking. Kayla walks.

      Delete
  28. I think this reading has a nice cyclical aspect to it, bringing us back to the start of the problem: we still do not know how we perform cognitive actions, even though we have explored a lot of different routes. This reminded me a lot of Marr’s tri-level hypothesis that I have seen in another class. His argument is that there are different explanatory tasks at different levels. The computational level is described as the goal of the computation, which in our case is known: the purpose of the brain’s computation is to allow us to act upon the world and to regulate our behavior. We also know about the physical (implementational) level, as we know how neurons work. However, we cannot explain the algorithmic level, the black-box problem: how is our brain doing this? As the text states, “the details of the physical [hardware] implementation of a function were independent of the functional level of explanation itself”. According to Marr’s theory, we need every level to describe cognitive processes thoroughly. Now we just need to try to find the algorithm of the brain!

    I also had another question on the side, which might be a stupid one, concerning the Turing Test. Do the participants know that they are testing a machine, or has anyone ever tried to disguise the study as if they were only conversing with a human being from the start? I feel like that would make a big difference.

    ReplyDelete
    Replies
    1. If Marr was a computationalist, there are only two levels: computational (the algorithm) and the hardware that executes the algorithm. The performance of the task that the algorithm enables the organism to perform is not another “level.”

      Turing’s “imitation game” was just a way to set up the idea of the Turing Test, first as a game. Testing T2 requires a chatbot that you can’t tell apart from anyone else you can text with, for a lifetime. But when you’re T-testing Kayla there’s no problem with your knowing she’s a robot. The test is only about whether she can do and say anything a real human can, for a lifetime.

      Delete
  29. I found this reading very insightful, as I have read about Searle’s Chinese Room before. I particularly enjoyed the part about introspection and the evaluation of why introspection, specifically armchair theorizing, cannot be a solution for categorization. I thought it was very interesting how the imagery theorists separate categorization into steps: first an image, then identification, then putting it into words. However, this doesn’t explain how we are able to do any of this. This lack of knowledge, and our failure to question it, is related to anosognosia. This made me think: if we can’t elucidate the mechanism of cognition, where do we search for answers to our fundamental questions? Do the answers lie in the mechanisms of computation?

    "How do I come up with her picture? How do I identify her picture? Those are the real functional questions we are missing; and it is no doubt because of the anosognosia – the “picture completion” effect that comes with all conscious cognition -- that we don’t notice what we are missing: We are unaware of our cognitive blind spots – and we are mostly cognitively blind. "

    ReplyDelete
    Replies
    1. That people cannot reverse-engineer their own capacities is not the problem. The problem for cogsci is to figure it out, test it, and show what works. Right now, we're asking whether computation alone is enough.

      Delete
  30. I found the "Cohabitation: Computation at 70, Cognition at 20" reading interesting because it paves a clear roadmap for one route to studying cognition: building a robot that can pass the Turing Test in all of our behavioural capacities. What baffles me, however, about studying cognition is: how can this robot account for cognition in general when cognition is fundamentally plastic and, thus, varies from person to person based on genetics and experience? For example, someone who naturally enjoys completing logic puzzles may solve problems differently than someone with a natural affinity for painting landscapes. Additionally, someone who has experienced trauma may think differently than someone who has not (i.e., their cognitive tendencies may be more driven by anxiety than the average person's). In other words, surely we can only study the cognition of a machine that passes the Turing Test if it has a distinct set of "genetics" and "experiences"? If this is the case, then how do we extrapolate those results to human cognition in general?

    Perhaps a robot capable of experiencing trauma will have the ability to explain how cognition changes in the face of trauma one day, or perhaps these are questions that can only be answered after understanding how anxiety, as a cognitive process, works in the first place.

    ReplyDelete
    Replies
    1. Variation is easy to produce; you can already have it with reverse-engineered hearts, which need not be identical in every respect, or day to day, as long as they can all pump blood.

      But focus first on whether Kayla can do everything we can do before worrying about whether she can feel anxiety or trauma. Do the cogsci first and leave the clinical psychology for after you’ve nailed the easy problem.

      Delete
  31. As a cognitive science major, I had heard the name Donald Hebb many times due to his work in neuropsychology, but I was never familiar with his simple, yet ingenious method in separating the study of cognition from the study of behaviourism that is discussed in the opening paragraph of this paper. The notion that to study cognition, one must look inside the black box that deals with and is responsible for producing the input, the command and the output interests me a great deal because it seems like the study of cognitive science is trying to reveal a process that is extremely difficult to understand and that we know very little about.

    ReplyDelete
  32. Stevan Harnad: Categorization
    In this video, Pylyshyn's notion of imagery conversion to describe what is happening in the brain could be paralleled with what we now know as voxels, which represent images in the brain. This involves AI modelling of images represented in the brain. Its application can be extended to dreams, where voxel patterns have been found to be predictive (at a higher probability than chance) of what a person could have seen in a dream. It could even produce a rendition, though blurry, of the images in one's dreams.

    The differentiation between thinking and simple symbol manipulation is interesting because it states that the main difference is in the latter, where what the symbol means does have an influence on what it is that we do with them. In systems that follow formal rules, like mathematics, we can usually infer that they are grounded in some objective rule-and-manipulation system. What is particularly of note is the fact that in thinking, symbol manipulations don't always make sense; in other words, they don't maximize the information available to the system. This can be seen in cases where individuals make what we would call an irrational decision. This suggests that something beyond the symbol-manipulation process exists downstream, which dictates how the user will ultimately think and behave. In that case, how do we formalize such a phenomenon? What could be said about the manipulations that occur past the point of simple computation?

    ReplyDelete
    Replies
    1. I believe you meant that in the latter the meaning does not have an influence. I also noted that distinction, and I think it ties into a definition from the previous reading (1a. What is Computation) about algorithms with defined outputs constituting computation. The fact that “irrational” decisions can result from thinking appears in itself to be a flaw in the proposition that cognition is computation, because, as you mentioned, there seems to be another process influencing the thought process and, subsequently, the actions taken by an individual. I would like to note that this keeps in mind the definition of computation as determined by set algorithms, removing the notion of “a leap of faith” into a random output after a series of states.

      Delete
    2. Kid-sib does not understand what either of these postings is saying about computation, or about how to solve the easy problem.

      Delete
  33. I have found the ‘Computation and Consciousness’ part very interesting. If a mental state is a computational state that, like software, could be installed anywhere with the same capacities as the human brain, then if my mental state is reproduced perfectly on another symbol system, will we share the same consciousness? I think this problem is what Nagel mentions in one of his papers where he introduces concepts like ‘mental connectedness’ and ‘mental continuity’, etc. Meanwhile, if the symbol system and I are connected, then what element connects us? If we are not, then what element makes us different? I think that we might be closer to an answer as to whether it is the first case or the second case as humans progress toward a computer that could potentially pass the Turing Test: the GPT-3 chatbot convinced a human that it is conscious earlier this year.

    ReplyDelete
    Replies
    1. Let’s leave the sci-fi challenge of “beaming” someone into something else until we have a successful solution to cogsci’s easy problem.

      Delete
  34. After the question posed in class regarding who our 3rd-grade teacher was, I have been questioning much of what I do and how I do it. Before, I took my behavioural capacity for granted. Even for something as simple as remembering someone, when it comes down to it, I really can’t explain how I am able to make it happen. The reading mentioned the “mental imagery theory,” according to which most of us see images when we introspect, which I did not realize was so universal (I figured that some people might just see words). I wonder what Helen Keller would have responded when asked how she remembered her 3rd-grade teacher.

    ReplyDelete
    Replies
    1. Cogsci can’t figure out what it’s trying to explain until it recognizes that it hasn’t done it yet.

      Delete
  35. Cognition has always been an intriguing topic for me. It seems very abstract and hard to define. Even after reading this paper, I am still unsure what it is exactly, but it helped me to understand what it is not: it is not merely computation; there is more to it. Could it possibly be that attaching meaning to things makes something cognitive? For example, when a robot plays chess with a human player, it knows all the rules and knows when it wins/loses/ties, but winning does not mean anything to the robot; only we humans attach meaning to it. Does that automatically make us more cognitive? The end of the paper, “As to which components of its internal structures and process we will choose to call “cognitive”: Does it really matter? And can’t we wait till we get there to decide?”, raised a really interesting point. Our current technology is still very limited (there will be much more advanced robots in the future). Would it make more sense to discuss which part is cognitive (if any) in the future?

    ReplyDelete
    Replies
    1. A toy chess-playing computer or robot does not know anything. It has only the know-how to follow rules – the way a rock has the know-how to fall if you drop it…

      Delete
  36. After reading this paper I felt, like Searle, that we should abandon the Turing Test and instead focus on "studying the dynamics of the brain". I understand that much can be learned by reproducing different functions of the brain and examining how we reproduced them, but it seems to me that can only take us so far. It also seems like taking that path would always lead to the same issue of not being able to claim that the AI actually feels something and ascribes meaning to the symbols it manipulates.

    ReplyDelete
    Replies
    1. “Studying the dynamics of the brain” has so far not given a clue as to how to reverse-engineer T3 capacity. Even if computation alone won’t work, that gives no clue of what will. (But don’t forget Weak AI (and the Strong Church-Turing Thesis): computational modelling may still be a useful tool. Think of pre-testing satellite launches by modelling them computationally before spending the money to build the hardware and risking human lives to test-launch it.)
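      As a toy illustration of that Weak-AI point (my own example, not from the reading): a few lines of simulation can tell you, before anything is built, whether a launch would even reach a target altitude. The computational model stands in for a costly and risky physical test; it simulates a launch, but nothing is launched.

         # Toy sketch: pre-test a "launch" computationally instead of physically.
         # Deliberately crude model: straight-up launch, constant gravity, no air resistance.
         G = 9.81  # m/s^2

         def reaches_altitude(v0, target_m, dt=0.01):
             """Step the equations of motion and check whether the target altitude is reached."""
             y, v = 0.0, v0
             while v > 0:            # stop at the apex
                 y += v * dt
                 v -= G * dt
                 if y >= target_m:
                     return True
             return False

         # "Launch" candidate designs in simulation, not on a launch pad:
         for v0 in (50, 100, 200, 500):
             print(v0, "m/s ->", reaches_altitude(v0, 10_000))   # 10 km target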

      Delete
  37. In reading this article I was struck by the approach of seemingly bracketing the hard problem of consciousness. This makes sense given the approach(es) generally used and the goal in mind, namely

    "...the question of how the mind actually does what it does."

    However, given this state of affairs, I wonder how it is possible to completely dismiss qualitative considerations of "likeness" and phenomenological intersubjectivity if we wish to understand the how. After all, a key point in Searle's Chinese Room thought experiment is that

    "The meanings [in a Turing test] are all just in the heads of the external users..."

    If we don't know what meanings are and how they might themselves cause the mind to function, maybe we also won't be able to explain some of the 'easy' problems either.

    ReplyDelete
    Replies
    1. What is on trial in Searle’s Chinese-Room Argument (which we have not yet gotten to! Week 3) is whether computation alone can pass (Chinese) T2, and, if so, whether that means T2 can understand (Chinese).

      It feels like something to understand (Chinese), and Searle does not feel that feeling. Therefore computation is not cognition, even when you are executing the right algorithm.

      Moreover, when Searle executes the (Chinese) T2 algorithm, he knows how to manipulate the symbol 斑馬 (Bānmǎ), but he could not point out a real 斑馬 (zebra) if asked in Chinese to do so. So his T2 algorithm would be useless in trying to upgrade to T3.
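      To make that last point concrete, here is a deliberately crude sketch of my own (not from the reading, and of course nothing like a real T2-passing algorithm): a pure lookup table can map Chinese input shapes onto plausible Chinese output shapes, but nothing in it connects 斑馬 to anything in the world, which is why even a genuine T2 algorithm would offer no route to T3.

         # Crude sketch: "conversation" as pure shape-to-shape lookup.
         # However sophisticated a real T2 algorithm were, it would still
         # only relate shapes to shapes.
         RULEBOOK = {
             "斑馬是什麼？": "斑馬是一種有黑白條紋的動物。",   # "What is a zebra?" -> canned answer
             "你看得到斑馬嗎？": "看得到。",                   # "Can you see a zebra?" -> "Yes."
         }

         def reply(chinese_input):
             # Shapes in, shapes out: nothing here ever connects 斑馬 to a real
             # zebra (no sensors, no effectors), so there is no route to T3.
             return RULEBOOK.get(chinese_input, "請再說一次。")   # "Please say that again."

         print(reply("斑馬是什麼？"))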

      Delete
  38. "There is still scope for a full functional explanation of cognition, just not a purely computational one. As we have seen, there are other candidate autonomous, non-homuncular functions in addition to computation, namely, dynamical functions such as internal analogs of spatial or other sensorimotor dynamics: not propositions describing them nor computations simulating them, but the dynamic processes themselves, as in internal analog rotation; perhaps also real parallel distributed neural nets rather than just symbolic simulations of them. "
    I found the contrast of this view with computationalism interesting. Searle showed that the difficulty with computationalism is that the manipulation of symbols does not explain how meanings are made. In other words, inside the brain in the computationalist sense there can be no meaning. If we understand the algorithms, we know how to get from A to B. In the real world, we not only need to get from A to B, we also attach meanings to A and B; the question is, how?
    However, the advantage of computationalism is that it views mental processes as static processes that can be described as programs and symbols. The difficulty with the dynamic-processes view is: how do we describe them? If we are able to describe these processes, then they too are computational, because in describing them we transcribe dynamic processes into static descriptions. (This is just a provisional idea.)

    ps: It seems I can't change the font to italics in this interface.

    ReplyDelete
    Replies
    1. “Dynamic” in this case means physical rather than computational: dynamics like movement, heat, electricity, chemistry, rather than symbols being manipulated like a recipe being executed by a symbol-manipulating machine.

      But symbol-manipulators are dynamic too, not static. It’s just that their physical dynamics (hardware details) are not relevant to what they are computing; only the software matters.

      Yes, symbol grounding and meaning will turn out to be important, but we are not there yet (Week 6 and onward).

      (Please always read all the commentaries and replies on a reading’s thread. Don’t just read the reading and then jump in. What you say may have been said, and replied to, already.)

      (You can use a few (but not all) html symbols for things like italics: < i > and < /i > (no space after < or before >).)

      Delete

  39. The reading anticipated questions I had from the last readings: why is the idea that the mind manipulates symbols purely on the basis of arbitrary shape more convincing than saying it does so partly on the basis of meaning; furthermore, how is it that symbolic shapes come to be grounded in real objects? I’m still not quite clear on how symbols are ever grounded. Is it through the process of categorization, where “the invariant features of the kind must be somehow extracted from the irrelevant variation” and from real objects an invariable symbolic shape emerges as the symbol for that category? In this case, the symbols cannot precede the categorization that creates them, so something non-computational must explain the ability to construct categories in the first place. Provided this question is actually relevant, are the “internal analogs of spatial or other sensorimotor dynamics” supposed to be a solution to this?

    ReplyDelete
    Replies
    1. It’s not the “mind” that manipulates symbols according to computationalism, it’s the brain, and the T2 reverse-engineered brain. The only time it’s your mind that’s doing it is when you are doing long division, or factoring a quadratic equation, or painting by numbers, or following a cooking recipe. The rest is happening inside your homunculus, invisible to introspection.

      We’ll get to grounding by and by. You have some of the keywords, but not put together into something understandable. But I’m not sure you understand what a symbol is (in computation).

      To the extent that the brain does sensorimotor and analog processing (like a sundial – see prior replies). Yes, this precedes language and the meaning of words.

      Delete
    2. My understanding of a symbol in computation: a discrete shape that is recognized by a computer as signifying a certain operation specified by the computer's program. I think I am having trouble imagining what symbols the brain uses in its computations, and am confusing several definitions of symbol in trying to grasp how they apply to computations done by the brain. Are symbols to be found at the neurochemical level?

      Delete
    3. Any shape can be used as a symbol by a symbol-manipulating machine (e.g., a Turing Machine). But not everything is a TM (even though everything is simulable by a TM). In fact most things are not TMs.
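      A minimal sketch of my own (illustrative only, not from the reading): the same little rule table drives a Turing-machine-style symbol flipper whether the tape symbols are '0'/'1' or any other pair of shapes, because the rules only ever test and rewrite shapes, never meanings.

         # Tiny Turing-machine-style sketch: flip every symbol on the tape.
         # The symbols' shapes are arbitrary; swap in any two distinct shapes
         # and the same machine still "works", because the rules manipulate
         # shapes, never meanings.
         def make_flipper(sym_a, sym_b, blank="_"):
             # transition table: (state, read-symbol) -> (write-symbol, move, next-state)
             table = {
                 ("scan", sym_a): (sym_b, +1, "scan"),
                 ("scan", sym_b): (sym_a, +1, "scan"),
                 ("scan", blank): (blank, 0, "halt"),
             }
             def run(tape):
                 tape = list(tape) + [blank]      # ensure a blank to halt on
                 head, state = 0, "scan"
                 while state != "halt":
                     write, move, state = table[(state, tape[head])]
                     tape[head] = write
                     head += move
                 return "".join(tape[:-1])
             return run

         flip_bits = make_flipper("0", "1")
         flip_shapes = make_flipper("☀", "☾")
         print(flip_bits("0110"))        # 1001
         print(flip_shapes("☀☾☾☀"))      # ☾☀☀☾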

      Delete
    4. Yes, still not clear on what the symbolic input to the brain is from a computationalist perspective.

      Delete
  40. This reading made me realize the endless processes going through our minds (some of which we still cannot truly explain), even during very simple daily tasks. I would never have thought about the processes running in the background of my mind while I think about my third-grade teacher, or while I do a basic math problem. It seems like the “how” questions never end; there is always another “how” question we can ask about our potential answer regarding cognition, and it seems like we tend to get stuck at some point because the mind is much more complicated than we think.

    ReplyDelete
  41. Please read previous comments and replies.

    ReplyDelete
  42. “Computation at 70, Cognition at 20” makes a really important and intriguing point. Cognition, that is, how certain ‘thoughts’ and ‘decisions’ are made, is a hard thing to understand and explain. He mentions that one may try to answer these questions by saying we memorized certain things or even computed them mentally, but that still would not suffice, since there are plenty of things that we do not know the algorithm for. The article made me question a lot of things, and it certainly increased my interest in this perhaps unanswerable question: ‘how do we do what we do?’

    ReplyDelete
  43. I thought the paper “Cohabitation” provided a really great overview of how explanations of cognition have evolved over time. The proposed idea of scaling up the Turing Test to take into account sensory information to connect its internal symbols was especially interesting to me. Considering that the main issue with the computational explanation is its lack of symbol grounding, I thought it would be interesting if someone were to design a TT for a non-human animal instead. This would allow us to choose a simpler organism that still has the capacity for whatever we wish to focus our studies on (whether it be symbol grounding or feeling, etc.) without the more complex confounding factors such as human language. By understanding how a simpler organism feels, we can get closer to understanding how a human can feel and solve the same cognitive problems. Even if the answer lies outside of computation or dynamical processes for these simpler organisms, it will still provide clues for further understanding, because all life forms are related to one another through evolution.

    ReplyDelete
