Harnad, S. (2012) Alan Turing and the “hard” and “easy” problem of cognition: doing and feeling. [in special issue: Turing Year 2012] Turing100: Essays in Honour of Centenary Turing Year 2012, Summer Issue
The "easy" problem of cognitive science is explaining how and why we can do what we can do. The "hard" problem is explaining how and why we feel. Turing's methodology for cognitive science (the Turing Test) is based on doing: Design a model that can do anything a human can do, indistinguishably from a human, to a human, and you have explained cognition. Searle has shown that the successful model cannot be solely computational. Sensory-motor robotic capacities are necessary to ground some, at least, of the model's words, in what the robot can do with the things in the world that the words are about. But even grounding is not enough to guarantee that -- nor to explain how and why -- the model feels (if it does). That problem is much harder to solve (and perhaps insoluble).
If the hard problem is indeed insoluble, why is that so (in an ultimate sense)? Is it due to a hard fact about the universe (some things are unknowable, period)? Or perhaps to some fact about our psychological/biological make-up (some things are unknowable to puny humans, period)? I cannot help but feel that both suggestions are arbitrary. It reminds me of the kind of appeal to the supernatural that dominated pre-scientific thinking (X cannot be known because God decided so).
Let's say that cognitive science cannot explain feeling, only doing. What's preventing us from creating a new mode of explanation that does the job? We did not always account for physical phenomena in terms of initial conditions + laws of motion, nor for biological phenomena in terms of variation + selection. Those kinds of explanations were invented at some point in history. Can we do it again?
(yes, I am an inveterate optimist)
Gabriel, no one knows either the solution to the hard problem or whether it is solvable – nor, if it is not, why not.
My own hunch is that it has something to do with the nature of feeling and the nature of causal explanation. If the easy problem is solvable (and there’s no reason to believe it’s not) then once it’s solved, there’s no causal room left to explain feeling: it’s causally superfluous.
If feeling had causal power (“mind over matter”), then that would solve the hard problem; but there’s no evidence for a “fifth force” in the universe, telekinetic dualism, and plenty of evidence against it (from failed experiments in telekinesis and telepathy).
Yet sentience is an evolved, biological trait; so surely it must have an adaptive, hence causal, function. The hard part is putting your finger on what that causal function is. All the candidates can easily be shown to be superfluous. (Try some!)
As I mentioned in the CHALLENGE in 10a, Question 2 in the midterm this year suggested another candidate to me:
If T2 can only be passed by a T3 robot like Kayla (because of the need for sensorimotor grounding of words), and if it also turns out that T3 cannot be passed unless it includes some components of T4 (the brain functions that are correlated with feeling – hence Kayla would have to have those too), would that be a step toward giving a causal grounding to feeling?
Not very satisfying, because it still does not explain the causality behind the correlation, but better than nothing…
“We are waiting for cognitive science to explain to us how we -- or rather our brains -- do it.”
I feel as if we will never find an answer to the hard problem because the mere goal of cognitive science is paradoxical in nature: it is about our brains reverse-engineering our brains. Is this really possible? I think the first step is maybe admitting that we won’t find a biological explanation for the need for sentience. Does everything have to have a causal function? We have trouble justifying evolutionarily why men have nipples or why some people are lactose intolerant and others not. But what happens if we abandon the perspective of natural selection? Do we lose the need to resolve the hard problem? Feeling could be but a nonadaptive trait, a spandrel of some other functional trait.
Without feeling, nothing matters. This is not an answer to the hard problem, not even close, but it's something I know to be true and something I think most people would agree with.
Therefore, your question, Tess, about natural selection is really interesting to me. It made me think about feeling in non-human animals. For example, my dog certainly feels, and he can even sense how others are feeling, which is incredible. Maybe this gives him an evolutionary advantage because it causes my family to care about him and keep him alive for as long as possible. Another thing that comes to mind is the way that some animals get emotionally attached to their offspring or mates, and will mourn and grieve when a baby or mate dies. Does this serve any benefit? Or does it serve no purpose and maybe even impact them negatively (too distraught to eat, for example) in which case, why does feeling even exist if not to serve a purpose? I'm sure we'll talk more about this next week, but I'm really curious to know if we can make any generalizations about the advantage of feeling, evolutionarily or otherwise, or if it is as non-functional as this week's readings suggested.
Tess, you wrote:
“[Can] our brains reverse-engineer our brains”?
Why not? They’ve reverse-engineered plenty of other things, including atoms and the universe – and genes.
“we won’t find a biological explanation for the need for sentience”
Fine, but why not?
“Does everything have to have a causal function?”
No. Engineers and artists can design things that have no causal function (but their capacity to do it has a causal explanation).
Most things encoded in our DNA got there because of their adaptive (survival/reproduction) advantages. Mutations happen randomly, but that usually is not enough to keep them in our DNA.
“We have trouble justifying evolutionarily why men have nipples or why some people are lactose intolerant and others not.”
Not that much trouble (though evolutionary explanations are always at risk of being Just-So stories because they are often based on one-off conditions, long ago, rather than repeatable experiments, today).
Male lactation has even become functional in some species, such as my kindred spirits, the fruit-eating bats.
Most mammalian species stop seeking milk once they are weaned from their mother’s milk, so they can learn to seek food for themselves. It is adult lactose-tolerance (not intolerance) in humans that became an exception; it evolved, along with meat-eating, because of the availability and nutritional advantages that came with human animal husbandry. But today it is unnecessary. Its main effect is to cause enormous suffering in cows and their calves, and to contribute to climate change.
“if we abandon the perspective of natural selection… do we lose the need to resolve the hard problem?”
We can always free ourselves from explaining if we stop trying…
But the hard problem is not just an evolutionary problem (“why?”); it is also a reverse-engineering problem (“how?”).
”Feeling could be but a nonadaptive trait, a spandrel of some other functional trait.”
Like male lactation? Possible, but what is that other functional trait of which feeling would be a spandrel? (See the discussion of “spandrels” in Weeks 7 and 8.)
And would it not be surprising (and disappointing) if the only biological trait that matters, and makes anything else matter [think about it] turned out to be “but a nonadaptive trait”?
Teegan, yes, feeling is the only thing that matters: why?
Yes, humans and dogs co-evolved with joint hunting, and eventually domestication. But if there’s a reason we have empathy for dogs, it’s because all mammals have empathy for their own young. (If they ate them, they could not pass on their own genes.) And other mammals, especially their young, have the same features that our mirror-neurons pick up on in our own young.
But human domestication, breeding and use certainly has not been a boon to dogs as individuals, nor to any other sentient being we have domesticated and bred for our use, any more than it was a boon to the human beings enslaved for use by other human beings. It’s individuals who feel, and to whom things matter, not populations, or gene-pools. Our mirror-neurons tell us, if we look, that a dairy cow, bred for a short life of forced pregnancy, followed by infanticide two days after each birth, would have been infinitely better off not to have been born. With a natural life expectancy of over 20 years, they are relentlessly milked by machines in between their ceaseless forced pregnancies, each one punctuated by the agony of having their babies torn from them so their milk could feed our babies instead. After 4 years of this, their udders are spent and diseased, they can no longer stand, and they have to be forklifted by yet another machine to get them to the slaughterhouse so their throats can be cut so the remains of their flesh can end up on our plates.
The cow’s and the pig’s and the chicken’s individual life, too, is indescribably wretched. But you just have to look at the street dogs and cats in India and Iran – or the covid-companion dogs and cats here in Montreal, dropped off at the SPCA (or “freed” in the streets) once covid relented and their companionship was no longer needed – to realize that, no, bondage to humans is not a boon to any human-bred species, no matter how it is depicted in the “happy farm” images or the “how much is that doggy in the window” storefronts.
Saving individuals (dogs, cats, chickens or fish), on the other hand, is always ethical, if you give them a happy remainder to their lives.
Hi Teegan! I am interested in your point in which you said, “Another thing that comes to mind is the way that some animals get emotionally attached to their offspring or mates, and will mourn and grieve when a baby or mate dies. Does this serve any benefit? Or does it serve no purpose and maybe even impact them negatively (too distraught to eat, for example) in which case, why does feeling even exist if not to serve a purpose?”
I view the grieving process as generally maladaptive behaviour, but its existence is based on some adaptive functions. It is beneficial for humans and animals to form tribes and connections with others within their species in order to survive. Having an emotional attachment to these tribemates increases our personal motivation to stay close to them and to help them survive, as the relationship is mutually beneficial.
In terms of mates and offspring, it is our emotional attachment to them that allows the continuation of a species. If humans did not feel an attachment to their offspring, they would leave them to die and the human species would go extinct. Similarly, forming an attachment to a romantic partner promotes more mating and more offspring.
The experience of grief is terribly painful, and reinforces the adaptive human instinct to maintain the survival of our companions, offspring, and mates. It can be considered a form of punishment, which motivates us to avoid the punishment (grief) at all costs. However, it does reach a maladaptive point when it damages our ability to survive and care for ourselves and others.
I think that to investigate the hard problem, or to state that feeling is an evolutionary trait or something else, we first have to know what feeling is by defining it in an objective sense. However, feeling is only given a priori as something felt mentally - one can say the Cogito; but just what does one feel? No one knows. And there is not even a possible route in the distance for us to know what is felt. So because feeling is so hard to define, its properties and evolutionary advantages cannot be known. And so it is with the hard problem of how and why.
Kimberly, yes, some feelings are maladaptive, but the capacity to feel must surely be adaptive.
(BTW, if we can’t explain the causal role of feeling that doesn’t mean feeling didn’t evolve because of adaptive advantages. It just means we can’t explain what the adaptive advantages were.)
Han, the problem is not defining feeling but explaining it.
(Defining feeling may be related, though, to “Laylek” and “uncomplemented categories.”)
Kimberly, the question of the adaptive value of feeling goes far beyond the adaptive function of social or familial attachment, which would be a survival trait even in a zombie universe. What is the adaptive value of the fact that it feels like something to be tired, or to see the color green?
The most likely adaptive function of not-eating one’s children, or of continuing to try to help them survive even when they are sick, or dying, or dead, is that it helps get our genes into the next generation. But that would be true in a Darwinian world of zombies too.
In other words, the HP is harder than that.
Han, the HP is the problem of explaining feeling, not defining it. We all know what it feels like to be hurt (or to view something green). The question is: how and why? – not what?
HP is a problem of causal explanation, not a problem of verbal definition. Feelers get the referent of the category name “what it feels like to feel” for free, unlike the referent of the category name “edible mushroom,” or even “green.” (And a good thing, too, because the categories “edible mushroom” and “green” are not afflicted with POS, so they can be learned by unsupervised and supervised learning, whereas “what it feels like to feel” (and “Laylek”) are afflicted with POS, so cannot be learned by unsup and sup.) Can you understand and explain to kid-sib what I just said? Good exercise for the Final Exam…
This reading helped me better understand the way in which Searle’s CRA infringes on the hard problem. Indeed, besides the fact that Searle can penetrate the other-minds barrier by becoming the T2-computer (with no felt states), he is able to conclude that he is not understanding Chinese because understanding Chinese feels like something, and he does not feel anything. Descartes’ cogito justifies Searle’s argument: there is no doubt that one is ‘thinking’ (or cognizing) when one is thinking/cognizing. There is therefore no doubt that Searle is not cognizing when implementing the Chinese program as the T2-computer, since he does not feel that he is cognizing (or anything really) and the purely computational computer has no felt states which could interfere with the argument. There is therefore also no doubt that the T2-computer is not cognizing (and is inadequate to provide a causal mechanism for cognition) because of the implementation-independence premise of computationalism (its soft-underbelly).
Mathilda, first, don’t forget that Searle’s is a thought experiment: “IF computation alone could pass T2, THEN it would be passing T2 without understanding Chinese, because computation is implementation-independent, and Searle would not be understanding Chinese.”
That is a sound piece of hypothetical reasoning, but it is counterfactual if, in fact, T2 cannot be passed by computation alone, and requires T3 capacity, which cannot be purely computational!
You are right that if the computationalist premise were true, Searle’s argument would be sound, because of the Turing premise of computation itself (not computationalism): all computation, not just the TT-passing algorithm, is implementation-independent.
And it’s not that Searle’s own cognition would become empty as he himself implemented the T2 algorithm. It’s just that implementing the T2 algorithm would not produce the FEELING of understanding Chinese, even though it would produce the T2 DOING capacity. That’s doing with words alone what a Chinese-understander can do with words alone.
But let’s not forget the symbol grounding problem. There is another thing that Searle would lack, besides the feeling of understanding Chinese: the T3 capacity to DO, in the world, with the referents of those words (i.e., the members of their referent categories), all the things someone who can really understand those words can do. That is also the reason to doubt that the computationalist premise is true.
This article puts into perspective a good chunk of what we have seen in class this semester. What started as a thought experiment by Turing developed into a whole scientific field from which we benefit today. I understand now why Professor Harnad attributes the title of "Giant" to Turing.
One thing that caught my attention is that each time various thinkers tried to solve easy problems, it would instead give way to even more of them. For example, Searle showed how cognition could not be only computation. There needed to be something that grounds the symbols we manipulate within our minds. Then, how do we ground symbols? We ground them by categorizing them. How do we categorize? By learning. But how do we learn? Through supervised/unsupervised or verbal learning.
My point here is that there will always be a causal explanation for why we do what we do. But (so far) never for why and how we feel.
Alexei, why not? (K-S wants to know.)
Melis, you're mixing up the HP and the OMP: how?
Hi Melis, I think you were referring to two different problems. The hard problem (the unsolvable one) is what you said: “how and why we feel”. The reason for the hard problem that you mentioned is actually the other-minds problem — can we know what another organism is feeling?
Nadila, that's it.
Professor Harnad: I am not entirely confident, but I think it has something to do with the nature of feelings.
Feelings are not objective; if we put aside all the metaphysical claims (that there is a perfect world of forms à la Plato), no feelings will ever be the same for anyone. Sure, we can categorize them by saying that this feeling makes me happy, angry, or sad. But it still won't give us its true nature. (Thus why it's the hard problem.)
Only when we know where feelings originate will we be able to answer the hard problem (but then will it still be a hard problem or an easy problem?). In other words, we cannot causally explain why feelings are present in organisms.
Alexei, the diversity of feelings, and the fact that we can only feel our own, are OMP matters. They are not what makes the HP hard. The hardness of the HP has to do with causal explanation. Without a causal explanation, feeling would be the only biological trait that was causally superfluous (i.e., not explicable causally, like all other biological traits).
I, too, found that this paper was a rather concise synopsis of the course. It also raised more questions - as Alexei said, attempting to answer problems begs for more problems. I feel as though this is somewhat essential to cognition (although precise, it can also be quite vague in nature) - reminds me of the process of cleaning one’s room (becoming messier before it is cleared).
Turing indeed set the foundation for what would become cognitive science, and although he was not a computationalist about cognition, his computational Turing machine was essential to it; the only thing missing was symbol grounding for him to realize it, since it was only symbols in and symbols out… Searle furthered this thinking in the Chinese Room thought experiment, yet this led to even more questions.
This article ties together what we have been discussing/reading in this course. It argues that Turing wasn't a computationalist, even though anything could be simulated with computation (Strong C/T Thesis). Computation is limited in answering the hard problem. In reverse-engineering, doing-capacity is generated: it's observable and verifiably produced. In contrast, we can only hope that in producing all the doing capacity (in T3 or T4) we have also, for some unexplained reason, produced feeling. Whether we have done so is the other-minds problem. How and why is the hard problem.
Melis, that's right. So do you see where you went wrong in your Nov 16 comment before this one?
Well, this essentially summed up the first half of our course in a very nice kid-sib way. Don’t have much to say other than no Descartes, no philosophy, PLEASE!!!! It is interesting to think about how he says “I can be sure what it feels like right now is what it feels like” because though it may feel like one thing, it may be entirely caused by another. For example, I was feeling depressed this morning and then I realized it was just because I was hungry and cold. I understand being sure about how I feel may not have to do with being sure about THE REASON, but it does sort of make you wonder whether your decisions/actions/doing capacities based on these feelings are appropriate at the time.
I also wanted to say I won't be in class this week because I have to go back to Toronto for the weekend, so I hope you can find a temporary fill-in for my timekeeping job. I will be back next week.
Laura, feelings have probably been, on balance, more adaptive than maladaptive, otherwise they would not have evolved and stuck. But feelings are not just emotions and moods: it feels like something to be injured physically, it feels like something to reason, it feels like something to understand Chinese…
(Of course, the obvious fact that the capacity is caused somehow by genes, and our brains, and must have had adaptive advantages, does not help explain how, or why. – And the HP is not a philosophical problem; it is a scientific problem, of causal explanation.)
That reading summarizes a lot of what we discussed in class. It first addresses the immense contributions of Turing to what was later called “cognitive science” and it discusses the fact that compared to the easy problem (that is about how and why we do what we do), the hard problem (how and why we feel what we feel) is hard to solve, perhaps insoluble. The essay also highlights the fact that Turing was probably not a computationalist, and, as Searle demonstrated with the Chinese Room experiment, what is going on in one’s mind cannot be assessed only by looking at their behaviour. In order to be sure that one thinks, we have to be that “one” (1st person), and there is no other way to solve the hard problem.
Charlene, you sorted some things out, but then at the end you fell into the HP/OMP confusion again: how?
I believe where Charlene went wrong was solely with her last sentence, in which she stated, “In order to be sure that one thinks, we have to be that “one” (1st person), and there is no other way to solve the hard problem.” Searle’s argument is addressing the problem of other minds, not the hard problem. The hard problem is not about being sure that one thinks, but rather about understanding how one is able to feel. The hard problem is about the causal mechanism of feeling, rather than “how do we know that other people feel?”
Kimberly, that's it.
Oops, thank you Kimberly, you are right, I got confused between the HP and the OMP. Your comment helped me better see the nuance between the two!
This reading allowed me to more clearly understand the argument for why Turing was not a computationalist when it came to cognition. I specifically found the mention of him naming the TT the "Imitation Game" helpful, as this indicated that Turing was primarily concerned with the reverse-engineering of doing capacities, not necessarily advancing the claim that this test could allow us to reverse-engineer feeling.
Darcy, that’s right, and the amazing thing is that to Turing all of that was obvious. (Of course, he did slip up a bit on “solipsism” as well as ESP…)
My thought on the hard problem is: 1) consciousness = feeling; 2) feeling is a biological capacity, and therefore a physical capacity; 3) to explain cognition is to reverse-engineer the doing capacity of the mind; 4) we could reverse-engineer the feeling capacity and therefore give a causal explanation of feeling.
Suppose we successfully found a T3 or T4 solution to cognition. It is true that the T3 or T4 does not need to feel to be functionally indistinguishable from humans. But how could we exclude the feeling capacity from our design? Feeling is certainly one of the functions of the brain, and of the mind. Therefore we simply cannot have a T3 or T4 that cannot feel. But then the hard problem would be solved.
Many have said that correlations are not explanations (which are what we want). While this is true, I am inclined to believe that correlations are useful in developing explanatory theories of the mind. Just as theories in all the natural sciences are based on evidence and observations, this is also true for cognitive science. Therefore there is causal explanation. I am not sure whether T3 or T4 might be the right solution, though. Intuitively, feeling is similar to the vegetative functions, regulated by the brain. So it might be a T4 problem. Therefore T4 is the right solution to the hard problem.
Cynthia, of course the brain causes the capacity to feel. The problem is to explain how (and why).
Why is the hard problem insoluble? In other words, why can't we reverse-engineer feeling capacity, like we reverse-engineer other cognitive capacities?
Cynthia, the hard problem is hard because if/when the easy problem is solved, there will be no causal room left for an explanation of feeling, leaving it superfluous. Please read the other replies because this is discussed at many points in the replies to comments on 10a,b,c.
This paper was a comprehensive summary of some key themes in the course so far. I was particularly interested in the section about whether Turing was a computationalist. Like Alexei, I agree that it now makes sense why you, Professor Harnad, refer to Turing as a "giant" because he was thinking far past the technology available to him (i.e., he knew that whatever could pass the verbal TT would need sensorimotor robotic capacities). Something that caught my eye in this paper was the final line:
ReplyDelete"The "hard" problem is explaining how and why we feel -- the problem of consciousness -- and of course we are even further from solving that one."
From the PSYC 538 perspective, is "consciousness" synonymous with "feeling"? This may be an elementary connection, but I hadn't realized the two terms were interchangeable till now...
Polly, yes, “consciousness” (a weasel-word) is synonymous with feeling.
Try these:
Can there be an unconscious felt state? an unfelt conscious state? any more than there can be an unconscious conscious state or an unfelt felt state?
Because “conscious” is a weasel word, its function is to give the (false) impression that c and f don’t mean exactly the same thing.
I could try to answer it. I think neither an unfelt felt state nor an unconscious felt state is possible. What you feel is what you feel; if you don’t feel it, then you don’t feel it. There can never be an unfelt feeling, nor can you feel something unconsciously, or feel it but not realize it.
Nadila, that's it.
This reading is a comprehensive summary, linking all the terms and important theories we have studied to date. I thought adding Descartes’ Cogito into the summary was important. Indeed, “I cannot doubt that what it feels like right now is what it feels like right now.” This sentence is the basis for the definition and potential future solving of the hard problem. Moreover, it helped me understand a lot better that there was no link between Turing and the hard problem. Indeed, his test is oriented towards understanding the easy problem better. I believe that was an important distinction to make, in order to appreciate the full value and contribution of the Turing test.
Ines, we Lilliputians needed to make the distinction: To Turing it was obvious...
Kayla, remember that neither the OMP (nor science in general, hence not cogsci either) demands Cartesian certainty, like maths and the Cogito. Turing is right that reverse-engineering and the T-Tests are all we can ever get, and that’s good enough for the OMP.
But of course that still leaves the HP untouched.
Re-posting for Kayla Heslon
Comment posted but got deleted: “Turing was perfectly aware that generating the capacity to do does not necessarily generate the capacity to feel. He merely pointed out that explaining doing power was the best we could ever expect to do, scientifically, if we wished to explain cognition.” This reading helped me consolidate what we had discussed up to this point when it comes to the hard vs easy problem. I found the discussion on whether Turing was a computationalist - the distinction between a cognitive computationalist and a general computationalist - very helpful in light of the seeming contradictions in Turing’s papers. When it becomes possible to actually reverse-engineer cognition or create a T3-passing system (such as myself) in a lab, it may or may not feel (and we cannot use Searle’s Chinese room argument to discuss this answer at this point when it comes to T3 and T4). Even once this has been done, it is unclear to me whether we would be able to answer the question of whether T3 feels, because of the other-minds problem (we can only feel feelings that are our own and never be able to feel other organisms’ feelings; we can only attempt to infer based on behavior).
Sophie, whether we solve the HP doesn’t matter; but if we get the OMP wrong, and conclude that Kayla is a zombie, yet she isn’t, and we kick her, then that (and every single case like it, whether robot or nonhuman animal) matters more than anything else.
And that’s what Week 11 is about.
This reading was a good summary of what we have learned up until now. It outlines Turing’s creation of the Turing test to compare machines and humans without thinking it would provide us with any information about consciousness (feeling), as it only deals with doing capacities (the easy problem). Throughout the reading, it becomes apparent that the hard problem seems unsolvable, as reverse-engineering human/animal capacity is only involved with doing, not feeling.
Karina, yes, but Turing didn't want to compare humans to machines. (That would be like comparing apples to fruit.) He wanted to explain what kind of machines humans were, by explaining how and why they could do what they could do. A "machine" is just a causal mechanism. Its doings can be explained causally, but if it feels, there is a problem explaining that causally.
Alara, that's it.
Josie, that's it.
To me, this paper re-articulated the idea that while many thinkers have tried to address the questions of 'how and why', many have been unsuccessful due to becoming lost in a game of semantics. While on the surface, such as with Dennett's heterophenomenology, a solution may appear to be presented, in reality, the essence of the hard problem has yet to be resolved.
I appreciated the way in which this paper synthesized the concepts presented in previous weeks, including Searle, the TT, and the hard/easy problem distinction, to do so.
Emma, semantics or mustelidian (q.v.) legerdemain?
I also appreciated how this paper highlighted the inexplicability of consciousness that lays the foundation of the hard problem. To say you believe something, say “I believe it is sunny today” and “I know it is sunny today”, is the same thing as saying “I feel like it is sunny today”. That feeling could be right or wrong. But it’s still just a feeling whether we refer to it as believing or knowing, it is still a felt state. The hard problem is still not solved, and it is rooted in the inexplicability of not knowing why or how it feels like to feel something.
Sara, so is the Cogito an exception?
K-S could not quite follow your last sentence: “the inexplicability of [[not knowing]] why or how it feels like [?] to feel something”
This reading ties together many of the themes and topics that we've discussed throughout the course and serves as a reminder that each topic in the course is deeply integrated with all the other topics. Even evolutionary psychology, language, categorization, and mirror capacities are easily linked to the contents of this reading. Evolutionary psychology serves as possibly the best way to account for the question of why we feel, language as a way to communicate grounded categories and propositions to others, categorization as the way we organize our world to make it interpretable, and mirror capacities as a neural correlate to many of the most fundamental capacities that we have, such as empathy and understanding intention. However, Dr. Harnad, I would like to get a little bit more clarification as to an overt definition of feeling, as you use the word to refer to fervor as well as our more common use of the word, which is synonymous with consciousness. In this case, are emotions entirely encapsulated in feeling rather than doing?
Elena: Feeling is not just emotions, it’s any state that it feels like something to be in. See the reply to Jenny in 10a.
Consciousness (etc.) is just a weasel-word for feeling. A conscious state is a felt state. See the comment of Sara Kassim-Lakha, November 18, 2022 at 2:36 PM, in 10b, and also the examples in which I replaced all the weasel words in passages with FEEL or FEELING or FELT.
Of course feeling encapsulates more than just emotions. I think I failed to distinguish between an entity's felt state and an observer's interpretation of that state. To clarify my understanding: although the evoking of specific emotions is part of the easy problem, the causal mechanism behind feeling itself is the concern of the hard problem.
This reading ties a lot of the key concepts we've learned together and it helped me understand why Turing was probably not a computationalist. Rather, he knew that in order to be able to pass the verbal TT, the candidate system would have to be a sensorimotor robot, capable of doing a lot more than the verbal TT tests directly. What I wonder is if we’re looking at doing capacities to explain cognition, then whatever candidate system we’re looking at must be able to do much more than communicate indistinguishably from humans. I know that we can’t include the appearance of the system, but could there be a way to include other aspects of doing that we’re leaving out?
Alexander, distinguish a capacity itself from the T-Test for it. Read the notes on your mid-term for the question on T2/3/4: T2 only tests directly for verbal capacity. But if it’s not possible to have verbal capacity without robotic grounding, hence only a T3 robot could pass T2, then T2 is “strong” enough as a test, because it is also an indirect test of T3 capacity. Something similar could be true for some parts of T4 capacity, if they are needed to pass T3.
Appearance is tricky for reasons similar to the fuzzy distinction between cognitive and vegetative capacities, as well as between T3 and T4 capacities. Mirror capacities, for example, are part of the EP, so they should be testable with T3. That puts some “shape” constraints on the appearance of T3. Are mirror capacities “cognitive”? Would a T3 candidate -- that spoke in a monotone, had no facial expressions, made no social eye contact, could not read or produce body language, facial expressions or tone of voice, but its words and actions were nevertheless grounded in its interactions with the world – pass T3?
I don’t know. But I suspect that without those mirror-capacities the robot would not even be able to pass the rest of T3, hence not T2 either. Same is true of other structural aspects of T4: T3 will determine whether they’re needed, because without them the robot fails T3. Maybe even T2!
But I think that with this much hypothetical detail we are moving away from cogsci to sci-fi speculations.
Amélie, I think that if and when we could solve the HP of explaining how and why organisms can feel at all it would be comparatively easy to go on to study and explain how and why particular species can feel what they in particular can feel.
reposting: Amélie Gaillard (blotched by blogger):
This might be an irrelevant question so let me know what I'm missing if that's the case... From my understanding of the course so far, the hard problem is concerned with explaining how and why we are able to feel (regardless of how much we feel or of what it is we feel). Since the easy problems seek to explain how and why we can do ALL the things we are able to do, I was wondering whether there is an equivalent to this in relation to feeling. In other words, is there another (even harder?) version of the hard problem that attempts to explain not only how/why we can feel at all, but also how/why we can feel ALL the specific feelings we are able to feel?
I see I am not alone in seeing this as a very neat summary of the (two) problems of consciousness. My only question regarding it is, I suppose, on the limits of science in general. If we were to follow Turing's logic, would that not lead us to believe that all attempts to solve the hard problem are essentially useless? Would he have advocated another approach outside of what we today know as cognitive science for dissecting feeling, or have completely bracketed it?
Jacob, if someone has an alternative to Turing's way, all they need to do is describe what it is. Then we can assess whether it successfully gets beyond Turing's limit on cogsci ("reverse-engineering").
This short paper helped clarify and put all the concepts together. The causal mechanism is still unclear, but from here we know that cognition is not just computation. In order to have the doing capacity, is feeling/thinking a precursor to that, or is it the other way around? Sure, we need the sensorimotor system to interact with the real world to get some basic symbols grounded, and later we combine the basic groundings into the meanings of words, thereby understanding, and having the sense that knowing something feels like something.
Monica, Searle shows that computation alone would fail to explain cognition because it would fail to produce feeling. It would also fail to ground symbols. Passing T3 would ground symbols, and it would be immune to Searle's CRA, but would it produce feeling? Because of the OMP, we could not know for sure whether it felt. But that's ok; neither the EP nor the OMP is the HP. Would T3 or T4 explain the causal role of feeling even if T3 or T4 really really did feel? That's the HP.
After reading this article, I thought that it does a really good job of summarizing why the easy problem has to be solved before the hard problem, since we need to be able to distinguish when something or someone is just doing versus when it is doing and feeling. Although the conceptual solution to the problem seems to be that the machine itself can somehow distinguish between whether it is just doing or doing and feeling, the actual mechanics of how we would be able to know this information remain the difficult part. As the article describes, cognition is a very personal thing in that only we can know for certain what we are feeling, so figuring out whether a machine is truly cognizing would have to involve somehow gaining this information through other channels.
Brandon, you still seem to be conflating the OMP and the HP. What is the HP, and why is it hard?
From my understanding, the hard problem is how and why organisms feel, and it is hard because, according to everything we know so far and everything we can envision knowing, it looks like all you need is the solution to the easy problem, and that feelings are superfluous.
It's interesting to see how the topics we discussed throughout the semester connect in this reading. Many attempted to resolve the hard problem. But so far, it appears insoluble because if/when the easy problem is solved, there will be no causal room left for an explanation of feeling, leaving it superfluous. The hard problem also differs from the other minds problem. Understanding how other people feel would not give us any insight into how or why we ourselves feel.
Jenny, that's it. Can you explain to kid-sib what "causal room" means?
From my understanding, the causal room is the capacity/mechanisms people have to perceive and rationalize why certain things occur. In the case of the ‘hard problem,’ the causal agent is the capacity to explain why people have feelings. The overall goal of the causal room is to explain and understand the cause-and-effect relations on why people feel and do things.
The pressing issue with the “hard” problem is that in the case of the “easy” problem being solved, there is no justification to rationalize the “hard” problem on why we feel things. Feelings are subjective experiences that seem not to have an explicit cause.
As others have mentioned, it's amazing that the topics we have discussed during the semester connect to the paper. Cognition is more than manipulation of symbols, there is meaning that is attached to it. Empathy sadly doesn't give much information on how or why we feel the way we do.
Helen, so what are the EP, OMP, and HP, and how and why is the HP hard?
Étienne, what you mean is the "Strong Church-Turing Thesis," not "general computationalism."
Let's reserve "computation" for computation, and "computationalism" for the hypothesis that cognition is just computation (aka "Strong AI").
This reading was essentially a concise overview of how various concepts interact with one another over the course. The reading begins by claiming that the purpose of cognitive science is to explain how living beings think. Explaining how living beings think is difficult because thinking is impossible to observe, the best we can do is observe what these living beings do (their doing capacities). The reading goes on to explain how the Turing Test works, how the Chinese Room Argument demonstrates that cognition cannot be pure computation, and how the symbol grounding problem necessitates robotic capacity to conclude that Turing did not believe thinking was purely computation and instead believed the best that could be done to explain cognition was to explain doing capacities.
I wanted to make reference to the quote “The "computationalists" among contemporary cognitive scientists think cognition is just computation, but I don't think Turing did.” On reflection, this may sound simple to ask, but why precisely do modern cognitive scientists still hold onto the notion that cognition is computation?
ReplyDeleteIn particular, as already clear, computation cannot be cognition because computation only addresses algorithms and syntax. In the case of humans, we are more complex than computers, which execute only symbols and algorithms with predefined rules due to our sensorimotor and robotic capabilities.
Therefore, unless I've been mistaken, why does the notion that cognition is computation still endure over time, given the well-known truth and evidence that we possess sensorimotor skills that set us apart from computers? Additionally, do we ever see computation being truly ruled out?
Computationalists probably believe C=C because of the cognition-like power of computation (the Weak and Strong C/T Thesis).
This article was a really fun read that briefly summarized the main things we saw in the course (which I am very much missing already). In this reading, Professor Harnad goes over Turing’s importance and contribution to the start of cognitive science. Stevan explains what a Turing Test is and its usability for cogsci. He also touches on what computation is and how one may be able to pass T2 by just manipulating symbols (computation). Later he presents Searle’s Chinese Room Argument and how cognition is not just computation. Furthermore, the professor briefly touches on how he believes Turing was not a computationalist, mainly because he was aware that to pass T2 one would need to have sensorimotor capacities. Lastly, the professor argues that Turing was aware that feeling and doing are different (aka the hard problem).
Upon reflection, despite speaking about it a bit, I’d still like a bit more understanding in relation to the quote: "Explaining how and why we can do what we can do has come to be called the "easy" problem of cognitive science (though it is hardly that easy, since we are nowhere near solving it). The "hard" problem is explaining how and why we feel -- the problem of consciousness -- and of course we are even further from solving that one."
I know we briefly mentioned it in the beginning. But I was just wondering even further on how does research consider using children while we have this consciousness problem and are a long way from solving it? Specifically, there are debates on whether children have a consciousness. Some claim that awareness starts to emerge between 12 and 15 months of age, while others assert that awareness is there at birth and will develop through time. What aspects of handling the hard problem then take younger children and newborns into consideration?
Mammals and birds feel when their young are acting and feeling as if they are in distress, and they feel like helping them, for the same reason children feel like eating sweets (Week 7). Why?
The only hope of nonhuman animals (and other people, and their children) is that humans' evolved mirror-capacity will inspire mercy on them too.
This paper provided a very concise and useful summary of the topics discussed throughout the course. Though, as Stevan says (and I am inclined to agree), the hard problem is not solvable. As far as I do think this has something to do with the recursive nature of many of the topics we’ve discussed in class, much like the dictionary game. Any possible answer to one question leads to more questions. Even if T-testing is good enough for the easy problem, there is still the issue of the hard problem and, as has been discussed, it’s requirement for further causal explanation that is inaccessible.
This paper provided a comprehensive and concise summary of this course's critical components, linking terms to crucial theories. I appreciated how the article discussed what makes the “hard” problem hard and what makes it insoluble. It helped me better distinguish what differentiates the “hard” problem from the other-minds problem. OMP is focused on the diversity of feelings and how we can only feel our own feelings, and how only the individual can observe their feelings. On the other hand, HP wants to reverse-engineer how and why we feel and be able to explain how we feel causally.
This text clearly goes over concepts seen in class and links them together. It establishes the difference between the hard problem and the easy problem. The hard problem of consciousness is how and why we feel, while the easy problem asks how and why we can do anything. The easy problem is partially solved through neuroscience by looking at receptors, neural circuits and processes in the brain. The hard problem, however, is unsolvable, according to some. Those who believe the hard problem is irrelevant adhere to computationalism or believe that all there is to know about the whys and hows of feeling is observable from a third-person perspective (Dennett and heterophenomenology). Many people confuse the hard problem with the other minds problem, which asks if we can know others have feelings since we do not have direct access to their mind.
ReplyDelete"The "computationalists" among contemporary cognitive scientists think cognition is just computation, but I don't think Turing did "
Throughout this course, I held the sentiment that Turing was not a computationalist, since I didn't really understand the TT to be saying that to cognize is to compute. I feel like this is the first time this is explicitly mentioned in a way that was digestible, which I really appreciated. I honestly think this paper explains the HP and EP really well and, in doing so, provides a very comprehensive explanation of what cognition is.
At the beginning of the course you (Stevan) talked about giants and Lilliputians. It is clear now at the end of your course, and in reading this article, just how much of a giant Turing has been to cognitive science… among other disciplines.
A lot of people on here have said that the hard problem is insolvable. I can't help but feel, though, that this is not the case, or at least that it should not be taken as fact. Surely the landscape will change as we continue to work our way towards solving the easy problem, and as advances are made in AI. Do you really believe that, even if we take a long-term perspective over the next hundred (or two hundred) years, the hard problem will prove to be insolvable?