Thursday, September 1, 2022

Closing Overview of Categorization, Communication and Consciousness






13 comments:

  1. TOPICS COVERED 2022
    Questions welcome

    1. Absolute vs. Relative Judgment

    2. Abstraction

    3. AI (Strong/Weak)

    4. Algorithm

    5. Altruism and Kin Selection

    6. Analog vs Digital & Dynamic vs. Symbolic

    7. Anthropomorphism

    8. Approximation (Categorization)

    9. Approximation (Computation)

    10. Approximation (Language)

    11. Arbitrary Shape

    12. Artificial Life Simulations

    13. Behaviorism

    14. Blind Watchmaker

    15. Brain's Relevance

    16. Category Learning

    17. Categorical Perception

    18. Categorization

    19. Causal explanation

    20. Causality

    21. Certainty

    22. Chinese Room Argument

    23. Church/Turing Thesis (strong/weak)

    24. Cogito/Sentio (Descartes)

    25. Cognition

    26. Communication vs Language

    27. Computation

    28. Computationalism

    29. Consciousness/Feeling

    30. Correlation vs Causation

    31. Darwinian Survival Machines

    32. Degrees of Freedom

    33. Dictionary-Go-Round

    34. Discrimination vs Categorization

    35. Distal/Proximal Stimulus in Evolution

    36. Doing vs Feeling

    37. EEA (Environment of Evolutionary Adaptedness)

    38. Equivalence (strong/weak)

    39. Evolution (Darwinian and Baldwinian)

    40. Evolution of Language

    41. Evolutionary Psychology

    42. Explanation

    43. Explanatory Gap

    44. Explicit/Implicit Cognition

    45. Feature Extraction

    46. Felt States

    47. Funes the Memorious

    48. Gesture/Speech

    49. Grounding vs Meaning

    50. Heterophenomenology

    51. Hard vs. Easy vs. Other-Minds Problem

    52. Hardware/Software Distinction

    53. Hearsay

    54. Homunculus Problem

    55. Icon vs. Symbol

    56. Induction vs. Instruction

    57. Imitation

    58. Implementation-Independence

    59. Implicit vs. Explicit Learning

    60. Information

    61. Introspection

    62. Invariants

    63. Just-So Stories

    64. Language

    65. Lazy Evolution

    66. Learnability

    67. Mattering & Morality

    68. Meaning/Sense/Reference/Grounding

    69. "Meaning of Life"

    70. Mental Imagery

    71. Mental Rotation

    72. Mental States

    73. Mind-Reading

    74. Mind/Body Problem

    75. Minimal Grounding Set

    76. Mirror Neurons

    77. Naming vs Predicating

    78. Necessity

    79. Neural Nets

    80. Ordinary Grammar (OG)

    81. Other-Minds Problem in Other Species

    82. Pantomime/Pointing/Propositions

    83. "Pictures" vs "1000 Words"

    84. Poverty of the Stimulus

    85. Positive vs Negative Evidence

    86. Power of Computation

    87. Power of Language

    88. Proposition/Predication

    89. Psychokinesis (Telekinesis)

    90. Recoding & Chunking

    91. Reverse Engineering

    92. Robotics

    93. Scepticism

    94. Searle's Periscope

    95. Semantic Interpretability

    96. Sense vs Reference

    97. Sensorimotor Affordances

    98. Show vs Tell

    99. Simulation/Modelling

    100. Spandrels

    101. Spiders/Sex vs Learning/Language

    102. Supervised/Unsupervised Learning

    103. Symbol Grounding Problem

    104. Symbol System

    105. Syntax vs Semantics

    106. “System Reply”

    107. Turing Machine

    108. Turing Test

    109. Turing Equivalence

    110. Turing Indistinguishability

    111. t1/T2/T3/T4/

    112. Ugly Duckling Theorem

    113. Uncomplemented Categories (“Laylek”)

    114. Underdetermination

    115. Universal Grammar (UG)

    116. Vanishing Intersections

    117. Virtual Reality

    118. Volition

    119. Weasel Words

    120. Where/When vs How/Why

    121. Whorf Hypothesis (Strong vs Weak)

    122. Zombies

  2. I just want to check my understanding of a few of these terms as well as ask some questions. To my understanding, implicit/explicit cognition refers to felt/unfelt states and implicit/explicit learning refers to the distinction between unsupervised and supervised/verbal learning, and semantic interpretability is the property of a language that allows people to understand what other people are saying (having consistent meanings attached to symbols). Also, I wanted to ask what context volition, scepticism, and hearsay were being used in.

  3. Elena:
    "Implicit/explicit" has multiple meanings, not all of them consistent with one another. But, in general:

    Explicit cognition (thinking, knowledge) would be cognition that you can verbalize. You feel that you are thinking, and you can put in words what you are thinking (but you don't know how: your brain is doing it for you; you're waiting for cogsci to reverse-engineer that and explain how the brain does it).

    Implicit cognition would be processes going on in your head that you don't feel; you only get the result, when your brain is done: e.g., the name of your 3rd grade schoolteacher, when asked.

    Implicit learning occurs when you don't feel you are learning; in explicit learning you do feel you are learning, and you can verbalize what you've learned (e.g., the names or descriptions of the distinctive features of a category).

    Unsupervised learning (passive exposure, no feedback) is usually implicit. It’s your brain detecting feature-feature correlations.

    Supervised learning is explicit in the sense that you feel you are learning something (or trying to). But if you are learning to categorize, you may succeed in learning without being able to say what the distinctive features are, even though your brain is detecting and using them. So you feel you’re learning a category (explicit) but you do not know the distinctive features your brain is using (implicit).
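The supervised case can be sketched in a toy way (my own illustrative example, not from the readings; the single threshold "feature detector" and the gold/fool's-gold labels are hypothetical): the learner comes to categorize correctly from corrective feedback alone, without the feature rule ever being stated.

```python
# Toy sketch (illustrative, not a brain model): supervised category
# learning from trial-and-error with corrective feedback. The learner
# ends up sorting the inputs correctly without ever being "told" the rule.
def train(samples, labels, steps=1000, lr=0.1):
    threshold = 0.0  # the implicit "feature detector"
    for _ in range(steps):
        for x, label in zip(samples, labels):
            guess = 1 if x > threshold else 0
            # corrective feedback nudges the detector after each error
            threshold += lr * (guess - label)
    return threshold

xs = [0.1, 0.3, 0.4, 0.8, 0.9, 1.2]   # a single sensory feature (hypothetical)
ys = [0, 0, 0, 1, 1, 1]               # 1 = gold, 0 = fool's gold (toy labels)
t = train(xs, ys)
assert all((x > t) == bool(y) for x, y in zip(xs, ys))  # all categorized correctly
```

The learner succeeds (explicitly, it feels it is learning) yet the threshold it converged on is nowhere verbalized: that is the implicit part.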

    If we understand what is said to us, verbally, then it is semantically interpretable to us. Same for words we say when we understand what we mean. We can’t speak without understanding the words we are saying.
    We can do computation (rule-based symbol-manipulation) by recipe, without understanding what it means. But even if the computation is semantically interpretable, and we understand what it means, we can’t use that meaning in doing the computation; we’re still just doing rule-based symbol-manipulation.

    “Volition” means doing things voluntarily, deliberately, because you “feel like doing it.” Otherwise, you may be doing it reflexively, or doing it without even feeling that you’re doing it.

    “Scepticism” is Descartes’ method of doubting anything you cannot be sure is true, e.g., the truths of science (“apples fall toward earth because of gravity”), whether others feel, or even whether the world you perceive really exists.

    The only two things you cannot doubt are the provable truths of mathematics, which are true “on pain of contradiction” with the axioms you have assumed.

    And the other thing not open to doubt is the Cogito: You cannot doubt that you are feeling when you are feeling.

    Hearsay (in the reading) refers to learning the distinctive features of a category directly, through verbal instruction (definition, description) rather than indirectly, through supervised or unsupervised learning (“induction”).

    “Context” means the context of alternatives among which you need to reduce uncertainty in order to do the correct thing. (E.g., the vegan sandwich machine, where your uncertainty is reduced by someone telling you verbally that the distinguishing feature is that it’s under an even-numbered window.) In category learning, the context is the set of confusable inputs for which you need to learn what to do with what kind of input, by detecting its distinguishing features.

    (If the context is widened with different inputs, as with the example of gold vs. fool’s gold, you may have to revise the features in order to tighten the approximation, because all categories are underdetermined except mathematical ones: Why?)

  4. Hello Professor,
    I'm through the first 40 terms on the list and I want to clear up a couple things.
    1. I understand that judgement is always in relation to what you've been exposed to, and thus it is relative. So then I'm wondering what absolute judgement is?
    2. I'm not super clear on the relationship of analog and digital to dynamic and symbolic. I understand analog = continuous and digital = discrete but I'm not making the connection to dynamic and symbolic.
    3. Artificial life simulations, Blind Watchmaker, Darwinian Survival Machines - I'm not finding these definitions anywhere? Could you remind me what these refer to?
    4. Degrees of freedom...I'm used to thinking about this in a statistical context so I'm having some trouble grasping it in a cog sci context.

    That's it (for now). I'll definitely have more as I get further through the list. Thank you!

    Replies
    1. Teegan, pairwise discrimination (same/different, ABX) and similarity judgments are relative judgments. Categorization (what is this? What to do with this?) faced with one input alone, is an absolute judgment. But, as you say, the distinguishing features you use to decide what to do are “relative to” a prior sample (and approximate, and might change with a bigger sample tomorrow, as in the example of gold and fool’s gold).

      Analog/digital can mean continuous/discrete. But it can also be used for two different kinds of “computation,” one using the physical properties of the “analog computer” (e.g., a sundial), the other based on symbol manipulation by a digital computer (Turing Machine). (The analog computer is not really a computer at all (in Turing’s sense), but it could be approximated by a computational simulation (Strong Church/Turing Thesis) as closely as you wish.)

      Analog also means the same thing as dynamic or physical (a sundial, an ice-cube, a bicycle). Symbolic means computational (executing the right symbol manipulation algorithm, independent of the dynamics of the digital computer doing it). The symbols are interpretable, so a computational model of a sundial is a symbol manipulator whose symbols and rules and manipulations can be interpreted (by us) as the properties of a real sundial (but they’re really just squiggles, squoggling).

      The toy A-life simulations of language evolution were the little pac-mans in the mushroom world (chapter 6a) who died out if they could not learn indirectly through language, but only the old, direct, sensorimotor way (supervised learning).

      The “Blind Watchmaker” is Richard Dawkins’s metaphor for Darwinian evolution: it does not see into the future. Just picks winners blindly based on whether they succeed or fail to survive and reproduce. (Evolution is similar to supervised learning: do you see how?)

      Darwinian survival machines (zombies) would be what we would be if we did not feel. But, of course, feeling must have evolved too (even though the HP prevents us from being able to explain how or why). So we, too, are Darwinian survival machines, but sentient ones: machines that feel.

      Degrees of freedom in cogsci explanation are the same as in stats. If you understand how you lose one degree of freedom for N numbers once you’ve calculated their mean (and another if you also calculate the SD), then it’s the same as what happens to the degrees of freedom for explaining feeling (HP) when you’ve completely explained the EP: any further causal role you want to attribute to feeling has already been fulfilled by the solution to the EP: there is no causal room left. Even if, in order to successfully pass the TT, you had to include the parts of T4/5 that are correlated with people’s feelings, you won’t be able to explain causally how or why they’re needed – which is the HP all over again. As Turing said, the bottom line is explaining the doings (EP), but after T4/5 there is no causality left with which to explain feelings.
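The statistical half of the analogy can be made concrete with a toy sketch (the numbers and names here are arbitrary, my own example): once the mean of N numbers is fixed, only N-1 of them are free to vary; the last one is forced, with no freedom left.

```python
import random

# Degrees of freedom: once the mean of N numbers is fixed,
# only N-1 of them are free to vary; the Nth is determined.
N = 5
target_mean = 10.0
free_values = [random.uniform(0, 20) for _ in range(N - 1)]  # N-1 free choices
last = N * target_mean - sum(free_values)                    # forced: no freedom left
sample = free_values + [last]
assert abs(sum(sample) / N - target_mean) < 1e-9
```

The analogy: once the EP is completely solved, the "forced value" is all that is left for feeling, so there is no independent causal role remaining for it to play.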

  5. Hi Prof,
    After going through the topic list, a few of them still stood out to me and I was wondering if you could clarify those.

    - I'm still confused about 'a picture is worth a thousand words'. Is this referring to the transition from pantomime (iconic gestural communication) to propositional language (arbitrary gestural then verbal language)? If so, words and propositions seem more effective than pictures and gestures, since they evolved likely due to their adaptiveness, in part as they allow for verbal category learning in addition to unsupervised/supervised learning. But then, why is the statement not flipped around: a word is worth a thousand pictures? I'm also not sure how this statement relates to underdetermination, approximation, and mirror capacity?
    - Like Teegan, I couldn't find any reference to the 'Blind Watchmaker' in my notes.
    - In terms of 'certainty', I understood that we don't need certainty to solve the OMP and the HP, but what about the EP?
    - I might have thought too much about Searle's CRA and now I am confused as to why Searle thinks the lack of a felt state (understanding) is a limitation to solving the EP of cogsci. Indeed, we've said that feelings appear causally superfluous ie without them we still have all our doing abilities (all of cognition). So, how is a lack of understanding even a problem for solving the EP, since it's merely a felt state that shouldn't have an influence on cognitive capacities? (And if it does have an influence, how can we still assert that there is no apparent causal role of feelings?)
    - Turing only focuses on weak equivalence since the TT only tests for the presence of all doing capacities, not for a specific causal mechanism. I wrote in some of my notes that this is true for all TTs except T4, and in other notes that it is true for all TTs but T5. Could you clarify on where the cutoff for weak/strong equivalence stands?
    - How does 'sense' compare to 'reference' and 'grounding'? From my understanding, reference is the fact that symbols correspond to things in the world (ie referents), and grounding is what actively connects symbols to these referents, and is necessary but not sufficient for meaning. But I'm not sure where 'sense' stands in all of this.
    - I wonder if I'm missing content on 'mental imagery': is it just referring to thoughts in image form in contrast with propositional thoughts? I'm also not sure when mental rotation was mentioned.
    - I don't think I understood what an 'efference copy' is, if you can clarify.
    - For the evolution of language, I'm not fully making the distinction between the initial transition from iconic gestures to arbitrary gestures, and the following transition from pantomime to propositions. From my understanding, pantomime and imitation are examples of iconic gestures which transitioned into arbitrary gestures (ie gestural language, made up of propositions), and then changed medium to become vocal language. Am I mistaken, and arbitrary gestures need not be propositional? If so, could you give an example of an arbitrary but non-propositional gesture relative to an arbitrary and propositional gesture? And how does this relate to the transition from show to tell (I think telling requires propositional attitude)?
    - Is the power of computation its ability to simulate anything/everything in the world (ie strong CTT)?
    - Could you clarify how the concepts of recoding and chunking relate to other topics?
    - Does underdetermination refer to the fact that, in categorization, different sets of invariants could resolve the uncertainty (ie distinguish members from nonmembers) equally well?

    Thank you in advance for clarifying these points, and thank you again for your teaching this semester!
    Amélie

    Replies
    1. Amélie, about propositions vs pantomime you are quite right.

      But the picture>1000-words metaphor should really be that the thing-that-is-described is always more than the words describing it, because (except in maths) the description (or definition) of a thing is always just approximate. The thing has more properties than can ever be described by a finite number of words (just as continuity can only be approximated discretely, and just as the distinctive features of a category so far may turn out to be not enough tomorrow, with a bigger sample).

      I usually switch at random between “Darwinian evolution” and “Blind Watchmaker” but the Dawkins work has not been assigned in the course this year. His expression “Blind Watchmaker” has become a generic “meme” (another one of Dawkins’s expressions). Google them!

      Searle just uses his Periscope to show that computationalism is wrong, because a computational T2 would not understand. He does not talk about T3, or about the EP or HP. He just says “study the brain to find the causal ‘power’”. But cogsci does not want power, it wants explanation, and Searle does not provide it. You are right that even T4 would not solve the HP, but Searle has no Periscope to prove that. Nor would his Periscope help, because even if the “Flying Spaghetti Monster” (another of Dawkins’s memes!) appeared and guaranteed that T3 or T4 DOES feel, so the solution to the EP is correct, that still would not solve the HP.

      "in an academic generation a little overaddicted to "politesse," it may be worth saying that violent destruction is not necessarily worthless and futile. Even though it leaves doubt about the right road for London, it helps if someone rips up, however violently, a `To London' sign on the Dover cliffs pointing south..." Hexter (1979)

      (If Dawkins’s “FSM” guaranteed that MIT’s T3/T4 was a zombie, that would show that that particular solution was not the right solution to cogsci’s EP. But it would not help with the HP. Nor would FSM’s guarantee that Stanford’s T3/T4 does feel explain how or why MIT’s doesn’t and Stanford’s does. This is of course all sci-fi, since there is no FSM. But Searle’s thought-experiment is not sci-fi: it is a demonstration that even if computation alone could pass T2, it would not feel (understand). Do you see that?)

      “Strong Equivalence” really only applies to computation (same I/O and same algorithm vs. only same I/O). With dynamical (analog) systems we saw in week 11 that different analog systems could produce the same I/O, and that’s okay in biology. (The “Blind Watchmaker” is an opportunist, and a bricoleur, and lazy, and does not care about “strong equivalence,” just about survival and reproduction.)

      In cogsci “sense = meaning = grounded reference + what it feels like to mean.” “Sense” can also be used to refer to the different meanings of the same polysemous word (POMME d’arbre, POMME de douche). But “sense” mostly refers to the fact that “the 45th president of the United States” and “the owner of an eponymous Manhattan eyesore” both refer to one and the same psychopath. In other words, a category name differs from a description of the category’s distinguishing features, but they all refer to the same category. Each different sense is a different meaning, and each meaning feels different to understand or mean, but their referent is the same. (Most of this is irrelevant to this course.) Grounding pertains to (content) words (like “apple” or “eat”), rather than function words (like “if” or “not” or df/dt).

    2. Amélie (part 2), we briefly discussed mental imagery in connection with the inability to find out how cognition works by simply introspecting about what’s going on in our minds. Mental images do not explain how we retrieve the name of our 3rd grade schoolteacher. They don’t even explain how we retrieve their image! And introspection is homuncular, because what cogsci needs is a causal mechanism, not a mind-dump. That’s why neither mental images nor mental propositions explain how the brain (or any mechanism) produces images or propositions. The brain’s “user” is the homunculus; cogsci needs to reverse-engineer what’s going on inside the homunculus that enables it to do what the user can do.

      On the other hand, Roger Shepard’s experiment on mental rotation of three-dimensional objects showed that analog rotation of mental images can occur in the brain, and be part of the reverse-engineering of cognitive capacities.

      An efference copy holds the visual scene still when you scan it with your eyes: the brain subtracts a copy of its outgoing motor command to the eye (the efference copy) from the incoming motion of the scene on the retina, leaving only the scene itself, minus the apparent movement caused by the eye-movement.
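A toy arithmetic sketch of that subtraction (illustrative only; real oculomotor processing is far more complex, and the sign conventions here are my assumptions):

```python
# Toy sketch (not a neural model): perceived scene motion is the retinal
# image shift minus the shift predicted from the eye-movement command
# (the efference copy). Values are in arbitrary degrees; rightward is positive.
def perceived_motion(retinal_shift, predicted_shift_from_efference_copy):
    return retinal_shift - predicted_shift_from_efference_copy

# Eye moves 5 deg right over a static scene: the image shifts 5 deg left
# on the retina, but the efference copy predicts exactly that shift,
# so the scene is perceived as still.
assert perceived_motion(-5.0, -5.0) == 0.0
# The scene itself moves 2 deg while the eye is still: the motion survives.
assert perceived_motion(2.0, 0.0) == 2.0
```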

      In the transition from gestural pantomime to gestural propositions (before the eventual transition to vocal propositions) there are two crucial steps: (1) the gradual transition from iconic gestural pantomime to arbitrary gestural pantomime -- which is digital gesturing, but still just pantomime -- and then to (2) the (gradual?) transition from arbitrary gestural pantomime to arbitrary gestural propositions.

      Transition (1) is easy to explain: Once you are communicating gesturally about the same objects and events with the same people, day in and day out, the same shared gestures, safely rooted through their iconic resemblance to the objects and events that they are copies of, they can gradually become shorter and simpler, still connected to their objects and events by long shared habit and familiarity, until the resemblance is no longer really needed at all any more, at least for the everyday gestures.

      But once the miming has all, or almost all, become arbitrary, there needs to be a much more fundamental transition (2) to propositionality that is much harder to explain. (I certainly can’t explain it, though you’re free to try on the exam!) The hunch is that once the gestures are arbitrary, the possibility of a subject/predicate sequence (whether SVO, SOV, OSV etc., which happen to be OG parameters of UG) is higher. But this is really a change in “attitude” -- from showing (as in miming) to telling, propositionally, as in “the cat is on the mat.” An intermediate stage might be imperatives (requesting something) and interrogatives (inquiring about something). These are not propositions, which are statements of what is the case. But they are easily transformed into propositions and vice-versa. Imperatives are certainly already there in purposive nonverbal communication of other species; perhaps interrogatives too. But this P/P transition is still an open question. And it's still possible that the transition is “just” motivational rather than a new cognitive ability.

      Both the Weak and the Strong Church/Turing Theses (WC/T and SC/T) are examples of the power of computation.

    3. Amélie (part 3), perhaps the best example of recoding and chunking is language itself: Once I learn that “a horse with stripes” is called a “zebra,” I don’t have to keep calling it “a horse with stripes.” Every word in a dictionary is a re-chunked “abbreviation” of its much longer definition. (How is this related to the transition from pantomime to propositions?) You’ll find this in George Miller’s “Magical Number 7 +/- 2” (Reading 6a).
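Miller's recoding point can be sketched concretely (a toy example, using the classic binary-to-octal recoding he describes): 18 binary digits overflow the 7±2 span, but recoded into 3-bit chunks they become just 6 octal digits.

```python
# Recoding & chunking, à la Miller: 18 binary digits exceed the 7±2
# memory span, but recoded in 3-bit chunks they become 6 octal digits,
# which fit comfortably. (The bit string itself is arbitrary.)
bits = "101000100111001110"
chunks = [bits[i:i + 3] for i in range(0, len(bits), 3)]   # ['101', '000', ...]
octal = "".join(str(int(chunk, 2)) for chunk in chunks)    # each chunk -> one digit
assert len(bits) == 18 and len(octal) == 6
```

Like the word "zebra" standing in for "a horse with stripes," each octal digit is a single label for a longer string.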

      Yes, underdetermination includes the case where more than one set of features can distinguish a category, or the case where a set of features turns out to be insufficient once the sample of confusable alternatives becomes bigger (as in Gold vs. Fool’s Gold).

  6. Hello Professor,
    I was going through the 122 terms, and I was wondering if you could clarify some of them to me.
    16. Brain’s relevance – I don’t have anything in my notes referring to this, so I am not too sure what we mean here.
    27. Communication VS language – is communication the act of manipulating language between two social entities?
    34. Dictionary-Go-Round – I don’t remember what this term refers to.
    38. EEA (Environment of Evolutionary Adaptedness) – What is the EEA, and is it related to Baldwinian/Darwinian evolution?
    44. Explanatory gap - Is this referring to the fact that we don't have a correct explanation for cognition and its origins?
    51. Heterophenomenology – From what I understood, Dennett refers to this idea as an explicit third-person, scientific approach to the study of consciousness. But I don’t remember what he is trying to prove with the introduction of this technique.
    56. Icon VS Symbol – Does the distinction lie in the fact that icons are similar in shape to what they refer to and symbol’s shapes are arbitrary?
    66. Lazy evolution - Is lazy evolution Baldwinian evolution (presupposes the ability to learn something rather than learning the actual thing)?
    67. Learnability – What does Learnability refer to? Still Baldwinian evolution? Or supervised/Unsupervised/verbal learning? Or something else?
    96. Semantic interpretability – From what I understood, computation is semantically interpretable, but I am not quite sure why? Is it because we want machines to create output that can be interpreted by humans?
    102. Spiders/Sex vs Learning/Language – I have notes on sex and spiders, but they don’t really make sense to me now and I can’t see the link between learning and language.
    110. Turing equivalence – Is this referring to weak vs strong Church Turing equivalence?

    In addition,
    What is the link/difference between imitation, correlation, and explanation?

    Thank you for your time and your answers.

    Replies
    1. Étienne:

      16. Brain’s relevance – I don’t have anything in my notes referring to this, so I am not too sure what we mean here.

      Fodor’s paper (4b)

      27. Communication VS language – is communication the act of manipulating language between two social entities?

      Language is one (of many) forms of communication. See the Replies in the Skywriting.

      34. Dictionary-Go-Round – I don’t remember what this term refers to.

      Symbol grounding: Weeks 5 and 8b. See the Replies in the Skywriting.

      38. EEA (Environment of Evolutionary Adaptedness) – What is the EEA, and is it related to Baldwinian/Darwinian evolution?

      EEA is the ancestral environment in which evolution took place – compared to the current environment.
      https://catcomconm22.blogspot.com/2021/08/7a-lewis-et-al-2017-evolutionary_30.html

      That’s ordinary Darwinian Evolution

      44. Explanatory gap - Is this referring to the fact that we don't have a correct explanation for cognition and its origins?

      It’s the HP.

      51. Heterophenomenology – From what I understood, Dennett refers to this idea as an explicit third-person, scientific approach to the study of consciousness. But I don’t remember what he is trying to prove with the introduction of this technique.

      DD is trying to prove that there is no HP. Please see the Skywritings and my Replies.

      56. Icon VS Symbol – Does the distinction lie in the fact that icons are similar in shape to what they refer to and symbol’s shapes are arbitrary?

      Yes. See the Replies in the Skywriting.

      66. Lazy evolution - Is lazy evolution Baldwinian evolution (presupposes the ability to learn something rather than learning the actual thing)?

      All evolution is lazy. What does “lazy” mean? And what is Baldwinian evolution?

      67. Learnability – What does Learnability refer to? Still Baldwinian evolution? Or supervised/Unsupervised/verbal learning? Or something else?

      All of those things, and more. See Weeks 7 to 9. See also the Replies in the Skywriting.

      96. Semantic interpretability – From what I understood, computation is semantically interpretable, but I am not quite sure why? Is it because we want machines to create output that can be interpreted by humans?

      It’s because we are only interested in algorithms that are interpretable by us (their users).

      102. Spiders/Sex vs Learning/Language – I have notes on sex and spiders, but they don’t really make sense to me now and I can’t see the link between learning and language.

      See Lecture on Week 7. The PPTs are in the video on Week 7. See the EEA link above.

      110. Turing equivalence – Is this referring to weak vs strong Church Turing equivalence?

      Weak/Strong Equivalence. The Weak/Strong Church/Turing Thesis is something else: What?

      What is the link/difference between imitation, correlation, and explanation?

      Correlation does not explain causation. Cogsci needs to explain causally. Imitation is not directly related to any of these (though for an A+ some students may be able to find one).

  7. I just wanted to clarify a few more concepts. Are mental states the same as felt states? And is modelling the same as simulation? Finally, what context was the mind-body problem discussed in? Is it in relation to the idea that we need to have physical bodies to be able to interact with things in real life and ground concepts?

  8. Elena, you asked:

    "Are mental states the same as felt states?

    Yes (“mind” and “mental” are weasel-words for “feeler” and “felt”).

    “And is modelling the same as simulation?”

    Yes. (VR is just the special case of modelling functions computationally, with squiggles and squoggles, and then piping some of the model’s output to gloves and goggles – or a screen.)

    “What context was the mind-body problem discussed in?”

    HP.

    All Chalmers did was rename the “M-BP” the “HP”. The M-BP is sometimes also called the “mind-matter problem” (“M-MP”) or the “mental-physical problem” (“M-PP”).

    Since “mind” and “matter” are just weasel-words for feel/feeling/feeler/felt, a much more straightforward name for the M-BP would have been the “Feeling-function Problem” (F-fP), where “function” refers to every dynamic, causal property of sentient organisms (including their physical structure, anatomy and biochemistry, and of course all their vegetative and cognitive functions) other than feeling itself.

    There is no doubt – except for those who believe in the “supernatural” – that feeling itself, too, is some sort of dynamic, causal property, like all other biological functions. It is the difficulty (perhaps impossibility) in explaining the dynamic, causal function of feeling that makes it the “Hard Problem.”

    “Is it in relation to the idea that we need to have physical bodies to be able to interact with things in real life and ground concepts?”

    Related, of course. Organisms are bodies that can DO certain things. Their DOings are causally explainable, like all other causal functions (EP). But how/why sentient organisms FEEL is not causally explainable (or at least not yet).


PSYC 538 Syllabus

Categorization, Communication and Consciousness 2022 Time : FRIDAYS 8:30-11:25  Place : BIRKS 203 Instructor : Stevan Harnad Office : Zoom E...