Friday, September 16, 2022

5. Harnad, S. (2003) The Symbol Grounding Problem

Harnad, S. (2003) The Symbol Grounding Problem. Encyclopedia of Cognitive Science. Nature Publishing Group. Macmillan.

or:

Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1), 335-346.

or:

https://en.wikipedia.org/wiki/Symbol_grounding

The Symbol Grounding Problem is related to the problem of how words get their meanings, and of what meanings are. The problem of meaning is in turn related to the problem of consciousness, or how it is that mental states are meaningful.

 


 If you can't think of anything to skywrite, this might give you some ideas: 
Taddeo, M., & Floridi, L. (2005). Solving the symbol grounding problem: a critical review of fifteen years of research. Journal of Experimental & Theoretical Artificial Intelligence, 17(4), 419-445. 
Steels, L. (2008) The Symbol Grounding Problem Has Been Solved. So What's Next?
In M. de Vega (Ed.), Symbols and Embodiment: Debates on Meaning and Cognition. Oxford University Press.
Barsalou, L. W. (2010). Grounded cognition: past, present, and future. Topics in Cognitive Science, 2(4), 716-724.
Bringsjord, S. (2014) The Symbol Grounding Problem... Remains Unsolved. Journal of Experimental & Theoretical Artificial Intelligence (in press)

183 comments:

  1. This week (5: Symbol Grounding) has 3 readings, of which you need only do two.

    Everyone should read this one first:

    A. Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1), 335-346.

    and then either this one:

    B1. Harnad, S. (2003) The Symbol Grounding Problem. Encyclopedia of Cognitive Science. Nature Publishing Group. Macmillan

    or this one:

    B2. https://en.wikipedia.org/wiki/Symbol_grounding

    I suggest B1 rather than B2 because B2 is the Wikipedia version of B1, and so B2 has been tampered with by Wikipedia "editors" (which means any anonymous person on the Internet who feels like it).

    (It is interesting to see what survives intact from the original (B1) in B2, and what has become scrambled.)

    The Symbol Grounding Problem is related to the problem of how words get their meanings, and of what word meanings are. The problem of meaning is in turn related to the problem of consciousness, or how it is that mental (i.e., felt) states are meaningful.

    There will only be one thread this week, but you should do at least 2 skywrites. And read the other skywrites and replies.

    ReplyDelete
  2. (Skywriting on B1)

    The paper was useful to me in expressing how groundedness is a necessary condition for meaning. Grounding is the process by which we are able to pick out the referents for symbols, which implies that it is inherently in the brain. Therefore, the meaning of a word on a page may be ungrounded while the meaning of a word in our brain would be grounded. It is also relevant to mention that grounding requires sensorimotor interactions with the world that must fit with how we interpreted those symbols. Linking this back to the Turing Test, only a hybrid T4-passing machine could fulfill the grounding requirement.

    However, I’m a little confused about the following extract from the natural language and language of thought section: “the connection must not be dependent only on the connections made by the brains of external interpreters like us”. I’m not sure what else the connection between symbols and their referents should depend on? Let me know if anyone can clarify.

    ReplyDelete
    Replies
    1. Amélie

      Let me try to sort out a few things. You wrote:

      Grounding is the process by which we are able to pick out the referents for symbols, which implies that it is inherently in the brain

      Two things:

      (1) The process of grounding has not yet been described in this paper. It will turn out to require category learning, to find out what to call what, and for that your brain has to detect the features that distinguish the members of one category from those of another. Easy if there are only apples and bananas in the world. A bit harder if there are rabbits and hares (or, worse, identical twins).

      (2) I don’t know what you mean by “it” and “inherently.” The mechanism that gives you the capacity to learn to categorize and name things is in your brain. In the case of Kayla, it’s in whatever is in her head that gives her that capacity. What’s “inherent” (actually evolved, and inherited) is the capacity, not the grounding.

      the meaning of a word on a page may be ungrounded while the meaning of a word in our brain would be grounded

      Yes, but only if we have learned the features that distinguish the kinds of things we call “rabbits” from the kinds of things we call “hares.” The capacity to do that requires a mechanism, which needs to be reverse-engineered. (Fortunately, in this case, some progress has begun to be made, in robotics and AI.)

      It is also relevant to mention that grounding requires sensorimotor interactions with the world that must fit with how we interpreted those symbols

      If someone wrote to Searle in Chinese to ask Searle to SHOW them a “苹果”, Searle (following the T2-passing algorithm) could TELL you in Chinese: “苹果”是一种红色的圆形水果 (“an apple is a red, round fruit”), but he could not go to the fridge and pick out a “苹果”, and SHOW it to you.

      That’s what it means to “fit with how we interpreted those symbols”; it means you can not only TELL the connection, in words, but SHOW it, in actions, in the outside sensorimotor world of things.

      only a hybrid T4-passing machine could fulfill the grounding requirement

      Why only T4? T3 could do it too. Only a T2 (and only if it passes T2 through computation alone) couldn’t: Why not?

      However, I’m a little confused about the following extract from the natural language and language of thought section: “the connection must not be dependent only on the connections made by the brains of external interpreters like us”.

      When Searle’s Chinese interlocutor assumes that Searle knows what “苹果”是一种红色的圆形水果 means, the word 苹果 is only grounded in his brain, not in Searle’s. When he interprets Searle’s Chinese words, they only mean something to him, not to Searle; there is no connection in Searle between Searle’s Chinese words and the things his grounded Chinese interlocutor can interpret them as referring to.

      Delete
    2. In response to your question, “Only a T2 (and only if it passes T2 through computation alone) couldn’t (fulfill the grounding problem): Why not?”…
      A T2 is a purely verbal machine which interacts with other verbal machines to achieve an indistinguishable verbal performance capacity (the capacity on which T2 is evaluated). To learn about the world, the only means it is given is thus verbal, so it relies on descriptions of the world in the form of language. The T2 can be told words, definitions of words, or any abundance of information about each word. It may thus be able to interact verbally, indistinguishably from humans, by exchanging strings of these words. The strings that a T2 produces mean something to the human interpreter, but what meaning they have for the T2 is the question. T2 had to learn its verbal capacity presumably from a vast arsenal of words and definitions. Such information is much like what is written in a dictionary. For the T2, the word definitions or descriptions are merely just more words, which likewise need definitions, and those definitions introduce other words which need definitions, and so on -- with no eventual direct connection to what the words refer to. This is the idea behind the “merry-go-round” example presented in Harnad’s “The Symbol Grounding Problem” paper.
      Humans and T3+ are posited to be equipped with the ability to directly define a foundation of elementary symbols for themselves, by ‘grounding’ them via sensory, nonsymbolic representations. One example is iconic representations, which can be thought of as a camera photograph of a distal object, in your brain. Such an ability requires the sensory capacity of sight -- a capacity that T2 does not have.
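
      To make the merry-go-round concrete, here is a minimal toy sketch in Python (the mini-dictionary entries are invented): every look-up yields only more words to look up, never a connection to the things themselves.

      toy_dictionary = {
          "apple": ["round", "red", "fruit"],
          "round": ["shaped", "like", "a", "circle"],
          "red":   ["the", "colour", "of", "blood"],
          "fruit": ["edible", "part", "of", "a", "plant"],
          # ... every entry is itself made only of more words
      }

      def expand(word, seen=None):
          """Recursively expand a word into the words that define it."""
          seen = set() if seen is None else seen
          if word in seen:
              return
          seen.add(word)
          for defining_word in toy_dictionary.get(word, []):
              print(word, "->", defining_word)   # a word pointing only to another word
              expand(defining_word, seen)

      expand("apple")   # the expansion never escapes the circle of symbols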

      Delete
    3. Sepand, good reply! Just a few little points: You’re mixing up T2 with GPT-3, which is not an attempt to pass T2. The T-tests are supposed to be testing the reverse-engineering of human capacities, providing mechanisms (including algorithms) that both produce and explain the capacity.

      GPT-3 produces interpretable verbal performance from the frequencies and correlations in huge written texts. It is not an explanation but something (not cognition) that needs to be explained. (Computational T2-passers are so far just hypothetical.) The dictionary-go-round is just a way of illustrating the problem of grounding, and why computation alone cannot do it.

      Among the resources computation does not have are optical, acoustical and tactile transduction [not vision, hearing and somesthesis] and motor function [not movement] -- plus any other dynamical or analog functions other than the [irrelevant] hardware functions that implement the computations [symbol manipulations].

      Delete
    4. Thank you for your response!

      By 'it', I meant the process of grounding. I think I misused the term 'inherently', I meant to express that grounding necessarily occurs in the brain, not anything about inheritance.

      I also returned to my notes to clarify the T3/T4 confusion, thank you for pointing that out. To recap what Sepand explained very clearly, a T2 machine could not account for grounding since it does not have any sensorimotor capacities, being a purely verbal device. This means that the T2 machine would be unable to show the connection between words and their meaning, by pointing things out in the outside sensorimotor world of things for example.

      Delete
  3. (Skywriting on A)

    The closest we can get to solving the symbol grounding problem is by combining the symbolic and connectionist approaches. Connectionism can explain how exposure and feedback allow us to connect objects to symbols through learning, by relying on consistent patterns of sensory projections. The symbolic approach shows that once we ground elementary symbols in this way, the rest of the symbol strings of a natural language will inherit the grounding of these elementary grounded symbols that they are composed of. Together, these two seemingly conflicting approaches provide a hybrid system that is the best candidate for solving the grounding problem.

    ReplyDelete
  4. (Skywriting 5A)
    A possible solution to the symbol grounding problem discussed in this paper rests on the idea of a hybrid system which grounds our representations in a bottom-up way. Analog sensory projections (iconic representations) are processed through dynamic connectionist networks, which reduce these projections to their invariant features (categorical representations). These nonsymbolic representations point explicitly to the object to which they refer (they are grounded), and they are connected to the elementary symbols (symbolic representations) which make up a symbol system.
    In other words, a symbolic representation (made up of symbol strings of a natural language) inherits the grounded quality of the set of elementary symbols (accessible by their name), which are themselves grounded by their iconic and categorical representations. These elementary sets of symbols are put in relation through symbol compositions (e.g.: “Zebra” = “horse” & “stripes”).
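
    A minimal toy sketch in Python of that bottom-up picture (all names and numbers here are invented for illustration): a crude feature filter stands in for the connectionist network that reduces an analog sensory projection to invariant features, elementary symbols are taken as grounded by such features, and a composed symbol like "zebra" inherits its grounding from the grounded symbols it is composed of.

    def invariant_features(projection):
        """Reduce an analog projection (two numbers) to two invariant features."""
        horse_likeness, stripedness = projection
        return {"horse": horse_likeness > 0.5, "stripes": stripedness > 0.5}

    grounded_elementary = {"horse", "stripes"}       # grounded via such feature-detectors
    compositions = {"zebra": ["horse", "stripes"]}   # "zebra" = "horse" & "stripes"

    def is_grounded(symbol):
        """Grounded directly, or indirectly if every part of its composition is grounded."""
        if symbol in grounded_elementary:
            return True
        parts = compositions.get(symbol)
        return parts is not None and all(is_grounded(p) for p in parts)

    print(invariant_features((0.9, 0.8)))   # {'horse': True, 'stripes': True}
    print(is_grounded("zebra"))             # True: grounding is inherited from the parts
    print(is_grounded("unicorn"))           # False: neither grounded directly nor composed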

    ReplyDelete
    Replies
    1. These are definitely points that I have thought about, especially as you have mentioned in class the work some of your students do in the lab: figuring out how many words need to be grounded in order to ground all other words available to us (if I understood correctly).
      One of the missing links here is the way categorical representations are made, ie the way iconic representations are reduced to their invariant features: how exactly is this done? In other classes, we have seen different models such as the exemplar theory, the prototype theory, or rule-based/boundary theories. Are any of these relevant to the categorical representations we discuss here? I think probably not, as they do not seem to address the way we assign names (symbols) to the said categories.
      I also wonder how the grounding of elements used in symbol compositions is done (the “&”, “=”, etc). Is this innate? Do these symbols have to be grounded as well?
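
      On the grounding-set point above, here is a toy Python sketch (with an invented mini-dictionary) of the idea that once a small set of words is grounded directly, the rest can become grounded indirectly through their definitions.

      dictionary = {
          "zebra": ["horse", "stripes"],
          "mare":  ["female", "horse"],
          "herd":  ["group", "of", "horse"],
      }
      grounding_set = {"horse", "stripes", "female", "group", "of"}  # grounded directly

      grounded = set(grounding_set)
      changed = True
      while changed:                     # keep going until nothing new gets grounded
          changed = False
          for word, definition in dictionary.items():
              if word not in grounded and all(w in grounded for w in definition):
                  grounded.add(word)     # grounded indirectly, through words alone
                  changed = True

      print(sorted(grounded - grounding_set))   # ['herd', 'mare', 'zebra']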

      Delete
    2. 3. Since we are talking about sensorimotor learning, none of the three theories you mention is sufficient:

      Exemplars: You can’t learn a category just from seeing one, or a few examples. You need lots of exposure to members and nonmembers of a category (unsupervised learning) and, most important, you also need trial and error with corrective feedback (supervised learning). And in your head you need a learning mechanism that is able to learn (through unsupervised and supervised learning) to detect the features (invariants) that distinguish the category members from the nonmembers.

      “Prototypes” (typical or average examples) are not a solution either: you have to learn to detect and abstract the features that distinguish the members from the nonmembers. (And it’s more like a dynamic feature-detector than a “representation,” which is a weasel-word, often homuncular.) (What do you think a representation is?) Passive exposure (unsupervised learning) can bring sensory projections (shadows cast repeatedly on your sensory surfaces) more into focus, but they are still iconic until they pass through the feature-filter of supervised learning. They are “reduced” to just their invariant features.

      Rules: Features only become “rules” once you have verbal learning that can describe or define the features that distinguish the members from the non-members. Verbal representations are the “symbolic representations.” And for that you first need a minimal grounding set. The boundaries separating categories are determined by their distinguishing features. To make the features of a category into a rule, you have to learn them explicitly as named sensorimotor categories too. (To learn that “apples” are round and red, you just have to learn to detect round and red as sensorimotor features when categorizing apples [vs. pears]; but to be able to describe a tomato verbally (as “red” and “soft”) you have to have learned “red” explicitly, too, as a named category, not just as an implicit feature that your feature-detectors abstract as a feature of an apple.)

      4. Only “content words” need to be grounded (nouns, verbs, adjectives, adverbs). “Function” words (like: if, and, the, or) are learned (through both unsupervised and supervised learning, and sometimes eventually through explicit verbal rules) through their use. Like the symbols of computation, function words are purely syntactic, not semantic like “apple”: function words have no referents. But their use can still be defined verbally (as long as all the content words used in their definitions are already grounded).

      Delete
    3. I wanted to reply to this because I was also thinking about the groundedness of certain categories of words, whether it be content words or function words. What do we make of ideas or abstract words like “faith”, “concept”, that have no referents but that mean something to us? What are these learned through?

      Another question I have is about the distinction between the symbolic vs connectionist model: Aren’t backpropagation and the delta rule just another way to say symbol manipulation? If the network of nodes is learning, but not manipulating symbols, then what do we do as humans when we learn? And for words like “faith” or “concept,” what are the iconic and symbolic representations of these? There is no minimal grounding set here, and I have a hard time seeing how backpropagation of nodes could lead to understanding of this. It seems to me that, even though it would be ideal to have a hybrid of both approaches, there are still lacunae.
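
      For reference, this is roughly what the delta rule itself does, as a toy Python sketch with invented numbers: connection weights are nudged numerically in proportion to an error signal, rather than being manipulated as arbitrary symbol shapes (though the whole thing can, of course, be simulated computationally).

      weights = [0.0, 0.0]                  # connection strengths for two input features
      rate = 0.1                            # learning rate

      def response(inputs):
          return sum(w * x for w, x in zip(weights, inputs))

      training = [([1.0, 0.0], 1.0),        # (input features, desired response)
                  ([0.0, 1.0], 0.0)]

      for _ in range(100):
          for inputs, target in training:
              error = target - response(inputs)        # corrective feedback
              for i, x in enumerate(inputs):
                  weights[i] += rate * error * x       # the delta rule update

      print(weights)   # roughly [1.0, 0.0]: the error has been driven toward zero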

      Delete
    4. Tess, first, “faith” does have a referent: You can point to members and non-members of the category: “Witnesses of Jehovah” is a faith, “Yankee fan-club” is not.

      I suppose you mean the feeling of faith. That’s a little trickier, because you are the only one who feels your own faith. But others do have theirs, so they know more or less what you are referring to (thanks to their mirror capacities).

      Some philosophers (notably Wittgenstein) think you can’t have a vocabulary referring to your private experiences because supervised learning requires either feedback from a social community to correct you if you categorize wrongly (which no one else can do if you’re talking about a feeling only you can feel) or at least feedback from your stomach to let you know that the mushroom, which you ate because you thought it was edible, is actually poisonous.

      “Concept” is another weasel word, but if you take it to mean “category,” it’s not a problem, and if you take it to mean “idea,” it becomes another example of a state that only you can feel, but we all know what you are referring to.

      But the best answer to your question is that many so-called “abstract” categories can still be grounded [actually, all categories are abstract, as we’ll learn next week (6), including “apple,” and, obviously, “red” and “round” – abstractness is just a matter of degree]. But I’ll ground a category for you right now that you could never point to, and not one that’s just in your head, yet it can be perfectly well grounded – except not grounded directly, by unsupervised or supervised learning: grounded indirectly, by verbal learning (once all of the words that are used to define it are grounded): A “Peekaboo Unicorn”. (I think I’ve already mentioned it in class this year.)

      A Peekaboo Unicorn is a Unicorn, which is a horse with one horn, which is fictional; they do not exist, but you can see pictures and statues of them, and can disguise a horse to look as if she has a horn. So Unicorn is perfectly well grounded – as well grounded as “zebra” = a striped horse (approximately).

      But a Peekaboo Unicorn, unlike a Unicorn, is not fictional. It really exists, and it looks just like a unicorn, but it vanishes without a trace if anyone tries to look at it (whether with their eyes or a camera).

      Yet now you know exactly what the referent of “Peekaboo Unicorn” is, thanks to the string of words with which I described it (as long as you know what unicorn, eyes, camera, and vanish, etc. refer to).

      So, for a lot of categories, the referent is defined indirectly by grounded words in a proposition, or a series of propositions, rather than directly through your senses.

      For your other question, yes, a neural net can be implemented as a real network of neuron-like nodes, spread out, interconnected, and with activations that can get stronger or weaker. Or it can be implemented as almost all neural nets these days are – computationally. As a learning algorithm.

      Both real and simulated neural nets can learn (they find features that tell different categories apart), so if you put either one inside the head of a real robot, either one can ground its sensorimotor categories – as long as the robot has sensorimotor capacities, which have to be dynamic, not computational (otherwise it’s just a simulated robot, which is like a simulated ice-cube).

      I’m sure Kayla has neural nets that can learn inside her head, connecting her words to the categories they refer to using the features they’ve learned. And it doesn’t matter whether they are real or simulated neural nets; they’re just a part of the hybrid system that is a robot.

      Delete
  5. Skywriting 5.B1
    This reading specified the sensorimotor component which is essential for symbol grounding, especially in the first step of the bottom-up approach: the analog sensory projections. Our interaction with the world is essential for symbol grounding; otherwise no iconic representations (which make analog copies of the sensory(motor) features we encounter) would exist. This would in turn prevent these representations from being categorized and integrated into a symbolic system.
    According to this view, only a T3 robot (and up), which has sensorimotor interactions with the external world, would have the capacity of picking out a referent, i.e., of grounding words.

    But what we are looking to understand here, of which symbol grounding is only one (essential) property, is the conscious meaning of words. The second property for this ability is therefore consciousness: this is what poses a serious problem to the consideration of computers (among other things) as comparable to a dynamic system like the brain rather than a static paper page. A T3 robot interacting with the world could have symbol grounding, but we are still unsure of whether this capacity is enough to know that conscious meaning is present: the T3 robot with the capacity of symbol-grounding words could possibly still be a ‘zombie’.
    The conclusion to this problem comes back around to Turing’s methodological approach to the Turing Test: as of now, it is the best way to figure out the capacity of how we relate symbols to their meanings (without having to address the hard problem of consciousness).

    ReplyDelete
    Replies
    1. (sorry for the long post, will rewrite and shorten if necessary)

      Delete
    2. Mathilda, you’ve answered almost all your own questions.

      For words to have referents in the world, they need at least T3 grounding.

      But grounding is not enough to guarantee that T3’s words have meaning, because it feels like something to say and mean something, and it also feels like something to understand that meaning. A T3 (or T4) robot that can communicate with humans in words, indistinguishably from any other human, and that can interact with the referents of its words and sentences in the world indistinguishably from any other human is grounded.

      But, because of the other-minds problem, none of us can do any better than to reverse-engineer T3 (or T4) in reducing the uncertainty about whether a T3 or T4 (or any of our fellow-human beings) really feels rather than just moves. This is true even despite the help from our remarkable “mind-reading” mirror-capacities as we T-test one another, including any T3s or T4s among us, every day. Turing says (and he’s right) that reverse-engineering and T-testing are the best we can ever hope to do, so there’s no point in holding out for a guarantee that anyone else feels. We certainly don’t worry with one another that no one but oneself feels!

      So the difference between meaning and insentient T3/T4 grounding can never be detected (observed). But that’s not the hard problem! Even if there could be some sort of failsafe guarantee that T3 or T4 is not a zombie, but can really feel, and so her words really have meaning and not just grounding, we still would not know how or why she can FEEL rather than just DO. Feeling seems to be causally superfluous. Once the easy problem is successfully reverse-engineered, there are no more causal degrees of freedom to explain the causal mechanism of feeling. (More on that in Week 10.)

      Delete
    3. B1:
      Regarding the general thrust of this article, it does make sense to limit ourselves to T3/T4-level models, and if we were to explain all aspects of their function, perhaps we could say feelings have no causal role in our brain's functioning.

      "...if the system's behavioral capacities are lifesize, it's as close as we can ever hope to get", and indeed, this would be as close as we probably would ever need to be.

      However, I wonder how we know feeling would play no causal role in our mind's functioning, and that it could be simply ancillary, or why it is even assumed. From an evolutionary perspective this is confusing: many animals besides humans have subjective states. Why would these capacities exist in such diverse ways and forms if they played no causal role in categorization and cognition?

      I suppose that this is somewhat irrelevant to the task of Cognitive Science- there probably are physical correlates for emotional states that are 'easy', but this might not necessitate causation as such. After all, it seems plausible that the intentionality (in the weasel-word sense) of symbolic reasoning is the reinforcement and data-at-hand for this sort of cognitive activity. It is the content, and there is no form independent of it. If we can figure out that stimulus-response mechanism in a 'lifesize' manner, that would be largely irrelevant. But why do we assume we can?

      Delete
    4. Jacob, you are right that feeling must have had an adaptive function, otherwise why would it have evolved in so many species (and how)?

      But the hard part is answering these questions, by reverse-engineering why, and how.

      Every attempt so far has failed when we try to sort out the causal function of feeling: Answers like “Because if we didn’t feel then we wouldn’t be able to do X” have all failed – for any T3 or T4 capacity X you choose (the most common one is “the ability to avoid damage if we did not feel pain”). The feeling always slips through the sieve: Nociceptors detect the damage and the motor system can react to it. Why feel it?

      [Yet there is one potential explanation (and it’s the one that all of our grannies, and most of us, believe). The only trouble is that there is no objective evidence in support of this explanation, so it’s almost certainly wrong; it only feels right:

      It’s 5th-force dualism (or telekinetic dualism): In addition to the four fundamental causal forces in the universe (electromagnetism, gravity, strong nuclear, weak nuclear), the four that explain all other causal effects everywhere in the universe, there’s a fifth force, and that’s the force of our “free will”: We can do things not just because we are impelled by those other four forces, but also voluntarily, by choice, because we feel like it. We’ll discuss this in Week 10.]

      The hard problem is not irrelevant to cogsci. If any science is responsible for explaining why and how organisms feel, it is cogsci. It is just that there has been no progress at all, and not even any ideas on how to make progress. On the other hand, there is progress on the easy problem, and it is much clearer how progress can be made.

      I think at the end of your last paragraph you are asking why we think we will be able to solve the easy problem completely without explaining feeling too. It is more likely that as we reverse-engineer DOing capacity, some of the mechanism will also be causing feeling, but we will not know it, because of the other-minds problem, so we will be unable to explain it.

      Let’s leave this for Week 10. For now, grounding language is enough to connect words with their referents. If it does not generate feeling, hence felt meaning and felt understanding in Kayla, we will not know it.

      Delete
  6. As I understand it, the symbol grounding problem seems to be presently intractable, and there's very little in our scientific analysis of mind that could change that (at least on the horizon). Even if it were solved, it wouldn't be proof of actual consciousness, as Dr. Harnad points out in his conclusion.
    Referents must refer to sensorimotor capacities if they are to remain grounded and avoid an infinite regress, but there's no way of knowing (at least at present) whether there is a sensorimotor quale accompanying manipulation in an artificial system. The hybrid system of symbolic manipulation with sensorimotor grounding proposed in this paper seems to be our best bet; however, it does raise a few questions.
    Namely, how do we map 'sensorimotor capabilities' to meaning without the quale; i.e., how can we create or simulate subjective categorization without a subject? How do we even know what kinds of sensorimotor capabilities are involved in more abstract processes of categorization? Mathilda's questions on different kinds of representational schemata were an excellent formulation of this point. I would say that it is relevant, especially as languages are not identical (even if they may share a universal grammar), and can affect the way one processes and categorizes (e.g., via bilingualism).

    ReplyDelete
    Replies
    1. P.S. This is my 5A skywriting.

      Delete
    2. Jacob,

      What is the symbol grounding problem?

      Why would you think it is intractable?

      Grounding symbols concerns cogsci’s easy problem of reverse-engineering DOing, not the hard problem of reverse-engineering FEELing.

      There is no “proof” in cogsci (or any sci), just evidence. There’s proof only in maths (Descartes).

      What is a referent?

      Referents don’t refer, they are what words refer to. And words don’t refer to sensorimotor capacities, they refer to things, like “apples,” or “apps.” It is through our sensorimotor capacities that we learn what “apple” or “apps” refer to.

      There would be an infinite regress in trying to learn the meaning of a word from a dictionary or a verbal description if the words were ungrounded in your head (as in the CRA), but words can be grounded. Please read the other comments and replies, and then read 5b).

      I’m not sure where the weasel-word “quale” came from, but I guess you mean there’s no way of knowing whether Kayla (T3) feels. But being able to pass T3 does show that her words are grounded. A solution to the symbol grounding problem would not be a solution to either the other-minds problem (of whether and what Kayla feels), or to the hard problem (of how and why Kayla feels [if she feels]). It would only be a solution to the problem of how she can pick out and interact with the things in the world her words refer to.

      (I couldn’t quite follow your last paragraph.)

      Delete
    3. (Skywriting #1)

      . What is the symbol grounding problem?
      The symbol grounding problem is the problem of understanding how the symbols in a symbol system, be it static (e.g., the words on a sheet of paper) or dynamic (e.g., the words in a computer), connect directly with (i.e., are grounded in) their referents. Computationalism fails to address this problem, because computation is just formal symbol manipulation, which doesn't take into account what symbols refer to or what they mean. Harnad argues that groundedness can be achieved by a hybrid symbolic-sensorimotor system that reliably connects symbols to their referents via its sensorimotor interactions with the world.

      . Is the symbol grounding problem intractable?
      No. The symbol grounding problem can be solved by constructing a T3 robot that interacts with the world Turing-indistinguishably from the way a regular person does. It therefore falls well within the overall project of cognitive science – that of reverse-engineering what human beings are capable of doing. The problem of meaning, however, may be intractable, insofar as it intersects with the hard problem of consciousness (that of explaining "feeling").

      . What are referents?
      Referents are the objects in the external world that the symbols in a symbol system refer to. An actual apple, for example, is the referent of the string of characters "apple".

      Delete
  7. Skywriting 5A

    From what I understood, the symbol grounding problem is about connecting a symbol to a referent. For example, when I think of a language that I don’t understand such as Chinese, it is only a string of meaningless symbols for me, until someone gives me a Chinese/English dictionary so I can interpret and associate the symbols to things in the real world for which I know the words in English. Associating a symbol to its referent is inherent in the human brain, and we do that through categorization and discrimination based on our lived experience. However, I don’t really know how we choose what belongs to one category in order to create a meaning through the associations we make.

    ReplyDelete
    Replies
    1. I mostly agree with the point you’ve clarified regarding how categories are formed, but this caused me to raise a question (which you can disregard if irrelevant). What occurs if we are not given corrective feedback in time, so that the wrong category gets formed? A simple example would be forming the belief that somebody’s name is “Emily” and going along with this belief for some time until, at some point a few months later, you are corrected and find out their name was “Emma”. However, as opposed to correcting yourself, you still constantly refer to this person as “Emily”. So, despite receiving feedback that the original name was wrong, you still continue incorrectly using it. Is there another mechanism that could overrule the functional categorization?

      Delete
    2. Karina, don't forget that categorizing has two components: distinguishing the members from the non-members, and naming them. In your example you obviously recognized and interacted with this person, knowing who she was, and not mixing her up with another person. You just (partly) got her name wrong.

      So I'd say getting the name right is much less important than getting the membership (and interactions) right. Both EmX variants were being grounded; you just missed some of the details of the name. With people we know many interactions can go on, confirming the membership each time by actions, but with no name spoken.

      But remember that grounding word meaning is least concerned with proper name grounding (a category with only one member!). Symbol grounding is concerned with grounding content-word categories (nouns, verbs, adjectives), with many members (usually an infinite number) and inter-confusable categories.

      Chinese people have no trouble identifying Chinese people. If they are new to Canada (and have not watched a lot of American movies) they may initially have some trouble telling apart non-Chinese individuals. (And vice-versa when Canadians first visit China.) But human face recognition is a specialized, innate human ability.

      Delete
    3. Thank you for the further clarification! I considered that symbol grounding may be more broad than the example I had thought of, and now it is definitely clear!

      Delete
  8. Skywriting for 5a

    Although I found the Symbol Grounding Problem a bit confusing, this is what I understood from this paper: an evident solution to this problem still hasn’t been found, which makes sense to me, since we still have many unanswered questions about how the brain does what it does; attaching meaning to a symbol should be one of these open questions. The closest we can get to solving this problem is with a hybrid system that combines symbolism and connectionism. My question here is about subcategories. Learning that a zebra = horse & stripes makes sense. But how do we learn the subcategories for a dog, for example? How can I differentiate between a labrador, a poodle, and a bulldog? In this case, I can learn their physical properties to differentiate them, but would that be enough to identify them?

    ReplyDelete
    Replies
    1. Alara, machine learning is already beginning to make progress with category learning (though it’s far from TT-scale yet).

      A T3 robot like Kayla combines not only symbols and neural nets, but sensorimotor (robotic) capacities.

      Subcategories are learned the same way as any category: trial and error with corrective feedback from the consequences of whether you’ve done the correct or incorrect action, or used the correct or incorrect name.

      Once enough words’ referents are grounded, however, verbal learning becomes possible. How?

      And language makes learning new categories as well as subcategories easier. (Although not all categories are strictly hierarchical, most, perhaps all, can be made into subcategories as well as superordinate categories -- by re-combining them, or their features: e.g., "Let's agree to call husky-dog crosses that are grey 'malamutes' and husky-dog crosses that are white 'samoyeds'." Or "Let's call all dogs and wolves 'canids'".)

      Cogsci does not do taxonomy or metaphysics, but not all potential categories are biological natural kinds, like species or subspecies. Electrons and neutrinos are physical natural kinds; integers, and even/odd numbers, are formal or platonic kinds; chairs, tables and furniture are human-made artifacts; seraphim and cherubim, hobbits and ents, and unicorns and peekaboo-unicorns are human fictional creatures. In general, any combination of things that humans can categorize by treating or naming them differently is a potential category, and there is an infinite number of them, including all their subsets and supersets…

      Delete

  9. “... if the meanings of symbols in a symbol system are extrinsic rather than intrinsic like the meanings in our heads, then they are not a viable model for the meaning in our heads: Cognition cannot be just symbol manipulation.” This quote really helped me understand the main argument against cognition as just symbol manipulation. Symbols in a symbol system can carry meaning that is not known to the one doing the symbol manipulation, as is the case in Searle's Chinese Room. Cognitive symbols have to have an internal meaning that is understood by the thinker, not just an external, arbitrary meaning. If I understand correctly, the symbol grounding problem is trying to identify how these internal representations gain their meaning. This paper argues that this must be a bottom-up process, which grounds the symbols in sensorimotor processes and then categorizes them in relation to other symbols. (Amelie explained this system quite well!)

    ReplyDelete
    Replies
    1. Category learning ("doing the correct thing...") is the way words get grounded. But grounding is not meaning. What is the difference?

      Delete
    2. From my understanding, symbol meanings result from correctly connecting the symbol system to the world, which is also known as picking out a referent. This would suggest that meanings are in our heads. The (5b) reading also mentions meaning in the wider sense as the referent and manner of picking it out, suggesting that meaning is related to consciousness as mental states are meaningful. The grounding of word meanings in our heads would mediate between the symbolic words (ie on an external page) and their external referents, and this concept is linked to understanding. The meaning of a word on a computer screen is ungrounded, whereas the meaning of a word in our head is grounded as we understand them.

      Delete
    3. Karina, the capacity and outcome of learning what words refer to is in our heads too. And all felt states (not just word meaning) are in our heads too.

      Delete
    4. To add on to the question above (also my skywriting for 5.a)
      If I understand correctly, grounding is not meaning: “ whether its symbols would have meaning rather than just grounding is something that even the robotic Turing Test -- hence cognitive science itself -- cannot determine, or explain.”

      So words can be grounded without necessarily having an attached meaning, but they need to be grounded to have meaning. So can we say that grounding is a necessary but not sufficient condition for meaning to be attached to symbols?

      I, as a T3 robot, can live my life interacting correctly with grounded words (such as responding to this post) but whether I fully understand, and feel what it feels like to understand- what these words truly mean- is something that no one can be sure of.

      Delete
    5. A T3 robot needs to have sensorimotor capacity to interact with its environment. Thus, grounding is necessary to pass T3, if I'm not mistaken. T2 is not grounded.
      I'm wondering: does Kayla follow the same symbol-grounding "protocol" as me when I learn new concepts? As in, the "sensorimotor --> category --> category learning" pipeline that helps us connect symbols to their referents.
      And if not, is that a requirement for T4+?

      Delete
    6. Kayla, yes, whether you feel is the O-MP, and only you can know (not us). But part of being a T3-passer is talking the same way about whether you feel as each of us talks about whether we feel: We say we do (and we believe one another, and it’s true). It’s an interesting question, though, why a T3 zombie would talk about feeling at all… and that’s related to why the HP is so hard.

      Teegan, the only thing required of a T3-passer is that it have capacities indistinguishable from ours.

      Delete
    7. Right okay. Then, why do we say that T3 is grounded?

      Delete
    8. What I’m having a bit of trouble conceptualizing is grounding in robots versus grounding in humans. The 5B reading says, “One property that the symbols on static paper or even in a dynamic computer lack that symbols in a brain possess is the capacity to pick out their referents. This […] is what the hitherto undefined term “grounding” refers to.” But we still say T3 robots are grounded?
      I think where I’m going wrong is that I’m confusing what “grounding” means. Meaning requires grounding but grounding does not require meaning. This is what Dr. Harnad is getting at when he says that a T3 robot could be a “zombie,” but we would still say that it’s grounded.
      The human brain can ground symbols inherently, and pick out referents, but how it does this we do not know. A robot is only grounded through its sensorimotor abilities, which is an important difference.
      I’d welcome any other perspectives because I’m having trouble “grounding” this concept, haha bad joke.

      Delete
    9. I think you pretty much have the idea! To rephrase in the case it helps: if I understand correctly, computers are unable to ground the symbols they use because computers on their own cannot sense, and therefore have no way to connect the symbols they manipulate with phenomena they can experience in the real world, or more abstract concepts that can be generalized out of them. By adding sensorimotor capacities to the computer in a way that gives rise to a T3 robot, the robot will be able to ground symbols through its interaction with the world. I'm not fully sure about the idea that "the human brain can ground symbols inherently" - we still ground symbols through sensorimotor interaction. However, we also know that we can "feel" the results of those sensations and such which isn't necessarily the case for a T3 robot.

      Delete
    10. Zahur, good replies to Teegan’s questions. Grounding is a capacity – to connect symbols to the things that they are interpretable (by us) as referring to.

      Kayla’s words are grounded by her capacity to interact (DO) with those things that her words (can be interpreted by us to) refer to, indistinguishably from what we humans are doing when we interact with the things we mean when we refer to them with the words we speak or think.

      Because of the OMP we cannot be sure whether Kayla means anything with her words, because we cannot be sure she FEELs at all. (Turing says – and he’s right – “don’t worry too much because T-testing is the closest we can get even with one another, to solving the OMP.”)

      The HP (of explaining how and why we FEEL rather than just DO) might be altogether beyond explanation – other than the explanation of how and why we can DO what we can do; i.e., the solution to the EP.

      (“Inherent” is a weasel-word. In the context of grounding, it usually just means “innate,” which refers here to category feature-detectors [like frogs’ bug-detectors] that are genetically evolved and inherited rather than learned. It can also mean what Searle meant by his weasel-word “intrinsic” – which is that when I myself say or think something I know [Cogito] that I mean something [even if that something is wrong!] There is no OMP with our own words and thoughts!)

      Delete
  10. 5B
    The articles helped me understand why meaning is not the same as reference (which at first I had a bit of difficulty understanding) and how it relates to the symbol grounding problem (even though the process of grounding isn't explained yet). As far as I understand, the referent of a word is the thing it refers to. And usually, that thing is not an individual (unless the word is a proper name); it's a kind of thing: a category. For example, "apple" refers to any "red fruit"; "red" and "fruit" are features of apples, and they're also categories, and they have names. So if you already know what those names (in the definition) refer to, then the definition gives you a new name, and you know what it refers to. But if you don't know what "red"... refers to and you look it up, you don't know what the words defining its features refer to; then you're back to the symbol grounding problem. So meaning is not the same thing as reference, and it's not the same thing as grounding either.

    ReplyDelete
    Replies
    1. Melis, good job! More about how organisms learn categories next week (6a, 8b).

      Delete
    2. For 5(a):
      From previous weeks we have already established that both computers and humans can manipulate symbols based on rules, but only humans know what those symbols are referring to (computers don't). The symbol grounding problem is basically the problem of how it is that we know what symbols are referring to. From the reading, symbols could be grounded by iconic and categorical learning. The iconic representations are just raw data, whereas through trial and error we learn elementary categories. These categories are elementary symbols. From there derives the symbol system that we talked about in mathematics, computation and language. The symbol system manipulates the elementary symbols based on some rules, and because the elementary symbols are grounded, the symbol system is grounded -- thereby solving the symbol grounding problem.

      5(b)
      From this reading and the replies, I am still not sure about why meaning is not the same as grounding. The confusion is about "then the word's wide meaning consists of both the means that that entity uses to pick out its referent, and the referent itself". If I understood the reading 5(a) correctly, picking out the referent is the same as symbol grounding, with three corresponding steps: iconic, elementary categorization, and higher-level symbol system. Then what is the difference? Is it about the felt state of meaning? I am confused because I think consciousness is just the silver lining of meaning. If a T3 is grounded and mean what it means then it really means whatever it means. It is just different from our meanings, which are conscious. (I just realized I answered my own question while writing. The difference between conscious and unconscious meaning is exactly what the other replies meant in their skywritings.)

      Delete
    3. Sorry I forgot to reference the quote. The quote is from the 5(b1) reading.

      Delete
    4. A typo in "If a T3 is grounded and *means what it means then it really means whatever it means."

      Delete
    5. 5B1. A quote from this article that stuck with me is: “These feature-detectors must either be inborn or learned.”

      In reverse-engineering why we feel, I keep on doing this thought experiment: If someone is born on a deserted island and never sees another human nor a mirror, will they ever know they are human, distinguishable from the other things or species on the island? How would this human's sensorimotor interaction with the world bring them groundedness if they are not taught symbols and their referents? And would this hypothetical human feel fear if they saw a tiger, without knowing what a tiger is?

      Evolution plays an important part in the things we feel. For example, when we are born, we already have attraction and withdrawal, and probably pain. Can some feelings be independent of symbol-grounding, of referents, of meaning?

      Delete
    6. Yumeng, it’s not categories that are elementary symbols: it’s their names, which are arbitrarily shaped words connected to their referents in the world through sensorimotor interactions via learned feature-detectors inside their heads (which I misnamed with the homuncular weasel-word categorical “representations”).

      Think of an apple in the world. It projects a shadow (iconic) on your retina and brain. Your internal neural nets can learn features from this iconic shadow by sensorimotor trial-and-error interactions, guided by error-correcting feedback from the consequences of how you categorize or miscategorize apples in your interactions with them (“supervised learning” or “reinforcement learning”). The nets learn to connect the arbitrary name “apple” to its referent, the members of the category you have learned by detecting the features that distinguish apples from bananas. That is how the name of the category is grounded (and why Searle does not know what 蘋果 means).
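
      A toy Python sketch of that kind of error-corrected feature learning (the feature values are invented): a simple detector is nudged, each time it miscategorizes, toward the features that distinguish the things called "apple" from the things called "banana".

      import random

      samples = [                           # (roundness, redness) -> correct name
          ((0.9, 0.8), "apple"),
          ((0.8, 0.9), "apple"),
          ((0.2, 0.1), "banana"),
          ((0.3, 0.2), "banana"),
      ]
      w = [0.0, 0.0]                        # feature weights: the learned "invariants"
      bias = 0.0

      def name(features):
          s = w[0] * features[0] + w[1] * features[1] + bias
          return "apple" if s > 0 else "banana"

      for _ in range(200):                  # repeated encounters: trial and error
          features, correct = random.choice(samples)
          if name(features) != correct:     # miscategorized: corrective feedback
              sign = 1 if correct == "apple" else -1
              w[0] += sign * features[0]
              w[1] += sign * features[1]
              bias += sign

      print(w, bias)   # the resulting apple-vs-banana feature-detector weights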

      Computational and mathematical symbols do not have a symbol-grounding problem. Their arbitrarily shaped symbols can be manipulated according to algorithms (recipes) that operate on (arbitrary) symbol shape, not symbol meaning. It is only the symbols of language (and verbal thinking and learning) that need to be grounded.

      Once enough categories like “apple” have been grounded directly through unsupervised and supervised sensorimotor category learning, the names of the categories and their features can be combined and recombined in propositions defining and describing the features of new categories that the speaker/writer already knows but the hearer/reader doesn’t. This grounds the new categories indirectly, through words. (“Learning through instruction rather than through induction, unsupervised or supervised.”)

      This third way of learning categories gave our species a revolutionary advantage over all other species, allowing it to create technology, science, art and culture – but, tragically, it also allowed it to create slavery, wars, the domination and decimation of all other species, pollution, resource depletion, and now perhaps the heat death of the entire planet.

      Grounding is the robotic connection of words to their referents in the world: “apple” to apples. Meaning (and hence understanding) consists of that – plus also the fact that it feels like something to say or think or mean or understand “apple.” If there were no feeling, if humans, as well as all our victims, were all just insentient T3 zombies, then the revolutionary advantage of humans’ evolving language would not have been tragic (and there would be no hard problem, any more than there is on a lifeless planet with colossal volcanos or a star exploding, or imploding into a black hole.)

      Nothing would matter in an insentient world. Nothing for anything to matter to.

      Tess, a normal biological human baby on a desert island would grow up feral, with no language, but with all the other doing and feeling capacity of a primate. Why not? If raised by another species, she would feel she was one of them.

      No words – nothing to ground. But, of course, she would have plenty of categories – but un-named. Apples would just be red, round things she could eat. No name for red or round either.

      If there were predators, of course she would learn to fear and to try to avoid/escape them, like all the other creatures on the island.

      And she could only show, imitate and mime, not tell. (And she could learn to distinguish edible from inedible mushrooms through trial-and-error learning supervised by corrections from her indigestion…)

      Feelings are inborn, only their names are not.

      Delete
    7. Reply to Professor Harnad:

      From the readings and the comments, it is clear that the lexicon of every language can be reduced to around 1,000 words that require symbol grounding to give meaning; the meanings of other words can be composed out of these words. People who do not learn any language have no knowledge of symbols, so there is no grounding, but I wonder: can they group things into categories the way people who know language do? E.g., do they have the ability to view zebras as striped horses and giraffes as long-necked deer?

      Delete
      Han, yes, other species can categorize too (innately or through learning), and that capacity preceded (and was essential, along with purposive communication, pointing and pantomime, for) the invention and evolution of language.

      Delete
  11. 5A
    I still struggle with the concept of categorical representations. To my understanding, categorical representations are unchangeable and innate features of an object that we may discern with our senses that are selectively narrowed icons. Categorical representation does not say that it isn’t something else, but rather just contains features to prove what it in fact is. What kind of information is this? Does each categorical representation point to a million other categorical representations just to say “I’m not this”? How does one distinguish definitively between different types of things -- could such a process be so reducible?

    ReplyDelete
    Replies
      Melis, the weasel-word in all this (my fault) is "representations." See the other replies in this thread (and it will be clearer next week): Categorical "representations" are really learned (or innate) feature-detectors; iconic "representations" are the sensory projections onto your retina without (or before) being filtered by category feature-detectors.

      Delete
  12. Symbols in a symbol-manipulation system are arbitrary, but in combination with others they can make sense to us (as humans, as interpreters), though not to the system itself. In other words, we can understand their meaning, but the system cannot. A system that is implementation-independent is purely computational. In class, and in previous readings, we already understand that cognition is not purely computational. So how do our minds assign meaning to these symbols? The paper discusses the necessity of “grounding” the symbols in a system. By grounding, we mean a system that depends on a direct connection to the referents of its symbols, using external information about the world relating to the symbols’ interpretations. This paper was not trying to suggest that grounding IS meaning. Instead, it points out that a symbol system cannot explain how our brains are capable of understanding the meaning of words / symbols without some form of connection between symbols and their meanings (by way of direct grounding).

    ReplyDelete
    Replies
    1. Sara, close, but the sensorimotor connection (grounding) of words is with their referents, not their meanings. Meaning something is what it feels like to be able to refer to it.

      Delete
    2. When saying that "meaning something is what it feels like to be able to refer to it", then why is a word like justice ungrounded if it feels like something when something is just or unjust?

      Delete
    3. Emma, first, “justice” is grounded for you, indirectly, once someone (or a book) defines or describes to you what it refers to (its “features”). Knowing that, you can go on to pick out examples of what is and is not “just” when you see them, or when someone describes an example in words.

      Very abstract categories are not ungrounded, it’s just that it’s hard, and often impossible, to ground them directly, through trial-and-error sensorimotor experience. Even “redness” cannot be grounded directly, although “red” can. “Truth” can’t, “true” can.

      But indirect verbal grounding is still grounding. (And that’s the point, about the revolutionary power of verbal grounding.)

      Delete
  13. Symbols and symbol systems are defined and explained in the depth in this article; “A symbol system is a set of symbols and syntactic rules for manipulating them on the basis of their shapes (not their meanings). The symbols are systematically interpretable as having meanings and referents, but their shape is arbitrary in relation to their meanings and the shape of their referents.” It is explained that languages are also considered to be symbol systems.

    My issue and question about this explanation is that for pictorial languages, such as hieroglyphics, the shape of the symbol does have meaning. For those languages, they are interpretable even to those who do not understand the language because of the readability of their shape. Is this considered an exception to the definition of a symbol system, or is the shape of the symbols in these languages still considered to be arbitrary?

    ReplyDelete
    Replies
    1. Hi Kimberly! This is a really good question and got me thinking about the same thing. I think hieroglyphs could be thought of in the same way as an image. For example, if I draw a bird and you can associate it with a real 'bird' intrinsically, it acts the same as my holding up a bird to show you. It is still just a symbol representing a bird. However, I see what you're saying in the arbitrary sense. Correct me if I'm wrong, but I believe hieroglyphs are only written; you can't speak "hieroglyphian", so maybe they cannot be classified in the same way as spoken languages, such as English, are. I think a distinction could be made between a spoken language and a written one. In that case maybe it is not necessary that the symbols be arbitrary; it is pictorial representation that functions to communicate an object or scenario, negating the need for a 'word'/arbitrary symbol for the object at all.

      Delete
    2. Kimberly and Sophie, Egyptian hieroglyphics are much more iconic than Chinese characters, and in both languages the meanings of unknown words can be partly guessed from their written form. But the written form of a language is not the language. It is parasitic on the spoken form (or, in the case of the sign language of the deaf, its signed form), in which the shape of the words is arbitrary (with a little left-over iconicity in sign language). (And, yes, Egyptian was a spoken language; the hieroglyphics were just its written form.)

      But if you think the iconicity of hieroglyphics, or Chinese characters, or sign language gets you far in understanding what is being said, have a look at each of them on google or youtube, and see for yourself how much you can understand from just looking at it.

      And, for a point of comparison, compare that with how much you understand from pantomime. And remember that pantomime is not a language. It is not making statements (subject/predicate, true/ false propositions). It is not TELLing, it’s just SHOWing. Recognizing what object a picture or gesture resembles is not the same thing as understanding the meaning of a word in a spoken, written, or signed sentence.

      From Icon to Symbol. It is specifically through the transition from showing to telling that language began, and one of the keys to that was to abandon or ignore iconicity and treat the movements or sounds --formerly used in imitating or copying things -- as arbitrary symbols whose connection to their referents is no longer based on similarity between symbol and referent: it has become instead just a formal convention that all the community’s symbol-users share when they have a certain referent in mind. Any residual similarity would be a left-over from iconic communication and representation, which is showing, not telling.

      And, importantly, during the transition from communication by imitation to propositional language, the iconicity -- the similarity between the copy and the thing it is a copy of -- would be the origin of the connection between the word and its referent.

      But once the iconic connection has been made, and the actions are being shared by a communicating community, the actions can gradually shrink for speed and efficiency, to just arbitrary shared symbols (words), retaining their connection to their original referents.

      (Remember, though, that most words refer not to individuals or to specific individual events, but to categories – kinds – and those categories have to be learned by trial and error, with corrective feedback, to be able to detect the features that distinguish the members from the non-members (Week 6). So symbol grounding does not happen magically by just pointing at or imitating an individual thing, as in pantomime. You need to learn what kind is being pointed to.)

      From Pantomime to Propositions. The other key to the transition was propositionality itself (the “propositional attitude” that underlies telling, in contrast to showing) (Weeks 8 and 9).

      If there is some left-over iconicity in signed, spoken or written words, it may help children guess the referent of a word they have not seen before, but it will not convey the proposition of which it is just one piece.

      Delete
    3. I was wondering the same thing, as what I had grasped was the example of symbols as numerals (1, 2, 3) being part of a symbol system (arithmetic). This consists of shape-based rules, where 2 for us is 'two', but its shape has no way of being connected to 'two'... so the symbols only take on meaning in our minds. However, this was harder to grasp for things such as sign language and hieroglyphs - this explanation helped.

      Delete
  14. Skywriting for B1

    I found this reading much simpler and clearer than 5a. The “Natural Language and the Language of Thought” section especially gave me a better understanding of how language may convey its meaning, since language is probably the most complex symbol system for the symbol grounding problem. In lectures, we have mostly talked about computation being implementation-independent; however, this section also mentions implementation-dependence, which is required for picking out referents for symbols. So (if I’m interpreting this correctly) if this capacity is implementation-dependent, it must mean that the brain is the only hardware that can execute this kind of capacity, which would be evidence that attaching meaning to words is not just computation, and there's something more going on in the brain.

    ReplyDelete
    Replies
    1. Alara, a hybrid analog/computational system like T3 (Kayla) would be implementation-dependent even if it did not have to be T4. Sensorimotor function is implementation-dependent. Searle cannot become your eyes or your legs.

      Delete
    2. Hi Alara, the comparison between cryptologists and the Chinese/Chinese Dictionary-Go-Round in the 5A reading helped me grasp why picking out referents is implementation-dependent and requires T3 capacity. Cryptologists are able to ascribe meaning to arbitrary symbols of ancient languages and secret codes based on their past experiences and use of universal grammar. Similarly, children learn their first language by ascribing meaning to meaningless sounds based on their sensorimotor exposure. In contrast, trying to learn an ancient language without knowing any other language, using just a dictionary, is impossible because you have no prior exposure or feedback to base the rest on. A computer is the same way: it has no sensorimotor exposure and therefore no referents to connect encountered symbols to. This comparison shows the necessity of sensorimotor capacity/experience in symbol grounding, indicating that symbol grounding requires T3 capacity and is thus implementation-dependent.

      Delete
    3. Josie, translating and decrypting are for second languages. Only your first language needs direct grounding.

      Words evolved for communicating, but then also became useful for thinking. To learn the meaning of a word you also need feedback on what others use it to refer to. You can learn what apples are by learning what kind of thing to do with apples. But the name can’t be a private one: it depends also on what everyone else is calling “apple.”

      Delete
    4. Thank you for the clarification! I believe I misinterpreted it at first. I have a couple of questions though: since sensorimotor capacities also involve the brain, how does a T3 robot gain its sensorimotor functions? Is it just possible with the robotic capacities they have and computation? What are the sensorimotor functions of a T3 robot dependent on? Perhaps it is implementation-dependent because the sensorimotor functions cannot be executed on a computer? Thanks in advance!

      Delete
    5. Alara, sensory and motor functions are not symbol-manipulation. T3 has them, T2 (if it’s just a computer) does not.

      Delete
  15. 5A: “Many symbolists believe that cognition, being symbol-manipulation, is an autonomous functional module that need only be hooked up to peripheral devices in order to "see" the world of objects to which its symbols refer (or, rather, to which they can be systematically interpreted as referring).[11] Unfortunately, this radically underestimates the difficulty of picking out the objects, events and states of affairs in the world that symbols refer to, i.e., it trivializes the symbol grounding problem.”

    From my understanding, the symbolists’ argument that symbol grounding can occur by hooking up the machine to peripheral devices trivializes the symbol grounding problem because it does not allow for a better understanding of *how* we are able to discriminate and categorize the objects we come into contact with. However, one thing this excerpt made me think about was how other sensory modalities come into play with iconic/categorical representations. If iconic representations are sensory projections onto your retina before being filtered by category feature-detectors (an innate process), what is the explanation for sensory modalities other than vision? I guess my question is how iconic representations occur for other perceptions (auditory, tactile, proprioception, etc.), since categorical representations would still be the same for all (causal relations between objects from experience would occur for each modality).

    ReplyDelete
    Replies
    1. Hi Darcy! That's a great question and I was wondering the same thing. I'm not sure how it relates to tactile sensations or proprioception, but I believe that the idea of symbol-grounding can be related to auditory understanding. Spoken and written language bear many similarities. A word in spoken form is just as representative of something as it is in written form. The syllables and their respective sounds can be considered to be symbols of the meaning. The sound and the shape of that sound are just as arbitrary as a written shape.

      Delete
    2. Darcy, all sensory modalities have receptors (retina, cochlea, skin) and they can all be connected to feature-learning nets. The learning capacity is innate and can be multimodal too (and usually is), but the features of each category, and the categories we end up learning, are not innate (and they differ from person to person).

      Kimberly, what we learn to categorize is things (objects, living things, features, events, places, states). Those can also become the referents of our words (“apples”), which are arbitrarily shaped symbols. But grounded symbols are not “symbols of meaning.” They refer to things; and it feels like something to know what things they refer to. The feeling of knowing what a word refers to is what we usually mean when we say we know what it means.

      The other thing we mean by “meaning” is the verbal definition or description of the features of the referent of the word. An “apple” is “a round, red fruit.” But, of course, to understand the word’s definition or the referent’s description, you have to know what each of its words refers to (and you also have to understand propositions and predication). Definitions usually tell you the features that allow you to recognize the referent – features that you would have had to learn the hard way, through trial-and-error and corrective feedback, if there had not been someone else who knew the features, and could tell them to you.

      Delete
    3. Darcy, when thinking about how this applies to all sensory modalities, I found it helpful to remember that while all modalities may be qualitatively different in the "external world", following transduction via their receptors, they are all processed in the same way, through neural action potentials. In this way, your categorizations include information from all modalities.

      Delete
  16. In 5B1, we learn that sensorimotor capacity is necessary for symbol grounding and meaning. Humans have sensorimotor capacity, and our varying experiences interacting with the world result in individuals interpreting different meanings for the same thing. For example, my interpretation of the word “snow” might be different from that of another individual’s, because of our separate sensorimotor experiences associated with it. We have differing referents connected to the same symbol. In the same way, a T3 robot can interact with the world and develop its own “groundings,” meaning that even if multiple T3 robots were reverse-engineered using the same software, we would be able to distinguish them from each other because they would develop differing referent-symbol connections. On the other hand, I think multiple T2 computers with the same software would be indistinguishable from one another because they would have no sensorimotor experience to enable symbol grounding.

    ReplyDelete
    Replies
    1. Josie, meanings differ for the same word because sensorimotor groundings may differ. Grounding is based on detecting the features that distinguish one category from another. The distinguishing features are not necessarily exhaustive or unique. Different features could pick out the same referent.

      Categorization is approximate. “An apple is a round red fruit” is obviously too weak. It misses green apples and it miscategorizes tomatoes as apples. Google’s definition is better -- “the round fruit of a tree of the rose family, which typically has thin red or green skin and crisp flesh” -- but no set of features is sufficient, except in mathematics, where the definition is formal and syntactic. Although “any integer that is divisible by two” will always pick out “even numbers,” even numbers have other features too. And for natural-kind categories, such as “apples” (or “ducks” or “gold” or even “edible mushrooms”), we may discover new kinds tomorrow that make the features that have served us well until now fail us. (We’ll talk about this next week, in the context of mushroom-picking, Watanabe’s “ugly-duckling” theorem, and Kripke’s “Naming and Necessity.”)
      (Not only can people and T3s have different feature-groundings for the same referent, but there’s no reason different T2s can’t have different definitions for the same word. Individual differences are trivial, and inevitable, in people as surely as in pots.)
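
      A minimal sketch of the contrast above (hypothetical Python, with invented feature names; an illustration, not anything from the readings):

        def is_even(n):
            # Formal category: "divisible by two" always picks out the even numbers.
            return n % 2 == 0

        def is_apple(thing):
            # Approximate category: "round red fruit" misses green apples and
            # lets tomatoes through; the features serve only until they fail.
            return thing["shape"] == "round" and thing["color"] == "red" and thing["is_fruit"]

        print(is_even(4))                                                        # True, always
        print(is_apple({"shape": "round", "color": "red", "is_fruit": True}))    # True (a tomato would pass too)
        print(is_apple({"shape": "round", "color": "green", "is_fruit": True}))  # False (a green apple is missed)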

      Delete
    2. Professor, your response addresses a question that came to me when reading the papers for this week.
      The first reading said that semantic interpretability must be coupled with systematicity (along with explicit presentation and syntactic manipulability). Is this why you can say that individual differences in feature grounding are trivial? Because it isn't the features used to ground the symbol that are important, but rather the fact that it is done systematically through categorization that allows for its interpretation and grounding.

      Delete
    3. Emma, if the members of a category can be distinguished from the members of another category using different sets of features, it doesn’t matter if one person is categorizing using one set and the other is using another—except of course if one day you run into a case where one set indicates it’s a member and the other indicates it’s not. Then it’s time to revise and update features. And this can happen regardless of whether you learned the category directly, by sensorimotor learning, or indirectly, by verbal learning.

      All categories (except formal mathematical ones, like “even numbers,” whose feature is “divisible by 2”) are approximate. Their features may serve you faithfully for years, then suddenly you find yourself up against a case where the approximation was not good enough. This happens most with scientific categories, always being revised with new evidence, but it can happen in ordinary conversation too: For a while you think the two of you are talking about the same thing, then it turns out you’re not. (Can you think of examples? They happen often with little kids looking at picture books and naming the animals…)

      Delete
  17. comment on 5a)

    The reading was helpful to round off everything we have been discussing in class and to further define the different schools of thought in cognitive science. As usual, Dr. Harnad suggests that the answer is likely a mix of both symbolism and connectionism.

    I am curious to understand further what Dr. Harnad means when referring to an interpretation as fixed. He says it is contingent on passing one formal test (is it a symbol?) and one behavioural test (discrimination, identification and description). If an interpretation is fixed, its meaning is not just coming from our heads but is intrinsic to the symbol system itself. Though I may be missing the nuance of this statement, is this not the same argument that comprises the systems reply to Searle's CRA?


    ReplyDelete
    Replies
    1. Laura, the interpretation of a speaker’s words is “fixed” by how they fit with the speaker’s actions in the world. If you’re wondering what someone is referring to when they say “I’m going to the bank,” see if they head off toward the financial district or the river.

      Delete
  18. What I found most prominent in this reading was how the problem of symbol interpretation points directly to the necessity of consciousness. By understanding the concept of squiggles and squaggles pointing to more squiggles and squaggles, or the symbol grounding problem, we can clearly understand that groundedness is not sufficient for meaning. In accepting the nod to the importance of consciousness, we know that robots that pass the Turing Test will still not have meaning in their interpretations and thus are not sufficient for modeling cognition. So, I am wondering why there is still such a large focus on robotics and AI in the field of cognitive science?

    ReplyDelete
    Replies
    1. Hi Laura, I found Prof Harnad's reply to Mathilda (above) helpful in explaining this. The bottom line of his comment is that we will not get any further in explaining consciousness, as the other-minds barrier will always prevent the reverse-engineering of a T3/T4 system from telling us whether it feels meaning rather than just grounds symbols ("does").

      Delete
  19. The paper by Harnad (1990) (5.a) originally brings forward the symbol grounding problem as the problem of how “the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols.”
    This revolves around the difference between meaning and grounding. It connects to my previous skywriting on the difference between the two: grounding being a necessary but not sufficient aspect of meaning (my 5.b skywriting was wrongly labeled 5.a).

    The paper brings forward two main differing ways to examine the problem, symbolic and connectionist, which should be used to look at the symbol grounding problem simultaneously.

    Whether meaning can ever be fully explicated is in question, however ("no guarantee that our model has captured subjective meaning, of course"). So even though we have addressed the SGP theoretically, can it be said that the SGP is not actually solved?

    ReplyDelete
    Replies
    1. Kayla, the SGP is solved if and when we have successfully reverse-engineered a T3 robot like you. The SGP is part of the easy problem (EP) of cogsci. Explaining feeling (and hence meaning) is the hard problem.

      (Yes, solving the SGP problem is probably a necessary condition for meaning, but not a sufficient one.)

      Delete
  20. Skywriting 5.B

    Reading B2 helped me better understand the symbol grounding problem. One thing that became clearer is the set of requirements for grounding. According to Dr. Harnad, the two requirements are the capacity to pick out referents and consciousness. From what I understood, the capacity to pick out referents refers to the ability to relate the word we read in a book, for example, to objects, events, or actions that exist in the real world. On the other hand, consciousness is also what makes that connection between symbols and referents possible. From what I read in the article, it sounds impossible for computers to make that connection, since what mediates that connection is the mind, something that computers are deprived of.

    ReplyDelete
  21. I found the second reading (5B1) helpful in linking all of the concepts from the past weeks together. Searle's CRA is one argument leveraged against computationalism; the manipulation of the Chinese symbols by Searle in his thought experiment does not provide him with the ability to understand them, thereby showing, via the implementation-independence tenet of computationalism, that T2 systems performing symbol manipulation are also unable to come to full understanding. Searle’s CRA points towards the idea that grounding of symbols may be necessary for understanding, as there must be another mechanism present to allow Searle to feel meaning. However, grounding by means of iconic and categorical representations can aid in explaining how this meaning can potentially arise, but it does not explain how we can feel the meaning that results from grounding symbols (understanding). Thus, grounding symbols will not provide us the means to get closer to solving the hard problem of consciousness.

    ReplyDelete
    Replies
    1. Darcy, but Turing says (and I think he's right) that reverse-engineering and T-testing doing-capacity is as close as we can get.

      And computation alone doesn't just fail to produce "full" understanding: It doesn't produce any.

      Delete
  22. Comment for 5B2

    “This is still no guarantee that our model has captured subjective meaning, of course. But if the system's behavioral capacities are lifesize, it's as close as we can ever hope to get.” This quote from reading 5B2 and previous sky posts helped clarify “meaning” for me. Meaning is conscious, but we are not conscious of how our brains have the capacity to mean or understand a sentence. Although grounding and referents are necessary for meaning, they are not sufficient for explaining the mechanisms in the brain of what it feels like to mean or understand a sentence.

    ReplyDelete
    Replies
    1. Sara, we're not just unconscious of how we can feel; we're also unconscious of why; and the HP (Week 10) gives us an idea why it's so hard (maybe impossible).

      Delete
  23. Skywriting 5a

    I understand that icons can be used to discriminate categories through the same/different judgements we make based on the sameness or difference of these iconic representations, but couldn't icons also help to identify categories that are more distinct from others, and categories that contain few members/objects? I think that categorical representations are useful for very big categories and those that contain many elements of other categories, but icons can identify things in some cases. If we could gather analogs of the many shapes of objects in a category and develop various mental images of them, this would allow us great specificity as to what the category entails.

    ReplyDelete
    Replies
    1. Alexander, good conjectures, but see other replies about the “exemplar” theory of category learning – and read next week’s wonderful short story by Borges about “Funes el memorioso” who could remember every icon he ever experienced: http://vigeland.caltech.edu/ist4/lectures/funes%20borges.pdf

      Delete
  24. (Skywriting #2)

    Harnad defines "grounding" as a system's capacity to directly connect its symbols to what they refer to (i.e., their referents). He then argues that a hybrid symbolic + sensorimotor system would be grounded, owing to its sensorimotor interactions with the world (by which I assume he means the "physical world").

    My question concerns whether sensorimotor capacities are sufficient for grounding all the symbols human beings can understand. Two possible counterexamples are the cases of mathematical symbols ("1", "2") and moral concepts ("justice", "equality"). It seems to me that there are no objects in the external world (at least, objects that we can physically interact with) that can serve as the referents of those symbols. Nor is it plausible that they can be understood as compositions of more basic, sensorimotorically grounded symbols, in the way that "zebra" can be constructed from the composition of "horse" + "stripes".

    To focus on the case of moral concepts, which I'm more familiar with, we have known for centuries that they are not descriptive (or at least not purely descriptive), but normative. When I say something like "the US tax code is unjust", I do not mean "most people I have encountered in my experience attribute the feature 'unjustness' to the US tax code", or anything like it. Rather, I'm claiming that the code falls short of certain evaluative standards which justice, properly conceived, entails. And there is no object in the physical world (nor any combination of descriptive features / properties in the physical world) that can act as the referent for "justice, properly conceived". In short, if the is/ought distinction is to be maintained, it seems hopeless to claim that sensorimotor interactions can account for the groundedness of moral concepts.

    ReplyDelete
    Replies
    1. It just occurred to me that I could have made the same point much more succinctly: would a T3 robot be capable of conversing with a human indefinitely on the topics of mathematics and moral philosophy?

      Delete
    2. Hi Gabriel, this is an interesting query. I believe that a T3 robot could be capable of conversing with a human indefinitely about math and moral philosophy because, since their cognitive/behavioural capacities are indistinguishable from those of a human and their thinking is grounded in sensorimotor experience, they can discuss anything that a human could at length. This may seem like a rather simplistic reply to your thoughtful question, but I do not see any reason why the abstract categories that you mentioned are any more abstract than other categories. For example, Professor Harnad mentioned in a reply above that: "all categories are abstract [...] abstractness is just a matter of degree." By this logic, a T3-passing robot that can speak about any category is also capable of speaking about math and moral philosophy at length. I hope this helps clarify your question.

      Delete
    3. Gabriel, I think Polly has given you the right answer.

      There is only one “world,” and that world is physical. That world also contains words (symbols, on paper, in computers, in people’s brains), as well as the interactions between words, speakers, and the things that words are interpretable by thinking speakers as being about.

      Some grounding (of words, in their referents) is sensorimotor. But once you have enough words grounded directly (through unsup and sup learning), the rest can all be grounded indirectly: verbally. Words grounded verbally inherit their grounding from the grounded words in their definitions (or descriptions, or explanations). Please read the other comments and replies on this thread.

      Numbers are grounded verbally, using already grounded words, and this can be purely syntactic, with formal rules, stated in words, for manipulating the mathematical symbols without reference to their meanings. But of course, even mathematicians use the sensorimotor grounding of their numerical and geometrical terms (3, add, =, triangle, more) in their creative thinking, even if not in their formal symbolic proofs.

      But, besides that, you need to give some more thought to indirect grounding through words before being too confident that "justice" or "equality" cannot be grounded verbally the same way “zebra” = “horse” + “stripes” can. After all, wasn’t your whole paragraph on moral and “normative” concepts just a series of verbal propositions?

      You ask “would a T3 robot be capable of conversing with a human indefinitely on the topics of mathematics and moral philosophy?” Of course: why not? That’s what T-testing means. Failing TT means your reverse-engineering failed. Try again. If all T3 attempts fail, go to T4.

      (But I don’t think T-testing would have to go as far as the esoterica of philosophical discourse to fail; T3 failure would be evident much earlier along the line; after all, there are plenty of generic T3-passing humans whose eyes glaze over in any attempt to converse about maths or philosophy – or, for me, sports, computer-games, “true-love” comic-strips, sci-fi, supernatural beliefs… These are matters of taste, and sometimes (as in art, athletics, and higher algebra) special skills, not essential parts of generic TT capacity. T-testing is for Lilliputian cognitive capacity, not Einsteinian!)

      Delete
    4. I agree that the abstractness of a category is just a matter of degree, and that abstractness shouldn't matter in how we ground the term. However, it got me thinking about how indirectly grounded terms can be explained differently using basic symbols. For example, no one would argue about whether some object is an apple or not, but different people always tend to have different opinions when explaining what "justice" means. Maybe this is due to the environment we grew up in and the type of stimulus we received for different responses. These associations shape how we indirectly define different words.

      Delete
    5. Just fyi, my "objection" has nothing to do with the abstractness of moral concepts (though they certainly have that property as well), but rather with their normativity. Moral concepts and propositions are ought-statements (they tell us about how things ought to be), and ought-statements cannot be derived from is-statements (that is, statements about how things are).

      Let me give an example. Suppose you are a consequentialist, meaning that you believe that all moral concepts can be reduced to considerations about the goodness of the outcomes of actions. I can see how the concept of 'outcomes' can be grounded in sensorimotor experience, but what about the concept of 'goodness'? There are no 'goodness' particles in the universe you can interact with. Maybe you can say that you learn what goodness is by experiencing particular actions that are good. But that is just goodness in the is/descriptive sense – you are interacting with actions that are deemed good by those in your environment – and it tells you nothing about the goodness, in the ought/normative sense, of that action.

      I guess what I'm trying to say is that the concept of "goodness" or "oughtness" has no referent (at least no physical referent).

      Delete
    6. Gabriel, the “is/ought distinction” and “goodness” are of interest in ethics, but in cogsci the question of what their referent is is much simpler. Can you identify examples of what is and is not the “is/ought distinction” or of “goodness”? (Yes.) Can you define or describe the “is/ought distinction” and “goodness” in words? (Yes.) Are the words in your definition or description grounded (whether directly, or indirectly, through verbal definition or description). (Yes). That’s the end of the story for grounding.

      Think about it. Resist going back into the ethical issues – although all of those are in English too, so all their content words should be grounded too (unless they are weasel-words, in which case the discussion may be incoherent).

      And think of what you mean by a “physical” referent. All that’s needed to ground a category-name is a referent, of which you can say, “yes, that’s a member” and “no, that’s not a member.” When the referent is an apple, it is a physical object, you can see it with your eyes and manipulate it with your hands and mouth – and, if you have language, you can also define, describe and talk about it.

      When the referent is a “peekaboo apple” that vanishes without a trace whenever a human’s sensory surfaces begin to move in its direction, or whenever the human’s brain processes move toward thinking about it, the word “peekaboo apple” is perfectly understandable, indeed meaningful; and it certainly has a referent (though, like a unicorn, one of the features of the referent is that it is fictional). (“Leprechaun” and “Bambi” have referents too.) A “tachyon” has a referent too, even though, according to current physics, nothing can travel faster than the speed of light.

      We’re talking about reference here, not about existence, or “physicality,” or even sensorimotor perceptibility. And reference is a feature of words, not the world (except inasmuch as thinking, speaking organisms are a feature of the world).

      Now, let’s leave ontology and get back to cogsci!

      Delete
  25. It is interesting for me to think about turning "ungrounded" meanings of words into "grounded" meanings in my head. I read in the comments above that grounding is needed only for one's first language, and that a second language is really about decrypting and translating. If a person learns their second language so well that they become bilingual, can that language then be said to need grounding?

    ReplyDelete
    Replies
    1. Hi Monica,

      I think it certainly is possible, and the difference between 'languages where symbol grounding exists' and 'languages where there is just translating' is not that strict a borderline. I think there can be symbol grounding in a second language. Take myself as an example. I learned what kale is after arriving in Canada; I recognize that plant by the English word 'kale', and the Mandarin word does not come to me naturally, though Mandarin is my first language. So I think it is evidence that I have grounded the English word 'kale' in the mental representation of the kale plant, but not the Mandarin Chinese word 羽衣甘蓝. The latter is translated from the former, though the former is in English, my second language.

      After doing the readings and reading your comment, I now tend to think that the ability of symbol grounding in a new language could be proof of mastery in that language.

      Delete
    2. This is an interesting discussion that has crossed my mind as well, especially the part that Han mentioned about the ability of symbol grounding in a new language being proof of mastery in that language. As someone who speaks three languages natively, I have had similar experiences to your kale example. Sometimes I feel as though forgetting one word in one language will make me forget that word in the other two, even though I have a picture of it in my mind. I guess that means I've grounded the mental representation in my mind but not verbally?

      Delete
    3. This is very interesting. Prof Harnad mentioned that learning a word in another language can require a new set of symbols and meanings, but the concept has already been grounded in the native language (thus learning a new language requires only decryption and translation). Does the word in a new language (represented by a different set of symbols) not need to be regrounded? Its meaning (the rule of tying the symbol to its referent) will likely involve understanding the new symbol through one’s native language, but the brain still needs to make a brand new connection between the meaning and the new word.

      I would have to disagree that grounding in a language is proof of mastery. Say you ground the English word “kale” in its meaning, but that is one of the few words you learn. You would not consider yourself as mastering English.

      Delete
    4. Symbol grounding in language definitely seems to range along a spectrum. I would agree with what Rosalie said about the grounding of one word in a language not showing mastery of that language. To propose an alternative definition, maybe mastery of a language, as shown by grounding, could be demonstrated by using other (grounded) words in that language as a way to ground a new word in that language. Essentially, being able to use a monolingual dictionary of that language to look up a new word and having every word in that definition grounded, instead of pulling from translation.

      Discussion about how concepts are grounded in your native language made me wonder about how concepts are grounded when children grow up learning multiple languages from a very young age (perhaps in Ariane's case). I did not personally have this experience, but I would imagine the word that comes to mind first when presented with a concept would depend on the scenario where one most frequently uses that word. Not the best example, but many immigrant families coming to the US or Canada use a "home" language and an "outside" language (English) for school or work. Words or phrases used more frequently at home like "mop", "clean your dishes", "make your bed" would be grounded in the "home" language, but words like "math", "pencil", "recess" would be grounded in the "outside" language. Even presenting the same concept in different scenarios might result in a different language coming to mind (e.g., being shown a table in a place that reminds someone of home vs. a table in an outside location). I am interested in hearing more about others' experiences learning multiple languages from a very young age, and what their experience is with how each of their languages is grounded.

      Delete
    5. Sophearah, I think you are right that if you hardly know a language, grounding one word in it does not “ground” the whole language. But, to the extent that grounding enough of the right words directly in a language to be able to define all the rest of the words in that language (the Minset) is the key to the power of language, knowing multiple languages is more a social than a cognitive necessity.

      Very good questions about multilingualism (and Quebec is a good place to ask them! – though India, Switzerland and many polyglot regions of Africa are too). Try Google-Scholaring “Wallace Lambert” at McGill, who wrote about “compound” vs. “coordinate” bilinguals years ago. And also look at the literature on pidgin and creole languages.

      Delete
  26. In Harnad (2003), categorical learning is said to be based on "feedback from the consequences of correct and incorrect categorization", which to me sounds very much like a Pavlovian behavioural learning scheme. There is no doubt that it is an effective way of learning, but I am wondering if there is something universal in human beings by which we can understand the groundedness of meanings in our first language? This may be the second property - consciousness - but again, what is consciousness? What I had in mind is a capacity that we are born with, something like UG.

    ReplyDelete
    Replies
    1. Monica, supervised/reinforcement learning from trial/error/feedback is more like Thorndikian or Skinnerian learning than Pavlovian, which is more like unsupervised learning.

      Explaining consciousness (feeling) is the "hard problem" (HP). And Universal Grammar (UG: what is that?) is only relevant in that it is not learnable, hence must be innate.
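
      To make the distinction above concrete, here is a minimal sketch (hypothetical Python, with invented numbers): an unsupervised update adjusts weights from the input alone, whereas a supervised update adjusts them only in response to corrective feedback about right and wrong categorization:

        import random

        def unsupervised_update(weights, x, lr=0.1):
            # Hebbian-style: strengthen weights in proportion to the input itself,
            # with no feedback about right or wrong (closer to Pavlovian learning).
            return [w + lr * xi for w, xi in zip(weights, x)]

        def supervised_update(weights, x, correct_label, lr=0.1):
            # Perceptron-style: adjust weights by the error signal from corrective
            # feedback (trial and error, closer to Thorndikian/Skinnerian learning).
            guess = 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0
            error = correct_label - guess
            return [w + lr * error * xi for w, xi in zip(weights, x)]

        w_unsup, w_sup = [0.0, 0.0], [0.0, 0.0]
        for _ in range(20):
            x = [random.random(), random.random()]
            feedback = 1 if x[0] > x[1] else 0            # corrective feedback from the "world"
            w_unsup = unsupervised_update(w_unsup, x)     # never sees the feedback
            w_sup = supervised_update(w_sup, x, feedback)
        print(w_unsup, w_sup)  # only the supervised weights track the feature that predicts the feedback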

      Delete
    2. Yes, thank you for clarifying, I went back and refreshed my memory on Skinnerian learning. UG by definition is true of all languages and not learnable.

      Delete
  27. This week's readings nicely complemented each other, helping solidify my knowledge of symbol grounding and why it is a "problem". A symbol is grounded when it picks out a referent, a connection made possible thanks to the sensorimotor capacities of the thinker. This grounding must be sensorimotor otherwise symbols would just refer to other symbols in an endless, meaningless cycle. According to this line of thinking, anything less than a T3-passing robot cannot ground symbols and, therefore, cannot "understand" because it simply manipulates symbols based on shape, as per Searle's Chinese Room Argument.

    When Paper 5B mentions that "groundedness" may not be sufficient for meaning, it provides the following example about a zombie-like TT-passing robot: "with no one home, feeling feelings, meaning meanings". I don't understand how this quote demonstrates the argument of this paragraph. Surely if the zombie is "feeling feelings" and "meaning meanings", then symbol grounding IS sufficient for meaning?

    Some clarification would be very appreciated as I think I'm misunderstanding the sentence, thank you!

    ReplyDelete
    Replies
    1. Hello @polly, I believe the quote may have been misinterpreted syntactically. I think it just means to say that 'there is no one home to feel any feelings or mean any meanings'. So it describes the zombie as NOT feeling feelings or meaning meanings. Hope this clarifies :)

      Delete
  28. The explanation of categorical representations as being stripped down to "those [features] that reliably distinguish members from nonmembers of a category" reminds me of the concept of a natural class (and distinctive features in general) used in phonology which are exactly that, using (mostly) binary features referring to the placement of the tongue in the mouth, the shape of the mouth, etc. These features are thus mostly dependent on actual physical phenomena, but their interaction is subject to formal rules that are largely ungrounded. These physical phenomena are of course continuous, and so positing the role of connectionism as taking care of learning to categorize them in terms of discrete features makes a good deal of sense.

    ReplyDelete
    Replies
    1. Zahur, yes, there are similarities between phonemic categories and their features, and other categories and their features. (And next week’s “categorical perception” was first highlighted in speech perception. And, yes, categories along a continuum are special, in phonology as well as color perception.)

      Delete
  29. From age 0, as I attempt to navigate the world around me as best I can, what am I doing? I form icons (snapshots) of everything I see and discern their degrees of similarity. Further, I am continually detecting, through sensory interactions with the things that project the icons, the congruency of the icons’ features. At this point I can tell blackness from whiteness. Then I must actively reduce icons to the repeated features that I’ve learned they have— *when I am awake I sense* whiteness, movement, sounds; *when I am sleeping, I sense* blackness, stillness, silence. I miraculously learn language (week 9). The *awake* features I began to observe repeatedly each day, when the sun was out, or when the blinds were opened, are now given a verbal name: “awake”. The name “awake” has a shape that is arbitrary compared to the sensory feelings of seeing whiteness, producing movement, or hearing sounds, but the name is an elementary symbol that I have grounded in these sensory interactions. Distinguishing whether the features of further icons are invariant with respect to the features of my elementary symbols is what it is to identify or categorize.

    ReplyDelete
    Replies
    1. Sepand, a bit fanciful (because category names usually come from supervised learning, not unsupervised).

      Delete
  30. 5a. This reading made me realize that the tech giants' deep learning algorithms, widely used today, are based on the connectionist approach to cognition. Deep learning uses neural networks to extract features from raw data and transform them into suitable features that a program can use, which can then be placed into distinct categories.
    As the paper stated, connectionism should not be an opponent of symbolism. Computers may have only the connectionist approach, but living beings have both ingrained in their minds. And even though the connectionist system somewhat explains, at a surface level, how we categorize things, how and why symbols are grounded within our minds remains a mystery.

    ReplyDelete
    Replies
    1. Alexei, this, too, was a bit fanciful. Neural nets are good at learning features, whether in cogsci or industry.

      Delete
  31. Skywriting to 5A

    I believe we talked about the symbol grounding problem during the first class of the term and, for some reason, it really struck me. I’ve linked a lot of the readings to this problem because it seems to be a core issue in a lot of the matters discussed in relation to AI and cognition. I found both connectionism and symbolism very interesting approaches to the symbol grounding problem. From what I understood, connectionism is a dynamic system where feedback allows us to adapt and learn from previous connections made. On the other hand, the symbolic approach rests on the idea that once a set of elementary symbols is grounded, the rest of the symbol strings will be generated by symbol composition. These theories are presented as contrasting theories. However, when put together, they seem to be the closest we can get to solving the symbol grounding problem.
    “Table 1 summarizes the relative strengths and weaknesses of connectionism and symbolism, the two current rival candidates for explaining all of cognition single-handedly. Their respective strengths will be put to cooperative rather than competing use in our hybrid model, thereby also remedying some of their respective weaknesses.”

    ReplyDelete
    Replies
    1. Ines, not quite:

      Neural nets (“connectionism”) can be implemented either dynamically or simulated computationally (“symbolically”); either way, what they are good at is learning to detect the features that distinguish one pattern from another (see the sketch below).

      “Hybrid” means both computational and noncomputational. (Please read the other replies to comments in this thread.)
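
      A minimal sketch of the hybrid arrangement (hypothetical Python, with invented names and features; in a real hybrid system the feature-detector would be a dynamical, sensorimotor component rather than simulated code): a feature-detector stands between raw sensory input and the category name, and only then does the symbol system manipulate that name:

        def feature_detector(percept):
            # Stands in for a learned (or innate) category feature-detector:
            # it connects sensory input to a category name.
            if percept["round"] and percept["red"]:
                return "apple"
            return "unknown"

        def symbol_system(category_name):
            # Purely formal rule over symbol shapes; on its own it is ungrounded.
            rules = {"apple": "put the apple in the basket"}
            return rules.get(category_name, "do nothing")

        percept = {"round": True, "red": True}           # sensorimotor input
        print(symbol_system(feature_detector(percept)))  # -> put the apple in the basket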

      Delete
  32. Reading 5b

    The second reading on the symbol grounding problem helped me understand its relation to a lot of the topics we've covered. I've learned that the property that allows us to ground symbols is that the brain possesses the capacity to pick out their referents, which requires that we have sensorimotor capacities: the capacity to interact autonomously with the objects and properties in the world that our symbols are interpretable by humans as referring to. So, I understand from this that for a symbol to be grounded, the symbols need to be connected directly to their referents, such that our sensorimotor interactions with the world match the symbols' interpretations.

    ReplyDelete
    Replies
    1. Alexander, but you missed what makes the connection: trial and error learning that detects the features distinguishing the members of a category from members of other categories. (Please read the other replies to comments in this thread.)

      Delete
  33. Reading A: Perhaps the most intriguing aspect of the symbol grounding problem is the notion that in order for one to truly “understand” Chinese as a first language, it seems practically impossible for this learning to occur via only a Chinese/Chinese dictionary, at least from a typical language learning perspective. In my intro linguistics class, we learned the ways in which one learns a language as one’s “first” language, and the process almost always begins when one is at a very young age, when the brain is more malleable. Furthermore, it often involves a deep immersion in the language, meaning one learns the language through all the senses: reading it, hearing it spoken, speaking it themselves and writing it themselves.

    ReplyDelete
    Replies
    1. I think that, to add on to your point, learning language through all the senses is exactly what is being referred to by the "iconic representations" discussed in the first reading. It is sensorimotor input that allows names to be attributed to the different objects or things out there in the world that we interact with.

      Delete
    2. What has to come before learning what things words refer to is learning to categorize what things are, from sensorimotor interactions with them.

      Delete
  34. Reading 5b: It makes sense that in order for meaningless strings of symbols to become meaningful thoughts, machines need to be able to interact with the world in terms of objects, events, actions, etc., via sensorimotor capacities such as robotics; however, the idea that consciousness is also required for symbol grounding is far more complex. At its core, the only way we can know whether symbol grounding via robotic TT capacity is enough for conscious meaning is to solve the hard problem of consciousness, specifically to devise a method to figure out whether any of the physical states the machine is capable of exhibits conscious meaning.

    ReplyDelete
    Replies
    1. It feels like something to think or understand or mean what words mean, but it's not clear whether or why feeling is needed for grounding what words refer to. That's part of the "Hard Problem."

      Delete
    2. Ah I see, thank you for the clarification professor.

      Delete
  35. After re-reading your first posted paper, I wanted to ask if you could explain the difference between positive and negative interconnections?

    ReplyDelete
    Replies
    1. I think the positive and negative interconnections refer to a neural network’s weights. Weights (which can be either positive or negative) essentially reflect how important an input is. When a neural net is trained on a training set, it is initialized with a set of weights, which are then optimized during training. A negative weight means that increasing that input will decrease the output (see the small sketch below).
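
      A minimal numeric sketch (hypothetical Python, invented values) of how a positive and a negative weight affect the output of a single unit:

        inputs  = [1.0, 1.0]
        weights = [0.8, -0.5]   # one positive and one negative interconnection

        output = sum(w * x for w, x in zip(weights, inputs))
        print(output)  # 0.3 -- the positive weight raises the output, the negative one lowers it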

      Delete
  36. 5b1. Human beings can ground the symbols observed in the world in their heads and give them meanings. But what about any other living entities? Let's say we take my friend's cute dog "Isha" as an example. Whenever Isha wanted to go out, she would stand near her harness or even bring it to my friend if Isha could not see where my friend was at home.

    Sure some may say that it's because Isha has been conditioned to wear the harness before going out. But is it still conditioning when my friend asks Isha to bring her the harness so they can go on a walk?

    In this view, Isha has grounded the word "harness" into her own understanding that harness = go on a walk. Or is this, again, only more complex conditioning? But if that is the case, where do we draw the line between purely cognitive functions and simple habits developed from conditioning?

    ReplyDelete
    Replies
    1. Isha feels, and she understands, but neither purposive communication nor the association between words and things is language, any more than pantomime is.

      Delete
  37. Grounding a sufficient number of independent atomic symbols allows meaning to be preserved through atomic symbol combinations (and it is presumably at the level of combination that computation comes in). From what I understand, symbol grounding is purely a bottom-up operation of recalling iconic representations from memory in order to compare novel objects to ones previously encountered, and of constructing categories by abstracting iconic representations down to their 'essential features'. At what stage in human development does this grounding process occur? It seems that very early on children are able to produce extremely categorical iconic representations (e.g. stick figures), and are much less able to produce representations similar to any one object within a category.

    ReplyDelete
    Replies
    1. In another psychology class I’m taking, we learned that the process of constructing categories is an innate human capability, meaning that it occurs at a very early age, perhaps before we even develop the language to express our knowledge of categories. For example, a child might see a duck at the pond and learn from hearing their parents that this animal is called a duck. From then on, every time the child sees a bird that slightly resembles a duck, they might call it a “duck duck” or something similar, unknowingly grouping the creature into a category. In this way, I think that symbol grounding is also an innate capability of our species, if we think of it in the way that you have defined it.

      Delete
    2. The capacity to learn categories -- to learn to DO the right thing with the right kind of thing -- evolved long before language. It also comes earlier in development.

      Delete
  38. 5A - So far, from what has been discussed in classes, I am starting to see where topics intersect. Firstly, as read in 5A, the symbol grounding problem is concerned with determining how words that can be classified as symbols gain meaning - which then leads to the question of how we characterize meaning. Here, the CRA is connected, based on how "Searle challenges the core assumption of symbolic AI that a symbol system able to generate behavior indistinguishable from that of a person must have a mind." Specifically, the computer, in such a case, was not understanding Chinese as one would understand English symbols. In the first class, the discussion of T2 vs. T3 was a topic continuously brought up in the following classes. At first, I couldn't see a connection and struggled, but now I seem to have a better understanding. As T3 is the Turing test of sensorimotor skills, T3 should be a better approximation of the Turing test, because we would need sensorimotor skills to be able to make meanings out of symbols - unlike T2, which is only concerned with penpal activity, which is basically not enough. Please correct me if I'm wrong, as I believe I still need more understanding to fully grasp where all the concepts intersect.

    ReplyDelete
    Replies
    1. Maira, distinguish "reference" from "meaning." It feels like something to refer to something, but how and why feeling is needed to refer (or to DO anything) is the HP.

      Delete
  39. 5B - This reading better solidified what I had read in 5A. The same concepts were discussed, but in a manner that helped me better understand the concepts I was a bit fuzzy on in the previous reading (e.g., grounded vs. ungrounded). However, a part of the reading I wanted a bit more clarification on is the discussion of categorization. Specifically, a categorizer will be able to distinguish sensorimotor features. Still, this often does not come naturally - it takes learning through trial and error. I’d like some clarification or further understanding of this specific portion for cases such as a person who has memory deficits or was born blind. How would categorization and learning be altered in such cases?

    ReplyDelete
    Replies
    1. Hi Maira. I think it is more about the difference between categorical and iconic representations. Steven mentioned in a previous comment that categorical representations can be learned or innate feature detectors. On the other hand, an iconic representation is "the sensory projection onto your retina without (or before) being filtered by category feature-detectors." So I think your question about a person born blind would more directly affect iconic representation than categorical, because categorizing can be either learned or something you're born with, while the iconic has to first project onto your retina (I assume). I'm not sure how it would work, but they probably have some kind of mental representation. It probably won't affect them much in terms of a memory deficit, because people with memory deficits usually lose their episodic memory, not their cognitive ability to categorize.

      Delete
    2. The ”shapes” of objects can be projected onto many different sensory surfaces (retina, cochlea, skin surface, nose, tongue -- and even motor surfaces, if your movement can imitate the object’s shape). That’s all “iconic,” preserving a “shadow” of the object’s shape. But once some of its sensorimotor features are selectively abstracted to distinguish which objects belong to the category we call “tomatoes” and which ones belong to “persimmons,” it’s no longer the whole shape icon that matters but the distinguishing features that have been abstracted.

      Delete
  40. Something we do is understand meanings; computationalism is demonstrably (via Searle's periscope) inadequate at explaining this faculty. Computational ability seems to be a necessary condition for producing novel symbol combinations that remain interpretable, but a symbol system in itself is an insufficient explanation for the ability to interpret said system. The symbol grounding problem emerges from this result as a way to problematize the necessary conditions of initial meaning attribution.

    ReplyDelete
    Replies
    1. Please accept my apologies for the late comments

      Delete
    2. Kid-sib couldn't understand your comment...

      Delete
  41. 5B.
    I really enjoyed this reading. I think it linked together all the subjects we previously covered. It put the 5A argument about the symbol grounding problem in context with other topics such as the Chinese Room Argument and the Turing Test, which also allowed me to understand certain concepts in a broader scheme. I had never really understood the concept of "implementation-independence"; the software-hardware analogy helped. I now understand that an implementation-independent program is basically hardware-independent: the same program can be implemented and executed on different hardware.
    “A computational theory is a theory at the software level. It is essentially a computer program: a set of rules for manipulating symbols. And software is "implementation-independent." That means that whatever it is that a program is doing, it will do the same thing no matter what hardware it is executed on. The physical details of the dynamical system implementing the computation are irrelevant to the computation itself, which is purely formal; any hardware that can run the computation will do, and all physical implementations of that particular computer program are equivalent, computationally.”

    ReplyDelete
    Replies
    1. In what way is T3 (Kayla) "hardware-dependent"?

      Delete
    2. In what way is T3 (Kayla) "hardware-dependent"?

      First, grounding refers to the brain's capacity to pick out symbols' referents. A symbol system (T2) alone is not capable of this, because picking out referents is not a computational (implementation-independent) property; it is a dynamical (implementation-dependent) property.

      Software is "hardware/implementation-independent" in that it refers to what a program does (symbol manipulation); software does the same manipulation regardless of the hardware on which it is executed. In other words, the physical details of the system implementing the computation are irrelevant to the computation itself. Kayla (T3) is "hardware-dependent" in that she is a hybrid (analog/computational) system: the physical details of the dynamical system implementing her capacities are relevant, because they directly affect her sensorimotor interactions with everything she can interact with.

      Delete
  42. This week's readings helped me distinguish between "reference," "grounding," and "meaning." "A symbol system alone, whether static or dynamic, cannot have this capacity (any more than a book can), because picking out referents is not just a computational (implementation-independent) property; it is a dynamical (implementation-dependent) property."
    The referent of a symbol is an actual thing outside the symbol system. "Grounding" is the sensorimotor capacity that allows us to form connections with the referents of those symbols (e.g., a real apple and the word "apple"). "Meaning," however, is more than grounding and reference: it involves symbols standing for objects and describing states of affairs, together with a feeling of understanding that is internal to us.

    ReplyDelete
    Replies
    1. Jenny, words can refer to things, or features of things, or states of affairs (such as the cat being on the mat). Kayla is a grounded T3 robot. Her words have referents, and she can show it by interacting with the things they refer to. Her words have meaning if it feels like something to refer to those things with those words. Otherwise the words have only grounding, not meaning.

      Delete
  43. I’m not sure if this is the right way to interpret it, but I think since there is the easy and hard problem of consciousness, there should also be an analogous set of problems in grounding. And the easy problem of symbol grounding would be how we can explain the function of meaning, and the hard problem would be how something physical gives rise to meaning. And symbolic AI and connectionism both contribute some answers to the easy problem of symbol grounding.

    I think one of the biggest limits of connectionism is its lack of explainability for its behavior. It is difficult to understand why and how it produces the results because there is no clear, identifiable logic and rules for determining the structure of the neural network. In other words, it is not particularly good at rule-based processing like symbolic AI, which is necessary for a higher form of thought.

    ReplyDelete
    Replies
    1. Nadila, there’s only one hard problem, and that is to explain how and why organisms FEEL (rather than just DO). It’s the same problem whether you are asking how and why injury feels painful or how and why words feel meaningful.

      The problem of explaining how a neural net encodes the features it detects when it is learning categories is something different, but, inasmuch as it is part of the reverse-engineering of how organisms learn categories, and inasmuch as neural nets are a candidate mechanism that might be able to do that, it is part of the easy problem of cogsci, not the hard problem. The details of how neural nets encode and store what they encode and store while they are learning to detect the features that distinguish categories is also part of the easy part, but perhaps a slightly more “vegetative” part, in the way the physiology of the neuron’s action potential is more vegetative than cognitive. The important thing is that they are a mechanism that can help make a robot successfully learn to categorize. (Whether they can do it all the way up to TT-scale remains to be seen; probably not, at least not the current crop.)
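      For concreteness, here is a toy sketch in Python (with invented mushroom features; it is not being claimed as how brains, or even serious neural nets, actually do it) of supervised category learning: a single learning unit starts with no idea which input features matter and, through corrective trial-and-error feedback, ends up weighting the feature that reliably distinguishes members from non-members. What it "encodes" is just the learned weights; there is no separately stored, readable rule.

      # Toy supervised category learner (hypothetical features; illustrative only).
      # Each item is a feature vector; a "teacher" supplies corrective feedback.
      def train(examples, labels, epochs=50, lr=0.1):
          n = len(examples[0])
          w, b = [0.0] * n, 0.0
          for _ in range(epochs):
              for x, target in zip(examples, labels):
                  guess = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                  error = target - guess          # supervised, corrective feedback
                  w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                  b += lr * error
          return w, b

      # Hypothetical features: [has_gills, cap_is_red, grows_on_wood]
      mushrooms = [[1, 1, 0], [1, 0, 1], [0, 1, 1], [0, 0, 0]]
      edible    = [1, 1, 0, 0]        # 1 = "edible" in this made-up world
      w, b = train(mushrooms, edible)
      print(w, b)   # has_gills ends up carrying the positive weight that distinguishes the categories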

      Delete
  44. Sorry for the late comments, very busy week.
    The paper by Harnad entitled “The Symbol Grounding Problem” highlights the idea that the symbol grounding problem can be examined under two different lenses, symbolic and connectionist. The view I found most interesting is connectionism. It states that “cognition is not symbol manipulation but dynamic patterns of activity in a multilayered network of nodes or units with weighted positive and negative interconnections”. I think this view builds on the idea of treating AI systems as black boxes: the system learns and is able to recognize patterns, but no one really knows why a particular weight shifted at a particular time. No one can give an explicit prediction of what the backpropagation algorithm will change in the network; from the outside the changes look almost arbitrary. So it is not clear, once again, whether these models actually understand or are just manipulating squiggles and squoggles, producing the desired output without any real sign of symbol grounding or meaning.

    ReplyDelete
    Replies
    1. Étienne, neural nets ("connectionist") can be implemented dynamically (as real, parallel/distributed networks) or they can be simulated computationally as learning algorithms (see this reply above to Tess). What matters with neural nets is that they can really learn to categorize by detecting features in the input, not whether they are implemented dynamically or computationally (symbolically). What has to be dynamic, and cannot be replaced in T3 by a computational simulation, is (at least) sensorimotor function. (See also Searle’s incorrect “Chinese Gym” Argument, discussed in class and in Reading 3b.)

      About the (non)-problem of finding out how neural nets encode the results of their successful learning, see this reply above to Nadila. Cogsci’s reverse-engineering has succeeded once you have designed a mechanism that can pass TT, irrespective of whether you know how to localize backpropagation’s successful job.

      Delete
  45. 5B

    This reading on the symbol-grounding problem highlights the idea that we cannot define what meaning is, because we do not know where words or mental states get their meaning from, or how the symbols in a symbol system connect with their referents.
    In the description of robotics and categorization, there is something I cannot wrap my head around. I understand that category learning is based on trial and error, guided by feedback from the consequences of correct and incorrect categorization. However, I don’t understand the idea behind grounding categories in sensorimotor information: “the categorizer must be able to detect the sensorimotor features of the members of the category that reliably distinguish them from the non-members”. I believe some categories and concepts cannot be grounded through sensorimotor information. If you take the abstract category of things you would take with you in case of a fire, I don’t see how it can be grounded with the help of sensorimotor features.
    I might be misunderstanding the idea of “sensorimotor features of the members of a category”, if so please let me know.

    ReplyDelete
    Replies
    1. Hi Etienne,
      The category of “things you would take in a fire” (assuming by “things” you mean your passport, your phone, whatever is of value to you; I don’t know what people prioritise in fires, ahaha) is not the kind of categorization being referred to here. What is meant is identifying and grouping related stimuli based on, e.g., visual similarities, so that a label can be assigned to describe those stimuli overall.
      Take the category “dog”: there are various breeds, sizes, sounds, etc., but whether it is a chihuahua or a husky, there are features we identify across both that count towards the category “dog” (say, four legs, a tail, fur…).
      “Sensorimotor features of members” refers to what you receive as input visually, auditorily, through touch, etc., that is common across a number of objects.
      As for distinguishing non-members through sensorimotor features: say you were categorizing by “breed” and, in this example, you heard the dogs bark; the feature that would mark the chihuahua and the husky as non-members of the same breed could be the pitch of their barks.

      Delete
    2. Étienne, please read the other replies concerning the difference between grounding and meaning as well as the difference between direct sensorimotor grounding and indirect verbal grounding (“zebra” “peekaboo unicorn” “justice” “truth”)

      Hassanatou, you are closer, but “on the fly” definitions (which we would never bother to give a name or put in a dictionary), like “where I was and what I was doing at 2pm August 22nd 2022,” which we might text to one another, nevertheless describes a (singular) indirectly grounded verbal category. Since it only has one member it is a bit like an individual’s proper name, except it’s a name we would only use once. Your “fire list” is like that, but maybe we might use it more than once…

      About subcategories, see other replies about samoyeds, malamutes, rabbits and hare(s).

      Delete
  46. ".....but the crucial compositional property is missing". I am still struggling to fully grasp what this section was trying to explain of what is lacking in connectionism. My understanding is that the non-symbolic part of this hybrid model is based on sensorimotor inputs that we then choose to attribute a “label” to. While it is not made clear how many of these identifications are needed to provide us with all combinations that we would need to refer to every construct/object in the world, the idea is that not everything SHOULD be identified in this way to allow for links to be made between the “labels” (basic words for baseline identification) we decide upon? If that understanding is correct then this would be to allow for broader concepts to be characterized and relations to be made to move away from the arbitrariness.

    ReplyDelete
    Replies
    1. Hassanatou, a neural net (“connectionism”) is an algorithm that can learn categories by detecting their distinguishing sensorimotor features; it can be implemented computationally or as a real distributed set of physical units with activations and connections. It is not a TT-passer, just a candidate plug-in inside a TT-passer, able to take its inputs and detect the features in them that enable the TT-passer to categorize them by doing the right thing with the right kind of thing.

      This is not a matter of “symbolism” (computation, Strong AI) versus “connectionism” (neural nets). Neural nets are components in a grounded TT-passer. There are other components too, some of them computational, some of them dynamic (e.g., sensorimotor). The category-feature-detector component can be either computational or dynamic, but the sensorimotor components have to be dynamic.

      Kayla (T3) is a hybrid (computational/noncomputational) system.

      “Labels” are just names of categories. But naming is just one of the things humans can do with categories. Humans, and nonhuman organisms too, have categories that are not based on or dependent on naming the members; they can be based on eating or not eating the (vegetal!) members. And no one, ever, will “have” all possible categories. We organisms only learn the ones we need, which are an infinitesimal fraction of the (infinite) number of potential categories there are (but that’s not cogsci: it’s physics and metaphysics). Language will not name or define them all either (not enough time!) – but it can potentially name or define all the categories we terrestrial organisms could need, while the earth lasts…

      [We know what the arbitrariness of the shape of a symbol means by now: it does not resemble its referent (if it has a referent). I suggested, for reflection, pondering whether the symbols in a symbol system, say the words of a language (e.g., “apple”) can still be said to be “arbitrary” if the symbol is “grounded” by a shape-based sensorimotor connection (feature-detectors) to its referents?]

      Delete
  47. I apologize for my very late comment, but I just wanted to address a confusion I was left with after reading 5A. While reading about the Symbol Grounding problem, I kept returning to points I made in my comment on reading 4A last week. In my previous comment and in last week’s class, we talked about this idea that language is arbitrary in the sense that it is just a “symbol system.” Essentially, the words we use to label things are arbitrary, but as was mentioned in the reading last week, we can argue that the GESTURES associated with these words are less-so. What I don’t fully understand is how this relates to the symbol grounding problem, and more specifically, the concept of ‘grounding’ when it comes to natural language… could we argue that the gestured-language that we spoke about last week is what grounds these arbitrary labels? Is that the sensorimotor aspect of it? Or am I misunderstanding this idea?

    ReplyDelete
    Replies
    1. Anaïs, gesture (mime, imitation) is not language, though it may have been a precursor that led to language, because of the similarity between the gestures and what they were imitating. But sign language is not gesture, and the residual similarity is irrelevant in language.

      Symbol grounding is connecting words to their referents through category learning. The connection between word and referent through learned category feature-detectors is not the same thing as the similarity between mimed gestures and what they are copies of. But if language did begin in the gestural modality, there is a historical link. Between the two periods, however (G and L) gesture had to become conventionalized and arbitrary in shape, and the proposition had to be born. (What is a proposition?)

      Delete
  48. Upon reading 5B, I have a better understanding of what the sensorimotor aspect of symbol grounding entails. However, after reading several other comments and reflecting on many of the categories that I know and make use of every day, I fell upon the idea of emotions… I believe another student referred to the idea of 'faith' and I’d like to propose the category of 'happy' or 'happiness…' How is it that we can understand and categorize different versions of ‘happy’/‘happiness’ when this is a completely subjective experience– or rather, subjective FEELING. How can feelings be categorized?

    ReplyDelete
  49. A: Symbols need to be grounded because a group of arbitrary, meaningless symbols cannot gain meaning, no matter how interconnected they are, unless at least some of the symbols are connected to what they refer to. This reading suggests that this is done through a combination of grounding symbols whose referents are physically available and understanding the remaining symbols in terms of already-grounded ones. Furthermore, it seems from the previous comments that 'abstractness' is a continuous variable, possibly determined by how directly a word is grounded. For example, 'beauty' is more abstract than 'unicorn' because fewer of the words used to define beauty have direct referents. The reading also introduces 'iconic representations' and 'categorical representations,' which can be illustrated by the task of distinguishing red apples from bananas in a bowl of fruit. The iconic representation applies to a specific banana and incorporates every piece of information available to the senses about that banana, whereas the categorical representation can be boiled down to 'yellow,' as that is all that is necessary to distinguish a red apple from a banana.

    ReplyDelete
    Replies
    1. I understand that every word except the one being defined has to be grounded, but to my understanding, not all of them have to be grounded to a physical referent. Furthermore, would it be correct to say that 'yellow' and 'curved' would be enough to serve as a categorical representation for a machine that was built to distinguish red apples and bananas? Or would the categorical representation consist of all of the dimensions on which the bananas and apples differ?
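      As a small illustration of the iconic/categorical distinction (a toy Python sketch; the fruit features and thresholds below are invented, not taken from the reading): the iconic representation is the whole sensory record of this particular banana, while the categorical representation keeps only whatever features suffice to distinguish members from non-members in the current context of confusable alternatives. So “yellow” alone is enough against red apples, but widening the context (adding lemons) forces another feature, such as curvature, which speaks to the question above.

      # Illustrative only: iconic vs. categorical representation (invented features).
      iconic_banana = {            # everything the senses register about THIS banana
          "colour": "yellow", "length_cm": 18.2, "curvature": 0.31,
          "smell": "sweet", "blemishes": 3,
      }

      def categorize(fruit, confusable_with):
          """Use only the features needed to distinguish the confusable alternatives."""
          if confusable_with == {"red apple"}:
              return "banana" if fruit["colour"] == "yellow" else "apple"   # one feature suffices
          if confusable_with == {"red apple", "lemon"}:
              if fruit["colour"] != "yellow":
                  return "apple"
              return "banana" if fruit["curvature"] > 0.2 else "lemon"      # a second feature is needed
          return None

      print(categorize(iconic_banana, {"red apple"}))            # banana
      print(categorize(iconic_banana, {"red apple", "lemon"}))   # banana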

      Delete
  50. B: This reading elaborates on the contents of the original symbol grounding paper and goes further in connecting it with intelligence and consciousness. It helped me understand that the reason we can only learn categories by trial and error or verbal description is that symbols have no inherent meaning and the external world largely does not have pre-established categories for us to discover; rather, categories are defined by parameters that matter to the person categorizing. The discussion of robotics lets us phrase the question in a way that can be researched by cognitive science: it challenges us to break down the process of symbol grounding and its role in intelligence, and to confirm or reject our hypotheses based on whether the robot can behave intelligently. However, it makes me wonder how a robot's development of the ability to process increasingly abstract concepts would differ from a human's, and whether its development would need to mirror that of children, if our goal is only to create an intelligent robot.

    ReplyDelete
    Replies
    1. Elena, the TT and learning was discussed in Week 2a and 2b.

      A successfully reverse-engineered T3 robot (Kayla) would not just have to be able to behave “intelligently,” but indistinguishably from any of us.

      Delete
    2. I understand the idea that Kayla is capable of anything any of us can do, and that my question is very similar to one raised in the Turing paper. However, my question is whether the number of grounded terms a child understands affects their ability to understand abstract concepts, and how many directly grounded terms are necessary to be able to grasp abstract concepts.

      Delete
  51. After yesterday's class I just wanted to attempt to summarize/clarify: what is meaning in the case of language?

    Is it correct to say that meaning is what it FEELS like to refer to a word's referent, encompassing language, categorization, identification, and reference?

    ReplyDelete
    Replies
    1. Laura, yes, that's it (but kid-sib's uncertainty would not be reduced by saying just that!).

      Delete
  52. 5A (Sorry it’s late!)
    I found the description of a hybrid bottom-up and top-down system for symbol grounding very helpful, as it grounded (ha) the concept in terms I was familiar with from previous psych classes. Like Mathilda mentioned above, I was also tempted to propose the various models of categorization (prototypes, exemplars, etc.), but the replies were very helpful in putting me off that line of thought. It is the combination of sensorimotor learning and iconic representations of things that makes for a promising answer to the symbol grounding problem, and those models do not account for both in equal measure.

    ReplyDelete
    Replies
    1. Julia, the most important ingredients are the unsupervised and supervised learning. (Why?)

      Delete
    2. Unsupervised learning is mere exposure. Our ability to do unsupervised learning is based on the assumption that the input's "affordances" are already salient enough and that with the right categorization mechanism, we will be able to "ground" a symbol with its referent from repeated exposure without the help of corrective trial-and-error feedback. This is true in most cases, except in context-dependent categorization. In other words, unsupervised learning ceases to be helpful when different ways of clustering the same sensory affordances are correct. To learn the meaning of a word, we also need corrective feedback as to how such a word is used across different ambiguous contexts. All this is to say both supervised and unsupervised learning contribute to Kayla's indistinguishability.
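      A toy way to see why both kinds of learning are needed (a Python sketch with invented items; it is not a model of the actual mechanism): mere exposure can reveal more than one equally good way of clustering the same inputs, and only corrective feedback on doing the right thing picks out which clustering is the right one in a given context.

      # Illustrative only: the same inputs afford more than one clustering (invented features).
      items = [
          {"name": "cherry", "colour": "red",    "size": "small"},
          {"name": "apple",  "colour": "red",    "size": "large"},
          {"name": "lemon",  "colour": "yellow", "size": "small"},
          {"name": "melon",  "colour": "yellow", "size": "large"},
      ]

      def cluster_by(feature):
          groups = {}
          for item in items:
              groups.setdefault(item[feature], []).append(item["name"])
          return groups

      # Unsupervised (mere exposure): both partitions fit the inputs equally well.
      print(cluster_by("colour"))   # {'red': ['cherry', 'apple'], 'yellow': ['lemon', 'melon']}
      print(cluster_by("size"))     # {'small': ['cherry', 'lemon'], 'large': ['apple', 'melon']}

      # Supervised (corrective feedback on a task, e.g. "which ones fit in a lunchbox?")
      # is what selects the size-based grouping as the one that matters in this context.
      feedback = {"small": "fits", "large": "does not fit"}
      for item in items:
          print(item["name"], "->", feedback[item["size"]])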

      Delete
  53. 5B
    The clarification of the definition of meaning was very helpful in this reading. Describing meaning in symbols as something intrinsically tied to feeling, because it results from sensorimotor exploration, is a very succinct and comprehensible way to put it. However, some people above mentioned hieroglyphics as a symbol system in which the shapes are not arbitrary. While written language was described in the replies as parasitic on the spoken form, and therefore not decisive here, it does raise a question for me. A tally-mark system for counting is, I’d consider, a symbol system, as it has syntactic rules (four marks, and then the fifth across), and yet it literally depicts the concept it refers to, more so than the numerals 1, 2, etc., do. My instinct, however, is to say that it’s a moot point: tally marks don’t comprise a language so much as a means of counting, and grounding symbols isn’t necessary for computation the way it is for cognition, since in computation the shapes are arbitrary (even if their shape happens to coincide with the meaning we attribute to them in our heads). It’s just something interesting that came to mind.

    ReplyDelete
    Replies
    1. Julia, good point. Tally-marking is iconic, and it helps with counting (but it’s not too good for naming increasingly big numbers, nor for theorems, formulas and proofs). (It’s still irrelevant, though, what notation you use to make the statement “5 + 5 = 10” – and all mathematical propositions are also English (and French and Piraha).)

      Delete
  54. 5A
    T2 is not grounded because, unlike T3, it lacks the ability to interact with the world and learn the connection between the string “apple” and what an actual apple feels like, tastes like, what one can use it for, etc. – in other words, the category it belongs to. Connecting the arbitrary string “apple” to the real-world objects that are apples is called grounding. Even if a T3-passing robot were able to do symbol grounding, that would not automatically mean that the string “apple” means something to the robot. One question I have is how we determine which characteristics/traits of apples are required to be part of the category “apple.” Is the only criterion for a successful category that it distinguishes members from non-members?

    ReplyDelete
    Replies
    1. Tyler, you have summarized well. Now, what are the three ways to learn a new category, and how are they related to one another?

      To ground a (content-) word (what is that?) is to “know” what category it refers to. This means being able to recognize which things are and are not its members, and knowing what to DO with its members. For “apples,” this means being able to identify them, hold them, eat them – and to name as well as describe them.

      The members of a category are distinguishable from the non-members (that is, distinguishable from the members of other categories) by the category’s distinguishing features (“invariants”).

      Distinguishing features can be sensory, motor, and sensorimotor
      (including “affordances”: what is an “affordance”?) Features can also be categories, with names and affordances.

      Explain how we distinguish edible from inedible mushrooms through the three ways in which we can learn categories.

      The features of a category are approximate: what does that mean?

      How are categories and their features related to information?

      The features that distinguish apples from bananas may not be the same features as the ones that distinguish apples from tennis balls.

      The set of non-apples from which you have to be able to pick out apples (and do the right things with them) is called “the context of confusable alternatives.”

      The context of confusable alternatives for a category is not fixed. It can grow. Its distinguishing features may need to be revised, tightening the approximation, reducing the uncertainty.

      Delete
    2. The three ways to learn a new category are unsupervised learning (basically just exposure), supervised learning (trial-and-error category learning with positive or negative feedback), and learning from verbal description of the category's distinguishing features; through these you come to detect the invariants of your category.
      We distinguish between edible and inedible mushrooms, first, by being exposed since birth to a variety of information pertaining to mushrooms. That knowledge is unlikely to be sufficient to distinguish reliably between the two similar categories, however, and reward learning is needed to solve this problem. You need not eat a bunch of poisonous mushrooms to learn this way (although that would help); any feedback on what edible mushrooms look like, where they grow, and what texture they have will do. It is only through this process that you can learn the invariants of the two categories. The features of a category are approximate in that they only need to distinguish its members from the confusable alternatives encountered so far, and may need to be revised.

      Delete
  55. 5B
    Thanks to the examination of the symbol grounding problem in this reading, I now understand that the higher (symbolic) level of brain function can be derived from the lower (sensory) levels, and that, by implication, we should seek to model cognition in this way, from the bottom up. This reading (and the discussion posts) also partly answered my previous question by clarifying that categories are approximations and that mirror neurons play a critical role in the successful communication of categories to others.

    ReplyDelete
    Replies
    1. Tyler, we don’t know what mirror neurons do. We just know that their activity is correlated with when we are using our mirror capacities: What are those? And what role do they play in communicating categories to others?

      Delete
    2. Our mirror capacities allow us to do things like imitation learning and, more broadly, to understand how others feel by reproducing within ourselves the processes they are displaying. This helps in the supervised and unsupervised stages of category learning by providing a template for how to act on a new object, based on how others have been observed to act on it.

      Delete
  56. In Turing's article (reading 2a), I wrote my skywriting about the author’s comment on creating a machine with the mind of a child. Harnad replied that what the computer is missing is the sensorimotor capacity to ground its symbols. At first it wasn't clear to me why, but now I can understand why sensorimotor capacity is needed in order to ground symbols. This article (5B) was also extremely helpful for understanding that symbol grounding does not by itself involve mental states: one does not need to ‘feel’ something in order to ground it in a physical thing.

    ReplyDelete
  57. 5a. Categories evolve and depend on individual assimilation of them. They can be learned either directly through experience or indirectly by verbal intake, but as long as people categorize coherently then they are able to interact with others. All categories are abstract, and a T3 Turing machine could make use of sensory and motor abilities to correctly interact with categories in the world. However, we can’t know if the processes to form meaning in our minds require consciousness or not.

    ReplyDelete
  58. 5B. The reading touches on how our ability to discriminate different inputs depends on "iconic representations" of those inputs. I struggle to understand how discrimination can occur without identification, because I thought we would need to identify certain characteristics or features in order to tell items apart, especially if we are using icons as references. Is it because iconic and categorical representations are sensory and are not themselves symbols? I do understand the point that icons are “too unselective,” because there are too many of them, and that they are not symbols because they have only causal connections.

    ReplyDelete
  59. 5A. I do find the symbol grounding problem reading confusing. From a basic perspective, what I understand the symbol grounding problem to be is that it asks how symbols get their meanings and then tries to break down what meaning is. What further complicates the problem is the question of what consciousness is, and how computationalism bears on it if cognition is taken to be computation. The problem of consciousness arises because, in order for words/symbols to have meanings, they need referents, and picking out the needed referent seems to require a conscious mind using its internal resources.

    ReplyDelete
  60. 5b: In the Wikipedia article, I find Woods’ idea of procedural semantics in natural language evocative. For example, Woods stipulates that nouns are, by definition, a procedure to generate instances. Where he goes wrong, according to our understanding of symbols as inherently ungrounded, is by saying that nouns (or prepositions or verbs) have a formal meaning integrated into the structure of the word (the symbol). Woods says that a computer can reason about language in the same way as humans, which we obviously know is false because computation is insufficient for cognition. This Wiki article reinforces the idea that the mind is required to ground symbols.

    ReplyDelete
  61. Something that is often mentioned in class is the idea that it feels like something to hear that "the cat is on the mat," and I find that interesting especially in the context of symbols, because this quality does not carry over to something like a conventional computer even if you provide it with the same input. In other words, something has to happen for us to enter this felt state, and that involves grounding the statements we hear. When we talk about computation and the brain as a computational system using some form of symbol manipulation, the comparison for me always went back to mathematics, which helped me understand what it means to say that the brain is a computational system. However, this seems to go against the ideas here: language requires grounding, whereas mathematics does not seem to, so the one-to-one comparison limited my ability to fully understand the idea.

    ReplyDelete
  62. The separation between what a word refers to and what a word means is really interesting to me because it introduces ambiguity: I might have a different understanding of a given word or statement than someone else. While for many things in this world there is a right and wrong answer, in questions of what one should feel in response to a statement, or how one should understand it, there isn't really a right or wrong answer, since everyone approaches these matters differently. This creates a problem for trying to mimic this quality in machines, or even explain it, because there isn't one central idea to refer back to. I feel like it would be interesting to explore this ambiguity further.

    ReplyDelete
