Friday, September 9, 2022

9a. Pinker, S. Language Acquisition

Pinker, S. Language Acquisition. In L. R. Gleitman, M. Liberman, and D. N. Osherson (Eds.),
An Invitation to Cognitive Science, 2nd Ed. Volume 1: Language. Cambridge, MA: MIT Press.
Alternative sites: 1, 2.



The topic of language acquisition implicates the most profound questions about our understanding of the human mind, and its subject matter, the speech of children, is endlessly fascinating. But the attempt to understand it scientifically is guaranteed to bring on a certain degree of frustration. Languages are complex combinations of elegant principles and historical accidents. We cannot design new ones with independent properties; we are stuck with the confounded ones entrenched in communities. Children, too, were not designed for the benefit of psychologists: their cognitive, social, perceptual, and motor skills are all developing at the same time as their linguistic systems are maturing and their knowledge of a particular language is increasing, and none of their behavior reflects one of these components acting in isolation.
        Given these problems, it may be surprising that we have learned anything about language acquisition at all, but we have. When we have, I believe, it is only because a diverse set of conceptual and methodological tools has been used to trap the elusive answers to our questions: neurobiology, ethology, linguistic theory, naturalistic and experimental child psychology, cognitive psychology, philosophy of induction, theoretical and applied computer science. Language acquisition, then, is one of the best examples of the indispensability of the multidisciplinary approach called cognitive science.

Harnad, S. (2008) Why and How the Problem of the Evolution of Universal Grammar (UG) is Hard. Behavioral and Brain Sciences 31: 524-525.

Harnad, S. (2014) Chomsky's Universe [L'Univers de Chomsky]. À bâbord: Revue sociale et politique 52.

85 comments:

  1. The fact that there are conditions, such as stroke or Specific Language Impairment, that affect language without affecting other aspects of general intelligence or cognitive functioning suggests that language development is indeed a distinct component in our brains and not simply part of general intelligence. In addition to this point, there are cases in which intact language coexists with severe retardation or other cognitive difficulties. It makes sense that language would be distinct, as there are specific regions devoted to language in the left hemisphere of the brain, and because we’ve undergone a large trade-off in the form of increased choking risk due to the descended (lower) position of our larynx. Surely, language vocalization was important.

    Replies
    1. Yes, it's clear language is special, both neurologically and genetically, but it isn't an independent "organ." It has many components, as Pinker & Bloom pointed out. All together, they produce a capacity unique to humans, so it makes some sense for Turing to have considered it on its own, with T2; the universal expressive power of propositions might also make it seem like an autonomous capacity. But the need for groundedness reminds us that it is not really autonomous from sensorimotor capacity, any more than learning itself is. (That's why I think T3 capacity is needed to pass T2.)

    2. Notice, though, Pinker's examples of learning here are all examples of OG, not UG, which is also what is ignored in P & B. What's going on?

    3. This should be expected because UG is impossible to learn. Because UG mistakes are not naturally made, there is no negative evidence of UG. Negative evidence is essential here, because there is no way to identify the boundaries of a rule when provided only positive evidence. Therefore, UG is just unlearnable. In contrast, OG is possible to learn because people do grow up making mistakes (creating negative evidence) that are corrected over time, through both supervised learning (being told something said was incorrect) and unsupervised learning (noticing the patterns in what is and is not said).

      P&B likely ignored UG because it is difficult to address since there is so little known about it. It’s known that UG cannot be learned for the reasons stated above, so it must have evolved. UG evolution is difficult to cover because there is no consensus on how or why UG evolved.
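The positive-evidence point can be made concrete with a toy sketch (the mini-"grammars" and word-order patterns below are invented purely for illustration, not a model of real syntax): any amount of positive-only data leaves a narrower and a broader hypothesis equally alive, while a single negative example decides between them.

```python
# Toy illustration: positive examples alone cannot decide between a
# narrower and a broader hypothesis, but one negative example can.

h_narrow = {"SVO"}           # hypothesis: only subject-verb-object is allowed
h_broad = {"SVO", "VSO"}     # hypothesis: verb-first order is allowed too

positive_data = ["SVO", "SVO", "SVO"]   # everything the learner ever hears

def consistent(grammar, observed):
    # A hypothesis survives if it generates every sentence observed so far
    return all(s in grammar for s in observed)

# No amount of positive-only data eliminates either hypothesis:
assert consistent(h_narrow, positive_data)
assert consistent(h_broad, positive_data)

# A single piece of negative evidence ("VSO is ungrammatical") does:
negative_example = "VSO"
survivors = [g for g in (h_narrow, h_broad) if negative_example not in g]
assert survivors == [h_narrow]
```

Since children never produce or hear UG violations, the analogue of `negative_example` never arrives, and hypotheses broader than the true grammar can never be ruled out by learning.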

  2. Benjamin Whorf (1956) argued that the categories and relations that we use to understand the world come from our particular language, such that speakers of different languages conceptualize the world in different ways. This seems intuitive to think about, as people do think about the world in different ways due to differing cultures, but it is not language itself that produces those different ways of seeing the world. Different languages are not distinct forms of thinking; they are different forms of communicating the same things. The symbols (words) are arbitrary and are interpretable to us as something. Those interpretations can be the same in different languages with different symbols, and the number of units/words used doesn’t have to be the same in order for the meaning to be the same. There isn’t anything anyone can say in one language that they cannot say in another.

    Replies
    1. Alexander, correct. But most words are content-words, with referents (like “apple” or “swimming” or “true”). And most referents are categories (kinds), not individuals. And categories have features that distinguish their members from other categories with which they could be confused. And all (non-formal) categories are “underdetermined” (what is that?) by their features, which are approximate (like those of “gold”: how? why?).

      So languages can differ in which categories they lexicalize (i.e., name, and put in their dictionaries), and even in the features (also potential categories) by which they pick out and define their categories. So although it remains true that all languages can say everything that can be said in any other language, they can’t do it by translating word for word (as you correctly point out). But this leaves room for a true (though weak) Whorfian effect:

      The “Strong Whorf/Sapir Hypothesis,” that language determines your perception of reality, is wrong. But the “Weak Whorf/Sapir Hypothesis,” that “language” can influence your perception, is correct. And learned “categorical perception” (CP) effects (what are those?) are examples.

      But don’t confuse:

      (1) learned CP effects on perception (“the ‘edible’ mushrooms are red or orange and striped or dotted, and the ‘inedible’ ones are purple or brown and short-stemmed or pointy-capped”), which -- once you have practiced and acquired expert, automatized feature-detectors -- really do filter your input to make the edible ones “pop out” perceptually

      with:

      (2) the simple, informative, uncertainty-reducing effects of true propositions (or propositions you assume to be true): “Lula defeated Bolsonaro in the Brazil election,” or “there is only one even prime number,” which will influence how you feel and what you believe (as all information does), but not how you perceive.

    2. “Underdetermined” in reference to categorization means that the exact features distinguishing what is and what is not a member of a category are unclear. Categorizing something is an approximation, and usually requires repeated trial and error to understand the boundaries of a category. The features that you use to categorize may be continuously modified when presented with new stimuli and are therefore not predetermined. The number of features that you may be presented with and will have to incorporate into your understanding is infinite.

      In the “gold” example, we are approximating the boundaries of what is and is not considered to be a shade of gold. We do not set an exact shade to mark the upper and lower boundaries of the colour, but rather make an approximation when we are confronted with a new shade.

    3. Kimberley, good summary. Just a few details to fix.

      In cogsci, to “categorize” is to DO the right thing with the right kind of thing. Doing the wrong thing has consequences (eating poisonous mushrooms).

      One of the most important things human cognizers DO with categories is to name them correctly. If you mis-name them in a definition, you have not learned (or taught) anything, or, worse, you have learned/taught something that is incorrect. (Think of edible mushrooms again.)

      To do the right thing, the categorizer must detect and abstract the features that distinguish the members of the category from the non-members well enough to resolve the uncertainties encountered and do the right thing.

      An exact (complete) definition of the distinguishing features of a category is only possible in mathematics and formal definitions. An “even number” is a number that is divisible by 2, and that’s never going to change in the Platonic world. (The “youngest student” in this year’s McGill Psyc 538 class is the one born most recently – though someone could have entered the wrong date, at birth, or at registration.)

      The definitions of most of the words in a dictionary (or encyclopedia or textbook), hence the lists of their distinguishing features, are not exact; they are approximate. The defining features can sort members from non-members well enough, reducing uncertainty close enough to zero to work, for most cases so far. But not all possible cases; and not necessarily forever. (Think of gold and fool’s-gold. But that example, Kimberley, was about the substance “gold,” not the color “gold”! And most of our categories are not boundaries along a physical continuum like light-frequency. They are more like subsets of a multidimensional feature-space, from which we abstract the features that distinguish the categories we need to distinguish to be able to “do the right thing.”)

      So most categories in the world are “underdetermined” by the distinguishing features we know so far. Both our direct sensorimotor feature-detectors and our indirect verbal feature-definitions of categories have been close enough so far, but, in principle, and in practice, anything could one day turn out to be “fool’s gold” – even when it comes to the distinguishing features of “edible mushrooms.” The good news is that the approximation can always be tightened by adding more features.
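This kind of supervised, trial-and-error feature learning can be sketched minimally (the "mushroom" features and the hidden edibility rule below are invented for illustration, and the update rule is a plain perceptron, just one simple stand-in for an acquired feature-detector):

```python
# Toy supervised category learner: corrective feedback on miscategorizations
# gradually tunes feature weights until members and non-members are sorted.
import random

random.seed(0)
FEATURES = ("red", "striped", "purple", "pointy")

def edible(m):
    # The world's rule, unknown to the learner: edible iff red AND striped
    return m["red"] and m["striped"]

def sample():
    # A random mushroom: each feature present or absent
    return {f: random.random() < 0.5 for f in FEATURES}

weights = {f: 0.0 for f in FEATURES}
bias = 0.0

def predict(m):
    # The learner's current feature-detector
    return bias + sum(weights[f] for f in FEATURES if m[f]) > 0

# Trial and error: categorize, suffer/enjoy the consequence, adjust
for _ in range(5000):
    m = sample()
    if predict(m) != edible(m):          # miscategorization has consequences
        delta = 1.0 if edible(m) else -1.0
        bias += delta
        for f in FEATURES:
            if m[f]:
                weights[f] += delta

# After practice, the learned detector sorts new cases reliably
errors = sum(predict(m) != edible(m) for m in (sample() for _ in range(500)))
```

Note that the learned weights remain an approximation over the cases sampled so far; a new kind of "fool's mushroom" with a previously irrelevant feature could still require adding features, just as the comment above says.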

  3. I think the Learnability Theory is an intuitive theory explaining what language acquisition is, but the third component, the learning strategy, conflicts with the theory of universal grammar, which I believe to exist. This component of the Learnability Theory suggests that the learner, using information in the environment, tries out hypotheses about the target language. However, given the poverty-of-the-stimulus argument in favour of UG, the young learner would not have enough negative feedback to reject their own “hypotheses” about the language. By negative feedback, I mean specifically errors in grammar; because adults make so few errors, there would not be enough opportunity to learn through direct feedback in this way.

    Replies
    1. I agree with you that the Learnability Theory cannot single-handedly explain how we develop language, given the over-arching point that UG is innate, but I do think that components of language that fall outside of UG, i.e., OG/syntax, can be learned following this theory.

      Generally, though, I do think the Learnability Theory is too uncompromising in the way it outlines language acquisition, as it places too many conditions on learning and puts too much emphasis on negative evidence, which we know children can learn without. Perhaps if it were combined with another theory of language acquisition, it could be sufficient.

      Also, I think the Learnability Theory is more appropriate for L2 (and up) learning, especially when it is undertaken later in adult life. As we know, one of the main difficulties in learning a language later in life is that it is hard to really achieve "nativeness" once we are past the critical period. The Learnability Theory explains how we can acquire language under those conditions, whereas everything else in language acquisition (which does allow a user to be "native") is explained by something more reminiscent of the poverty of the stimulus.

  4. I found the section of this reading describing linguistic input to children particularly interesting, as it calls back upon themes of learning that we studied previously. On the one hand, children use positive evidence which corresponds to the information about which strings of words are grammatical sentences of the target language. This resembles the notion of unsupervised learning –or mere exposure– where learners extract information about the invariant features of categories from examples of members (and non-members) in the world. During language acquisition, children use examples of grammatical sentences to extract the rules of grammaticality, in the same way that humans use examples of category members to extract the ‘rules’ (invariants) of that category. On the other hand, children also use negative evidence, corresponding to information about which strings of words are not grammatical sentences in the language, in the form of feedback about their own mistakes. Supervised learning can be placed in parallel to this form of language input, since it corresponds to category learning through trial and error coupled with corrective feedback from the consequences of these trials. In other words, children use verbal feedback about their grammar mistakes to acquire language, in a similar way to how humans use feedback from consequences of their categorization to learn categories.

    Replies
    1. Amélie, that’s all correct, but never forget to make the UG/OG distinction. OG is learned and learnable the ways other categories are learned: by mere observation (unsup) and imitation, or by trial-and-error and correction (sup) or by verbal instruction. But UG cannot be learned that way: Why not?

      And many learnable categories are only learnable sup, not unsup. Why not?

    2. It's been stated many times that UG cannot be learned because UG errors are not made by children or adults, which means there is no way to correct them. As to why some categories are only learnable through supervised learning, I thought of the example of learned CP for rabbits and hares we discussed in class. Rabbits and hares are not easily distinguishable for the average person, but those who have learned through trial and error can ignore the shared, salient features (four legs, furry) and focus on the differentiating ones. Through unsupervised learning, or mere exposure to rabbits and hares, it would be hard for people to learn the categories.

    3. To elaborate on these points:
      Negative evidence is crucial: learning cannot be done through positive evidence alone. This is why UG is unlearnable; while every UG-compliant utterance is a case of positive evidence (which should be all of them, unless we're in a lab at MIT), there is an absence of negative evidence.
      Contrast this with OG, where children have both positive and negative evidence. For example, children might learn (unsupervised) that adding an "-ed" to the end of a word puts it in the past tense. So the child might say "I runned to my mum!" This is called overregularization (they apply a regular rule to an irregular verb), and kids do it all the time, but they get corrected (supervised learning/negative evidence).
      A generalization about positive + negative evidence might be to say that positive evidence is like unsupervised learning, and negative evidence is like supervised learning (maybe this is a bad analogy, but it makes sense when I think about language learning- let me know).
      In summary, UG cannot be learned due to the absence of negative evidence. If every utterance is UG-compliant, the child has no counterexamples from which to learn.

    4. Teegan, supervised learning needs both positive and negative evidence (both members and non-members, with corrective feedback; otherwise you cannot find the feature or rule that distinguishes them). Since producing/perceiving (speaking/hearing) language is a mirror capacity, negative evidence usually takes the form of making a production error in OG (“It is me”) and getting negative feedback (correction: “no: ‘It is I’”). (These corrections can sometimes be pedantic, and the OG rule can change, making what was formerly an error correct, as actually happened with this last example.)

      And some OG rules are so simple that you can learn them through unsupervised learning alone, hence from positive (perception) evidence alone (hearing “walked” but never hearing “goed,” only “went”; so you don’t need to have said “goed” and been corrected for it; you can just imitate “went.”).

      These are all trivial points for OG, just as they are for vocabulary learning. They have only created confusion when the OG/UG distinction has been ignored and OG was treated as if it were UG, and then far-fetched claims were made about OG learning being “too fast,” or negative examples being “too few.” For UG, negative examples are not “too few:” They are never produced or heard (by nonlinguists).

      That’s not the kind of help UG linguists needed to fend off UG sceptics. (Similar far-fetched things have been said about the evolution of UG.)

  5. The part of this reading about context (in the input section) could be interesting to place in perspective with our lecture/readings about mirror capacities. Recall that mirror capacities could notably be at the origin of language evolution, particularly that of language understanding and production. This paper states that interacting with live human speakers who talk about the here and now allows children to be “mind-readers,” guessing what the speaker might have meant based on the context of the interaction they are witnessing or are a part of. Personally, this gave me a clearer idea of one way in which mirror capacities tie into language acquisition.

    Replies
    1. I also found the context part of the reading interesting, and I like how you connect it to mirror capacities. The section emphasizes the need for context and interaction during language exposure. A child cannot learn language from the radio or television, because these media are passive, offering no feedback. Through live human interaction, children can use context to predict the meaning behind what is being said. Although it makes sense to me that witnessing a live human interaction is much different than simply hearing a radio broadcast, I am more confused about the difference between live human interaction and watching TV. How is watching TV so different from passively witnessing a live human speaking? To learn a language, I think children need contingent responding, in which a child's behaviour results in a response that depends on the given behaviour. But if a child is not provided feedback from a human interaction (e.g. if it watches passively as adults are talking amongst themselves), is this any better than watching TV?

    2. I also found the “Input” section of the sky reading the most interesting one. As an attempt to answer your question, Josie, I think the reason children learn the most when watching in-person interactions, despite being only passive watchers, is the context (the words that are used) in such interactions. By context I mean that children are constantly surrounded by their parents, other family members, and perhaps friends – that is, the same group of people, who tend to use the same kinds of words and to be in the same types of situations. For example, in real life a child will see the same type of situation many times, such as their parents discussing what to make for dinner. So if a new word is used in a context the child is already familiar with, it will be easier for the child to learn that word, having already lived through that type of situation many times. On the other hand, when a child turns on the TV, it is unlikely that the situation presented will be similar to other times she was watching that program, so it will be harder to understand new words when they are presented. In short, my hypothesis for why children learn more from human interactions is that these situations are more familiar and happen more frequently than the ones they watch on TV.

    3. There are many different, interesting context effects:

      1. Verbal context (meaning the other words in the text or discourse in which a word or sentence is written or spoken). “John and Mary were the only ones there. She spoke first.” From context you know that “she” is Mary.

      2. The surrounding physical setting in which a word or sentence is spoken. “Please give me the plate nearest to you.”

      3. “Deictic” words, which only have a referent in a given verbal or physical context: this, that, here, there, he, she, now, me, you…

      Some of these only make sense if spoken in person. Others work both in person and on TV (but not radio).

      Pointing is an important supplement for pantomime, but only works in person (and to a degree on TV, but not on radio). The eventual substitutes for pointing once you have language are referring words and descriptions.

      This is true in writing and on radio too: “I mean her, not her” is resolved by pointing if they’re both here and visible, but you need to know the names if not.

      Yes, familiarity and frequency help too.

  6. In this paper, Pinker explores the problem of language acquisition as an essential “black box” in cognitive science. According to him, this black box is somewhat accessible, as we are aware of its inputs and outputs. Pinker approaches the problem by hypothesizing a ‘language-learning algorithm’ that enables children to get a grasp of grammatical structure, in the context of the ‘poverty of stimulus’ argument (among others), which suggests a complex innate mechanism for language acquisition. I find the use of the word ‘algorithm’ telling of Pinker’s argument and his tendency to avoid the problem of UG in his explanations. According to him, transforming sounds and situations (the input) into a syntactic structure (the output) is in part innate (even though innate knowledge isn’t sufficient), but he doesn’t seem to address much of this innateness component.

    Replies
    1. Mathilda, what Pinker says is vague, but if the algorithm is UG, we're back where we started, with the question of its evolutionary origins.

  7. It is interesting that the reading mentions that the human vocal tract evolved specifically in us. Our close relative, the chimpanzee, has no homologous language. This is rather peculiar: according to the two reasons proposed, what tools could our ancestors have developed that were so advanced that the blind watchmaker made language a uniquely human trait?
    Since language requires many precursor capacities, maybe we had already acquired those. A few thoughts about what they might include:
    1. categorical learning
    2. imitation
    3. show and tell
    I am wondering why all bets are off for a second language; it seems obvious that this is the case, but I would like to know what the variables are.

    Replies
    1. Hi Jenny,
      From my understanding, the rules of UG are very complex and convoluted; I am not sure there is a finite list of UG rules available (except maybe in MIT linguistics labs?). There are some examples of non-UG-compliant sentences (*John is easy to please Mary) that the professor discussed in class, but I don’t think I could come up with my own example from scratch, especially in my second language. In that vein, I don’t think that we are aware of explicit UG rules when using our maternal language or a second language (though we can tell when a sentence violates UG in our maternal language).

      What distinguishes UG from OG is the presence of negative examples in daily life. Children make OG errors all the time and are corrected. Furthermore, through the study of linguistics, OG rules of syntax can be made explicit, through syntax trees for example. There is no such option for UG, I believe.

      (Also regarding the Pinker reading overall, I found it an interesting discussion of the process of language acquisition. It has been mentioned many times in this thread already, but Pinker conflates UG and OG which is important to keep distinct when discussing acquisition as one is innate (UG) and one is learned (OG)).

    2. Jenny, in a second language (L2), hearers can make grammaticality-judgment errors on UG- sentences, and speakers may possibly even make some UG- production errors.

      To learn UG I’m afraid you have to take a syntax course (or try to find some teaching materials on the web).

      OG rules are well known and codified, so OG- errors have formal explanations. No production errors are made in UG by L1 speakers; so if a UG- error is spoken or written, it is likely to be by an L2 speaker; same for missing a UG- error when heard.

      If we are not linguists we are only aware that a UG- error in L1 sounds wrong, like an OG- error, but we don’t know what rule it violates, whereas with an OG- error we either know the OG rule violated, or we can find out from any language teacher.

    3. Kayla, you’re mostly right, but the way to find out the rules of UG is to take a syntax course -- maybe even two!

      The reason UG rules don’t take the form of a list, as OG rules do, is partly because linguists are reluctant to call them “rules” at all. Sometimes they are called “principles,” sometimes “constraints.” (But they are always underdetermined, and always subject to POS – no negative examples spoken or heard, hence unlearnable. They are “structure-dependent,” so you need to learn what structures they are dependent on.)

      So it’s not as simple as saying “It should be ‘it is I’ and not ‘it is me’ because ‘is’ takes a predicate noun or adjective or pronoun in the nominative case, not the accusative case.”

      So you have to take a syntax course if you want to know more (because I don’t know much more than that!).

    4. Here is what I have gotten so far, in line with this comment: I understand that we are born with UG and it allows us to catch on to the abstract structural relations in our mother tongue(s). It allows children to understand language in their environment and learn to produce syntactically correct output. It is also unknown how UG could have arisen through selective evolution, and Pinker fails to address this problem.

      However, I am a bit confused because if UG principles are unidentifiable and unlearnable, how can they be understood by syntactical analysis?

    5. I’m not sure I’m 100% convinced by UG in general. It seems like the only thing about UG that linguists can agree on is that it exists. But they don’t agree with each other on what it actually contains; opinions diverge widely on what is or is not in UG. I feel like there is a lack of progress; after decades of research, we are still no nearer to understanding what UG is than when Chomsky first proposed it. I understand why it should and must exist: because UG rules have no negative examples, making them unlearnable, they must be innate. But what makes this a linguistic innateness and not just a general innateness? Is there any other human trait that also lacks “negative evidence”?

      I’m also wondering the same thing as Rosaline “ if UG principles are unidentifiable and unlearnable, how can they be understood by syntactical analysis?”

    6. As someone from the linguistics side of CogSci, I think UG rules are indeed identifiable. However, it is not as straightforward as for OG, and it often involves some linguistics terminology. UG rules can be found only when there is a strong pattern present across many languages – for example, Mandarin is more topic-prominent and English less so. That is, sentences like ‘That kangaroo I saw’ (roughly equivalent to ‘I saw that kangaroo’) are used far more, and are more acceptable, in Mandarin than in English. Thus a parameter of UG is found: topic prominence. It may vary from language to language, being realized more strongly in one language than another, and it can be achieved in more than one way – for example by the particles -un/nun in Korean and -wa in Japanese, or just by an inversion of word order as in Mandarin. (I have only done a Syntax class so please correct me if you find errors in what I have written here.)

      As a native speaker of Mandarin I had found myself speaking English, my L2, in a more topic-prominent way than it should be for native speakers. Certainly I have transferred some of the UG parameters of Mandarin to English unconsciously, including topic prominence.

    7. Han, see other comments and replies about UG and L1 vs L2.

  8. Pinker’s paper on language acquisition demonstrates the importance of a multidisciplinary approach when trying to gain knowledge or understanding of complicated processes such as learning, or, more particularly, language acquisition. This highlights the field of cognitive science, as it brought together neurobiology, linguistic theory, multiple forms of psychology, and the philosophy of induction, among other areas. The paper itself told a story starting with the uniqueness of humans in being able to communicate through language(s), and went on to outline the ways we build language as children, starting with exposure and ending with forming complex propositions. UG was also defined in the paper as the “allowable mental representations and operations that all languages are confined to use”; however, the paper’s failure to distinguish OG from UG has ultimately confused me. As mentioned by many above, UG was not actually touched upon in the paper, and I find myself wondering what an example of a UG error would be.

    Replies
    1. Karina, you're right to be perplexed about UG from Pinker's paper. All I can do is mention the usual examples of UG violations: "Who did he think that went out?" and "John is easy to please Mary." For more, you have to take a course in syntax.

    2. It’s interesting, through this reading and 9b I’ve come to the conclusion that there really is no way to come up with examples that violate UG without being a linguist or using the common examples… It’s peculiar to me that the basis of linguists’ arguments for UG rules is what SOUNDS right and what sounds wrong… I completely understand that we must have this innate UG capability, but this is quite a bold (and perhaps risky) way to perform experiments and pose hypotheses. Essentially, all the rules of UG that linguists have determined are based on their own innate UG capabilities.

    3. Stevan Harnad,

      Syntactic intuition refers to the innate (implicit) feature-detector that distinguishes UG+ from UG-. As you mentioned in your previous comments, linguists use this implicit intuition to try to carve out the rules of UG by telling whether a sentence "sounds" like it violates UG rules (i.e., sounds wrong). Such an ability to detect UG- further confirms the existence of UG, because otherwise there would be no UG. The connection I would like to draw here is UG's relation to the HP. The ability to say "this sentence sounds wrong" is a state of feeling, or, in other words, a capacity to feel that a sentence is ungrammatical. The Hard Problem, which requires us to answer how and why organisms have the capacity to feel, reasonably precedes our capacity to "feel" (implicitly) that a sentence sounds wrong. And if that is the case, does that mean we will not be able to carve out the structure of UG unless we first solve the HP? Knowing that you are not quite optimistic about the HP's solvability, does that mean it is unlikely we will ever get to map out explicitly the rules of UG?

      Delete
  9. In this chapter, Pinker argues that even though language is grounded in the brain, as evidenced by aphasics and rapid linguistic development, it is nonetheless necessary to use as many reliable disciplinary perspectives as possible to suss out how that brain architecture comes to process and produce speech. This framework is primarily conceptualized in terms of Universal Grammar.
    While I agree with Pinker's conclusion that language is unattributable to and unexplainable via one process, he doesn't explain why we should believe linguistic architecture is attributable to universal grammar. While he does explain how the theory fits the evidence, the fact that so many disciplines have been necessary to understand how we've got our rudimentary understanding of language development suggests that there is probably no one mechanism to which language is directly attributable. If universal grammar exists, it is an amalgamation of diverse processes which form some higher-order model. How this could be exactly the same in all people without a direct analogue in genetics is something I struggle to understand.

    ReplyDelete
    Replies
    1. Jacob, "language" has many properties, including vocabulary, phonology, OG, UG, some learned, some inborn.

      Delete
  10. I still do not understand why Pinker doesn't want to outright separate UG from OG. Maybe it has something to do with trying to reduce the speculative variables. But whichever the case, I have a question regarding sign language and UG. Is sign language built the same way as verbal language? If so, does it utilize UG rules? Or is it just a more complex form of pantomime?

    ReplyDelete
    Replies
    1. Since ASL is considered a fully developed language, I would assume it also would have UG rules. Even though it has differing properties from verbal language, I would say it shares all the same fundamental characteristics of language including complex syntax and semantics.

      Delete
    2. I would also like to add my comments to this sign language post since I find it quite interesting. Sign language is indeed like any verbal language, both having OG and UG rules. It even has critical periods, just like verbal languages, and depending on the exposure to the sign language, deaf children master this language in different ways. For example, deaf children who are born to deaf parents are immediately exposed to sign language after birth, and they are better at understanding syntax than deaf children who were born to hearing parents, since these children were exposed to ASL at a later age. Hope this example helps!

      Delete
  11. Tess, distinguishing one CAT from multiple CATS is easy enough for an unsup or sup category learner. Ditto for DO or DID. And this is true both for perception/comprehension of words and production of words. Plural syntax and present/past tense verbs. I think this might be an example of Pinker overcomplicating OG as if it were an example of UG, which it isn’t.

    ReplyDelete
  12. Pinker, while providing an expansive overview of language acquisition, neglected the distinction between UG and OG. In doing so, he still has not addressed the fundamental problem of the existence of universal grammar.

    A point in the "What and when children learn" section that was of interest to me was Paul Kiparsky's theory of word structure. I had never considered a hierarchical framework for word structure that distinguishes between the layers of words in the way he does. In school, when I was learning French, we were taught in this way: first a root word is taught, then variations formed by adding affixes are shown. While intuitive in the context of OG, as with the broader paper, Kiparsky's theory neglects the distinction between OG and UG.

    ReplyDelete
    Replies
    1. Emma, there are some hierarchical properties in OG as well as in UG, but not the same ones.

      Delete
  13. This article explains the rapid and complex progression of language acquisition in children. The authors go on to explain the different factors that influence language learning (positive evidence, modified speech or motherese, prosody, and context) but suggest that there are some parts of language that seem to be unlearnable through encountered data (hinting at the existence of UG, or some innate grammatical abilities in children). I found this article a bit difficult to get through, but in the end I think they are trying to argue that universal grammar is the result of parameter setting and the subset principle i.e. natural languages seem to share an underlying basic foundation and common limits of grammatical possibilities that allow for the learning of implicit grammatical knowledge in children.

    ReplyDelete
    Replies
    1. Laura, see other replies about parameter-settings. The parameter-settings are settings on UG, but they are set by OG, which is learned.

      Delete
  14. If UG comprises the grammar rules that are universal to all natural languages, then there must be some intersection between OG and UG. Further, the relationship of OG and UG is that UG is the intersection of all OGs (this can be expressed as sets in Venn diagrams).
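A toy sketch of this set picture (the rule labels below are invented placeholders for illustration only; real grammars are not literally finite rule sets like this):

```python
# Toy illustration of "UG = intersection of all OGs" using Python sets.
# All rule names here are made up; only the set arithmetic is the point.
english = {"structure_dependence", "subjacency", "svo_order", "plural_s"}
japanese = {"structure_dependence", "subjacency", "sov_order", "topic_marking"}
turkish = {"structure_dependence", "subjacency", "sov_order", "vowel_harmony"}

ug = english & japanese & turkish   # what every grammar shares (innate)
og_english = english - ug           # the language-specific residue (learned)

print(sorted(ug))          # ['structure_dependence', 'subjacency']
print(sorted(og_english))  # ['plural_s', 'svo_order']
```

On this picture, detecting a UG- sentence requires stepping outside the intersection shared by every OG, which is why such violations are so hard to produce deliberately.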

    ReplyDelete
    Replies
    1. Since this intersection is difficult to find, to actually produce a UG-wrong sentence requires knowledge of the OGs of all natural languages... sounds tricky even for linguists.

      But to produce UG-correct sentences is easy. Assuming I am a native speaker, the sentences I used in this comment are UG-correct.

      Thus this illustrates the poverty of the stimulus for UG-incorrect sentences.

      Delete
  15. This reading explains language acquisition in children in a very straightforward way; however, when mentioning UG, I would expect the writer to talk in more detail about why Chomsky believes UG is innate and his reasons for it (especially the poverty of the stimulus argument). Furthermore, my understanding of UG is that, although there must be an innate mechanism in our brains that acquires universal grammar, it must still be triggered and encouraged by learning and environment. It is not learned, but exposure is crucial. Children like Victor (mentioned in the positive evidence section) who are deprived of language cannot speak. Genie is another example; she was deprived of language until the age of 13, and her language skills lagged behind her other cognitive skills. When exposed to language, she was able to learn many new words; however, she had difficulty mastering syntax (this is also related to critical periods in language acquisition). Fortunately, I believe that our innate language mechanism is very prone to learning language and receiving input from the environment, since even if a critical period is missed, this mechanism tries to compensate for the loss.

    ReplyDelete
    Replies
    1. Hi Alara, Baldwinian evolution as specified in 8a of last week ties into the fact that we have this innate drive to learn language. As Pinker describes, children are "... motivated to communicate" (section 2.3), and "... do not seem to favor any particular kind of language" (section 3). While our classmates pointed out that Pinker fails to fully describe UG in relation to OG, these excerpts connect to the ideas that:

      (1) UG is innate and universal to all humans and flexibly allows for us to acquire any target language OG we are exposed to,
      (2) We are not born knowing language as building in one specific language for all would be costly and inflexible- an example of "lazy" evolution, and
      (3) We contain a strong motivation and capacity to learn language

      Thus: whichever OG language we are exposed to through experience (unsup and sup learning) we are able to acquire due to UG, and our UG parameters are set based on this OG acquisition.

      Delete
    2. Alara and Darcy, please read the other commentaries and replies. Yes the child must learn the OG of at least one language to "trigger" UG.

      Delete
  16. “Between the late two's and mid-three's, children's language blooms into fluent grammatical conversation so rapidly that it overwhelms the researchers who study it”. I would like to draw a link to Fodor’s paper studied in week 4, looking at the implications and non-implications of neuroscience in the study of cognition. Indeed, the idea was that we need to stop studying the brain in depth, mapping which localized area of the brain does which cognitive action, because this will never tell us how we do it; it just helps us give a scientific name to it. He shows how localization and correlation can fail to arrive at a causal explanation in cognitive science. I think this quote is another example of where neuroscience has its limits, as it cannot explain the exponential development of language in children during these months. We know some things are changing on a physical level, such as the size of their head getting bigger, which could explain the increase in neurons present in their brain. There are also changes in the neuronal cells present in their brain. We know brain cells in children perform apoptosis (auto-destruction) in order to establish a glial and neuronal population of the right size.
    My overarching point is that we know there are biological and neuronal changes in the brain of children and yet this does not help us in explaining how language flourishes so much during these 6 months.

    ReplyDelete
    Replies
    1. Étienne, good points. But don't underestimate the potential of human developmental neuroscience.

      Delete
  17. Hi! I think I'm a bit confused, from my understanding of section 6.2 of the paper Pinker seems to be positing there is a lack of negative examples altogether, not just for UG. He also explains how negative feedback (not just negative examples) for ungrammatical sentences from the parents is not given nearly as frequently as positive feedback is for grammatical sentences. Is this just a misunderstanding/lack of specification on his part between UG and OG or are we to take it that children lack negative feedback overall in language development?

    ReplyDelete
  18. I found the part of the article that talks about the definition of language quite interesting. The boundary between what is considered a language and what is not is indeed quite blurry. The article gives the example of chimps that use gestures: is language only about talking, or is it more than that? Babies think before they learn how to talk. Therefore, I can imagine that language is not about thinking. However, it is possible that language might be a way to organize our thoughts through categorization that gives a structure to our ideas. For example, we often use the expression “to put words on things/feelings”. So, what constitutes language? When we think about bees and how they communicate the precise location of flowers to other bees by using a series of movements, is that considered language? Or does a parrot that repeats words use language? Not really, according to the article, and that relates to the theme of language and intelligence.

    ReplyDelete
    Replies
    1. Charlene, please read the other commentaries and replies in 8a,b and 9a,b about the difference between human language and other communicative codes, biological and artificial (the difference is propositionality). What is a proposition? Can there be a language that can express some propositions, but not all?

      From the beginning of the course: Thinking (i.e., cognition) is the internal causal mechanism that gives thinkers the capacity to DO all the things they can DO (including to produce and understand language/propositions). What is blurry is the boundary between cognitive and vegetative capacities as well as the boundary between having a cognitive capacity and being motivated to use it.

      And then there is the hard problem of explaining how and why some internal states are felt states. At the beginning of the course we discussed how, for most things we can do, we don’t know how we are able to do them. We are waiting for cogsci to reverse-engineer them and explain their causal mechanism. UG capacity needs to be reverse-engineered too: We produce all and only UG-compliant utterances. But we don’t yet know how we do it. For UG (but not all the rest) a syntax course will help.

      Delete
  19. I found that this article offers a compelling argument for an innate language capacity by referencing a variety of robust studies and examples, yet it still left me feeling unconvinced about UG. I still cannot wrap my head around the UG argument, and it seems like Pinker cannot either due to his avoidance of addressing it. I know that last class we discussed how we are supposed to be confused about the evolutionary origin of UG - in fact, Chomsky suggests that we stop looking for the evolutionary origin of UG - yet how can we solve the Easy Problem (how and why we can do the things we do) without determining how UG evolved?

    ReplyDelete
    Replies
    1. Polly, at least UG capacity can be reverse-engineered without having to explain how it evolved. But that doesn’t mean we don’t want to know that too. And no reason to think it can’t eventually be known.

      [With the hard problem (feeling) it’s worse than that. We can’t reverse engineer feeling capacity itself, let alone how or why it evolved – though no doubt that evolved too, like every other biological trait.]

      Delete
    2. Thanks for clarifying that! Do linguists have all the rules of UG explicitly defined so we could, theoretically, program UG-capacity into a T3-passing mechanism? Or are some UG rules still not yet defined?

      Delete
  20. I think that another symptom of Pinker's failure to distinguish UG and OG can be seen in the bootstrapping problem. While children are born with UG, they still require context in order to develop their language abilities: they need to connect sounds and situations to grammatical structures. Pinker argues in section 8 that this transition cannot occur by innate grammar alone; children need to be able to pick out the grammatical structures that exist before their innate UG can be of use to them. This is what he refers to as the bootstrapping problem.

    ReplyDelete
  21. I found this reading interesting for its thorough discussion of the possible theories surrounding language acquisition and some of the problems that it addresses and poses. The question of how children learn languages addresses modularity, or how interrelated our cognitive faculties are; how exactly we differ from other animals and how our language systems differ from the communication systems of other animals; and what form our mental representations of thought take. Additionally, the reading discusses the role of physical maturation in language acquisition, which we had not previously discussed, in addition to the constraints posed by a lack of understanding of more basic concepts. Furthermore, the theory that children learn syntax through their understanding of the semantic meaning of a word, and then apply this understanding of syntactic structures to understand the meanings of other words, struck me as interesting and relatively plausible. Finally, the finding that, while children sometimes overgeneralize within a category of words, they typically restrict their set of possible sentences to the smallest possible set that fits all positive evidence provides a way to supplement universal grammar and further understand language acquisition given the poverty of negative evidence.

    ReplyDelete
    Replies
    1. Elena, it’s hard to imagine that the referents and meanings (semantics) of the words children know do not influence their syntax, for both OG and UG.

      There has been some controversy regarding the notion of the “autonomy of syntax” from semantics. Chomsky writes that he never held that syntax was autonomous from semantics, the way it is in computation, where the meaning (if any) of the symbols does not enter into the symbol-manipulation rules (algorithms); it is all just shape-based. Today the notion in linguistics is that there is an “interface” where syntax, semantics and phonetics interact.

      Delete
  22. Hi Sophie, Pinker did not specify the differences between UG/OG in the article, which makes it confusing. As UG is innate, the child will never produce UG- sentences, and parents will never give UG- feedback. Therefore, there won't be any UG feedback, positive or negative, for children. The Poverty of Stimulus argument is set to argue for the existence of UG, which means for language, overall, children don't receive enough feedback. There needs to be something innate (UG) for children to speed up learning the language, and only the non-innate part (OG) needs to be learned and corrected. I hope this clarifies the argument.

    ReplyDelete
  23. This reading discusses the mechanisms behind language acquisition. The study of language acquisition is a way to answer questions related to cognition. It puts forward the question of the modularity of language capacities. Indeed, is language learned through the performance of a “mental organ” separate from other mental capacities? Another theory is that this acquisition of language is simply part of general intelligence. I found Specific Language Impairment a striking argument supporting the view that intelligence and language acquisition are two distinct features. Williams Syndrome also demonstrates how these capacities are utterly different. However, going back to the Turing test, language capacity does have an inherent need for groundedness. Indeed, feedback and environmental stimuli do play a role in the learning of a new language by a child, as stated in this reading. Thus, this capacity is linked to other sensorimotor capacities, which makes it not an autonomous system.

    ReplyDelete
    Replies
    1. Ines, after you read the skyreading, please read the other comments and replies so you learn which points are salient for this course.

      Delete
  24. In the introduction, the idea that heredity and innateness are involved in learning language seemed abstract to me. Though I understand for the most part the whole argument and the confusion with the presented ideas, i.e., Pinker's lack of a distinction between UG and OG, as well as the lack of negative feedback for UG, which leads to the belief that it is innate. However, returning to the reading, when it is mentioned that "we know that adult language is intricately complex, and we know that children become adults. Therefore, something in the child's mind must be capable of attaining complexity", I would like a bit more clarification to help me understand the difference between children's and adults' abilities to acquire language regarding their rate of doing so. Specifically, growing up, my parents would motivate me to learn languages because "it is easier to learn languages compared to when you are at an older age like them." Furthermore, this was also a saying constantly repeated by my language teachers, whether in Italian or French class. In this case, would this quick acquisition of language then be connected to this so-called innate ability that may take form in the early years?

    ReplyDelete
    Replies
    1. Maira, both grammar and phonology are learned faster and more fully in an early “critical period,” from infancy to early adolescence, and even within that window, the first language (L1) is learned fastest and most fully. There are critical periods for other capacities too, in many species (e.g., imprinting in ducks: what is that?).

      Lazy evolution prefers the cheaper and more flexible option of learning things rather than building them in. Baldwinian evolution (what is that?) motivates and facilitates learning that is especially important. In the critical period for learning, both the motivation and the capacity to learn an important skill are strongest, and then weaken or disappear later. Ducklings only need to imprint on their mothers once, at the beginning of their lives; and humans need to learn L1 early: The value of learning an L1 is greatest in infancy; the added value of an L2 and L3 is less than the value of L1. And part of lazy evolution is not to hold a gate open longer than necessary.

      Although “use it or lose it” applies to many skills throughout life, it is especially salient for the powerful capacity to learn an L1 during the critical period. (All children are born with the capacity to use the /ra/ /la/ distinction in speaking, and they also have categorical perception of the distinction, but they lose it if only one and not the other is used in their L1 language’s OG – as in Chinese, which has /la/ but not /ra/ vs. Japanese, which has /ra/ but not /la/.)

      Delete
    2. Hi professor, this reminds me of the language acquisition that I learned about in my neuroscience class: language fluency requires early language exposure, and the acquisition of a second language must begin before age 7; after this critical period, learners would never be able to reach the level of a native speaker. You mentioned in the earlier comments that “UG is learned with one’s first language”; does that mean UG is innate but only triggered by one’s first language? Is that why it is still possible to produce some UG violations in a second language?

      Delete
  25. Sophie, Pinker does not make the OG/UG distinction, as Jenny notes. Pinker considers all grammar UG. That’s also why he treats the learning of plurals and past tense in English as if both weren’t perfectly OG and learnable (as they are, by both children and neural nets, even just unsupervised ones) but were instead somehow afflicted with POS. The regulars (cat/cats, walk/walked), which are the majority, are trivially learnable, and the exceptions (mouse/mice, go/went) are trivially memorizable. But in conflating these with UG, Pinker can only argue that they are learned “too fast” if there had not been UG somehow helping out in the background. (And as for there not being enough negative feedback for learning not to say go/*goed, I don’t believe it!)

    These are all symptoms of conflating OG with UG.

    But you’re equipped now to draw your own conclusions…

    ReplyDelete
  26. Melis, there is no doubt a prelinguistic stage where it is “just a series of category names and requests”. Maybe a lot of the pre-speaking comprehension is also just that.

    ReplyDelete
  27. This reading poses the questions we have surrounding language acquisition and explains how, even though children acquire language so quickly, observing them is not, by itself, enough to explain the mechanisms behind language acquisition. We also have discussion of how language is located in the brain and how language doesn't equate to intelligence, as most people often believe. What is often discussed as "motherese", despite seeming like simplistic language, can allow children to grow up speaking their language fluently. It is a popular topic of study, especially here in Canada, where we have two official languages, and especially in Montreal, which is pretty much a bilingual city. Prosody, however, has always confused me: "The boy I like slept" compared to "The boy I saw likes sleds." I do not understand this example; does this have to do with how children understand language through context over intonation?

    ReplyDelete
    Replies
    1. Helen, after you read the skyreading, please read the other comments and replies. (In the two examples you give, the pause is after "like" in the first and before "likes" in the second. What is the subject and predicate of each proposition?)

      Delete
  28. The physiological changes underlying language development are costly – this cost gives us some sense of the evolutionary importance of the benefits language confers, namely communicating information. Clinical literature suggests language ability is distinct from general intelligence. The maturation of the brain in early years corresponds to a maturation of the language system and a rapid increase in linguistic ability. Maturation also involves a loss of plasticity in the language system, and a reduced ability to learn languages. Learnability theory provides a framework of parameters (a class of language, an environment, a learning strategy, and a success criterion), whose constraints on each other can be explored as they relate to actual language learning in children. The language class and environmental parameters are relatively well studied, leaving researchers to puzzle mainly over the question of learning strategy, and what kinds of strategies children use for inductive learning and language production in the development of language. It is particularly difficult to explain children’s aptitude at extracting patterns, and producing patterned language, when much of the necessary information appears to be missing from the environmental parameters. I found the idea of having default parameters (in a different sense than above) which tend to over-restrict children’s language production, e.g. a strict obedience to known word order, until they witness examples of strings which demand a loosening of the parameters, to be a particularly compelling explanation of strategies for language production.

    ReplyDelete
    Replies
    1. Hadrien, after you read the skyreading, please read the other comments and replies.

      Delete
  29. The following argument proposed by Pinker helped clarify and demonstrate the innate nature of UG, through the scope of children and the "poverty of the stimulus" argument. Pinker argues in this article that children solve the problem of language acquisition by having the general design of language wired into them through UG. Despite how much certain languages vary from one another, all human languages adhere to UG, and individuals who are exposed to different language environments tend to follow the same grammar. Children understand some language rules that they could not have learned from external stimuli (i.e., the poverty of the stimulus argument). Children are not exposed to "negative" evidence, but they recognize which structures are ungrammatical and learn language very naturally. This suggests that neither unsupervised nor supervised learning is used to learn UG.

    ReplyDelete
    Replies
    1. Sara, please read the other comments and replies.

      Delete
  30. Upon reflecting on today's discussion of gestural and vocal modalities, I wanted to summarise my understanding of why the gestural modality is a better bet than the vocal modality for the transition from showing to telling, when we are thinking about how language started. To compare vocal limitation and gesture limitation, pick three things you can depict gesturally AND that you can depict vocally. We immediately notice that the vocal modality is extremely impoverished for showing: all you can SHOW vocally is what you can imitate vocally. If you are communicating only by pantomime, how do you vocally imitate “window”, or “person typing on a computer”? What you are attempting to communicate is much clearer through gestures. We talked about how pointing is the ancestor of reference, a way of shifting someone’s attention to something else. It is understood through our mirror capacities: understanding the action of pointing as understanding what the pointing “refers” to. Very little of what happens in our everyday lives can be easily mimicked vocally. Thus gesture is a plausible starting point for language – NOT the ideal destination for it – for the transition from pantomime (as a way of communicating) to telling.

    ReplyDelete
  31. What I found interesting about this paper was the emphasis on how the maturation of the brain is the driving force behind language acquisition, which suggests that learning to speak and communicate with language is one of the many processes that occurs only during the period from birth to adolescence. Our innate ability to learn language, which seemingly exists before we are even born, while we are in the womb, is only there while our brain is still malleable, and after the brain has developed, learning a language seems to just become like learning any other skill, we no longer have a built-in aptitude for it. This seems to suggest that while Chomsky’s Language Acquisition Device may exist biologically, our environment is also a big determiner in our language acquisition.

    ReplyDelete
  32. On Whorf’s assertion that learning a language is not only to talk but to think in it: I find this an interesting hypothesis; however, from experience and observation, learning a foreign language much of the time consists of knowing how to translate from our mother tongue to the foreign language that we are learning, until we eventually have memorized the necessary vocabulary.

    I also find it interesting how multilingual people will have different personalities, or even handwritings, in the different languages that they know. This makes me think about the extent to which we are taught languages, as we can take on different ‘traits’ that we associate with that language (for example, finding oneself more cold yet poetic in French). Could this possibly be associated with why we have trouble teaching animals language (because those traits and mannerisms that come along with it are specific to humans)?

    ReplyDelete
  33. Something especially interesting to me in this paper as well as the discussions here is the section in the input section. Specifically, the mention of the specific dialects that children are exposed to that influence the way they learn grammar was interesting, even if the dialects appear “ungrammatical” to speakers of the standard dialect. I know a lot of dialects do follow their own OG rules that may seem ungrammatical in the standard dialect, and that people not exposed to these dialects as children may not immediately understand the grammar of other dialects as they do their own. However, as these are OG rules, it’s usually easy enough to learn the same way one would learn OG rules of their native dialect, through positive and negative evidence.
    However, I am curious about languages like pidgins and creoles. I don’t know too much about them, but I do wonder if the line between dialect and different language is blurry enough for a native speaker of a language that feeds into a creole to possibly make UG errors, the way a person could in learning their second language.

    ReplyDelete
  34. The researchers correctly describe language as a special ability unique to humans. They describe how language abilities are more than just a subset of general intelligence but stop short of equating language with thinking itself, as Whorf does. In Turing’s mind, language abilities are so central to what it means to be human that he believed they were enough to differentiate a human from a machine when he designed T2. We have learned in the course, though, that language must be grounded, meaning sensorimotor capacity is needed to pass T2. I think the neurobiology of language acquisition should be examined further and will shed light on how UG actually works and enables toddlers to learn language so fast and innately. I disagree with Fodor’s perspective that to study language areas in the brain would be, in a way, a waste of time.

    ReplyDelete
  35. Supervised and unsupervised learning are required for OG because it is a learning process that needs positive and negative evidence to develop. Supervised learning is feedback from the knower, while unsupervised learning is noticing patterns in how people speak to establish what is correct and incorrect. During language acquisition (OG), children receive negative and positive evidence, making it a learning process. Still, UG is not a learning process, because the only evidence available is positive evidence. Hence, UG cannot be learnt, since the child receives only one type of evidence from their environment.
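As a toy sketch of the two kinds of evidence for OG (the word list and the "rule" here are invented simplifications, not a model of real acquisition), echoing the point above that regulars are learnable from positive evidence and exceptions are memorizable from correction:

```python
# Unsupervised learning (positive evidence only): induce the regular
# English plural "-s" (an OG rule) from a pattern in heard word pairs.
heard = [("cat", "cats"), ("dog", "dogs"), ("book", "books")]
suffixes = {plural[len(sing):] for sing, plural in heard}
assert suffixes == {"s"}          # the shared pattern is the suffix "-s"

def pluralize(noun):              # the induced (overgeneral) OG rule
    return noun + "s"

# Supervised learning (negative evidence / correction): feedback from a
# knower fixes overgeneralizations like "mouses" by memorizing exceptions.
exceptions = {}
feedback = [("mouse", "mice")]    # child says "mouses", hears "mice"
for sing, correct in feedback:
    if pluralize(sing) != correct:
        exceptions[sing] = correct

def pluralize_v2(noun):           # rule plus memorized exceptions
    return exceptions.get(noun, noun + "s")

print(pluralize_v2("cat"))    # cats
print(pluralize_v2("mouse"))  # mice
```

UG gets no analogue of either step here, which is the point of the comment: with positive evidence only and no pattern that distinguishes UG+ from UG-, there is nothing to induce or correct, so UG must already be in place.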

    ReplyDelete
