Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument? In: M. Bishop & J. Preston (eds.) Essays on Searle's Chinese Room Argument. Oxford University Press.
Searle's Chinese Room Argument showed a fatal flaw in computationalism (the idea that mental states are just computational states) and helped usher in the era of situated robotics and symbol grounding (although Searle himself thought neuroscience was the only correct way to understand the mind).
IMPORTANT: Please post early in the week, not just before the next lecture, or I won't have time to reply.
And please read all the preceding commentaries and replies before posting yours, so you don't repeat what has already been said.
I understand how the implementation of a program would not confer intentionality on a machine/computer. It was a logical progression of thought to conclude this when framing it in the context of the Turing Test being an indicator of simply functional equivalence (hence functional indistinguishability).
I do question Searle's Periscope, however. I think there is more nuance to the idea of us being conscious and knowing what it is like to be a cognitive, intentional entity, and I am not convinced we can be so sure of this. Suppose that we discover that in reality our awareness of what it is like to be is simply the result of a defined set of internal computations; would it not be that our understanding of ourselves is then no different from the pseudo-understanding of Chinese that Searle had in his experiment?
What Searle was testing with his Periscope was whether passing Chinese T2 was enough to make him understand Chinese. It wasn't. And so it isn't enough if a computer is running the Chinese T2 program either. Therefore cognition (in this case, understanding) is not just computation. ("Intentionality" is a vague, slippery weasel-word; understanding English and not-understanding Chinese is clear, because we each know what it feels like to understand words and sentences.)
What is Searle's Periscope on the other-minds problem? How does Searle know that not only does he not understand Chinese, but neither does the computer (at least not because it's executing the Chinese T2-passing programme)?
I am not sure I fully understand Searle’s periscope, but here’s what I got from the reading. Computationalism postulates that mental states are nothing more than implementations of computational programs (1), and that the way they are implemented does not matter (2). If that is true, then any implementation of a specific computational program is equivalent and has exactly the same computational properties as all the other implementations. Searle says that this allows us to penetrate the “other-minds” issue, since we can be sure that if we -as an implementation of a computational program- have or lack a property, then all other implementations of that program will also have or lack that property. As an implementation of a Chinese-interpreting program, Searle does not understand Chinese. Therefore, any other implementation of the same program, in a T2 machine for example, will also fail to understand Chinese.
Amélie: spot-on!
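To make the bare logical skeleton of that summary explicit (this is only a restatement of Amélie's points; U(x) is a shorthand invented here for "x understands Chinese", and Impl(P) for the set of physical implementations of the Chinese T2-passing program P):

```latex
% Schematic form of Searle's Periscope (a restatement of the summary above).
% U(x): shorthand, introduced only for this sketch, for "x understands Chinese".
% Impl(P): the set of physical implementations of the T2-passing program P.
\begin{align*}
\text{Computationalism:}\quad & \forall i, j \in \mathrm{Impl}(P):\; U(i) \leftrightarrow U(j)\\
\text{Searle's report:}\quad & \mathrm{Searle} \in \mathrm{Impl}(P) \;\wedge\; \neg U(\mathrm{Searle})\\
\text{Therefore:}\quad & \forall j \in \mathrm{Impl}(P):\; \neg U(j) \quad \text{(or else computationalism is false)}
\end{align*}
```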
I feel that this definition of computationalism underestimates the importance of neuroscience in human cognition. I understand that in attempting to reverse engineer cognition, we must adhere to dualism. But the human brain is inarguably responsible for some cognitive experiences that we cannot yet explain. I do not want to re-enter the argument about what is considered cognitive or vegetative, but the first example that comes to me is the experience of love. We know that love is associated with certain patterns of brain activity which could be considered non-cognitive, but love can also be associated with entirely cognitive actions such as choosing to prioritize someone you love, physical affection, etc. So, there is definitely a link between cognition and neuroscience/biology that cannot be excluded.
Laura, I think you're right, there is a link between cognition and neuroscience. But, I think what we have discovered through Searle and Dr. Harnad's critique is that neuroscience is just the implementation of cognition that humans use. If cognition is implementation-independent, then the physical details (i.e. neuroscience) are irrelevant and in theory, a simulated brain could achieve cognition the way a real-life brain could. The problem is we haven't been able to reverse-engineer this. However, I think it's true that an understanding of neuroscience would likely be helpful in that endeavor. But, after reading these papers, I'm not convinced that cognition is intrinsically linked to the physical "stuff" that makes up our brains.
Laura: What do you mean "we must adhere to dualism"? Where did that come from? And what is dualism?
Searle is testing computationalism with T2. T4 includes neural correlates of feelings, T3 includes behavioral correlates of feelings. Next week (4) Fodor will suggest that neural correlates may not even help solve the easy problem, let alone the other-minds problem or the hard problem.
Teegan, Searle's Chinese Room only works against computationalism if computation alone can pass T2. T3 and T4 cannot be passed by computation alone; unlike computation, they are not implementation-independent.
Dualism, specifically Searle's reference to Cartesian dualism, is the idea that the mind and the brain exist independently and that each would persist in existence without the other. If we are trying to recreate human cognition in something other than ourselves, we would necessarily need to believe that cognition can exist without the brain, i.e., adhere to dualism.
Laura: Yes, that's dualism. But cogsci is not trying to "re-create" cognition, it's trying to reverse-engineer it, and then to T-test whether it has succeeded.
Professor, I think this is where I had been caught up. Thank you for making the distinction between the dualist re-creation of cognition and reverse-engineering it. This makes much more sense.
One of the main points I take away from this reading is the new understanding of the TT it has provided me with. It was easy to take 'granny-arguments' or use Searle's CRA to dismiss the significance of the Turing Test and computationalism in general. But the fault in this is to consider the TT as a definite proof of the validity of computationalism in the first place: this makes the test vulnerable to arguments proving that it fails as an indicator of mental states, therefore also invalidating the third premise of computationalism (according to Searle). Rather, Harnad suggests a modification of this premise to make us consider the TT as the best empirical test we have for having a mind or not: we cannot do better empirically speaking, in that it isn't perfect but can be used for reverse-engineering the mind.
Harnad writes that the CRA wouldn't work against a hybrid computational/noncomputational T2-passing system, but I think I would need some more explanation on the nature of this kind of system to fully understand this new aspect of the TT.
From what I understand, the CRA relies on the fact that Searle & the computer are implementations of the same Chinese-interpreting computer program. Therefore, it would not work against a hybrid T2-passing system, because this system might be able to understand Chinese, not through its computational implementation (ie what we can compare with Searle, who implements the same program yet doesn't understand Chinese), but rather through its noncomputational aspects. I'm not sure if this clarifies or if it reflects an accurate understanding of the reading, so let me know!
Amélie, you are right. Computers and computation are permeable to Searle's Periscope on whether they have a mind (why?); but, as with every organism, human or nonhuman – or, for that matter, every inanimate object other than a computer, computing – Searle's Periscope cannot penetrate them, either to confirm that they do feel or to confirm that they do not feel.
I had similar thoughts to Matilda regarding the Turing test. I realized that I was misunderstanding the Turing test, as I previously believed that to pass the Turing test was to definitively prove that cognition was computation, and that the reverse-engineering process was supposed to result in a model that was identical to the brain in both structure and function. This was illogical to me, as there must be multiple ways to achieve indistinguishability and to pass the Turing test. This is clarified in this reading: "This does not imply that passing the Turing Test (TT) is a guarantor of having a mind or that failing it is a guarantor of lacking one. It just means that we cannot do any BETTER than the TT, empirically speaking."
Don't forget the Turing Hierarchy of doing-capacities. Only T2 could (in principle) be passed by computation alone, not T3 or T4.
The explanation of scientific debate in the literature and conferences was fascinating to me as I read this paper. Clear definitions and clearing away "granny arguments" are essential in enabling a productive conversation without getting lost in the weeds, which I believe we end up doing a lot in these settings.
For example: "If Searle had formulated the second tenet of computationalism in this explicit way, not only would most computationalists of the day have had to recognize themselves as his rightful target." Indeed I had not extracted the fact that computational states are implementation-independent, not hardware-independent, which is what Searle meant when he said the "brain is irrelevant" initially.
I believe the conversation between Prof Harnad, Searle, and respective literature has led to this important discussion and evaluation of the useful and non-useful aspects of the CRA and Searle’s periscope.
"Hardware-independent" and "implementation-independent" mean the same thing, but they don't mean that you don't need to implement the computation (software, algorithm) at all, on any hardware at all. It just means the physical details of the hardware (the implementation details) don't matter, as long as the hardware (whether the T2-passing computer or Searle) is implementing the right software (the TT-passing algorithm(s)).
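A toy sketch may help show what "implementing the right software" means; the rule table below is invented purely for illustration and is nothing remotely like a real T2-passing algorithm:

```python
# Toy illustration of implementation-independence (hypothetical rules, invented
# for illustration only; a real T2-passing algorithm would be vastly bigger).

RULES = {
    "你好吗": "我很好，谢谢",   # input squiggle -> output squoggle, matched by shape
    "你是谁": "我是你的笔友",
}

def t2_step(incoming: str) -> str:
    """Apply the rule table to the incoming symbols; no meanings are consulted."""
    return RULES.get(incoming, "请再说一遍")

# Any hardware that follows these same rules -- a laptop, or Searle with the
# rulebook memorized -- produces identical outputs. That is all that
# implementation-independence means; it does not mean no hardware is needed.
print(t2_step("你好吗"))
```

Any hardware executing this same rule table is, computationally, the same system; that (and only that) is the sense in which the hardware details are irrelevant.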
I had never dug deeper into the difference between hardware and software, but in reading this piece I feel convinced to not "look for mentality in the matter." However, this sentence goes against a lot of what I believe as a student studying neuroscience, who spent years learning the physical structure of our brain and the localization of certain emotions or cognitive states. Even though I see why only functionality is relevant in the Chinese Room Argument, I stress that functional indistinguishability cannot exist without structural "resemblance" (to avoid making too bold a statement). And, just as is advocated through the duck argument, you need pretty intricate resemblance to simulate duck, or human, capacity in a non-duck or non-human. The line is blurry because there are some simpler things computer programs can make any hardware execute seamlessly, but I'd argue computer programs will never manage to reproduce the bulk of human capacities in non-humans.
See other replies about the distinction between the hardware for doing computation (symbol-manipulation) and the hardware for seeing, hearing and moving (which is not just computational). And don't forget T3 and T4, which are also just testing what thinkers can DO.
While I appreciate the text for the clarifications it provides on the CRA, I'm not at all convinced by the conclusion that the CRA does not apply to a hybrid system such as a T3-passing robot. I see no reason for thinking that such a system would be any more capable of "understanding" than a purely computational one. Yes, Searle would not "be" the entire system in this case, but why does that matter? If the robot's computer "brain" is not capable of understanding, how does adding arms, legs, eyes, etc. (which themselves do not understand anything) on top of it make the whole system possess this ability? To me, the suggestion seems equally contrived as the reply (which Searle refutes) that although the man in the room does not understand Chinese, the conjunction of the man plus the room understands Chinese. Perhaps one could attempt to hide in the murkiness of the other minds problem here, and say that, since we cannot know what it is like to be the hybrid system, it could be capable of understanding, even if this seems counterintuitive. But this, too, is unsatisfactory.
Is this all there is to Harnad's argument? If not, can someone please elaborate?
Hi Gabriel,
From what I understand of the readings, Searle's Periscope is able to penetrate the OMP you mention for a purely computational T2-passing computer because it relies on the implementation of computational properties which Searle was able to implement himself. Since, according to computationalism, all implementations of computational properties are equivalent, Searle's lack of understanding when implementing the program shows that all other implementations of this program will lack that same property.
This doesn't work on anything other than a T2 computer, because then Searle would not be able to compare the properties of his experience with another system: a hybrid computational/noncomputational T2-passing system might be able to understand Chinese through noncomputational abilities that cannot be emulated by Searle.
This is just my understanding for now, please feel free to correct me if anything seems unclear!
Hello Gabriel,
I initially had the same doubt as you. What is clear is that Searle is right about the attack on computationalism. The mind is not just an implementation of computation. What is unclear is where Searle is wrong. Searle believed only the physical structure of the brain mattered to cognition. But if we were really to reverse-engineer cognition, we would have to consider its physical structures, if it is a machine that can virtually do everything we can do. I think this quote summarizes my point well: "Those degrees of freedom would shrink still further if we became more minute about function -- moulting, mating, digestion, immunity, reproduction -- especially as we approached the level of cellular and subcellular function". It is reasonable to feel unsatisfied believing that a T2 computer has a mind, but if the point above is true, then this would be a T2 that is structurally significantly different from, say, our laptops, and then the other-minds problem would become a more serious issue to consider.
Sorry I forgot to reference the quote! The quote is from the reading this week, "What is Wrong and Right about Searle's Chinese Room Argument?" by Harnad.
Gabriel, you wrote:
"I'm not at all convinced by the conclusion that the CRA does not apply to a hybrid system such as a T3-passing robot. I see no reason for thinking that such a system would be any more capable of "understanding" than a purely computational one."
Searle’s CRA only applies to a system Searle can “be,” so that he can use his Periscope to report that he does not understand. Otherwise, his would just be a Granny Argument.
“Yes, Searle would not "be" the entire system in this case, but why does that matter? If the robot's computer "brain" is not capable of understanding, how does adding arms, legs, eyes, etc. (which themselves do not understand anything) on top of it make the whole system possess this ability?”
I’m not sure how you got from what Searle showed -- which is that computation alone cannot produce understanding in a purely computational T2 -- to the idea that a T3 robot’s “brain” is just doing computation (which, remind yourself, is just symbol manipulation)?
There’s a lot more to T3 capacity than just T2 verbal capacity. And T3’s verbal capacity has to be grounded in T3’s sensorimotor capacity, somehow. You seem to be thinking that anything other than sensory and motor peripheral organs can only be computation, with the rest of the brain just serving as the (irrelevant) hardware for doing that computation (i.e., computationalism).
The real brain instead seems to contain a lot of internal topographic analogs of its sensory and motor peripheral organs, and a lot of other analog functioning too. That includes anatomy, which is non-arbitrary spatial shape, and physiology, which is both spatial and pharmacological. This is not arbitrarily shaped symbols being manipulated according to formal rules – any more than an ice-cube is.
But Searle has done the only job he can do: he has shown that whatever produces cognition has to be hybrid; it can’t be just computation. How much of it -- and what -- has to be dynamic rather than computational to pass T3 is waiting for the successful reverse engineering of T3 capacity. Whether it also needs acetylcholine depends on how much of T3 can be passed without having to dip into T4 (or T5).
”To me, the suggestion seems [as] contrived as the [Systems] reply (which Searle refutes) that although the man in the room does not understand Chinese, the conjunction of the man plus the room understand Chinese.”
But the way Searle refutes the Systems Reply is not the Granny way. It’s to “become” the whole system by memorizing and executing the T2-passing algorithm – which is exactly what he cannot do with a hybrid system.
“Perhaps one could attempt to hide in the murkiness of the other minds problem here, and say that, since we cannot know what it is like to be the hybrid system, it could be capable of understanding, even if this seems counterintuitive. But this, too, is unsatisfactory.”
No more counterintuitive than that cognition is just computation – but at least not demonstrably wrong.
Keep in mind that reverse-engineering in cogsci is about explaining how and why we can DO all the things we can do, and then testing whether we have succeeded in explaining it. The only reason Searle can penetrate the “murkiness” of the other-minds problem in the special case of computation is that computation is hardware independent, so Searle could point out that he could memorize and execute the T2-passing algorithm without understanding Chinese. He cannot do that with whatever passes T3 or T4, because neither T3 nor T4 can be passed by computation alone. No Periscope.
Mathilda: Good summary!
Yumeng: One way to look at it is that trying to reverse-engineer T3 will be what determines how much of T4 (acetylcholine) and "vegetative" capacity will turn out to be needed to pass T3, as Kayla does. (I didn't understand the end of your 1st comment.)
This paper further emphasized the idea that "cognition is not just computation": passing the pen-pal version of the Turing test (T2) is not enough to justify understanding. Searle was able to pass the Chinese T2 test, yet he concluded that he did not understand what he was doing.
However, I am not quite sure I understood everything in Searle's periscope, especially the part about conscious vs unconscious understanding. In the paper, "understanding a language" is described as "conscious understanding", but couldn't there be some sort of unconscious understanding? Imagine I am in a room focusing on reading a book, yet I can hear a conversation from the table next to me. Even though I am fully focused on the book, I can still hear and understand the conversation, or at least get the major points. Isn't this some sort of unconscious understanding?
I am not sure I can clearly see the difference between conscious and unconscious understanding and how "in conscious entities unconscious mental states had better be brief".
The way I understand unconscious understanding is that no one is conscious of (feels) how their brain produces understanding. (Like in Hebb's 3rd grade schoolteacher example)
But understanding and meaning what you are saying is not just a matter of being able to say whatever any normal person can say (T2), or even also do (T3). It also feels like something to mean and understand what you can say.
I was also thinking about conscious and unconscious understanding/cognition. It made me think of Clark and Chalmers' extended mind hypothesis, where they argue (in Searle's terms) that anything that is part of the system, such as the room itself, is part of the mind insofar as it has the capacity to perform "unconscious" cognitive abilities. In this case what they refer to as unconscious is anything that is not present to mind, for example a memory you are not actively thinking about, whereas 'conscious' would be something you are actively thinking about, for example once you remember said memory. This might be slightly different from Searle's use of the term, but this is the easiest way I have found to understand the differences. On another note, I think Searle would be opposed to the extended mind hypothesis, as he seems to think unconscious cognition isn't cognition at all.
I was also really intrigued by the consciousness vs unconsciousness argument that was presented in the paper, because it asserts that unconscious states do not require cognition and are not mental states. I had some concerns regarding how unconscious consolidation would be regarded in terms of understanding, because understanding often does take a while. From my current understanding, the reason why the unconscious is not considered a mental state is that it relies on the act of doing and/or saying; it is the automatic action of understanding. However, consciousness has the act of feeling that you are understanding.
I was a bit confused by the difference here at first, too. But I agree with Inayat's argument that unconsciousness is the automatic act of understanding and consciousness is the feeling of understanding. This also reminds me of the blurry line we drew on cognitive vs. vegetative function, where it is difficult to distinguish between the vegetative/automatic actions and the cognitive/intentional actions. As to why the reading suggested, "in conscious entities unconscious mental states had better be brief," I understand it as we need not focus on the unconscious understanding here because this type of unconscious "understanding the language" is like speaking in tongues and not what we usually mean by understanding a language.
Étienne: "Conscious understanding" means felt understanding. Searle can say, truly, that he does not understand Chinese, because it feels like something to understand Chinese, and he doesn't feel it. Searle does not have partial understanding of Chinese either. (See other replies.)
Melis, surely we don't understand how we do all the things we can do until cogsci reverse-engineers them and then explains them to us. Our cognitive capacities are know-how that we have, but we don't understand it – we don't know how our know-how works.
Sophie: I used to cover the “extended mind” in this course because that stuff was so popular, but after 2018 I dropped it because it was too ridiculous, and the level of skywriting it inspired was too low.
Your "mind" is only as wide as your head, and it includes only the states that it feels like something to be in. If, in your head, there are states that it does not feel like anything to be in, then those are not "mental" states; they're internal or cerebral states that we're waiting for cogsci or neurosci to explain to us. But if they're not even internal to your head then their claim to be "mental" is even more absurd ("a mental state consisting of my brain and my notepad"). All this because "mind" and "mental" are weasel-words. Mental states are felt states. The rest are just internal states we don't yet understand, or even know about.
You can look at the 3 readings and skywritings on this in 2018 HERE
Inayat, distinguish the states you know you’re in, because you feel it, from their internal causes, which you don’t know about. They both happen in your head, but only the felt ones are mental.
Jenny, most of what's going on in your head you're not conscious of; those processes may generate understanding (or at least a feeling of understanding), but what on earth would "unconscious understanding" be? Un-understood understanding? You'll understand them when cogsci succeeds in reverse-engineering them, and then explaining them to you!
Week 11 in 2018: 11a 11b 11c
This point also intrigues me and to some extent confuses me. Knowing a language can be largely unconscious, therefore it could qualify as a kind of 'unconscious understanding'. When we speak our first language, oftentimes we are unaware of its grammatical rules; when we talk, we do not spend much time thinking about which tense or aspect or case to use, because it just 'pops out of our minds.' We need further contemplation to summarize what the rules are; this is what research in linguistics is based on. Actually, a vast number of linguistic phenomena have no universally accepted interpretation. So, do we understand a language or not? If understanding is equivalent to knowing all the rules in a precise way, no one can say that they understand even one single language in the world (except artificial ones). Meanwhile, even though no one knows all the rules in any language, they do understand some language.
Han, being able to speak and understand a language is not the same as understanding HOW you are able to speak and understand a language. We are waiting for cogsci (1) to reverse-engineer the mechanism producing our ability and then (2) to T-test the mechanism, and then (3) to explain it to us. Then we will understand how we speak and understand. (It's the same as the example of remembering who was our 3rd grade school-teacher.) See other replies on having the "know-how" (to do something) but not knowing how we do it.
("Know-how" is an English idiom, somewhat similar to "savoir-faire" in French, but there may not be an equivalent in Mandarin. It means "to be able to do" something, for example, to recognize a friend's face. But it does not imply that we know how we are able to do it. This is related to the difference between procedural and "semantic" knowledge, and also to the distinction between implicit and explicit knowledge. Most of our cognitive know-how – whether sensorimotor or verbal – is implicit, and only reverse-engineering by cogsci can make it explicit.)
他有能力做到這一點。("He has the ability [能力] to do this.")
他不知道他怎麼能做到這一點。("He does not know how he is able to do this.")
In this case, it is this alternative English word, “know-how” for the Chinese word “ability” 能力-- that sounds like a Chinese-style compound! Usually it is the other way round, with the Chinese word being a compound and the English word a singleton (like “ability”). I don’t think the English compound “know-how” has quite the same flavor as 能 plus 力: Does it?
The duck example made me think of the way we touched on T3 and T4 in class. There is a continuum in the distinction between T3 and T4, and in order to reverse-engineer an indistinguishable "duck", the "duck" will end up needing to satisfy T4 rather than T3 in order to include all the relevant components that make a duck a duck. I also think there is a continuum in the accepted variance of behavior. Does an indistinguishable duck have to be a "perfect" duck? Indistinguishability would need to define a strict boundary on the amount of variance allowed before it is considered distinguishable, because ducks do not act in consistent manners. I think a simplified version of a T4 duck would depend on the amount of variance indistinguishability would allow.
Turing-indistinguishability refers to functional (behavioral output) equivalence to human behavior (aka cognition), which can be found in a system that passes the TT. Harnad argues that T4 is not necessary because of this definition: a system only needs functional equivalence to pass the test; any additional structural equivalence (same internal/external presentation) is irrelevant to passing the TT, rendering T4 unnecessary. I may be a bit wrong in this explanation, so please correct me if so: since we are reverse-engineering human cognitive capacity, individual differences in behavior (input/output) are irrelevant, as the capacity to do what humans can do is a generic one (you don't need to be identical in the way you do things to pass the TT; as long as you can produce the appropriate behavioral output you will pass!). Thus, the differences in duck mannerisms do not matter in this case. (Harnad discusses this in some of the replies from last week; they may be more helpful than this to check out.)
So, as long as the system would be able to produce the various "duck" mannerisms (behavioral output), it would pass the test.
Melis, do you really think variance has much to do with it when we T-test Kayla? And why would it make any difference to you if someone like Kayla was just T3 rather than T4?
Darcy, yes, I think "weak equivalence" is all Turing ever meant. What the TT requires is all the capacities.
3.b From what I understand after reading that article, the CRA is far from perfect, but it is not entirely wrong according to Harnad. However, Harnad argues that the TT is the best test that we have so far. It doesn't mean that it explains everything, but, empirically speaking, we "cannot do any better". Also, the reformulation of the tenets in the article made it clearer what the defenders of AI thought. The idea of mental states being computational states, and computational states being implementation-independent, is very interesting.
...not according to Harnad, according to Turing (but I agree...)
I'm not sure I quite understood this reading. From what I understand, the only issue with the original argument was the premises that the CRA was put under, and they just needed to be expanded to bring a better understanding of why the CRA proves that T2 machines cannot pass as having "causal power". I'm confused by what this means for T3 and T4 machines. I would appreciate some clarification!
Please read the other comments and replies on 2a, 2b, 3a and 3b.
DeleteHi Helen,
I was also a bit confused after finishing this reading, but after reflecting and going through some of the skywritings I think I can help clarify your confusion regarding T3 and T4 machines. If I have understood correctly, T2 machines are the only machines "vulnerable" to Searle's CRA for the simple reason that neither T3 nor T4 can be purely computational AND be implementation-independent. But why does this matter?
The importance of these characteristics is explained by Searle's Periscope. Essentially, if we want to experience someone (or something) else's mental states, we must BECOME that other thing or person. Of course, this sounds impossible, but because of the fundamentals of computationalism and the idea that (1) mental states are computational states and (2) computational states are implementation-independent, IF we are able to be in the same computational state as the machine we are trying to be, then (regardless of the hardware/implementation) we should be able to see if it has the mental states "imputed to it." This is exactly what Searle proposed doing in his CRA when he suggested that he would memorize all the rules, the script, etc. and 'become' the machine to see if the machine could truly understand.
Anaïs: Spot-on!
This reading really helped me understand what Searle was actually trying to show while also enabling me to get a clearer understanding of the terms and expressions used in Searle's CRA. This paper is an example of how reformulating and paraphrasing an expression can clear up some ambiguities. For example, the tenet "the mind is a computer program" does seem a bit vague and raises many "what?" and "how?" questions. However, rewriting this as "mental states are implementations of computer programs" gives a much clearer idea. The key word here, probably, is "implementations", and reading this tenet in this reformulated way helped me understand why the CRA was highly criticized and objected to in the first place.
"Implemented" means carried out, executed, like a recipe for cake (but just by manipulating symbols). It's carried out by the hardware, whether it's a computer or Searle. The hardware details are irrelevant.
From what I have understood, implementation is indeed carrying out through symbol manipulation (so computation): rule-based and shape-based, as you said, like a recipe, and semantically interpretable. I believe the details are not relevant hardware-wise but algorithm-wise. This reading further clarified Searle's reading for me as well in that area.
Ariane, that's it.
DeleteThe "Brain Simulator Reply" uses an inference as follows:
If a program (A) can simulate the functional/behavioural capacities of the brain (B) [i.e., the actual sequence of neuron firings], and if understanding (C) happens to be a property of (B), then (A) must have (C).
I understand that Searle does not really question the inference used in the "Brain Simulator Reply." Instead, he focuses on the notion of Strong AI that "we don't need to know how the brain works to know how the mind works," and that a program is independent of how it is implemented (the hardware being irrelevant). In other words, if Searle, implementing a Chinese-interpreting program, fails to understand Chinese, so would any Chinese T2-passing program. However, this does not bypass the fact that the inference used in the "Brain Simulator Reply" is overly inductive. And I wonder if Searle commits a similar, potentially fallacious inference in his CRA:
A non-Chinese speaker (A) can implement a program (B) [Chinese syntactical rule book], and understanding Chinese (C) DOES NOT happen to be a property of (A); therefore, (B) must not possess (C).
Yucen: Too complicated. A computer simulation of a brain would not think any more than a computer-simulated ice-cube would be wet. And if Searle executed the simulations, he would neither understand nor be wet. He would just squiggle and squoggle.
One point made in this reading that confused me was the following: passing the Turing Test does not guarantee having a mind, and failing it does not guarantee lacking one. The part I do not understand is the latter point, about failing the Turing Test not meaning that one lacks a mind. I understood that it calls for functional equivalence between the real thing and the reverse-engineered candidate, but is this related to understanding needing to be a conscious mental state, while the Turing Test does not call for conscious understanding?
Hi Karina, if I understand correctly, computationalism takes the Turing test to be decisive, but it's not; the TT just means we "cannot do better than the TT" empirically. The TT is not a proof of something, e.g. having mental states.
The TT is the best we can do. Kayla is evidence enough, and no better than we have with one another. But physics experiments are not proofs either. Only maths has proofs. Science is just extremely probable and supported by a lot of evidence. There will always be some uncertainty because of the other-mind problem -- and incompleteness unless cogsci solves the hard problem. The Cogito "proves" that feeling exists, but reverse-engineering FEELING is even harder than reverse-engineering DOING.
Since it is quite impossible for us to become another entity, and cognitive science is about reverse-engineering, I am curious about the duck example, where it says that to do so we rely on structural and functional cues. We don't have more information than that and couldn't ask for more. When we agree to ask for LESS, doesn't it automatically put us on the level of a D-test, which is far from the indistinguishability of such a test? If so, reverse-engineering starts from LESS, and since the only information we have is structural and functional, to make the duck more advanced we are allowing fewer degrees of freedom by narrowing in on those levels (structure and function, thereby solving the easy problem for the duck), which is presumably nowhere near "understanding" what it is like to be a duck.
I agree. I am a bit confused by this statement as well. Even more so, adding to your comment, a claim in the reading that I'd like to understand better is the statement: "'understanding a language', surely means CONSCIOUS understanding". Moreover, we are unable to truly "disconfirm or confirm" whether an "entity is in a mental state unless we ARE that entity (which is ultimately close to impossible)". But in this case, I know it was mentioned in class to avoid stating "x can't do this" as we are not that x; however, would it not be clear that a machine will never have conscious understanding when it comes to language? For example, the other day, I was asking SIRI to send a text to the friends I was meeting last night as I was walking [ex: Hey siri text Marina "I will be there in 5"]. Here, when SIRI asked to check whether the message it heard was the same as what I had said, it had been wrong; SIRI had heard something different. In this case, is this an easy example of why, according to the CRA, machines can only simulate a mind but can never be intelligent? That is, machines like SIRI will never have the consciousness to fully understand language, which in turn means never possessing true intelligence?
Monica, D2 (quack like a duck) is functional, D3 (quack and look and walk like a duck) is functional and structural, and D4 (quack and look and walk like a duck, inside and out) is functional and structural, internally as well, just as T2-T4 are. Reverse-engineering everything observable that an organism can DO is functional, and reverse-engineering every observable feature of its body and brain is structural. Nothing is left out except what is unobservable. That's feeling, the hard problem, and it's left out because of the other-minds problem.
Maira, Siri fails T2; Searle's CRA applies to a mechanism that is (assumed to be) able to pass T2. Kayla, texting, would pass T2, but the CRA would not apply: Why not?
Thank you, it's much clearer now.
Searle's paper essentially highlights the limitations of the Turing Test as a way of assessing a program's ability to "understand", seeing as it can be passed without "understanding". Ultimately it felt as though his conclusion refuting computationalism takes us right back to where we began. But Professor Harnad in his reply clarifies his contribution to the overall discussion around simulating mental states: refuting 'pure computationalism' (i.e. showing that cognition is NOT just computing) is not a drawback, but instead motivates new avenues (as mentioned, the "hybrid road of grounding symbol systems in T3"). Although it is still unclear whether it will ever be possible to 'become' any physical entity other than ourselves, what is indeed clear is that a 'thinking being' is not just a computer: computing does not encompass all of cognition. I found this to be a very insightful takeaway from this reading.
Searle shows that computation alone does not work and concludes that cogsci should forget computation and the Turing Test and just go study the brain. That's not the right conclusion: what is? and why?
Forgetting computation and the Turing Test and simply focusing on the brain is not the right conclusion to reach when we are 'recalibrating' how we approach the study of cognitive science. This is because studying the brain empirically (i.e. through brain imaging techniques etc.) will only provide information on which parts of the brain are allowing you to perform a cognitive task, and still does not explain how and why the brain does this. Based on lectures and the other readings, we understand that not all of cognition is computation, and, although there are many upsides to advancing neuroscientific research, there are still things we don't know about why and how we are able to perform certain cognitive tasks.
This article pieced together many concepts that we've discussed in class but were, until now, evading my understanding. For example, I now have a better grasp of what Searle's Periscope is and how it offers a solution to the Other Minds Problem from a computationalism perspective.
I really enjoyed the introductory anecdote about how destruction is not necessarily futile. I think it's important to realize that, much like physically reverse-engineering a T3-passing robot will entail countless cycles of construction and destruction, so will philosophizing/theorizing the idea behind such a machine.
Something that puzzled me, however, with regard to paper 3b is one of Professor Harnad's comments in forum 3a. It reads:
"although we 'do' feel, feeling is not an action; it’s not something we do; it’s a state; a state that we are in, a state that it feels like something to be in".
This comment puzzled me because: if feeling isn't something we're doing, how is it that we're conscious of our feelings? Since article 3b asserts that we must be conscious of mental states for them to be mental states, how is 'feeling' a mental state?
From what I understand, a feeling is not an act, or a 'doing' act, because it is simply an internal state that happens to us. We can't choose whether we feel the way we can choose whether we do something. Emotions, pain, all qualia aren't discretionary like actions, so they're not 'doing.'
Polly: Jacob's reply is correct. And even the feeling of doing something intentionally, i.e., the feeling of "free will," is just a feeling, unobservable to anyone except you (Cogito). Your DOING is observable to others, but what it FEELS like is not. That's part of the other-minds problem.
But the problem of free will, very interestingly, also touches on the hard problem (which is to reverse-engineer and explain feeling, causally): feeling feels causal to each of us, yet there seems to be no way to explain how or why.
[Note that this has next to nothing to do with the woolly weasel-word "intentionality," which just means "aboutness," or "intended meaning": "What I had in mind (i.e., what I meant) when I said 'bank' was not the place you put your money but the land edge of a river."]
Reformulating the tenets of "Strong AI" (computationalism) further solidified Searle's point in the CRA. Computationalism tells us that our brains (the hardware) do not matter since all we do is just computations (software). But this does not make any sense. The software cannot run without hardware. Software necessitates there to be hardware. Otherwise, it would be the same as me stating that unicorns exist, even though they do not exist physically. I can state that they exist (software), but without a physical proof (hardware), my point becomes moot.
Hi Alexei,
From my understanding, computationalism states that our mental states are purely a product of computation. However, it is not stipulating that the hardware is nonexistent. In fact, in this paper Prof Harnad tweaks the second tenet of strong AI to clarify that mental states (the software) have to be physically implemented, but their output (mental states) is in the software, and the brain is irrelevant to describing mental states. This allows us to conclude that different hardware can generate similar (or even identical) mental states.
Alexei, you are misunderstanding hardware-independence (which is not something Searle misunderstood; it’s really part of the nature of computing). It’s not that the software (the algorithm) can be executed without hardware! It’s that the physical details of the hardware are not relevant to computation. The algorithm is a software property, not a hardware property. Many very different hardwares could execute the same software (e.g., Searle, executing the same T2-passing software as the computer, both passing T2).
Rosalie, the first half of your comment is right, but the second half sounds a bit garbled…
This reading emphasized the dynamics between cognition researchers and scientific debate through the literature. I found it very interesting in revealing the way researchers interact with each other's writing and theses. This reading states that the Turing test is the best empirical test we have with regard to our question of "what is cognition?". Harnad nuances this by explaining that it is no guarantee, but it is to this day simply the best option we have. However, I didn't quite understand the next bit, about computationalism having eschewed structure, and that as a result, this only leaves function.
"This does not imply that passing the Turing Test (TT) is a guarantor of having a mind or that failing it is a guarantor of lacking one. It just means that we cannot do any BETTER than the TT, empirically speaking. Whatever cognition actually turns out to be -- whether just computation, or something more, or something else -- cognitive science can only ever be a form of "reverse engineering" (Harnad 1994a) and reverse-engineering has only two kinds of empirical data to go by: structure and function (the latter including all performance capacities)."
I was also struck by the notion that the Turing test is simply the best empirical test that we have at the moment to see if a subject has a mind and is capable of cognition, since it implies that in the future, better tests could exist for determining this question. What I think that the first part of the passage you’ve highlighted means is just this, that because we have yet to fully understand cognition, we can do no better than the TT right now to determine whether or not the subject has a mind, and it can still be right or wrong depending on the participants. I was also a little confused by the reverse engineering part however.
I think what it means by "computationalism having eschewed structure, and that as a result, this only leaves function" is that computationalism concerns the functional relations between inputs, outputs, and internal states, which are all computational. The physical implementation of the computer program doesn't matter because the computational state doesn't depend on its implementation, according to computationalism. Therefore, they do not care about structure but only function.
I also have a question. If all we do is reverse-engineering, and we agree that we can only achieve structural and functional equivalence, but in the end we are still not sure whether something we created has a mind, then what is it all about?
I agree with Nadila: computationalism is only concerned with function because computational states are implementation-independent. Basically, the physical details of the hardware are irrelevant, as long as the “correct” software is being implemented. Searle uses this tenet in his CRA, as it importantly means that a human can be an implementation of a computer program. Therefore, structure (physical implementation) is not important to computationalism, while function remains important. Nadila - that is something I have been wondering as well. I think that if reverse-engineering can only result in certainty of equivalent function and structure, but not mentality, it is still just as important. I say this because even with other humans, we are unable to break the “other minds” problem and know with certainty that other humans have cognition. Essentially, our reverse-engineering will have resulted in something indistinguishable from another human, so we would be as sure about “it” having cognition as we are about any other human.
Each of you has only sorted things out partially, and it's partly my fault because in the 3b reading I used the weasel-word "function", which I no longer use (but fortunately not the even worse weasel-word "functionalism").
You can't go wrong if you sort out the available observable data as observable DOINGs: The organism's body can do some things and not others. That's behavior, which is observable, and includes verbal capacity (T2) and robotic + verbal capacity + some observable structure (the body) (T3). Then T4 includes not just the doings of the body (T2, T3) but also the doings and structure inside the body, and inside the brain.
It was Turing who suggested that T2 to T4 was all the observable data there was, and would be, and that’s true. No further observable data are in sight, or imagination. (Do you have any candidates?)
But all that just applies to the easy problem of DOING, because FEELING is unobservable (the other-minds problem). (Feeling does have observable correlates, but that’s not the same thing as observing the feeling, and the correlates are not always reliable.)
Searle was only addressing T2 (words in, words out), and only in the special case of computationalism, the hypothesis that computation alone can pass T2 (e.g., in Chinese). Searle shows that even if T2 can be passed by computation alone, it would not understand, because Searle could execute all the computations without understanding Chinese. And the reason he would know that is because it FEELS like something to understand Chinese; so even if all the squiggles he was manipulating to pass T2 were interpretable by, and understandable to, real Chinese pen-pals, Searle himself would not be understanding them.
Because of the other-minds problem, it is normally impossible to know for sure whether any entity feels or does not feel. But in the special case of computationalism, it was possible: why? how?
So forget about "function" and think of it only in terms of observable doing-capacity, and T2, T3 or T4. But forget also about any of these as being doable by computation alone: simulable, yes, because of the power of computation and the Strong CTT, but that would just be squiggling and squoggling, not passing T2, T3 or T4, i.e., not the capacity to do anything a human thinker can do, indistinguishably from a human thinker, to a human thinker.
"Because of the other-minds problem, it is normally impossible to know for sure whether any entity feels or does not feel. But in the special case of computationalism, it was possible: why? how?" Searle knows he doesn't understand Chinese because he knows what it feels like to understand something. But the problem is not about how we know other people have cognitive states. In the case of robots and computers, he disregarded the other-minds problem because he/we know that their processing is syntactic, so it doesn't make sense to attribute intentionality to machines.
The point of this article is to elucidate the CRA's import and to restrict the conclusion initially drawn by Searle on its basis. What we can take from the CRA is that Searle's periscope allows us to observe any purported cognitive system we can be, and as we can be a purely computational implementation of a hardware-independent program, we can observe this system from the inside. The CRA, resulting from periscopic observation, entails limits to a purely computational approach: alone, it cannot account for mental states such as understanding. The crucial point is that the CRA does not preclude computationalism from having explanatory value, despite its incompleteness. Searle's intuition is that a complete explanation must lie in the broader "hardware" of the human body and brain, but it would be a mistake to discount computationalism's methods completely and look for a complete explanation in this broader hardware. Where Searle's intuition leads us is towards a hybrid view which includes a functional account of the broader hardware to remedy pure computationalism's limitations.
You are mixing up "computation" and "computationalism." Please say what each of these means, and then restate what you were trying to say above.
Also, you are mixing up two different meanings of "hardware": (1) the hardware that is executing the computation (which is not relevant) and (2) noncomputational, analog structures and processes other than the hardware that is executing computation: examples are temperature, humidity, size, weight, shape, movement, charge, valence…
Computation is what we have defined before: the manipulation of symbols according to specified rules making up a program. Computationalism is the theory that cognition is computation.
I distinguish hardware senses (1) and (2) as simply "hardware" for sense (1) and "broader hardware" for sense (2).
My point was that Searle's periscope allows us to test computationalism as a theory and define some of its limits (it does not account for perceptible mental states), but this does not entail that computation is irrelevant to cognition.
I appreciate the clarification and translation of the term "strong A.I." into computationalism in this paper, since I think that this was probably one of the weakest aspects of Searle's initial argument in "Minds, Brains and Programs", due to his poor definition of the subject. I think that the third point of the argument as a tenet of computationalism is especially important, since it makes it clear that "there is no stronger empirical test for the presence of mental states", implying that the Turing Test is the most decisive test in this context. It is possible that there may be a better test, yet to be postulated or currently unknown, for determining whether or not a machine can be considered a "strong A.I."
If a stronger test were possible, could you speculate about what it might be like?
This reading was the one that finally made the relevance of the T3, T4, and T5 Turing-type tests click for me. I found while reading Searle that if I were to come across a robot such as he had described, able not only to pass T2 but also to interact with the world in a relatively human-like way, it would in fact be somewhat convincing to me (more so than Searle seemed to think), particularly if I were to discover that it has within it a computer that looks exactly like a functioning human brain running the program that causes the robot to behave in the ways it does. For me, introducing sensorimotor capacities and dynamic interaction with the world appears not only to change our intuitions about a system we are interacting with, but may also have implications for the type of the system itself, including whether it is in fact entirely computational.
But how could just a computer, computing (squiggling), have sensorimotor interactions in the world? How can it pick up an apple and peel it? That would need T3, and T3 is not just a computer. The computer is just squiggling, even if the squiggles can be interpreted as neurons interacting. Think (again) of the simulated ice cube. And if the computation is transmitted to the gloves and goggles of a VR, that's not just a computer any more. (And it's not passing T3.)
What Searle senses in the Chinese room is that he feels like he doesn't understand. This was a very clever approach to Turing's imitation game, as a computer is not able to justifiably tell us that it feels, but Searle can. So, Searle is thus able not only to show us that he can pass the Turing test, but also to tell us that he has no feeling that he understands what took place. I find his thought experiment to be greatly amusing as it seems like such a "granny" argument at first, which many "soft-headed friends" accept. The argument further pushes for complete dismissal of computation being part of cognition, against which Harnad pushes back by reinforcing an aspect of the Systems Reply. Through this paper it became evident to me how contributions from giants in the field of science only redirect our resources towards their compelling ideas, but it's the refinements from critiques that truly guide us towards the conclusions that we can draw from their contributions.
But are you still a simulation in the matrix, Sepand?
I cannot agree with the claim that the expressive power of T2 is equivalent to, or even draws upon, our full cognitive capacity. While the importance of language in humans is huge, there is also so much more that is missing, namely sensorimotor capabilities. This leads me to think T3 (or T4, because I am not yet sure how to distinguish them precisely) is required. In reaction to the statement "There are still plenty of degrees of freedom in both hybrid and noncomputational approaches to reverse-engineering cognition without constraining us to reverse-engineering the brain", my intuition is that we should look to reverse-engineer the brain to reverse-engineer cognition; I would go so far as to say this view is probably also held by many people at McGill working in this field, at least it seems this way from the past courses I have taken.
ReplyDeleteYes, T4 is the prevailing rule at McGill, but read Fodor (4a) and don't forget that T3 and T2 are part of T4 too. So how are we doing on T2 and T3 so far?
DeleteKayla, although she is just as hypothetical as Searle's T2-passing program, has the sensorimotor capacity: Now explain why she would also need T4 capacity -- and how to reverse-engineer T2 and T3 capacity from the brain.
What you would need to find is something like the reason we have for explaining why T3 capacity is needed to pass T2 (symbol grounding) -- but in this case a reason why T4 is needed to pass T3.
I can't think of one, and I actually think it's not true. You don't need all of T4 to pass T3. Some of T4 is vegetative. And the test of what in T4 is needed to pass T3 is... T3!
This reading was interesting and helped to further explain and elaborate on Searle's thought. It adjusts Searle's argument to make it more defensible, stating that mental states are produced by programs that do not necessarily have to be enacted by a specific type of machinery. This leads to the conclusion that there are many avenues for exploration of cognition that Searle excludes in his assertion that the only way to study cognition is through studying the physical brain. As such, it seems important to ask which methods are best for studying which questions. Furthermore, it helped explain the idea of Searle's Periscope by connecting the implementation-independent nature of programs to Searle's ability to understand his own mental states.
ReplyDeleteCan we be conscious without emotions/feelings?
ReplyDeleteA conscious state is a state that it feels like something to be in. An unfelt state is not a conscious state. Emotions are all felt states. But most felt states are not emotions: seeing green, feeling a smooth surface, hearing an oboe, moving your arm, feeling warm, feeling tired: all feel like something, but they are not emotions.
DeleteI thought Searle’s description of understanding was straightforward enough in the reading in 3a, but the distinction made between conscious and unconscious understanding in this reading has made me think it might not be as straightforward as I thought. This was particularly intriguing to me because of the emphasis on how, in the CRA, Searle doesn’t feel like he understands Chinese. The feeling, to me, seems integral to Searle’s Periscope, as (if I understand it correctly) putting oneself into the state of implementing an algorithm should thereby put one into a mental state of which feeling should intrinsically be a part. Unconscious understanding, on the other hand (like the reading’s example of typing a phone number you can’t consciously recall), doesn’t feel like anything.
ReplyDeleteI think that distinction was also really helpful in analyzing the Systems Reply and taking that argument further. It seemingly strengthens that argument by claiming unconscious understanding, since there wouldn't exactly be a way to argue against it other than redirecting the focus to what really matters (conscious understanding).
DeleteI understand from this piece that we can never experience another entity's mental states directly, because we can never BECOME that entity. But Searle's Periscope says that, although we can never become another entity, if there are mental states that consist purely in being in the right computational state, then we can verify whether or not the entity has the mental states imputed to it by getting into the same computational state as the person or entity in question. If, however, mental states cannot be captured by simply being in the right computational state, then this would be wrong to begin with. I think it's impossible ever to be in the same state as another entity, because not all of cognition is computation; and how could we even prove that one is in the same computational state as another's mental state?
ReplyDeleteThe other-minds question is whether they are in any felt state at all.
DeleteI'm having trouble understanding this sentence: "SUPPOSE that computationalism is true, that is, that mental states, such as understanding, are really just implementation-independent implementations of computational states, and hence that a T2-passing computer would (among other things) understand." What do you mean exactly by "implementation" and "implementation-independent"? I think what you're getting at is that cognition is equivalent to the implementation of computational states, and in that case, a T2-passing computer could be described as understanding. Is that right?
ReplyDeleteNo, I'm saying that computation has to be implemented, i.e., executed, otherwise it's not computation. But the physical details of the hardware that executes it are not relevant. Lots of different hardwares can execute the same algorithm.
DeleteI found that this reading complements the prior reading (“Minds, Brains, and Programs”) while also providing good examples of T2, T3, and T4. The reading divides Strong AI into three premises, which in my opinion (perhaps a little biased by the recommended readings) are very “crazy”. The article does a good job of explaining that the mind is not a computer program, but what I found interesting, and maybe agreeable, is the premise that mental states are implementation-independent implementations of computer programs (the difference between hardware and software). This concept is new to me, but it is definitely in my head now, and I see reasons to agree with it.
ReplyDeleteIn class, we briefly mentioned Searle's Chinese Gym argument, which simulates neural networks. Searle argued that interconnected nodes like these cannot have actual "understanding," and I'm curious about how this argument was put forward and possibly argued against. Could we please elaborate on this topic a bit?
ReplyDeleteHi Jenny, I’ll do it, but Searle’s argument here is as silly as his claim that everything is a computer. It’s just an oddity for those who are interested.
DeleteNeural nets are like ice-cubes. Real neural nets are interconnected sets of physical units, passing activations back and forth between units that superficially resemble neurons. Simulated neural nets are computational models of the real nets (i.e., squiggles and squoggles that are interpretable as neural nets). Whatever the real one can do with its input and output, the simulated one can do too (especially if its input and output are just symbols too).
Searle’s argument against understanding in the real neural nets was that it’s obvious that if they could pass T2 they would not be understanding, any more than real people in a gymnasium, passing messages back and forth to one another, like neural nets, would be understanding, or any more than the Chinese Room in the CRA was understanding.
That was all that Searle said: “It’s obvious.”
Now, “obvious” is not an argument, as the CRA had been. It’s just a Granny objection. And if that had been all there was to the CRA, the CRA would have been nothing but a Granny objection too.
But it’s easy to turn the “Chinese Gym Argument” into the CRA:
As I said, there are real, physically interconnected neural nets and there are computational simulations of them. Suppose the real neural nets (even if implemented as people passing notes back and forth in a gym) could somehow manage to pass T2: starting with Chinese symbol input, then a lot of interactions among the people in the gym, ending with Chinese symbol output that passes T2. Searle’s Periscope would not work on that, because Searle cannot himself become a gym-full of people, any more than he can become a bunch of interconnected neural-net nodes.
But since this is still about T2, which means words in and words out, if the neural net, or the gym-full of people could pass T2, then so could a computational simulation of the neural net, or the Chinese Gym: Same Chinese messages as input, and same Chinese responses as output. In other words, the simulated neural net or simulated Chinese gym would be weakly equivalent (I/O equivalent) to the real net or gym.
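To make “weakly equivalent (I/O equivalent)” concrete, here is a minimal sketch (purely hypothetical Python, not anything from Searle or the target article): two implementations whose insides are completely different but which map every input to the same output. A T2-passing system would of course need open-ended verbal capacity, not a three-entry toy; the point is only about the input-output mapping, not about what, if anything, is going on inside.

```python
# A toy illustration of weak (I/O) equivalence: two internally different
# implementations, same input-output mapping. (Hypothetical example.)

def reply_via_table(message: str) -> str:
    """One implementation: a pure lookup table (rule-book squiggle-matching)."""
    table = {
        "ni hao": "ni hao",
        "ni hao ma": "wo hen hao",
    }
    return table.get(message, "dui bu qi, wo bu dong")

def reply_via_steps(message: str) -> str:
    """Another implementation: different internal steps, same I/O mapping."""
    if message == "ni hao":
        return "ni hao"
    if message == "ni hao ma":
        return "wo hen hao"
    return "dui bu qi, wo bu dong"

# Weak equivalence: the same output for every input, despite different innards.
for msg in ["ni hao", "ni hao ma", "zai jian"]:
    assert reply_via_table(msg) == reply_via_steps(msg)
```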
And then Searle could use his Periscope on the computational simulation, once again himself passing T2, but able to report that he does not understand. Searle’s Periscope works on the simulated neural net.
Three interesting details:
(1) If you look at all the research literature on what neural nets can do, it is all based on computational simulations of neural nets. No real nodes or connections, just squiggles and squoggles. Neural nets are really just algorithms -- and they don’t have to be implemented by distributing a bunch of real interconnected units in real space. The simulation can produce the very same output to its input as the real net can. And in the case of T2, this means that if the real net/gym can pass T2 -- symbols in and symbols out -- so can the simulated net/gym. (A minimal sketch of a simulated net as just a learning algorithm appears after point (3) below.)
(2) Searle doesn’t even seem to understand what neural nets can do, and how. They are learning algorithms. They learn to categorize inputs based on their features. They would be useful parts of a T2 algorithm, but they too are just algorithms (when simulated). In fact, since part of T2 is the capacity to learn, they would already have had to be in the (hypothetical) T2-passing program in the original CRA for it to succeed.
(3) Another interesting property of neural nets is that they can become Universal Turing Machines (although very inefficiently). So real nets could be used as the hardware for running any software. But, as hardware, they would again be irrelevant to whatever algorithm they were executing (e.g., T2).
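To make points (1) and (2) concrete, here is a minimal sketch (hypothetical Python, not from the article or from Searle) of a “simulated neural net” as nothing but an algorithm: a single perceptron -- plain arithmetic -- that learns to categorize inputs from their features, symbols in, symbols out.

```python
# Minimal sketch: a "simulated neural net" is just an algorithm.
# A single perceptron learns to categorize inputs from two features.
# (Toy, hypothetical data; nothing here is from Searle or the article.)

training_data = [
    # (feature_1, feature_2), category
    ((0.9, 0.8), 1),
    ((0.8, 0.9), 1),
    ((0.1, 0.2), 0),
    ((0.2, 0.1), 0),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(features):
    # The "net" is nothing but arithmetic: a weighted sum and a threshold.
    activation = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if activation > 0 else 0

# Perceptron learning rule: nudge the weights whenever the prediction is wrong.
for _ in range(20):
    for features, target in training_data:
        error = target - predict(features)
        weights = [w + learning_rate * error * x for w, x in zip(weights, features)]
        bias += learning_rate * error

# After learning, the algorithm categorizes new inputs by their features:
print(predict((0.85, 0.75)))  # expected: 1
print(predict((0.15, 0.25)))  # expected: 0
```

Whether the “units” are people in a gym, physical nodes, or these few lines of arithmetic makes no difference to the input-output performance; that is all that points (1) and (2) turn on.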
(Exercise: why would this trick not work for T3? Why couldn’t a computational simulation of Kayla’s body and brain simply be inserted into the T2 algorithm as just an additional algorithm, the way we inserted a simulated neural net?)
Would this be because T3 requires sensorimotor capacities and the algorithm alone wouldn't have the physical hardware required to produce reactions? Basically in the case of T3, hardware is not irrelevant?
DeleteThis paper was helpful for clarifying the terms and conditions necessary for Searle’s argument to hold. By redefining Strong AI and how it relates to computationalism, it becomes much clearer how the scope of Searle’s argument depends on understanding being a conscious state. Searle’s Periscope uses the implementation-independence property of computationalism to use a mental-state-transparent entity (in this case a human) as the computational device and verify the mental state that results from the computational input. This scenario used language as the input and output of the machine, and in order to understand language there must be conscious understanding. The paper provided an example of unconscious understanding (dialing a phone number through muscle memory rather than recalling it), but I wonder whether there is a more precise way to determine what is conscious vs. unconscious understanding. Why can’t dialing this phone number be conscious understanding as well? The person dialing is conscious that they want to call someone specific, but they just happen to recall the muscle memory of the button-pressing sequence rather than the numbers themselves.
ReplyDeleteOne very important point made in response to Searle’s argument is that we can never speak to another entity’s experience or understanding unless we become that entity itself. So when Searle tries to refute computationalism by saying that he does not understand Chinese yet can execute behaviors that make it seem as if he does, it’s a bit reductionist, because it doesn’t consider the complexity of understanding or the different components that combine to form our mental states and thus intentionality. For instance, we know that on a neurobiological level our brain does perform some symbol manipulation in the form of sensory transduction, so to deny the computational capacities of the brain entirely would be to deny the contribution of these sensory-transduction manipulations as they are hierarchically transformed to build up our cognition. Sure, it MIGHT be the case that our intentionality requires something in addition to computation, but it remains a fact that our brain does perform computations.
ReplyDelete