Searle, John R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3(3): 417-457
This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences: (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4.
Please post early in the week, not just before the next lecture, or I won't have time to reply.
And please read all the preceding commentaries and replies before posting yours, so you don't repeat what has already been said.
I enjoyed how this article synthesized the previous pieces we have read thus far. Of note, I felt it helped me gain an understanding of the true difference between weak and strong AI, the key being that weak AI simply simulates whereas strong AI actually emulates. This difference originally seemed like a semantics game, but by paralleling it with cognition and the brain I gained a significant appreciation for the distinction.
Similar to you, I was initially a bit confused about these two definitions. However, I now have a better understanding, so I'd like to go into more detail about the difference, based on how I interpret it.
In the reading, weak AI is associated with the “principal value of the computer in the study of the mind in which it gives us a very powerful tool. Ex: enables us to formulate and test hypotheses in a more rigorous and precise fashion”. In other words, weak AI is only able to run a certain task upon an algorithm or such. It cannot, however, think or behave cognitively. Whereas strong AI is something that can be equivalent to us, aka be able to perform the cognitive tasks we constantly exhibit. This interpretation comes from “the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.”
Furthermore, the paper states, “In strong AI, because the programmed computer has cognitive states, the programs are not mere tools that enable us to test psychological explanations; rather, the programs are themselves the explanations”. This extends to the idea that strong AI does not yet exist, since we have not cracked the code of how to build something indistinguishable from us. However, when that day comes, the programs would be the “explanations,” for perhaps it is then that we will have solved the hard problem.
“Strong AI” is exactly the same thing we have been calling “computationalism”: “Cognition is just computation.”
I agree! I find myself understanding more about the intentionality behind symbol manipulation and how strong AI cannot have this intentionality. Even though humans and machines can be quite similar, and humans are in fact a type of machine in a way, it's the small blurring of the lines that makes all the difference when it comes to cognition and what it means to "understand" what is happening or what inputs you are receiving.
Please read prior replies about "intentionality" and "machine".
Same here, this reading was a step forward for me in terms of understanding the weak/strong AI analogy that had already been explained in class in different terms. This is definitely a stepping stone for understanding the bigger picture of this class as well, in terms of weak AI providing a tool for psychology and strong AI being the psychological explanation, as opposed to assisting the explanation. From class, I had understood strong AI as being computationalism, and that cognition is computation, but hadn't put two and two together in the larger scheme of things.
What are:
1. Weak and Strong Church-Turing Thesis
2. Computation
3. Computationalism
4. Weak and Strong Equivalence
5. Weak and Strong AI
6. Computer Simulation
Everyone in the course should know all these things by now.
Also:
a) Symbol
b) Symbol Manipulation
c) Machine
d) Turing Machine
e) Universal Turing Machine
f) Algorithm
g) Implementation-Independence
h) Symbol-Interpretability
i) Easy Problem
j) Hard Problem
k) Other-Minds Problem
l) Reverse-Engineering
m) Turing-Testing
n) T2, T3, T4
o) Doing
p) Feeling
q) Observability
r) Virtual Reality
s) Certainty
t) Uncertainty
u) Information
v) Cogito
w) Cognition
x) Causation
y) Explanation
z) Mechanism
Searle demonstrates here that the belief that an appropriately programmed computer is a mind, therefore that a computer can explain cognition, is erroneous. This is because, according to him, it is fundamentally impossible to create intentionality artificially as AI cannot duplicate the causal features of the human brain, in that it only uses a formal program defined by form, rather than by content (like intentional states). Moreover, Searle dismantles the appeal of AI as a duplicator of the mind rather than a simulator, represented by the equation that the “mind is to the brain as the program is to hardware”. Indeed, unless we have the dualist belief that the mind is independent from the brain, a program independent of any realization cannot reproduce/explain our mental life.
“Intentionality” is an empty weasel-word. It just means understanding. If you understand English, you understand “A zebra is a striped horse”, but if you don’t understand 斑馬是條紋馬 it’s just a string of meaningless symbols. It feels like something to understand. Searle, even if he were manipulating the symbols according to the rules (and passing T2), would not be understanding Chinese. That’s the Chinese Room Argument showing that Computationalism (Strong AI) is wrong: Cognition (in this case understanding language) is not just computation.
I'm still confused by the meaning of understanding in this paper, even after reading the passage supposedly clarifying this concept in the reading. Could anyone rephrase what is meant by understanding?
斑馬是條紋 -- did you understand that?
“A zebra is a striped horse” -- did you understand that?
That's the difference. (1) You know what the English (content) words refer to; you could point them out. It feels like something to understand them. (2) But it feels like looking at meaningless symbols if you don't understand Chinese.
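To make the "meaningless shapes" point concrete, here is a minimal sketch in Python (the rule table and the Chinese strings are invented purely for illustration; a real T2-passing program, if one existed, would be vastly larger). The program pairs input shapes with output shapes by rote; nothing in it connects either string to zebras, stripes, or horses.

# A toy "Chinese Room": incoming symbol strings are matched to outgoing symbol
# strings by rote rules ("if squiggle, then squoggle"). The entries below are
# invented for illustration; the executor never interprets the shapes.

RULEBOOK = {
    "斑馬是條紋馬嗎？": "是的。",
    "你好嗎？": "我很好。",
}

def execute(input_symbols: str) -> str:
    # Look up the input shape and return the paired output shape.
    # Whether run by a computer or by Searle in the room, this is all that happens.
    return RULEBOOK.get(input_symbols, "？")

print(execute("斑馬是條紋馬嗎？"))  # prints 是的。 with no understanding anywhere in the program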
Does Searle not have a point in arguing that computers are not intentional in the sense that they cannot initiate actions, or execute functions truly through their own volition? I agree that intentionality is a word tactfully used by Searle, because when you ask yourself whether a computer has intentions it leads you to agree with him. But a computer can only really do things that it is programmed to do by a human. Perhaps the question should not be one of understanding but of free will? I understand this may not be as pertinent to the notion of cognition, but it has certainly led me down a rabbit hole.
See replies on “intentionality” and “free will”.
Reading this article helped me understand better the arguments used to refute the AI affirmation that machines can understand things. Searle seems to have a clear opinion about it. According to him, “understanding” cannot be assessed through the mere behavioural compound (or “result” to use another word). To explain that, Searle uses the example of Schank’s program, where a computer manipulates symbols to give answers that can assess its understanding of a story. The computer could manipulate the inputs of the story and give the same response that a human would give, but that doesn’t mean that the computer really “understood” the story. Therefore, as long as machines don’t have the same “causal powers” that the brain has, understanding (or intentionality) is not possible. There is one thing that I haven’t completely understood, and it is the Systems Reply. In Searle’s response to it, when it is said: “if he doesn’t understand, then there is no way the system could understand because the system is just a part of him”, I am not sure what he is referring to by the word “system”. Is he talking about the cerebral system perhaps?
From my understanding, the system here refers to everything that is needed to interpret the Chinese symbols: the rules, scratch paper, pencils for doing calculations, 'data banks' of sets of Chinese symbols, and the interpreter himself. In this sentence, we've assumed that the entire system is internalized in the interpreter, meaning that since he doesn't understand, the system can't understand either (because there is nothing that the system is that the interpreter isn't). Let me know if that clarifies!
Amélie, that’s right.
Thanks Amélie, that really clarifies things in my mind!
Searle demonstrates that there is an important distinction between the question “Can a machine think?” and “Do all machines think?” To the first, the answer is yes: Searle asserts that the human brain is a machine. To the second, the answer is no. Searle posits that just being a computer is not good enough to qualify as thinking, and for Searle it all comes back to the idea of intentionality. Dr. Harnad pointed out that intentionality is essentially just understanding. Thus, being a machine does not entail understanding, but understanding entails being a machine.
Teegan, Cogsci is about reverse-engineering what kind of machine you have to be to think and understand -- but it can only infer it from what the machine can DO.
“If we could build a robot whose behavior was indistinguishable over a large range from human behavior, we would attribute intentionality to it, pending some reason not to”. I am confused by this quotation from the reading, because it sounds to me like a sort of Turing test that would attribute intentionality rather than intelligence, as the original Turing test should be doing. The reading mentions earlier that there could be two systems that both pass the Turing test, yet only one that understands. Therefore, why would a T4 test that compares a robot’s behavior with human behavior prove anything about understanding/intentionality?
Amélie, “intentionality” and “intelligence” are weasel-words. And Searle is confused about robots, and especially the “Robot Reply” (how?). Also about “causal power.”
To keep it straight, just think in terms of the causal mechanism producing the capacity to DO and the capacity to FEEL (which includes understanding what words mean). That is what cogsci is trying to reverse-engineer.
Searle thinks that only something with the “causal power” of the brain can do and feel, as humans do. So for a robot to feel, it has to have the causal power of the brain; if it doesn’t, then passing the robot T-Test is misleading us: it doesn’t feel, whether it’s T2 or T3.
Don’t wrack your brain to make this coherent: it’s not. Yes, there must be a causal capacity underlying the capacity to feel. But that is just a tautology about causality. Saying that only the brain can have that capacity is just a speculation, whether right or wrong. Saying that only something that has the “causal power” of the brain has that causal capacity is again just a tautology. All this solemn talk is easy as long as no one has yet reverse-engineered T2, T3 or T4.
Searle is right that “Computation alone cannot cause cognition.” But his take-home message – “so just forget about T-testing and computation and go study the brain” -- is wrong, or, rather, completely empty.
From this reading, I think Searle would have suggested that T4 is required for reverse-engineering cognitive capacity, and T3 is not enough. The second reply, "The Robot Reply," essentially suggested a T3 candidate, whereas Searle showed indistinguishability in performance cannot fully explain cognitive capacity, in this case the capacity to understand language. But here it confuses me, because we said in previous discussions that we cannot become a T3 robot, but didn't Searle think that he could? "To see this, notice that the same thought experiment applies to the robot case." Isn't it that, on the outside, a T3 robot does the same things as humans, but internally it doesn't have a mind like ours?
Yes, Searle misunderstood T3 in his answer to the "Robot Reply." A robot is a hybrid computational/dynamic system. Searle's Periscope only works for the computational part. (Why? How?)
If I have understood correctly, Searle’s Periscope only works for the computational part, as he clearly shows that we are not just computers: there must be something else involved in cognition (more than simply computation). However, as a T3 robot, I do more than simply compute; I can “understand” consciously and unconsciously (I believe), and I interact with the world in a sensorimotor capacity.
DeleteSearle’s Periscope does not address the dynamic system as he does not even consider the dynamic system as a whole- in the case of the Chinese room argument, he considers himself independent from the rules and procedures of the interaction. Therefore, the dynamic interaction part of who I am as a robot is not addressed.
The reason Searle can and does ignore the hardware details is “Searle’s Periscope”: how and why does it penetrate the other-minds barrier in the special case of computationalism?
To my understanding, Searle's Periscope penetrates the other-minds problem by generating an implementation-independent system, which is the Chinese-interpreting system in Searle's paper. From a computationalist's point of view, the implementation details of the system do not matter. Thus either a T2 machine or an interpreter following rules could be a system that understands Chinese. What inspired me was this quote from the second reading: "The critical property is transitivity: If all physical implementations of one and the same computational system are indeed equivalent, then when any one of them has (or lacks) a given computational property, it follows that they all do (and, by tenet (1), being a mental state is just a computational property)." Computationalism ignores the details of physical implementation, thus allowing this transitivity property. This means we can see into the mind of a T2 machine doing the same tasks as the interpreter, and the interpreter can easily know that he/she lacks understanding of Chinese, meaning a T2 machine cannot understand Chinese either.
Symbol-manipulators -- whether machines or Searle -- are not interpreters; they are executors.
The main points made by Searle are as follows: the brain is the causal mechanism that allows the mind to exist; the ability to manipulate symbols through the processing of language does not sufficiently demonstrate understanding of its meaning; programs on a computer are built and are bound to their designed structure; and finally, our minds possess the ‘understanding’ of their contents. He concludes by saying that because of its design, a computer program on its own cannot give a system a mind – meaning programs cannot have minds. In response to the many mansions reply (Berkeley), when Searle says that the brain in a sense ‘causes’ the mind to exist, is he trying to suggest that our minds are a result of an operation of the brain? And if such a program were to be developed and could perform this ‘causal mechanism’ that allows a system to perform this operation, would that cause the program to ‘develop a mind of its own’?
See other replies about Searle’s notion of the “causal power” of the brain.
What is “Searle’s Periscope”?
Re-reading the threads above and from the lecture content, my understanding of Searle’s Periscope is as follows. In the paper, Searle’s Periscope proposes that “if there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it's got the mental states imputed to it”. To put it in simpler terms, if I were to put myself in the same computational state as the computer, I should (according to his periscope) also be in the same mental state (if the computer is in a mental state).
This reading made me think about the way we talked about Descartes in class. Descartes points out that we all know when we feel something. The hard problem is explaining how and why organisms can feel, given that they do. In explaining how and why humans do what they can do, we use causality. But in the case of feeling (which is something we do), how can we use causality, since I am the only one who can know that I am feeling? If, for example, Kayla says she can feel because she is feeling, how do I know that this is because of causality, and not maybe a 'reaction' created by the squiggles and squoggles (and not just the squiggles and squoggles themselves) that gave her a script to say that she feels?
The reason for reverse-engineering DOING is because DOINGs are observable – whether T2 (verbal doings), T3 (verbal and sensorimotor doings), or T4 (verbal and sensorimotor and neural doings).
But (and this is just a tiny bit tricky), although we “do” feel, feeling is not an action; it’s not something we do; it’s a state; a state that we are in, a state that it feels like something to be in; it’s not observable by anyone but oneself, and it’s more like something we “are” rather than something we are doing. When I’m running, I’m doing it. But when I’m feeling tired I’m not doing it; I’m just feeling it.
To use an older idiom: running is a behavior, an observable behavior, but feeling is not. Feelings may have observable behaviors and circumstances and neural activity that are correlated with them, but the feeling is not the behaviors and circumstances, they are just its correlates. And the neural activity may be causing the feeling, but we cannot observe how, or why (or even whether – because of the other-minds problem).
Turing says: “Be satisfied with the TT-indistinguishable correlates, since that’s the closest you can ever get.”
In Searle's response to Berkeley it is explained that if Searle memorizes the (hypothetical) Chinese-TT-passing (T2) program and executes it on Chinese input, he, Searle, is simply executing the code. Searle's argument is that executing the code does not produce understanding in him, so not in the computer either. I don't think that real language learning involves just words; it also involves the world that the words are about. T2 can only connect words to words. It is T3 that connects the words to the world. If T2 can be passed by computation alone ("Strong AI"), then I don't think it would be a strong enough test for reverse-engineering cognitive capacity. But if T2 can only be passed by a robot that can also pass T3, would that make T2 strong enough?
See the last sentence of the reply above.
At first thought, Searle’s argument really made sense to me. I tried to adapt this Chinese room scenario to a game of chess in my mind. A computer system might win a chess game against a world champion since it has learned the rules of the game and only performs symbol manipulations to predict the next move of its opponent. In this case, I am also convinced that the machine is not equivalent to a human brain, because I believe there is no evidence that the machine actually had the intention of winning the game, nor that the machine is aware that it has won the game against a human being. But here is where I get stuck: since we still don’t know how people think, how can we prove that machines can’t do it as well, since we don’t know the actual process underlying it?
By reverse-engineering DOING capacity and T-Testing it to see whether you succeeded, and it is indistinguishable.
DeleteI am also confused with the Systems Reply and wanted to think about it since I did some research and found out that it was one of the most common objections being made towards Searle’s argument. From what I understood, the objection being made here is that the system as a whole understands Chinese, even if the person does not, but how can we conclude this? Since the instructions are English, the person is enable to do the manipulations, but how does this relate to understanding the semantics of the written words?
I found the part of the reading where Searle talks about information processing very interesting; I think it helped me understand the ambiguity in the word “information”. However, when he mentioned first order and second order information, I was a bit confused. Would first order information be the syntax, or the “squiggles and squaggles” that the computers manipulate? And would the second order then be the semantics, or the actual meaning behind the symbols? In this case, is it fair to say that Searle believes that computers are capable of processing first order information but not second order information, and that is why they cannot fully explain cognition?
Ignore what Searle says about “information” and “1st/2nd order.” It’s not relevant to his Chinese Room Argument. And ignore everything he says about “intentionality.” He executes the T2-passing program yet does not understand Chinese. That’s all there is to it.
Just to clarify, what I should focus on is not intentionality but causal power?
Causal power to DO (T2 capacity) -- and the absence of understanding (when T2 is passed by symbol manipulation alone).
DeleteWhile I'm not entirely convinced that the CRA successfully refutes computationalism, I do believe that at the very least it poses a formidable problem to anyone interested in Artificial Intelligence – namely, explaining how (if at all) a symbol manipulation system can become aware of what the representations it manipulates are about (i.e., what is their meaning/content). Since the word "awareness" appears here, I cannot help but wonder if consciousness must not necessarily factor in this explanation; after all, intentionality/aboutness has been proposed (e.g., by the phenomenological school) as THE defining feature of consciousness. My question, then, is this: could a system be be capable of "understanding" without being conscious? For instance, can the unconscious part of my mind that is responsible retrieving memories be said to "understand that" it is retrieving memories, or at least "understand how" to retrieve memories?
See other replies about what it means to understand. It feels like something to understand. That's how Searle knows he doesn't understand Chinese.
A conscious state is a state that it feels like something to be in.
There is no "unconscious mind". And internal states that it does not feel like something to be in are not "mental" states; they are cerebral states.
(Except when they mean feeling or felt, "consciousness" and "conscious" and "mind" and "mental" are weasel words. Try out the various combinations of the six to see which make sense.)
I found brilliant the observation that we use the word “understanding” lightly as a society. For something we cannot even explain and justify, it seems so universal. I’ve never questioned the word when used. When we say the oven “knows” when it is preheated or the thermostat “understands” temperature changes, we are extending our “own intentionality” or “purposes” to the tools we have created. In a sense, I appreciate the “systems” contradiction arguing that a machine doesn’t have intentionality, but that, rather, it functions within a broader system of intentionality. Can we say that humans do so, as well? Do we have innate understanding or are we falsely attributed understanding by larger things like the laws of physics, of math, of evolution? The true difference between human brains and computers might not be intentionality or understanding. Is it maybe individuality? I propose that individuality springs from consciousness, and that this is the fundamental quality that machines lack.
I like your point about how human beings also function within a broader system, and that this shapes our understanding of the world. However, I do not necessarily agree with the idea that computers do not or cannot have individuality. Computers are capable of functioning independently, and theoretically can learn new skills and adapt over time. What separates this type of creation of identity from that of humans? Don’t humans also form individuality and identity through learning and independent experiences?
Intentionality, individuality, intelligence, and even consciousness are just weasel-words that mean too many different things to too many different people. Just stick to “states it feels like something to be in”, and understanding is one of them. (Of course, the feeling that you are understanding may be real but wrong: you may be misunderstanding. But that’s not what’s at issue with Searle’s Chinese Room Argument: He feels he is not understanding Chinese – and he’s right.)
Searle’s argument is understandable to me, especially with his counterargument to Roger Schank’s example. However, I found myself considering the problem of other minds. As much as I understand why Searle argues that we cannot be sure that computers are understanding, I do not see why we can be sure that they are not.
This argument is presented in the article and labeled ‘the other minds reply’: "How do you know that other people understand Chinese or anything else? Only by their behavior. Now the computer can pass the behavioral tests as well as they can (in principle), so if you are going to attribute cognition to other people you must in principle also attribute it to computers."
Searle’s reply to this argument is not particularly compelling to me, as he seems to rely on presuppositions as his defense. Is there any other counterargument to this reply, or is this an unsolvable debate because it relates to the ‘hard problem’?
In Searle's view, it is not about "how do we know other people/computer programs have cognitive states?" He is more concerned about what "cognitive states" imply. In other words, "what properties are we attributing to a program/human when it is said to be cognitive?" In his reply, he rejects computational processes for being part of the cognitive state because computational processes can exist without being in a cognitive state.
When considering the problem of other minds in relation to a simulating program, I think the source of confusion may come from the notion of "information processing." As Searle said, "information processing" is used too liberally and causes many misunderstandings. We habitually attribute "information processing" to a computer program in the same sense as how we process information. The nuance is that the former DOES NOT imply intentionality, but the latter DOES. So, when we attribute "information processing" as a cognitive property to a computer program, we need to ask which version of "information processing" we mean.
Kimberley: The other-minds problem cuts both ways: You can't be sure another person feels, and you can't be sure a rock doesn't. Searle's Periscope penetrates it only in one special case: Computationalism. If computation alone can pass T2, then Searle can point out that it would not understand, because computation is hardware-independent. So he can become the hardware that is passing the Chinese T2 program without understanding Chinese (i.e., not feeling what it feels like to understand Chinese). When he memorizes and executes the symbol-manipulation rules, he is the whole "system."
Yucen: using other weasel-words – such as "information-processing" or "cognitive" – does not help. If Searle does not understand when he executes the T2 program, then neither does the computer. (Searle's Periscope.)
Cogsci is trying to reverse-engineer what cognition (thinking) is, by reverse-engineering what thinkers can DO. Searle cannot say what cognition is, but he can say what it isn’t: (just) computation.
Note, though, that Searle does not show that computation can’t be part of cognition: The fact that we can DO long division and factor quadratic equations and solve syllogisms proves that we can and do DO computation. But computation is not all we can do; and it’s not enough to produce language understanding (which is not just “symbol processing”: that’s just syntax).
(Perhaps that’s because syntax is “ungrounded”: for semantics (meaning), words need to be connected to the things they refer to, and that can’t be done by connecting them to more words. Direct sensorimotor interactions with the things in the world are not just symbol-manipulation.)
The most interesting thing that I got out of the Searle paper is the definition of understanding and what it means for us humans to understand the world around us.
I am wondering if there are different degrees of understanding that pertain to different languages. Does my brain think in a particular language? For instance, when I am calculating mathematical computations, I have a tendency to think in French, whereas when I am writing an email or a formal essay, I prefer writing in English. How and why does my brain select a certain language over another? As Searle states: “there are many different degrees of understanding; that “understanding” is not a simple two-place predicate; that there are even different kinds and levels of understanding”. Could it be possible that my brain has attained a certain level of understanding in one language and not the other? Searle says he can produce an output with squiggles and precise instructions (an algorithm), yet he cannot understand the meaning of the interaction between the input and the output. How can I know whether I am not merely applying instructions when I am doing something in a language different from my mother tongue?
Hi Etienne, I think this goes back to the conversation of what understanding is- understanding "feels" like something- you know what the content/words refer to (through sensorimotor ways like Jenny explained above). I guess in this way, there are different levels of understanding that are possible when you are learning a language; you can "feel" what it's like to understand what certain phrases/grammatical structure are like but can't for others if you haven't learned them yet- they just look like meaningless symbols since there isn't yet that connection.
Étienne: We’ll get to what language is in Weeks 8 and 9. It is a code with both syntax and semantics, consisting of subject/predicate propositions that can be True or False, and every human language (not every code) can express any proposition.
About partial understanding and misunderstanding, see other replies. But remember that even misunderstanding FEELS like something. Searle does not understand Chinese at all. What he feels is that he does not understand it. Take him at his word!
Jenny, you’re mostly right.
Searle himself can learn Chinese. And any T2 passer must be able to learn Chinese too (the capacity to learn language, and further language, is part of T2). But if cognition is not just computation, then to pass T2 through computation alone [which requires a T3 robot, because of the symbol-grounding problem] would not be to understand any of the languages. It would just be to manipulate, and learn to manipulate, its words (syntax).
Darcy, yes, understanding can be partial. But that’s not the point here. Searle has zero understanding of Chinese; and he’s not learning to understand Chinese in doing the T2-passing symbol processing. He is just following a recipe for manipulating words.
(I'm not sure why my previous post was deleted, so I'll rewrite what I remember here :) )
When we learn a new language, we begin with the basic symbols (words) and the manipulation rules (grammar). At this stage, we don't have a feeling of understanding and cannot directly interact with the world using this language. We are just manipulating symbols. When we can associate this language with and incorporate it into our other sensorimotor capacities, we shall have a feeling of understanding and of navigating the world through this language. For example, at this stage, we should have no problem relating an actual apple to "pomme." When we prefer a specific language in a particular setting, I think it's fair to assume that we have a more profound sense of "understanding" of that language in that respect.
I, intuitively, agree with Searle's overall arguments about the "current" impossibility of creating a "Strong AI." But I would like to get some clarifications. I understand that Searle is arguing against the idea that a computer may have an inherent understanding of whatever symbol manipulation it is doing. Since knowing how to manipulate a symbol is not the same as understanding what the symbol represents. And this is all based on the idea that computers function only on inputs and outputs.
What boggles my mind the most is that our brains also work on this system. We put our hand on the stove (input) and move it from the stove once we feel the heat (output). In other words, our inputs and outputs rely on the information transmitted by our neurons to other neurons. In neuroscience, researchers have discovered that for a neuron to fire its electrical impulse, it needs to build its potential first, and only then can it send the signal. In other words, it either sends the signal (1) or does not send it (0).
But, at least for me, this is the same idea as a computer's input and output. If our neurons do not receive the required input, they will not send the output. Yet we still feel conscious. And we still not only manipulate the symbols but even understand them.
I hope I was able to express myself adequately. If not, I will try to reformulate my issue.
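A minimal sketch of the fire-or-don't-fire (1/0) picture described above, using a generic McCulloch-Pitts-style threshold unit (the weights and threshold are invented for illustration; real neurons are far more complicated than this):

def threshold_neuron(inputs, weights, threshold):
    # Sum the weighted inputs ("build up potential"), then either fire (1) or not (0).
    potential = sum(i * w for i, w in zip(inputs, weights))
    return 1 if potential >= threshold else 0

# Hypothetical "heat on the hand" input strong enough to cross the threshold:
print(threshold_neuron([1.0, 0.2], [0.9, 0.3], threshold=0.5))  # -> 1 (fires)
print(threshold_neuron([0.1, 0.2], [0.9, 0.3], threshold=0.5))  # -> 0 (does not fire)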
I agree with you in principle that the input and output argument is applicable to both the human brain and to a machine. However, I think there is a clear distinction between the two that Searle is trying to convey. When you remove your hand from the stove, you do not have an internal dialogue because that motion is more of a reflex. In that example, it is very similar to the way a machine would operate. However, in another scenario that is not reflexive and requires thoughts and understanding, Searle's argument becomes clearer. When you are writing an essay, there is more to your action than the input (your finger on the keyboard) and output (a letter appearing on your screen). You have an understanding and idea of what you are writing. The same cannot necessarily be said for a machine.
Alexei, sensorimotor interactions with objects and people are not the manipulation of arbitrary symbols, any more than an ice-cube, melting, is. Nor is learning (especially grounding) a language. According to the Strong Church-Turing Thesis, just about any object or process can be simulated by a Turing Machine, but that does not mean everything is a Turing Machine.
DeleteWhat is computer simulation? And what is the difference between a real ice-cube, melting, and a computer model of an ice-cube, melting?
Kimberly, it’s much simpler than that. Pouring water into a teapot can be modelled by sending symbols to a computer, but there is no tea for you to drink (unless the output symbols are piped to a VR machine – which cannot be just another computer: Why not?).
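To make the point concrete, here is a minimal sketch of what a computer simulation of a melting ice-cube amounts to (the melting rule and the numbers are invented, and deliberately crude): symbols that we interpret as mass and temperature get updated step by step, and nothing anywhere gets cold or wet.

# Toy "melting ice-cube" simulation: just numbers being updated by a rule.
ice_mass_g = 10.0                        # a symbol we INTERPRET as grams of ice
room_temp_c = 20.0                       # a symbol we INTERPRET as room temperature
melt_rate = 0.5 * (room_temp_c / 20.0)   # invented melting rule, "grams per minute"

minute = 0
while ice_mass_g > 0:
    ice_mass_g = max(0.0, ice_mass_g - melt_rate)
    minute += 1
    print(f"t={minute} min: {ice_mass_g:.1f} g of 'ice' left")
# The mass symbol reaches 0.0, but no water exists to pour into any teapot.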
Kimberly: I agree with you that motion can be caused by reflex. However, what if I put my hand on something hot and willingly keep it there, even though it causes me mild discomfort? I fully understand that keeping my hand there will bring me more pain, yet I hold my hand there because of reasons "x," "y," and "z."
DeleteProfessor Harnad: A computer simulation is a computer trying to simulate whatever it was programmed to simulate (in other words, try to reproduce a particular thing that exists in time and space). The point is that even if a computer can produce something, it does not mean that the computer is "conscious."
And now I see where you were going with this... You got me!
Alexei, I still have no sense that you know what a computer simulation is. Please read the other replies. (Find "simul".)
Professor Harnad: A simulation is various squiggles and squaggles that correspond to something in the world (i.e., an ice cube melting will have specific squiggles/squaggles). Since computation is just symbol manipulation, we can assert that computation could also manipulate the squiggles and squaggles, because those are just symbols. A symbol can mean anything, depending on the algorithm a computer uses. In other words, symbols are arbitrary. Needless to say, squiggles and squaggles are also arbitrary.
Putting it all together, a computer simulation is a computer that interprets a squiggle as something that is found in the real world. But there is a problem with that. Since symbols are arbitrary, they need to be interpreted. Computers are, as of yet, unable to make interpretations without having a specific rule book (an algorithm). So we need to rely on the prediction that the computer's algorithm most accurately represents something in the real world.
From my understanding, Searle does not believe programs like AI are capable of being equivalent to the human brain, because they cannot achieve its causal power: "strong AI" runs on programs, and programs are insufficient for reverse-engineering thinking. Drawing from your point on causal power, causal power is our brain's capacity to do and feel, and Searle believes the Turing test and computation should be left behind altogether because they cannot cause cognition. I am confused by what he considers to be a machine, because of his statement that "AI has had little to tell us about thinking, since it has nothing to tell us about machines. By its own definition, it is about programs, and programs are not machines" (14). What is the difference between a program and a machine? Is Searle's belief that a program needs additional support to function, while a machine's functions are automatic?
From what I understand, thinking is a mental state. In Harnad's words, mental states are "implementations of (the right) computer programs." AI indeed has nothing to tell us about the machine, because AI only refers to the implementations of (the right) program (software). In other words, a program has nothing to do with the physical details of its implementation (hardware). A machine can be any causal system that executes and implements the program. Why does AI have little to tell us about thinking? If I understood correctly, thinking is a mental state, and mental states are "implementation-independent implementations of computer programs" (Harnad). So, if AI could tell us anything about thinking, it would mean that the program inevitably possesses the same causal capacities that we do; the ones responsible for our capacity to feel (including thinking). However, if AI could actually do that, there would not have been a "hard problem" in cognitive science.
Computers are machines. So are molecules, monkeys and the moon. A machine is just a causal system of some kind. According to the Strong CTT (= Weak AI), computation can model or simulate any machine. But a computer simulating a molecule is not a molecule (even though it’s made of them), even though it may help us understand how to reverse-engineer the molecule.
This text, along with the emphasis on distinguishing simulations from reality last Friday, helped consolidate my understanding of strong AI as a formal program that is constitutive of the mind, and not simply a computer that generates a perfect replication. His line, “If you can exactly duplicate the causes [of consciousness], you could duplicate the effects,” puts it simply. Searle obviously opposes computationalism, since even if the TT is passed using computation, there is no cognition.
In Searle’s refutation of the combination reply, he explains that we would not attribute intentionality to a robot whose behaviour is identical to a human’s if we know it simply runs a formal program. Is this not begging the question that computers cannot have mental states?
Please read the other replies.
To pass the TT, the candidate has to do anything a human thinker can do: verbally (T2), verbally and robotically (T3), or verbally, robotically, and neurally (T4).
Verbally, we can speak, reply and understand. Normally understanding cannot be tested directly, because of the other-minds problem. But in the special case of T2, passed by computation alone, Searle points out that T2 would fail to understand (Searle’s Periscope).
So computation alone is not enough. (Computationalism is refuted, through Searle’s Periscope.)
If T2 can be passed by a hybrid computational/noncomputational candidate, like T3 or T4, the other-mind barrier remains intact, so we have nothing better to go by than what the candidate can do, which is the basis of the method of reverse-engineering and T-Testing.
I’m intrigued by Searle’s assertion that intentionality is the key to cognition, and that intentionality is inherently tied to the physicality of the brain. Particularly this quote, “Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them,” is interesting to me. The crux of Searle’s Chinese Room argument is that the Chinese wouldn’t mean anything to someone who couldn’t understand it. But I wonder, even though there’s nothing semantic attached to the Chinese for the person in the room, there is to the English instructions, so if provided with different instructions, translations, the person in the room could feasibly learn Chinese. So, given the proper tools, would it be inappropriate to suggest that a computer could learn Chinese the same way a person could (without a human brain)? (Though maybe that’s a misuse of the circumstances and I’m just getting into the symbol grounding problem).
Searle is dismissive of the argument that the AI he argues against is merely the present form of the technology, because he says it “trivializes the project of strong AI by redefining it as whatever artificially produces and explains cognition,” and that if that were the case, his arguments in defense of the Chinese Room would no longer apply. I think overall I agree with Searle’s argument against computationalism, but I’m not sure I agree with his assertion that the only possible AI under that definition (as something that explains cognition) is biologically identical to the human brain.
What you are saying makes me think of a previous reading that talked about the concept of learning. If you remember, the article discussed the possibility of making a computer learn from experiences in the same way a child would learn, by using scientific induction. And your question is a bit similar to that, since it relates to learning a language. Taking into account the other reading, I think that it is not inappropriate to suggest that a computer could learn Chinese the same way a person could (without a human brain), but it would probably learn it differently, since the machine wouldn’t have some features that are relevant in language learning, such as ears (perhaps to recognize different accents, for example). Moreover, in terms of understanding, it is very hard to tell. According to Searle, “instantiating a computer program is never by itself a sufficient condition of intentionality”. In this case, the computer program being learning Chinese the same way a human can learn it, and the machine being the computer, it seems clear that according to him understanding could not happen unless we are talking about a brain or the exact same system as a brain, involving all its features. I think that even though a machine could supposedly learn Chinese, it doesn’t necessarily mean that it would actually understand it.
No translation in the recipe (algorithm), just “If squiggle, then squoggle.” No learning either.
On the T2 capacity to learn language, see other replies, in threads 3a and 3b.
Even though it's my first time reading Searle, his train of thought for the Chinese Room Argument is clear to follow, and with all the previous foreshadowing from class there's no doubt that it's against the thesis of strong AI. Since understanding itself is not observable, there is nothing observable when it comes to what computers understand. As we said, CogSci is reverse-engineering; is Searle's claim, which presupposes the reality and knowability of the mental, similar to this? We need to test whether a computer can do/feel what we can do/feel, and it seems like this goes back into the loop of the hard problem.
All TTs test only what we can DO, not whether or what we FEEL. The assumption is that if we can reverse-engineer the doing-capacity (the “easy problem”), feeling will come with the territory. The hard problem is explaining how and why.
Searle’s Chinese room argument clearly illustrated that there are multiple levels to understanding, where it could have been believed to be categorical (either having or lacking understanding). I particularly enjoyed thinking about how the concept of understanding does not apply to just any case, such as an adding machine which simply follows rules. It was stated that the conjunction of the man who does not speak Chinese and the rules, along with the paper that allows him to follow the rules, is a system that has understanding, despite the man himself lacking the concepts of his communication. However, intentionality is mentioned as being a key condition for understanding, which cannot be had by inanimate objects. Doesn’t this mean that the system together wouldn’t be intentional, as it is (mostly) comprised of inanimate objects such as the pen, paper, and rules? This would involve syntax without semantics, and therefore be meaningless.
What you are saying seems to be a very good summary of the article, and I liked the fact that you mentioned the concept of understanding being more of a spectrum rather than a categorical thing. From what I read in the previous comments, it seems that intentionality is just a "weasel-word" (to reuse Harnad's words) that is actually the same thing as understanding. Therefore, I think that what you are saying is right. Since the system is made of inanimate objects, it cannot create any sort of intentionality. That is particularly highlighted in this sentence: "Whatever it is that the brain does to produce intentionality, it cannot consist in instantiating a program since no program, by itself, is sufficient for intentionality." From what I understand, understanding is really about the "hardware" (the machine in itself), rather than the "software" (the program).
Karina, please read the other comments and replies on “intentionality” (which does not mean “intentional”), feeling, and understanding (including partial understanding).
Charlene, be careful to distinguish the “hardware” of the computer that is doing the computation -- which is irrelevant, because software is hardware-independent -- from other kinds of “hardware,” such as the eyes and hands and body of T3 (and the brain of T4), which are performing sensorimotor and other physiological functions, not computation (symbol-manipulation).
Searle asserts a fatal flaw in strong AI claims through the following thought experiment: Suppose we develop a computer program that takes as input, and returns as output, Chinese symbols in such a way as to pass the Turing Test with a native speaker of Chinese. If we then instruct a person who has no prior knowledge of the Chinese language or alphabet to execute this program as a computer, Searle posits this individual has no more understanding of Chinese as a result of executing this program than they did before. This "periscope" into the internal state of a computer is made possible by the strong AI claim that mental states and cognition result from the execution of hardware-independent programs; thus, a human is an adequate piece of hardware on which to test strong AI's claims, and by virtue of their consciousness the human in question can observe what mental states emerge from the program's execution. What Searle finds through his periscope is that program implementation is not a sufficient explanation of the conscious mental states he calls intentionality, and thus that some other feature of the brain, namely the hardware in his view, must explain it.
Hadrien: Good summary, but read the replies to the other commentaries: See the two senses of "hardware."
Sure, the first sense of hardware is the physical machine needed to implement a certain hardware-independent program. The second sense, which Searle proposes will lead us to the explanation of mental states, goes beyond this to include the rest of the body as a physical system.
Searle just tells us to forget about computation and study the brain. Is he right? We'll be talking about that in Week 4.
While reading Searle’s arguments, I found that one particular line in his reply to the “Systems Reply” caused a great deal of confusion for me. He states:
“The Chinese subsystem knows only that ‘squiggle squiggle’ is followed by ‘squoggle squoggle.’ All he knows is that various formal symbols are being introduced at one end and manipulated according to rules written in English, and other symbols are going out at the other end.”
From what I understand, the basic premise of this statement is that since the English subsystem actually knows what the words in the rules refer to, the Chinese subsystem is only then able to produce outputs– by following instructions presented through the English subsystem. But then, can we not argue that understanding one subsystem THROUGH another subsystem (ie. understanding Chinese squiggles and squoggles through the English subsystem) is a form of understanding? Perhaps I am misunderstanding Searle’s arguments…
Anaïs: There is no translation in the (hypothetical) Chinese T2-passing program. The computer does not speak English; the recipe just says "If squiggle, then squoggle." It's all meaningless shapes, and shape manipulation rules. Chinese T2 programs are not English/Chinese translation exercises.
That makes sense, but perhaps I didn't phrase my confusion correctly in my first comment… I posted my skywriting before going through the second reading for the week. Upon going through it, I realized the confusion I was trying to express originally was similar to the one you faced– essentially, I agree with Searle's CRA but I also agree with the Systems Reply. I was trying to reiterate that if we have a system and within that system we have subsystem A (non-understanding) and subsystem B (understanding)… could it be possible that the entire system is deemed understanding because it contains parts that are understanding? And more specifically, could it be deemed understanding because subsystem A functions through subsystem B?
What does all this “system” talk actually mean? I know what computation is: A symbol system, with rules for manipulating the symbols, being executed by hardware, whose physical details are irrelevant: it’s the software that counts.
But when you talk about a “system” with “understanding subsystems” and “non-understanding subsystems” it’s not clear what you are referring to. There can be a brain, with computational and noncomputational components, but that’s not the same thing; and the only thing we’ve learned from Searle is that the computational components, alone, do not understand.
Kayla is a T3 robot. She understands. And she may be part computational inside. But the computational part does not understand: Kayla does.
[The “System Reply” about computation and cognition is wrong. The “Robot Reply” is wrong too, because it thinks that a robot is just a computer plus noncomputational external peripherals (sensors and effectors), with computation still doing the cognitive work, inside. But there can be a lot more going on inside Kayla’s head than just symbol manipulation.]
I think Searle's paper is interesting because it uses a clear analogy (the CRA) to explain why a digital computer program cannot think and, thus, why strong AI can tell us little about mental states. I'm perplexed, however, by this line in the concluding paragraph of Searle's paper: "Since everything is a digital computer, brains are too". Is Searle trying to say here that everything - including the brain - is a machine, or does this quote imply that Searle believes in the Simulation Hypothesis (i.e. our entire existence is a simulated reality)? I found this comment a strange way to end the paper because the prior arguments have an orderly flow, whereas this quote took me by surprise.
What I understand from Searle’s concluding paragraph and his claim that the brain is a digital computer is that both are capable of doing finite and discrete computations, just like what a Turing Machine does. In a broad sense, both can be said to have similar functions in terms of computation and symbol-manipulation.
But Searle then goes on to describe that although the brain is a digital computer, as is everything else, it is not only that. Its causal capacities and intentionality (understanding) cannot be emulated by a computer program.
I do agree with you though that this quote seems a bit unexpected compared to the rest of his paper and claims against strong AI and computationalism.
Polly, it just means that Searle doesn’t fully understand what a computer is, or does. It’s the common error to conclude that since all things can be modelled or simulated computationally (the Strong CTT), this means all things are computers. (But no, he’s not so far off as to lapse into matrix musings!)
DeleteMathilda, it’s worse than that, since Searle said not only that every object is a Turing Machine, but that every object is a Universal Turing Machine (digital computer), which would (if it made sense) that every object is every other object. Can you sort out the nonsense now? It’s not the Matrix, but just as silly.
(I think Lilliputians sometimes bumble into saying something that is correct, and even correct for the right reasons -- but they don’t fully “understand” what they’ve said!)
I enjoyed this reading as it discusses the difference between strong AI and weak AI. According to the text, weak AI focuses on the idea that computers are useful in explaining things. On the other hand, strong AI lies on the idea that computation is cognition. Searle believes the mind exists as an effect of the mechanism of the brain. He argues that it is impossible to produce intentionality in a computer. Indeed, intentionality or understanding is what makes our mind able to understand what symbols it is manipulating. A machine without intention would just come back to being a program that can manipulate symbols very well. He states the idea that AI cannot reproduce the causal mechanisms of the brain. Thus, he is invalidating the concept of strong AI as in computation without understanding would just be a manipulation of symbols. Thus, cognition is not solely computation, according to Searle.
“According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion. But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.”
You're basically right, but please read the other replies on 2a, 2b, 3a, 3b to deepen your understanding.
Does Searle peel back the umbrella of capacities to which we ascribed computation? Whether our communications are solely computation is put into question, because we apparently understand what is communicated rather than simply “squoggling when we see a squiggle.” This is portrayed when Searle cleverly shows that no understanding of Chinese is necessary to pass T2 in Chinese. My curiosity here is with the communication aspect and how it ties into understanding. What if we were to allow Searle to continually accumulate rules and data banks of Chinese symbols (as he suggests in response to the systems reply) for weeks, months, or years? What would come of it? (1) Would he be indistinguishably communicating with Chinese speakers? (2) Would conversations be more coherent? (3) Could there be such a comprehensible transaction of symbols that it elicits feelings? For the first two questions I would be compelled to say yes. My yes to the third question is of course contingent on some of the sensorimotor capabilities that Searle, a human machine, possesses. This ties into the robot reply, for which I think there is a more compelling argument now than the one posited in the paper in the 80s. If a robot is permitted to see a human’s behavioural response to its replies, it could purportedly not only mimic the behaviour linked with the exact responses but also connect the behaviour with the semantics of the language. This would mean that there is no reason to refute the idea that the robot is understanding.
I think this is a great point: a human actually performing the Chinese Room process outside of a thought experiment might actually be able to (and given enough time probably would) understand the semantic aspects and purpose of the computation. Also, the act of Searle engaging in the intentional act of following the rules of the Turing machine calls into question whether he can truly be said to communicate without intention in this case. He does understand something and acts with intention; it just isn't the content we typically consider communication.
Sepand, please read the other commentaries and replies. You’ve got some of the pieces, sort of, but you are not putting them together understandably (nor understandingly). See the discussion points about “imitation” and “language learning” (which is not what Searle is doing in the CRA). “Indistinguishably communicating with Chinese speakers” is what passing T2 means, from the minute you start executing the algorithm, if the reverse-engineering was successful; and robots are no longer just T2 but T3; and you seem way off target on semantics…
Jacob, you too need to read more about the difference between T2 and T3, and between intention (irrelevant) and intentionality (a weasel-word)…
Searle, with the Chinese Room thought experiment, argues that formal mechanisms are not sufficient for understanding; thus computers, which use only formal mechanisms, cannot understand language.
Then, what does a person lack when they merely learn language by 'word manipulation' in the Chinese Room experiment? We have to see that humans, when learning another language, learn more or less formalized grammars and how to use them. But learning only grammar is not enough. When a human learns the word 'apple', they are presented with a picture of an apple in the real world, and so they associate the meaning of 'apple' with the fruit apple. As Frege famously said, a word has both a sense and a reference; the spelling 'apple' is its sense, but the reference is the actual apple in the world. Understanding a language requires understanding not only the sense of words but also their reference, often learned as equivalences between real-world entities and phonetic representations. For a computer to understand a language, apart from grammatical rules, it needs to be able to learn such equivalences and learn new ones from its inputs by self-modifying its behaviour according to those inputs (which presupposes that it has adequate inputs from the outside world).
Turning back, consider a person in a room. If we present them with, e.g., '斑马 = (a picture of a zebra)', '条纹 = (a picture of stripes)', '马 = (a picture of a horse)', and a series of formalized rules that can generate a new sentence, I think we can expect them both to generate the correct sentence in Chinese and to understand why it is correct.
First, you are speaking about T3, not T2 or computation as soon as you talk about any sensorimotor contact with the objects words refer to. And the sense (meaning) of a word is certainly not the (arbitrary) sound of the word nor the written shape of the word (which, even in Chinese, is mostly arbitrary: we will talk about the transition from “iconic” resemblance [between a word and its referent] to the arbitrary shape of symbols in Chapters 6, 8 and 9).
The iconicity of Chinese characters is interesting, but Chinese children first learn language (and the referents of their words) orally, so the resemblance between the written word and its referent comes too late. But it does give Chinese some interesting features not shared with languages that have non-iconic written forms.
Afterthought: Some of the combinatory nature of the components of Chinese words is present in the oral form, but I don't know how much. So even the oral version might help children to guess the meaning of new words. We will discuss this again when we discuss the dictionary after Week 5.
Searle takes for granted that he would be able to communicate successfully, as if he were a native speaker, without understanding Chinese. However, perhaps one cannot simply memorize a list of responses to prompts or questions and speak like a normal person. If it were not possible, this would not provide any real support for computationalism, but rather for the legitimacy of the Turing Test as an indicator of an intelligent, thinking machine.
Additionally, would it even be possible for the brain simulator to simulate the exact pathways and individual cells that native Chinese speakers use without being capable of understanding Chinese in the first place? If my understanding is correct, to be able to simulate these connections, the brain must first establish these connections by learning or knowing Chinese.
I’m not sure that when Searle speaks in the first person he is necessarily referring to his own ability to perform the task of manipulating the list of Chinese symbols; rather, he is only proposing a hypothetical scenario in which a person does memorize the list of responses and is able to answer the questions as if they were fluent in Chinese. I think you’re taking the Chinese Room example in a literal sense, whereas Searle’s point is that if the human had the same processing capabilities as a computer and got to a point where they could respond as if they were a native Chinese speaker, they would still not “understand” what they are saying.
Elena, please read prior replies as these points have been discussed before:
1. T2 is not just question-answering
2. A T2-passing algorithm is not memorizing questions/answers.
3. What is an algorithm?
4. What is simulation?
3. An algorithm is a set of rules or steps that is followed when manipulating symbols.
4. A simulation is a system that is built to represent or behave like another system. So, in an example in which we have isolated the exact neural pathways that are required to produce understanding, they (or correlating pathways) would first need to be simulated on a computer. If it were not even possible for the computer to make these connections, the computer's lack of understanding of Chinese could not be used to argue that cognition is more than computation.
A simulation of an ice-cube melting is computation (symbol manipulation) whose symbols can be interpreted as the properties of an ice-cube, melting.
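To make that concrete, here is a minimal sketch (my own toy illustration, not anything from Searle or the course readings) of such an “ice-cube” simulation in Python. Every step of it is just symbol manipulation: numbers updated by a rule. We can interpret the numbers as grams of ice melting at room temperature, but nothing in the computer gets cold or wet.

```python
# Toy "melting ice cube": pure symbol manipulation that we choose to
# interpret as mass (grams) melting at a given air temperature (Celsius).
# The melt-rate constant is made up for illustration, not physically calibrated.

def melt(mass_g: float, air_temp_c: float = 20.0, minutes: int = 60) -> float:
    """Return the remaining 'mass' after the given number of simulated minutes."""
    MELT_RATE_G_PER_MIN_PER_DEG = 0.05  # hypothetical coefficient
    for _ in range(minutes):
        mass_g -= MELT_RATE_G_PER_MIN_PER_DEG * air_temp_c
        if mass_g <= 0:
            return 0.0
    return mass_g

print(melt(100.0))  # prints 40.0 -- interpretable as "40 g of ice left"
```

The “melting” exists only in our interpretation of the squiggles; that is the difference between simulating an ice cube and being one, and, by the same argument, between simulating understanding and understanding.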
In the Gedankenexperiment, which compares a native English speaker's true “understanding” of English with that same speaker's mere processing of Chinese characters, as a metaphor for the Schank program running on a machine, I definitely agree with Searle's distinction that the subject cannot be said to “understand” Chinese. But what also intrigued me is how this relates to the black-box nature of cognition. Part of the reason there is an ontological discussion to be had in the first place is that we still have yet to understand the processing the computer is doing; Searle even admits that he has not proven false the claim that “more formal symbol manipulation... distinguishes the case in English, where I do understand, from the case in Chinese, where I don't”.
It feels like something to understand sentences in a language.
Searle understands English.
He does not understand Chinese.
A T2-passing algorithm is neither translating Chinese into English nor is it a manual for teaching Chinese to an English-speaker.
What is the role of Searle’s Periscope in all this?
I have some questions regarding simulation vs. duplication. From what I understand, Searle believes simulation is not duplication. However, I'm a bit confused about how A can simulate B. Does the simulator need to be more complicated than the simulated object, or does it not matter? I'm not sure if it's correct, but in my opinion the simulator needs to have more states than the simulated object, in order to encode the simulated object's input and state into the simulator's state. Thus, the simulator needs to be more complicated than the object/system being simulated. But how can a program simulate human cognitive capacities if that's the case?
Hi Nadila, from my understanding it does not matter what the simulator is composed of, so long as the output proves to be interpretable. The examples Searle gives of stones, toilet paper, etc. emphasize this, while also serving to make the distinction between simulation and duplication: those things don't have intentionality/understanding anyway, so their output could only be a mimic of human behavior.
Set aside cognition and consider the computer simulation of an ice cube, and a real ice cube. Once you understand the difference between the simulated ice cube and the real ice cube, apply it to the question of the TT-passing computer and a person who understands Chinese. In both cases, the simulation is just interpretable squiggles and the real thing is not.
Searle's references to syntax and semantics helped me better understand the symbol grounding problem. When he referred to being able to understand English instructions for manipulating Chinese symbols without understanding Chinese itself, my naïve response was to think that perhaps his ability to understand the formal rules dictating the manipulation of the symbols was the real relevant understanding here, and that the "meanings" of the symbols were somehow less important. However, Searle's clarification of definitions of "understanding" (however weasel-wordy) in more literal terms (as in understanding a story) was helpful to me in conjunction with his dichotomy of form versus content; I am hard-pressed to disagree with him that mental states involve not just form but content. If I would like to believe the key tenet of computationalism, that I should be equivalent in all major properties to any machine capable of the same formal symbol-manipulation tasks, but would also like to believe that my ability to literally understand the meanings of words and feel pain is in some way epiphenomenal (for Kid-Sib: a by-product), it would be difficult to accept these two without a glaring contradiction, since the second premise would appear to be an explicit difference in properties.
Searle is just squiggling symbols, not understanding Chinese. You can believe him, because it feels like something to understand.
A computational state is a formal state (squiggles). A mental state is a felt state.
Everything we can do or feel is a “byproduct” of (i.e., caused-by) an internal state (of the brain, or of Kayla’s T3 internal mechanism [if she can feel]).
Apart from that, what does it mean to say that pain is an “epiphenomenon” (if it really hurts) – other than as just a confession of not having a solution to the hard problem of explaining causally how and why organisms feel? (In other words: “epiphenomenon” is yet another weasel-word.)
In the reading, Searle contradicts McCarthy’s statement that “machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance”. I agree with Searle’s opinion that a machine like a thermostat does not have beliefs in the same sense that humans have beliefs. However, this is beside the point of strong AI and the Turing Test. Strong AI means that cognition is computation, which does not imply that any sort of computer has mental states such as beliefs. We would not expect simple machines, such as thermostats, to be able to pass the Turing Test (as mentioned in class, T0 is irrelevant and involves such simple machines as these). In order to reverse-engineer cognition, the “right” computational program must be considered, arguably in addition to sensorimotor capacities.
It feels like something to believe (e.g., to believe that a zebra is a striped horse or that 2+2=4), just as it feels like something to understand those statements. An unfelt “belief” is not a belief. (So “belief” too is a weasel-word except if the belief is felt.)
[If you want a common thread across all the weasel-words – intentionality, epiphenomenon, understanding, mind, unconscious mind – it’s always connected somehow with feeling, and hence the hard problem.]
This reading helped me better understand Searle's claim that if all you have is symbol manipulation, understanding is not there. If you're really understanding something, then you're FEELING what it feels like to understand it. Since computation is hardware-independent, Searle could be the hardware that passes the Chinese T2 program, and he would merely be doing symbol manipulation, without understanding any Chinese. If Searle doesn't understand when executing the T2 program, then we can't say that the computer does either. This shows that cognition is much more than computation, but we can certainly agree that computation is part of cognition.
Summary ok.
Like many of the other students I am convinced by the CRA that cognition is not just computation. I find Searle's refutation of strong AI to make total sense, and his analogy of taking on the functions of a computer program himself to be sound in showing that neither he (in this case) nor a program (ever) actually understands the symbols it manipulates. I do not buy the "robot reply", as I do not see what the addition of sensorimotor capacities has to do with feeling or understanding. I find the "combination reply" convincing in that you are essentially dealing with T4 here, but only if a "computer programmed with all the synapses of a human brain" can be taken to mean a nervous system functionally equivalent to a human's. However, I do not believe the "combination reply" refutes Searle's main claim that cognition is not computation.
I hope the ones who are satisfied with the CRA are not just the ones who believed the punchline all along anyway, for Granny reasons. The robot reply is wrong if it just refers to eyes and arms as add-ons. T3 is the right rebuttal (Searle can’t “become” T3 the way he can become T2 – why not?). T4 is not the simulation of a brain, because that would be just computational again. Both T3 and T4 are hybrid computational/analog.
The article ‘Minds, Brains, and Programs’ argues that strong AI does not exist, based on its lack of intentionality (understanding). Searle states the difference between strong and weak AI and, using the Chinese Room experiment, explains that even if a machine successfully manipulates Chinese symbols, this is only manipulation and not in fact understanding. He goes on to defend his view that strong AI does not exist by giving counterarguments to multiple replies to the experiment. I found the article amazing, since he is able to prove, at least in words, that computers will never achieve the capability of understanding and therefore are far from being compared to a mind.
Rather than "far from" let's just say "not enough".
“As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states.” From this quotation, Searle’s argument is made very clear: he argues against cognition being simply computation, and essentially that simulation is empty. In the robot reply and his subsequent response, I believe what he says is that the added sensorimotor features of the robot do not add to the understanding of the robot, as it is still the result of a program. In the case that a robot could ever be said to have intentionality (since this cannot be attributed to its program), would it have to be another part of its build, not linked to the program? (Which is what we have yet to find out through reverse-engineering...)
Searle can show that computationalism ("cognition is just computation") is wrong with his CRA, using his Periscope. He cannot do the same with a T3 (or T4) robot. Why not?
I believe it is because a T3 or T4 robot would not be just implementation-independent computation: passing those Turing tests would depend on something about the robot's makeup that Searle wouldn't be able to just take over and become, the way he can execute an algorithm.
In this paper, Searle refutes strong AI and computationalism (the thesis that computation alone is sufficient for cognition) through the Chinese Room Argument. Searle proposes the following scenario: suppose a program that takes in Chinese and outputs Chinese is executed by a person who has zero knowledge of Chinese, following directions on how to respond. The person can execute the program in a way that is indistinguishable from the way a native Chinese speaker would respond, without understanding any Chinese, because they are merely matching the input to the rules provided to return the output. In this scenario, we are able to use the human as a “periscope” to see the program’s internal state, where we can see that the human can perform all the operations without understanding. He uses this argument to show that implementing a program is not enough to produce intentionality, and he believes that causal powers are required in order to produce intentionality.
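As a toy illustration of the rule-matching described above (a hedged sketch of my own, not Searle’s actual rule book, and nothing remotely close to a real T2-passing program), the “program” can be written as pure data: a table from input symbol strings to output symbol strings. Whatever executes it, a Python interpreter or Searle with a pencil, is only matching shapes; no understanding of what the symbols mean is required.

```python
# Hypothetical, drastically simplified "Chinese Room" rule book: a lookup
# table from input squiggles to output squoggles. The entries are placeholders;
# a finite table like this could never actually pass T2.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    "斑马是什么？": "斑马是有条纹的马。",
}

def room(input_symbols: str) -> str:
    """Match the input against the rule book and return the listed output.

    The executor (this function, or a person following the printed table)
    never needs to know what any of the strings mean.
    """
    return RULE_BOOK.get(input_symbols, "请再说一遍。")  # default: "please say it again"

print(room("你好吗？"))
```

Because the rules are just data, any hardware that carries out the matching produces the same outputs; that implementation-independence is what lets Searle himself stand in for the computer and report, through his Periscope, that no understanding is going on.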
I find that Searle's response to the systems reply heightens the need for and relevance of the higher-level Turing tests such as T3 and T4. He points out that a fault of the TT (specifically T2) is that despite having no understanding of Chinese, he could, with the appropriate set of instructions, pass it; in this way he emphasizes its inadequacy in actually exhibiting the quality we are in search of. He emphasizes the importance of causality, particularly for intentionality, and that this is what gives rise to "the powers of the brain" (which is interesting language). In this way, he breaks down the idea of the brain as a computational machine utilizing formal states and adds an extra layer, one that involves intentional mental states, an even harder quality to emulate, as we ourselves aren't so sure of the "powers of the brain" that give rise to these capacities.