Saturday, September 24, 2022

1a. What is Computation?


What is a Turing Machine? 
Computation is Symbol Manipulation 
What is a Physical Symbol System?


Optional Reading:
Pylyshyn, Z (1989) Computation in cognitive science. In MI Posner (Ed.) Foundations of Cognitive Science. MIT Press 

Overview:  Nobody doubts that computers have had a profound influence on the study of human cognition. The very existence of a discipline called Cognitive Science is a tribute to this influence. One of the principal characteristics that distinguishes Cognitive Science from more traditional studies of cognition within Psychology is the extent to which it has been influenced by both the ideas and the techniques of computing. It may come as a surprise to the outsider, then, to discover that there is no unanimity within the discipline on either (a) the nature (and in some cases the desirability) of the influence or (b) what computing is -- or at least what its essential character is, as this pertains to Cognitive Science. In this essay I will attempt to comment on both these questions.



Alternative reading for points on which you find Pylyshyn heavy going. (Remember that you do not need to master the technical details for this seminar, you just have to master the basic ideas, which are all clear and simple.)

Milkowski, M. (2013). Computational Theory of Mind. Internet Encyclopedia of Philosophy.


Pylyshyn, Z. W. (1980). Computation and cognition: Issues in the foundations of cognitive science. Behavioral and Brain Sciences, 3(1), 111-132.

Pylyshyn, Z. W. (1984). Computation and cognition. Cambridge, MA: MIT Press.


141 comments:

  1. I quite enjoyed the "What is computation" reading; it helped clarify the halting problem, which confused me a lot before. From what I understand in the readings, computation will always have a changing and varied definition depending on what scientific discovery comes next. What we do know is that it is in flux and that it is a discrete process. I'd love to consider what cognition could mean in different parts of the world as the definition gets slightly altered in different regions.

    "his proof that, by this means, a single machine--a universal machine--is able to carry out every computation that can be carried out by any other Turing machine whatsoever."
    I'm confused by this part of the Turing reading, does this mean this hypothetical machine can read the program of every other Turing machine? Is this a parallel to the modern computer being a (mostly) universal machine being able to read the programs of other machines such as old consoles or run calculator programs, etc.?

    Thank you for letting me know.

    ReplyDelete
    Replies
    1. Computation was defined, independently, by Church, Kleene, Post, Gödel and Turing, who were trying to formalize what mathematicians were doing when they “computed.” Their five formulations were different (recursive functions, lambda calculus, Post machines, Turing machines), but they all turned out to be equivalent (and Turing’s was the simplest way of putting it). The definitions do not depend on science. They have not changed since the 1930s, and every example of computation proposed by mathematicians has turned out to fit them. What is Turing's definition?

      Any entity (human, nonhuman or artificial) that is executing an algorithm (a symbol-manipulation recipe) is computing, and hence a "computer" (a "Turing Machine"). But what we mean by a computer today is a human-made machine that can execute any algorithm (a “Universal Turing Machine”). Unlike an ordinary Turing Machine, it can store programs and data. Your laptop is a Universal Turing Machine. A desk calculator is not.
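      The stored-program distinction above can be sketched in a few lines of code. This is a toy illustration only (the function and program names are made up): a single "universal" interpreter executes whatever program it is handed as data, whereas a desk calculator is hard-wired to one job.

```python
# A toy sketch of the stored-program idea: one "universal" interpreter
# that takes a program as data, versus a hard-wired, single-purpose device.
# All names here are invented for illustration.

def run(program, x):
    """Execute a stored program: a list of (operation, argument) pairs."""
    for op, arg in program:
        if op == "add":
            x = x + arg
        elif op == "mul":
            x = x * arg
    return x

# Two different "machines" are just two different programs fed to
# the same universal interpreter:
double_then_inc = [("mul", 2), ("add", 1)]
triple_minus_two = [("mul", 3), ("add", -2)]

print(run(double_then_inc, 5))   # 11
print(run(triple_minus_two, 5))  # 13
```

      The interpreter itself never changes; only the stored program does. That is the sense in which your laptop, unlike a desk calculator, can "become" any machine.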

      Delete
  2. I found the Turing Machine reading quite interesting as I've never learnt about computation and computer science before. It's fascinating how extraordinarily simple the Turing Machine and its instructions are, but how it invaluably serves as the blueprint for modern digital computers. I particularly found the traffic light example of a computer program that never terminates/halts intuitive. However, I did have trouble understanding the last section where the author stated that x can be input, despite the restriction that the input inscribed on the tape must consist of a finite number of symbols. If someone can clarify this for me, I would really appreciate it!

    ReplyDelete
    Replies
    1. Hi Alexander,
      From what I understood, the reading states that we can input x as long as it is computable, even if it has an infinite number of decimals, because we have a way of representing it in a finite form. Indeed, we can input x not in its infinite decimal representation, but rather in the form of a program that calculates x. In this way, we are able to input the infinite computable number x in a finite form, since the program itself is finite. Basically, if we want to input pi but cannot inscribe this infinite number on the tape, we could instead inscribe a program that calculates pi as our input, since this program can be written in a finite form. (Such programs do exist: there are short algorithms that generate pi to any number of decimal places.) Let me know if this clarifies?
      Amélie

      Delete
    2. Hi Amelie, you are right, a finite-length algorithm (or programme) that can generate an infinite-length real number (such as pi), running as long as we like, is the substitute for any infinite-length real number. See my reply to Alexander below.

      Delete
    3. Hi Alexander,
      Real numbers (unlike integers and rational numbers) can have an infinite number of decimal places. Computers are finite. But, for example, pi is an irrational number (3.1415926535…) that can be generated by a short algorithm: dividing any circle’s circumference by its diameter. The algorithm calculates pi out to as many decimal places as your computer has the time to calculate.
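      A minimal sketch of that point: a finite recipe that generates pi to as many places as you give it time for. (This uses the Leibniz series, one of many pi algorithms; it converges slowly, but what matters here is only that the recipe itself is short and finite.)

```python
# A short, finite program that approximates the infinite decimal pi.
# Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...

def approx_pi(n_terms):
    """Sum n_terms of the Leibniz series; more terms, more accuracy."""
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

print(approx_pi(100_000))  # closer and closer to 3.14159... the longer it runs
```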

      In a field called “complexity theory” there is an interesting similarity between (1) such short algorithms for computing infinite decimals and (2) scientific theories that predict and explain experimental data (which can also be treated as infinite strings of numbers). The scientific theories can be treated as algorithms that can generate (predict) the data. The theory’s complexity is calculated as the number of bits in the shortest algorithm that can predict all the data. If that shortest algorithm is as long (i.e., has as many bits) as the data themselves, the data are random. You can’t crunch them into a predictive theory simpler than the data themselves. No explanation there, just the data.
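      A rough, hedged illustration of that compressibility point: patterned data can be regenerated from a tiny recipe, while (pseudo)random data resist compression. Here an off-the-shelf compressor (zlib) stands in, loosely, for "the shortest algorithm that predicts the data."

```python
# Patterned vs. random data under compression: a stand-in for the
# complexity-theory point above. zlib is only a crude proxy for the
# "shortest algorithm", but the contrast is vivid.
import random
import zlib

patterned = b"01" * 500                              # 1000 bytes from a tiny recipe
random.seed(0)                                       # fixed seed for reproducibility
noisy = bytes(random.randrange(256) for _ in range(1000))

print(len(zlib.compress(patterned)))  # far fewer than 1000 bytes
print(len(zlib.compress(noisy)))      # about as long as the data itself
```

      The patterned string "crunches" down to almost nothing (a short theory exists); the noise does not (no theory, just the data).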

      (But all these details about computing, halting, real numbers and complexity are far afield from this course in cognitive science. If you're interested, you have to take a course in mathematics, computation or philosophy.)

      Delete
  3. I found the “What is a Physical Symbol System?” reading quite insightful and intriguing, especially in relation to how we define intelligence. Indeed, the physical symbol system hypothesis, although possibly false, states that intelligent action can be performed by any physical symbol system. For example, this would imply that DNA replication is a form of intelligent action since it copies DNA material, hence manipulating physical symbols (nucleotides). This is striking to me, as I always perceived intelligence as something mental and related to thought, rather than able to be purely bodily/physical.

    “Symbols in a physical symbol system are physical objects that are part of the real world, even though they may be internal to computers and brains”. I’m not sure I understand what is explained here; how can an object be both part of the real world and internal to computers and brains? Computers and brains are themselves a part of the real world, so this could allow physical objects to exist in both, but then it isn’t clear to me what is specific about physical objects? In other words, I’m struggling to conceive the notion of a non-physical object, since it seems to me that all objects that could be used as symbols are part of the real world. If anyone can clarify, thank you in advance!

    ReplyDelete
    Replies
    1. See the reply to Helen above. An important but subtle point is that it is not correct to say that every machine (every causal mechanism) IS a Turing Machine. Rather, a Universal Turing Machine (e.g., a digital computer) can SIMULATE any machine.

      A Turing Machine is a physical system that is executing an algorithm. An algorithm is a set of rules (a recipe) for manipulating symbols. Symbols are arbitrary shapes (e.g., “0” and “1”), manipulated according to their shapes, not their meanings.

      The Turing Machine is a machine with a finite number of internal “states.” It reads an input tape, one symbol at a time, and its only actions are READ, WRITE, ADVANCE-TAPE, CHANGE-STATE or HALT. It does what it does depending on what state it is currently in, and what symbol (if any) is currently under its input reading head.
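      That description can be sketched directly in code. This is a minimal toy simulator, not a full Turing Machine (no infinite tape, no leftward moves in the example), but it uses exactly the operations named above: READ, WRITE, ADVANCE-TAPE, CHANGE-STATE, HALT.

```python
# A minimal Turing Machine sketch: finite states, a tape, and only
# READ / WRITE / ADVANCE-TAPE / CHANGE-STATE / HALT.
# The example machine flips every bit on the tape, then halts.

def run_tm(tape, rules, state="start"):
    tape = list(tape)
    head = 0
    while state != "HALT" and 0 <= head < len(tape):
        symbol = tape[head]                       # READ
        write, move, state = rules[(state, symbol)]
        tape[head] = write                        # WRITE (and CHANGE-STATE above)
        head += move                              # ADVANCE-TAPE
    return "".join(tape)

# Rules: in state "start", flip the symbol under the head and move right.
flip_rules = {
    ("start", "0"): ("1", 1, "start"),
    ("start", "1"): ("0", 1, "start"),
}

print(run_tm("10110", flip_rules))  # "01001"
```

      Note that the machine only consults shapes ("0", "1") and its current state; the symbols mean nothing to it.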

      An adding machine and a heart are both machines. What they do can be done by a Turing Machine that is executing the right algorithm (or “program,” the formal recipe for doing what the machine is doing). The Turing Machine just manipulates symbols. But if it is executing the right algorithm, then some of its symbols and states can be INTERPRETED, in the case of the adding machine, as numbers, added, and in the case of the heart, as a heart, pumping blood.

      What an adding machine does, or what a heart does, can be DESCRIBED by the right algorithm. You can think of that algorithm as the “recipe” that the machine is executing on its input. It is a little misleading to say that a heart IS a Turing Machine. The heart can be described (and explained) as a machine that is executing an algorithm: there is (at least one) algorithm that can be interpreted as being equivalent to what the heart is doing. The algorithm (if it is correct) can predict what the heart will do, depending on its current internal state and its current input. And the algorithm can thereby explain how the heart is doing what it is doing.

      But the heart itself is not manipulating symbols; it is pumping blood. In contrast, a Universal Turing Machine (e.g., a digital computer), unlike the heart, really IS a symbol-manipulator, and its states and symbols can execute any algorithm (whether one describing an adding machine, an automobile, a heart or a brain). So a Universal Turing Machine can SIMULATE any machine (including a heart) by reconfiguring its own internal states to execute the algorithm that (correctly) describes and explains how the heart works. But the Turing Machine’s symbols and symbol manipulations have to be INTERPRETED (by us) as representing blood, and valves, and flow. The algorithm itself is just symbols.

      Just the way this verbal explanation is just symbols.

      We will be saying a lot more about all this in this course. For now, you should just be beginning to understand the difference between (1) a machine (like the heart), (2) the algorithm (the Turing Machine) that describes and explains (symbolically) what the heart can do, and how, and (3) the digital computer (the Universal Turing Machine) that can execute any algorithm, and can hence simulate (symbolically) any machine.

      The difference between computation (the symbol-manipulating Turing Machine) and “computationalism” (the theory that cognition is just computation), is like the difference between saying that (1) the heart (like all machines) can be DESCRIBED and explained by an algorithm (a symbol-manipulating Turing Machine) and saying that (2) the heart IS a Turing Machine, manipulating symbols.

      P.S. An adding machine, unlike a car or a heart or a brain, really IS just a symbol-manipulation machine. Can you explain how and why?

      Delete
    2. Thank you for your response, this does clarify a lot. I am not sure about the adding machine, but is it because it is described purely in terms of the algorithm? We don't have to interpret a Turing Machine’s symbols and symbol manipulations for them to represent anything in the adding machine, since the machine already functions with these symbols and symbol manipulation. Also, the adding machine only performs one precise algorithm, which is why it is a Turing machine rather than just able to be described as such.

      Delete
    3. Yes, the adding machine is a computer hard-wired to execute just the algorithm for adding. And it really just manipulates symbols.
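      One way to see why: addition can be carried out as pure shape-based rewriting, with no appeal to what the symbols mean. A hedged sketch, using unary notation (three strokes plus two strokes):

```python
# Addition as pure symbol manipulation: in unary notation, adding is
# just deleting the "+" shape. The machine never needs to know that
# the strokes stand for numbers.

def unary_add(s):
    """Rewrite e.g. '111+11' to '11111' by a single shape-based rule."""
    return s.replace("+", "")

print(unary_add("111+11"))       # "11111" -- interpretable as 3 + 2 = 5
print(len(unary_add("111+11")))  # 5
```

      The interpretation of the result as "five" is ours; the rule itself only manipulates shapes.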

      Delete
  4. The "Cohabitation: Computation at 70, Cognition at 20" reading made me think about the question that I've thought about a lot in the past when thinking about AI. If we were able to make a computer or system that passes the Turing Test, by "tricking" humans into thinking it is exactly like us, how can we define its personality and consciousness? I believe that our consciousness serves the purpose of our brain explaining itself to us in a way that we can understand and know why things are going on. But there must be certain things that make each brain unique in that each personality is different. I think of it as a random process in that every human or animal gets a slightly different quality and quantity of chemicals in their brain and everything together makes up their personality. Of course, this is way oversimplified and I don't consider myself to an expert on this topic by any means. Like everyone else, I don't know why we are conscious, but I do believe that we can solve "how" with computation one day; we are just not there technologically yet. This might be going far into it, but the singularity is the concept of when AI becomes self-aware, or conscious, and how they would show such continuous improvement such that they evolve beyond our control. If we get to this point someday, ignoring a Terminator-like scenario, would we have solved consciousness? It's interesting to think that a reverse engineering done by humans themselves could solve such a question. The implications of this kind of discovery would be vast, such as whole brain emulation for example.

    I'm also wondering what you mean when you talk about grounded symbols, and what makes a symbol grounded? How does a human prove their understanding of a symbol more than a robot can?

    ReplyDelete
    Replies
    1. Alexander, you posted this in the wrong thread. It should go in the 1b thread.

      The Turing Test is not a trick: It is meant to test a reverse-engineered mechanism to see whether it can really do anything a thinking human can do, indistinguishably from any real thinking human, to any real thinking human. And not for 10 minutes, but for a lifetime.

      Does our brain, through introspection “explain itself to us in a way that we can understand and know [how and] why things are going on”? (If so, cognitive science is finished and we can all go home!)

      The “singularity” is sci-fi. This is cog-sci.

      We’ll get to symbol grounding in Week 5. First let’s understand computation, the Turing Test, and whether and why computationalism is right or wrong.

      What is computation and what is computationalism? (Please continue in thread 1b, not here in 1a.)

      Delete
  5. The "What is a Physical Symbol System?” reading helped with my understanding of some terms that I needed to know for this course. I'm unsure what the author means when they say that "any intelligent agent is necessarily a physical symbol system." What characterizes an intelligent agent and how do we tell one apart from one that is not? How low do we have to go in the complexity of organisms to find something that isn't an intelligent agent?

    My other question would be how would we go about choosing a particular level of abstraction? Wouldn't we always want the lowest level of abstraction to have the greatest amount of detail? I understand that with a low level of abstraction, there are more steps to consider and choose from, but wouldn't we want to have that choice and if we can program something to a degree that can make these decisions very quickly? Would there be a disadvantage in that case? As an example, I'm thinking of the "delivery agent" as a Tesla in the near future driving itself to someone's house to deliver pizza. In this case, the Tesla has pretty precise details about the streets, curves, corners, sidewalks, construction cones, etc., meaning it would have access to all the information it needs when making decisions, so I believe that a low level of abstraction would be useful in this scenario. Would you agree with this assessment?

    ReplyDelete
    Replies
    1. An “intelligent agent” is anything that can pass the Turing Test. (“Intelligence” means cognition, thinking).

      A TT-passer has to be a physical system. But whether it is just a symbol-manipulating system (a Turing Machine) depends on whether computationalism (“cognition is just computation”) turns out to be true. We will get to this in Weeks 2 (Turing) and 3 (Searle).

      We are talking about the human TT here; we will get to “lower” organisms in Weeks 7 and 11.

      What do you mean by “lower level of abstraction”? We are talking about Turing-Testing, which is just about being able to DO things. Next week we will get to the TT hierarchy: T2 - T4.

      Teslas can’t pass TT.

      (Alexander, your enthusiasm is welcome, but please keep it down to one skywriting per skyreading, and short! I have to deal with 58 per week!)

      Delete
  6. 1a. What is a Turing machine?

    I felt I needed a bit more clarity surrounding Turing Machines, so I decided to go on YouTube. I clicked the first video that came up, and it did help me visualize and understand the concept.
    https://youtu.be/dNRDvLACg5Q
    At the end of the video, he brings up an interesting point. Allegedly, “quantum computers” have been put forward as challengers to Turing. The guy in the video refutes this claim with a brief example of why he feels this isn’t valid, but I wonder if anyone else has come across this and has any opinions to share.
    For some background (because I did not know what quantum computers were), quantum computers can run multidimensional algorithms which makes them able to overcome some of the roadblocks that a Turing machine might face. Other people should add on to this definition if they wish because I don’t dare try to get more technical than this, I’m afraid it’s a bit over my head. Anyways, I guess some people claim that quantum computers are just a highly efficient form of a Turing machine, but others claim that in fact they are entirely different.
    I’m curious to know what others think. It seems to be a hot debate in the computer science community.

    ReplyDelete
    Replies
    1. You can find all kinds of things on Youtube (and in Wikipedia). Do the course readings before heading off into google space!

      Quantum computing, and whether it is Turing computing, goes beyond the scope of this course. So does Quantum Mechanics. The "Easy Problem" of cog-sci is already hard enough without getting into the quantum puzzles of physics (Schrödinger cats and all), let alone "quantum computing" (which does not even exist yet).

      But if you want a “Stevan Says” on that stuff, I’d say anything that depends on the details of the physical hardware of a computer (i.e., the symbol-manipulating machine) rather than on its software (i.e., the symbol-manipulation rules -- algorithms) that the computer is executing is not just a computer, computing. One of the most important properties of computation will turn out to be “implementation-independence,” i.e., hardware-independence. Not that you can do computation (symbol-manipulation) without some sort of hardware! But the physical details of the hardware are irrelevant to the computation you are doing; the very same algorithm could have been executed by countless other, very different hardwares.

      [A sundial is a piece of hardware that can “compute” the time of day, by using the position of the shadow cast by the sun. That’s sometimes called “analog computation,” but “Stevan Says” it’s not computation (manipulating symbols on the basis of rules [algorithms] operating on the symbols’ [arbitrary] shapes) at all… Quantum computation -- using the “entanglement” of quantum mechanical properties like “spin” in elementary particles sent off in different directions -- is acting like a sundial at least in part: an implementation DEpendent part. That’s not Turing computation; and it’s not happening in your brains when you cognize, whether or not computationalism is true.]

      Delete
    2. Okay interesting! Yes I realize it was a bit off topic but I think debates like these can still play into the theme of "What even is computation anyway" which is what I found most interesting, rather than the details of quantum computing itself. The sundial is a better example because most everyone knows what a sundial is. Thanks for responding.

      Delete
  7. In the study of cognitive science, the brain is often compared to a computer, as they are both “physical symbol systems,” taking in inputs and producing desired outputs. Reading the “What is a Turing Machine” piece, I found myself wondering about this analogy. Characterizing the brain’s capacity as a mere mechanic computation seems reductionist in my opinion—our minds do more than simply follow instructions and perform “atomic operations to be performed if certain conditions are met.” If the brain were a Turing machine, how could it account for creativity, which is boundless and free from any premeditated instruction or blueprint? And even if it were a computation machine, where would its “instruction table” originate from, if not but another brain?

    ReplyDelete
    Replies
    1. Can you tell kid-sib what a computer is, and does?

      Delete
    I found the discussion in the “Computation is Symbol Manipulation” reading very relevant to my understanding (or previous lack thereof) of the computational theory of mind. In previous instances when this theory had been discussed, I had a lot of trouble understanding computationalism as a ‘valid’ framework for studying cognition, or how and why organisms can do all that they do. I couldn’t grasp the analogy made between the computer and the brain and viewed it as a surface-level attempt to study the complexity of cognition. The idea of mapping the processes of a discrete machine onto the layered continuous events that happen in the brain didn’t make much sense to me. But this reading made me realize that I was looking at the issue from the wrong perspective. Following the definition of computation used in this reading, as a sequence of state transitions in which each state is defined by a set of symbols, I realize that the computations carried out by computers, when looked at at lower levels of abstraction, can also be considered continuous processes. I had just never really thought of the possibility of computers as continuous devices. This understanding of computation makes it easier for me to compare the algorithms used by computers with the biochemical processes that govern cognition, therefore giving me a better insight into computationalism and its relevance.

    I had a harder time gaining an understanding of the ‘physical symbol system hypothesis’ that was discussed in the “What is a Physical Symbol System”, especially concerning the following claim: “An agent can have multiple, even contradictory, models of the world. The models are judged not by whether they are correct, but by whether they are useful.” The delivery robot example did not really help me understand this aspect of the reading, so if anyone has any other examples they can share, I would be super grateful!

    ReplyDelete
    Replies
    1. Hi! I hope I'll be able to resolve the confusion that you have about why "agents" have multiple models of the world (at least, I hope that I fully understand the subject at hand).

      Let's first start with a few definitions to fully understand the concept. In the context of Artificial Intelligence, an (intelligent) agent is an entity that can take into account its environment to achieve its set-out goals (i.e., humans are also intelligent agents, since we can make decisions based on our environment).

      Now let's define what the author(s) meant by a "model". Simply put, an agent uses one or more models as guides to dictate how it will proceed to accomplish its task. For example, let's say an agent (either a robot or a human) is blind and needs to navigate through a maze. They do not have any visual models of the world available to them (since they cannot see), so they need to use other clever ways to complete their tasks. For example, they could alternatively use their sense of touch to orient themselves in the maze.

      Let's now imagine another scenario, one where the agent can also see the world around them. In this situation, it would be more effective for the agent to utilize their vision instead of relying upon their tactile abilities to navigate. Knowing this information, we can reasonably assume that this agent has two different models that they can utilize. This is what the authors meant by "whether [a] model is useful": if an agent can see, what is the point of them utilizing their tactile sense to navigate the maze?

      The reverse can also apply. If the task requires describing what the texture of a specific fabric feels like, the agent could look at it as long as it wants, but it could never accurately describe how the texture feels without touching it.

      In this sense, even though both models are correct, some models are more useful in certain situations compared to others.

      I hope this helps! I can come up with more examples if needed :).

      Delete
    2. Neither continuity nor sensory perception have much to do with symbol manipulation. Review what a Turing Machine does. That's computation.
      (And read 1b.)

      Delete
    3. I just wanted to try to add on to Alexei's answer, more specifically addressing the issue of conflicting models. I think what the reading is trying to say is that it is possible that if two models incorporate different information, then it is possible for them to conflict. For example, you are eating a dish that looks like spaghetti but is completely made of candy. If you are relying purely on visual information, you will likely assume that the dish is a savory dinner. However, if you can only taste the food, you will probably assume that it is just sweet candy string. It is only when you combine vision and taste that you will get a full sense of what the dish is. Thus, these three models (vision, taste, combined) present very different evidence for what is actually on the plate.

      Delete
    4. But what's important in that reading is to understand that computation is symbol manipulation. Vision and taste are not symbol manipulation. (Computational modelling is again symbol manipulation.)

      Delete
  9. I believe the readings were a good introduction for us to build a solid foundation in the theory of computation before delving further into the intricacies between computers, the brain, and cognition.

    However, one thing that piqued my interest was the reading "What is computation." In it, the author recaps Alan Turing's main idea behind the "imitation game," which boiled down to the statement "In other words, intelligence is ultimately a behavioural phenomenon."

    This is an interesting stance for Turing to take since (at least for me) this implies that intelligence comes to be from behaviours. If that is the case, then this allows the possibility of many avenues within the interdisciplinary research between computer science and cognitive science. For example, behaviours may lead to a certain level of intelligence. Would it be then possible to design such an algorithm that would make a computer continuously learn about human behaviours?

    On a side note, I have discovered an interesting paper that proposes a theory of "Radical Plasticity" where the brain learns to become "conscious" through either external or internal behaviours.

    *The paper is a bit too technical for me to completely understand, but if you have the time feel free to look over it!

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3110382/

    ReplyDelete
    Replies
    1. Yes, the Turing method, and the Turing Test is purely “behavioral,” in the sense that it is based upon what the organism can DO. But don’t forget the “can.” It’s not just the description of the movements of one organism for a lifetime. It is a mechanism that produces the capacity to do all the kinds of things an organism can do, indistinguishably from other organisms.

      Read Week 2’s two papers by and on Turing. And then explain to kid-sib the difference between T2, T3 and T4. Turing’s method is based on what is observable -- and that’s not just behavior (T4). His paper is also based on T2. But T3 and T4 are still TTs. (One question you should start thinking about before Week 3 (Searle) is whether Turing was a computationalist.)

      The Cleeremans paper that you linked, "The Radical Plasticity Thesis: How the Brain Learns to be Conscious," is typical of the kinds of empty (and usually circular) non-solutions to the “hard problem” that we Lilliputians keep triumphantly announcing every few years. I’ll just give you a few essential definitions and then let you debunk it as an exercise (cued only by the boldface I’ve inserted in the quote below).

      Definition 1: The weasel-word “consciousness” means the capacity to feel. A conscious state is a state that it feels like something to be in.

      Definition 2: The “hard problem” is explaining how and why organisms can FEEL rather than just DO.

      Now note how you need go no further than the middle of the abstract to see how the “radical thesis” is circular, hence empty: “learned redescriptions, enriched by the emotional value associated with them, form the basis of conscious experience”

      Delete
    2. Coming back to this thread, Definition 1 helps me better understand the term simulation: simulation does not entail the capacity to feel, hence it is different from what we can do.

      Delete
  10. Before reading all the materials in this section, I asked myself the question "what is computation" and briefly discussed it with a math&cs student. Our answer was: Computation involves three major elements: input, processing, and output. During computation, inputs are processed by a given algorithm to produce the output. It seems that our discussion matched the readings partly, in which computation is regarded as symbol manipulation. However, one thing was left out in our discussion: we were too confident that there had to be output. In "Computation is Symbol Manipulation," the author brought up interactive computation. What is the output of an indefinite computation? Is it the halt, exit, or quit command? Or is the continuous mapping of "a portion of one state of the computation to a portion of a successor state" already the output? This seems similar to what "What is a Turing Machine" says about the addition over computable numbers - to represent the computable number x as a program.

    It is interesting to see how the word "symbol" gets brought up so often in the discussion and even in the definition of computation. I have also discovered that I am not a computationalist in cognitive science. The definition of "symbol" seems inconsistent between that of a machine and that of a human brain. "Computation is Symbol Manipulation" didn't place the focus on understanding "the inner working of the continuous device that is ultimately responsible for the computation." However, according to "What is a Physical Symbol System?", neural signals could also be symbols because:
    1. They carry meaningful patterns (grandmother cells display certain neural activities).
    2. They are part of the real world.
    3. The human brain acts as a symbol system to manipulate neural signals.

    If neural activities are symbols, why don't we study them? On the other hand, if they are not symbols, then cognition is not pure computation. This inconsistency leads me to doubt the credibility of computationalism. Furthermore, even if we suppose neural signals are symbols, we only pay attention to the changes in sequences of symbolic states. In that case, I believe that we are missing a significant part of cognitive science. Also, if "we conjecture that you do not have to emulate every level of a human to build an AI agent but rather you can emulate the higher levels and build them on the foundation of modern computers.", then we can only build T2, possibly T3, but we'll never reach T4.

    ReplyDelete
    Replies
    1. We don’t have to define “symbol” beyond saying it’s any object with any “shape” and that that shape is arbitrary: it could have been anything (e.g., a black and a white pebble, or a “0” and a “1”), just as the sounds we choose to use as words are arbitrary.

      So, yes, neural signals, or neurons themselves (or sounds), could be used as symbols in a symbol-manipulating system (a computer, a Turing machine), including the brain – if the brain is just a symbol-manipulating system (i.e., a computer).

      The important thing to understand is what is meant by “arbitrary”: Just as we could have chosen any sound to refer to “apples” (and different languages do use different sounds to refer to apples), different symbol-manipulating systems can use different symbols to execute the very same symbol-manipulation rules (algorithms). The shape of the symbols they use, and of their physical hardware, is irrelevant, as long as they are executing the right algorithm (software).

      But once you’ve settled on your symbols (say, 0 and 1), and you have a rule “if you read a 1, erase it and replace it by a 0, and halt” it is NOT arbitrary what you must do if you read a 0 or a 1. You must execute the rule on a 1, not on a 0.
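      For instance, here is a minimal runnable sketch of that one rule (my own illustration, not from the reading); the point is that the rule fires on the shape "1", never on "0":

```python
# Hedged sketch: one Turing-machine rule, "if you read a 1, erase it,
# replace it by a 0, and halt". The symbols "0"/"1" are arbitrary shapes;
# any other pair of shapes would do, provided the rule is rewritten to match.

def run_rule(tape, pos):
    """Apply the single rule at position pos, then halt; return the new tape."""
    tape = list(tape)
    if tape[pos] == "1":      # the rule fires only on the shape "1" ...
        tape[pos] = "0"       # ... erasing it and writing a "0"
    return "".join(tape)      # the machine halts either way here

print(run_rule("0110", 1))  # -> "0010"
print(run_rule("0110", 0))  # the rule does not fire on a "0" -> "0110"
```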

      In other words, computation is just syntax, based on the shape of the symbols, and on the rules as applied to those shapes (whatever shapes are used). The symbols and symbol manipulations may also have meanings, semantics, but those meanings are not part of the computation, which is based only on the shapes. Not only is “2 + 2 = 4” a correct computation (according to the rules of Peano arithmetic) but it also means something.

      So does the root (x) of a quadratic equation “ax^2 + bx + c = 0” which, as you recall, is
      “x = (-b ± sqrt(b^2 - 4ac)) / 2a” – although you can, like a computer, follow the recipe, i.e., you can execute the rules on a particular quadratic equation “3x^2 + 7x + 4 = 0”, getting (-7 ± sqrt(49 - 48))/6, etc., and compute the x without having any idea what any of it means.

      That’s computation. And it does not depend on what symbols you use as x and 3 and + etc. They’re just arbitrary shapes (just as I could have said all of this in Hungarian).
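      To make the “recipe” point concrete, here is a small sketch of my own: the quadratic formula executed blindly, the way a computer (or a student doing cookbook maths) does:

```python
# Hedged sketch: the quadratic formula as a blind recipe, executed with
# no grasp of what any of the symbols mean.
import math

def quadratic_roots(a, b, c):
    """x = (-b ± sqrt(b^2 - 4ac)) / 2a, assuming real roots."""
    disc = b * b - 4 * a * c
    r = math.sqrt(disc)
    return (-b + r) / (2 * a), (-b - r) / (2 * a)

# The example from the reply: 3x^2 + 7x + 4 = 0
print(quadratic_roots(3, 7, 4))  # -> (-1.0, -1.3333333333333333)
```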

      So not only are symbol shapes arbitrary, not only is computation all just syntactic (shape-based) but the “shape” of the physical hardware that is executing the symbol manipulation rules (software) is irrelevant too. Lots of radically different hardwares (including your PC, or you yourself) could execute the very same software (as we all did in cookbook maths by just plugging math exercises into formulas, based on shapes, even when we did not yet know what it all meant: all we knew was how to manipulate the symbols according to the rules).

      Now: Re-read my earlier reply about the difference between an object and the Turing Machine that simulates that object. And then re-think what you said about neuronal activity.

      And explain to kid-sib what T2, T3 and T4 mean – and what, if anything, that has to do with what you (or the skyreading) mean by “levels of emulation”.

      Delete
    2. Thank you for your detailed response! I was indeed overcomplicating the word "symbol" by asking its definition, which now I understand to be unnecessary because the physical/neuronal details of the entity that carries the computation are independent/irrelevant to how the entity is performing, according to computationalism. I have also noted that the shapes/symbols are arbitrary, and I would like to bring it up in my skywriting under section 1b.

      In terms of different levels of Turing Test passers: a T2 is capable of doing what humans can do verbally to the extent that it would not be distinguishable from humans; a T3 is capable of doing what humans can do both verbally and physically; a T4 is capable of doing what humans can do both verbally and physically with the same inner mechanisms. Higher levels of emulation are more abstract and do what a human can do, like T2 or T3. Lower levels of emulation focus more on the details and are necessary to build a T4.

      Delete
    3. T4 actually explains more data, not just external observations (behavioral capacity) but internal observations (neural activity).

      Delete
    4. As I’m preparing for the midterm, I wanted to return to this reading and specifically this comment on the difference between semantics and syntax… In our most recent lecture, we concluded that math (as an example) is just syntax, while language is semantics AND syntax. So, to ensure I’m understanding correctly, language would NOT be considered computation because computation does not care for the meaning of symbols. Much like math, computation is just syntax.

      But this poses a big question for me in regards to thinking… As humans, we rely heavily on language for our thinking, and I think it would be safe to say that thinking is NOT computation. This is a fundamental principle of cognitive science that I had never thought about before, and it makes me wonder how we can ever think to reverse engineer our cognitive capacity when the two biggest aspects of it (language and thinking) are not even considered computation?

      Delete
  11. The "What is a Physical Symbol System" article briefly mentioned the example of written words and sentences as symbols. Along with syntax, these symbols form a physical symbol system, language, that we use to describe the world and communicate with each other. This got me thinking about how language, as an example of a symbol system, is used by us.

    According to Chomsky's theory of universal grammar, we are born with an innate understanding of how language works. This innate understanding is what allows toddlers to learn a language so quickly, no matter what kind of language they were taught. The fact that these sets of universal rules can adapt to any language the child grew up with is fascinating, considering that different systems with semantically distinct symbols follow the same innate syntax.

    ReplyDelete
    Replies
    1. We’ll be discussing language, “Universal Grammar (UG)” and Chomsky in Weeks 7 and 8. Some features of language (such as the fact that it is (preferentially) spoken) are innate, but others, such as vocabulary and Ordinary Grammar (OG), are learned. UG is not learnable, because of what Chomsky called the “poverty of the stimulus,” but we can only understand what the poverty of the stimulus is once we understand what categorization and category learning are (Week 6).

      So stay tuned. But keep in mind that words alone, being symbols, only have shapes, not meanings, hence no built-in semantics. Symbol manipulation (computation, including mathematics and logic) is just syntax, i.e., shape-based rules. Both computations and language are semantically interpretable – by thinking minds like ours – as meaning what they mean; but that meaning does not enter into the computation, which is just a formal recipe for manipulating arbitrary shapes. The meaning comes from the head of the user, and that’s what cogsci needs to reverse-engineer.

      But if computation is just syntactic, language certainly cannot be just syntactic, because words have meaning to our heads, and we “manipulate” words on the basis of not just their “shapes” but also their meanings. And those meanings have to be learned by the toddler; they are not innate.

      OG rules are learned (by imitation, trial and error and correction, or by instruction) alongside learning what words mean. OG rules can be taught by referring only to the part-of-speech categories of words (noun, verb, adjective, conjunction, determiner) and their syntactic rules, not words’ individual meanings. Perhaps something similar is true of UG, except toddlers seem to already know the UG rules without having to learn or be taught. But that, too, has to go hand in hand with learning vocabulary and OG.

      And to learn the meanings of words, words are not enough: they need to be connected to things in the world through our senses and actions, which are not just syntax.

      Delete
  12. The "What is a Turing Machine" reading breaks down the elements of a Turing machine to understand its different components and functions, such as the head and the tape. The head is the scanner while the tape is what shows either the symbol '0' or '1'. A Turing machine shows how a computation is essentially a sequence of distinct transitions. It starts in one state and is able to read a symbol, write a new symbol and transition into a new state according to the new symbol.
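    The components described in that reading can be sketched as a toy simulator (my own illustrative example, not from the reading; the rule table is invented):

```python
# Hedged sketch of a Turing machine: a head reads the scanned symbol,
# writes a new symbol, moves, and changes state per a finite rule table.

def run_tm(rules, tape, state="A", pos=0, max_steps=1000):
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "HALT":
            break
        sym = tape.get(pos, "0")                 # read the scanned square
        write, move, state = rules[(state, sym)]
        tape[pos] = write                        # write the new symbol
        pos += 1 if move == "R" else -1          # move the head
    return "".join(tape[i] for i in sorted(tape))

# Example rules: flip every bit until reaching the end-marker "#"
rules = {
    ("A", "0"): ("1", "R", "A"),
    ("A", "1"): ("0", "R", "A"),
    ("A", "#"): ("#", "R", "HALT"),
}
print(run_tm(rules, "0110#"))  # -> "1001#"
```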

    An element of this reading that confuses me is the idea about a Turing machine being able to compute more than any physical computer. How does it work that there is no bounded amount of memory on a Turing machine? Or no constraints on its speed?

    ReplyDelete
    Replies
    1. Read the replies about "Universal Turing Machine."

      A digital computer with the capacity to store programs and data is an approximation to a Universal Turing Machine, but it is still a finite-state machine with finite capacity.

      Read also the replies about how a finite program can approximate an infinite-decimal Real-Number like PI.
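      (As an illustration of that last point, here is a sketch of my own, not from the linked replies: a FINITE program that can produce as many digits of PI as requested, using Machin's formula pi = 16*atan(1/5) - 4*atan(1/239) in integer arithmetic.)

```python
# Hedged sketch: a finite program approximating the infinite decimal PI.
# Machin's formula, pi = 16*atan(1/5) - 4*atan(1/239), in integer arithmetic.

def atan_inv(x, scale):
    """Scaled arctan(1/x) = sum over k of (-1)^k / ((2k+1) * x^(2k+1))."""
    total, power, k = 0, scale // x, 0
    while power:
        term = power // (2 * k + 1)
        total += term if k % 2 == 0 else -term
        power //= x * x
        k += 1
    return total

def pi_digits(n):
    """The leading 3 plus the first n decimals of pi, as a string."""
    scale = 10 ** (n + 10)  # 10 guard digits absorb truncation error
    pi = 16 * atan_inv(5, scale) - 4 * atan_inv(239, scale)
    return str(pi // 10 ** 10)

print(pi_digits(10))  # -> "31415926535"
```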

      IMPORTANT: Please read all the preceding commentaries and replies before posting yours, so you don't repeat what has already been said.

      And post early in the week, not just before the next lecture, or I won't have time to reply.

      Delete
  13. This group of readings was instrumental in clarifying many definitions and concepts which were previously murky.
    In the "What is Computation?" by Horswill, I found the distinction between the functional and imperative models of computation particularly helpful.
    The functional model is very comprehensible and straightforward, considering inputs and outputs and connected procedures to obtain the desired outcome of the function. However, this definition excludes numerous other actions that are also computations, as the author's example (making the X move up and down the screen) demonstrates. In contrast, the imperative definition, "procedures are sequences of commands (imperatives) that manipulate representations" (Horswill, 2008), is much harder to conceptualize. This led me to reflect on definitions and their usefulness: it seems to depend on the context and the desired outcome (whether it be simplicity or inclusion, for example).
    In the "What is a Turing Machine?" reading, I am still a little confused when it comes to the distinction between computable and uncomputable numbers. According to Copeland, computable numbers are those that a Turing machine can write out: "if and only if there is a Turing machine that calculates in sequence each digit of the number's decimal representation" (Copeland, 2000). On the other hand, uncomputable numbers seem to be ones that, therefore, cannot be written out by a Turing Machine.
    "The decimal representations of some real numbers are so completely lacking in pattern that there simply is no finite table of instructions of the sort that can be followed by a Turing machine for calculating the nth digit of the representation, for arbitrary n." (Copeland, 2000)
    I am having trouble comprehending this concept of lacking any pattern. Are irrational numbers, therefore, uncomputable numbers? What would be a concrete example of an uncomputable number?
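    (A concrete sketch of my own on that question: an irrational number can still be computable. A finite program that outputs the n-th decimal digit of sqrt(2) for arbitrary n shows that sqrt(2) is computable; an uncomputable number, such as Chaitin's constant, admits no such program.)

```python
# Hedged sketch: sqrt(2) is irrational yet computable, because a finite
# program can produce its n-th decimal digit for any n.
from math import isqrt

def sqrt2_digit(n):
    """n-th digit after the decimal point of sqrt(2), for n >= 1."""
    # floor(sqrt(2) * 10**n) via exact integer square root; take last digit
    return isqrt(2 * 10 ** (2 * n)) % 10

print([sqrt2_digit(i) for i in range(1, 8)])  # -> [4, 1, 4, 2, 1, 3, 5]
```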

    ReplyDelete
    Replies
    1. Hi Kayla, please always read the prior comments and replies in the same topic thread before posting your own: your question about infinite decimals was already asked and answered here.

      Yes, Horswill's account of computation leaves a lot out. It would take something closer to an introductory course in computer science to describe it all (or most). Remember, though, that all definitions and descriptions and explanations, no matter how long, are just approximate (just as all scientific explanations are approximate and only highly probably true, not certain).

      Delete
    2. I just looked through your linked answer and it was very helpful, thank you.

      Delete
  14. I found the readings very interesting, and they helped clarify a lot of terminologies. But they left me wondering whether cognition is just the execution of computations. If that is the case, could we say that the limitations of computability (when something is said to be uncomputable) might be the limits of our knowledge (which, in this context, is cognition)? I don't know if I agree with the statement (in a way that I don't entirely understand yet).

    ReplyDelete
    Replies
    1. Cogsci’s mission is to solve (at least) the “easy” problem of explaining how we can DO all the things we can DO.

      Computationalism says it can all be done by computation.

      But uncomputable numbers cannot be computed. Therefore we cannot compute uncomputable numbers. So that’s not one of the things we can DO. And that’s true whether or not computationalism is true.

      Many silly things have been said along these lines. A well-known example is: The Lucas-Penrose Argument about Gödel’s Theorem -- even though one or perhaps two of these three are not Lilliputians…

      For this course you need only understand Searle’s Argument (Week 3) and the Symbol Grounding Problem (Week 5). No need to trouble your heads about supposed “cognitive” implications of Gödel’s Theorem.

      Delete
  15. I found the concept of levels of abstraction discussed in both the “Computation is Symbol Manipulation” and “What is a Physical Symbol System” readings to be especially interesting. The Symbol Manipulation reading posited that at lower and lower levels of abstraction the smaller steps of a computation eventually turn out to be continuous processes, but that “we rarely go this far, because at some point the steps are simple enough that we trust they are correct steps specified by an algorithm,” while the Physical Symbol System reading says that “we conjecture that you do not have to emulate every level of a human to build an AI agent but rather you can emulate the higher levels and build them on the foundation of modern computers.”

    I’m not sure if I’m getting the right read on this, but it seems to me that this implies that when it comes to considering a theory of computationalism in humans, the assumption is that at the lowest possible level of abstraction, there still remains some sort of mysticism in the steps and processes involved in cognition that aren’t even worth considering because they’re simple enough to be overlooked. If that is the case, it seems quite reductive to me.

    ReplyDelete
    Replies
    1. If you understand what a Turing Machine, especially a Universal Turing Machine (a digital computer) does (rule-based symbol-manipulation), then you see that finite discrete states and symbols are what count in computation. If you go down to too fine a grain in the computer, you’re going into hardware details, which are irrelevant. All that’s relevant is the software (rules), not the physical details of how the hardware implements the software – as long as it does implement it, and is executing the right software.

      PS “Reductive” is not necessarily always a bad word!

      Delete
  16. 1a. What is Computation?

    I enjoyed the “What is Computation?” reading because it made me realize how difficult it is to define computation.

    I found this quote in particular striking: “Changing our ideas about computation changes our ideas about thought and so our ideas about ourselves.” It interests me because it shows that the way we define computation raises many new questions that go beyond how computation works or whether it is only about numbers. We can now ask ourselves whether what our brain does is computation, and if so, whether every thought is computation. If that’s the case, it makes us reflect on what differentiates humans from animals, perhaps, or even machines.

    Throughout the reading, we also come to realize the ambivalence that exists when we think of what computation is. It is indeed said that in the Western world, most of what we consider to be computation is done by machines nowadays. However, we still believe that our thought process and what happens in our brain is what defines our identity and personality.

    One thing that I am wondering is if the definition of computation differs in other parts of the world, outside the Western culture.

    ReplyDelete
    Replies
    1. Computation is what a Turing Machine does (rule-based symbol manipulation). It has been the same, worldwide, ever since several mathematicians (Church, Turing, Kleene, Post, Gödel) defined it, differently, but independently, and then all the definitions turned out to be equivalent. This has come to be called the (Weak) Church-Turing Thesis (CTT), and so far no exceptions have been found, anywhere (though some people, for a while, have occasionally thought they might have found an exception, and then it turned out they had not).

      A “Thesis,” however, is not a mathematical theorem. It cannot be proved, though it could be refuted by a counterexample (none so far). Nor is a Thesis a scientific Theory, supported by experimental or observational evidence. CTT is just a Thesis.

      But, so far, the (weak) CTT stands, unrefuted, for every case so far. (We’ll talk about the “Strong” CTT later.)

      The reason the three readings:

      What is a Turing Machine?
      Computation is Symbol Manipulation
      What is a Physical Symbol System?

      are not as clear as they ought to be is that they are a mixture of computation and computationalism. And computationalism (but not computation) is full of confusion:

      Cognition has yet to be defined (and hence so has computationalism), although defining computation is necessary in order to define computationalism. In these three readings, all three C’s (computation, cognition, computationalism) were jumbled up. This week we’ll get a very simple cognition-independent definition of computation (Turing’s), and then you’ll be in a better position to sort them out and make your own minds up about it.

      Delete
    2. Thank you for your reply, it made things more clear, especially when you talked about the nuances between computation and computationalism that were not that obvious in my head.

      Delete
  17. Concerning 1.a.3: What is a Physical Symbol System?

    The talk of levels of abstraction in a model was something I found particularly interesting. While it seems to apply well to AI, using that logic didn't seem to fit for humans.

    If we consider the mind as a computational model, and physical systems to be some internal unit thereof, how can we actually equate or seek to categorize the diverse inputs and internal states constantly (and often unconsciously) processed and acted upon by humans? The problem with the physical symbol system hypothesis seems to be that it includes basically all facets of everything in human existence, all of which are not necessarily processed identically by consciousness (or by different individuals), and which do not form an easily manipulable or even comprehensible set of data. If all processed data are symbols, have we actually learned anything about human consciousness besides giving it a new name?

    All of this is not to say I disagree with anything said; on the contrary, it's very interesting, and these questions lead to the position the chapter seems to take: a general ambivalence on the nature of the mind as such, and a focus on a realistic representation of human intelligence, if not its literal replication.

    I'm not sure I understood these concepts correctly, coming from a nontechnical background, but it doesn't seem that AI strives toward the literal creation of a replica of humans; rather, it seems to be an extension and mimesis of certain aspects and qualities of human consciousness, which is a different understanding than the one I had.

    ReplyDelete
    Replies
    1. Hi Jacob, I find your comment resonates with my view. In the article, the author mentioned "The principle of Behavioural equivalence" -- "if a person or system reliably produces the right answer, they can be considered to have solved the problem regardless of what procedure or representation(s) they used." From what I understand, if we give the same input to a program and a human, and both can provide the same correct output, we can say that they are equally computational. And you brought up the fact that a massive part of our cognition takes place unconsciously. This unconscious part of our cognition doubtlessly plays a part in our "answer-giving." With that said, even if the two entities (a computer program and organic cognition) can provide the same answer given the same input, viewing cognition as a computational model dismisses its "experiential" part. And unfortunately, the experiential part of our cognition is essential to our consciousness. This also begs another pragmatic question: hypothetically, if we can map all our neurons artificially using a program, does this program also replicate the conscious part of our mind? And if so, how does it do it when even we cannot access our unconsciousness?

      Delete
    2. Jacob and Yucen:

      What is computation?
      What is a “computational model”?
      What is the Turing Test?
      What is the “easy problem” of cogsci?

      When you think of what a TT-passer does, and can do, think of what Kayla does and can do.

      Read the other comments and replies on Week 1a and 1b.

      Delete
    3. "What is computation":
      Computation is symbol manipulation. To compute is to execute an algorithm based on the shape of the symbols. The algorithm here refers to the symbol-manipulating rules. That is, any entity capable of executing a symbol-manipulation recipe is capable of "computing". Computation is also said to be strictly syntactical -- meaning that the symbols are manipulated only by their shape. Therefore, symbol manipulations do not have innate semantic content even though they can be interpreted semantically. The symbols do not mean anything in themselves because their shapes are arbitrary. A system can execute the same symbol-manipulating rules using different symbols.
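      (A small sketch of my own to illustrate that last sentence: the same manipulation rule executed over two different, arbitrary symbol sets.)

```python
# Hedged sketch: one symbol-manipulation rule ("swap the two symbols"),
# executed over two different, arbitrary symbol sets.

def complement(tape, zero, one):
    """Replace every `zero` symbol with `one` and vice versa."""
    swap = {zero: one, one: zero}
    return "".join(swap[s] for s in tape)

print(complement("0110", "0", "1"))  # -> "1001"
print(complement("XOOX", "X", "O"))  # -> "OXXO" (same algorithm, different shapes)
```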

      "What is a 'computational model'":
      If I understand correctly, a computational model is an abstracted representation of a set of symbol manipulations. A computational model is meant to simulate (and hopefully explain) a complex causal system.

      "What is the Turing Test?":
      The Turing Test is meant to discover the causal mechanisms that facilitate the human brain to do what it does. In other words, the Turing Test is a methodology for reverse-engineering human cognitive performance capacity.

      "What is the 'easy problem' of cogsci?":
      The easy problem of cogsci refers to how we are able to do what we are able to do -- our performance capacity. To solve the easy problem, we need to reverse-engineer how and why organisms can DO what they CAN do (reverse engineering our performance capacity).

      Delete
  18. I found "what is arithmetic" very interesting. For example, when it mentions that "Arithmetic is no longer just a mental operation. It’s also a physical operation", using mental and written arithmetic as examples, the process of arithmetic was broken down into a programming language, which led me to match the processes in our minds when we calculate with the processes of the Turing machine. This helped me to better understand Turing machines.
    In the text I felt as if binary were simpler and more efficient than decimal, so why did humans not evolve to think in binary over the long evolutionary process? Or, since humans chose decimal, could there be a breakthrough in computers if the engineering conveniences could be overcome to make computers use decimal as well?

    ReplyDelete
    Replies
    1. Decimal counting is easier for humans, but computers do just fine with binary.

      There's no doubt that we're doing computation when we add, subtract, multiply and divide. But computation is much more general than that. Once we've sorted out what computation is, we'll look at how much of our cognitive capacity it can produce. (Remember that it would have to be able to DO anything we can do, not just add and subtract...)
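      (A sketch of my own to illustrate: the base is just notation; the quantity and the arithmetic are the same either way.)

```python
# Hedged sketch: decimal and binary are two notations for the same numbers.

def to_binary(n):
    """Decimal-to-binary conversion by repeated division by 2."""
    digits = ""
    while n:
        digits = str(n % 2) + digits
        n //= 2
    return digits or "0"

print(to_binary(45))      # -> "101101"
print(0b101101)           # binary notation read back as a number -> 45
print(0b101 + 0b011)      # 5 + 3 -> 8, in whatever base we write them
```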

      Delete
  19. I found the optional reading by Pyshlyn very intriguing. I found it gave a more comprehensive in depth view of computation in comparison with the assigned readings. I found the part where he explains computational complexity and reaction time across multiple levels very intriguing. For example, say someone asks me my name, immediately “Sophie” comes to mind. But as they are asking the question, someone I know walks behind them and distracts me, causing me to pause for a second before I am able to respond. I computed the answer to their question immediately, yet they were not privy to that information until a bit later. In this case yes, the “algorithm” becomes more complicated as I am intaking more information, but the observer themselves is unaware of this. They may believe it is just taking me longer to “compute” the answer to their question, due to a processing difference in my cognitive architecture. Although the computational complexity did increase due to more information, the observer does not know this. Ultimately reaction time can be taken as a proxy for complexity, but the observer cannot always be sure at which level the complexity is added. This raises interesting problems for cognitive science and how we measure computational strength.

    ReplyDelete
    Replies
    1. Apologies for the misspelling of Pylyshyn, need better proof reading!

      Delete
    2. Computational complexity (and reaction-time as a way of estimating it) is only relevant to computationalism (“cognition is just computation”). Pylyshyn is a computationalist (and one who seeks “strong equivalence” – what is that? equivalence between what and what?). But if computationalism is wrong, and the brain does not just compute, but also, say, secretes, then questions of computational complexity and equivalence are no longer very relevant.

      Delete
  20. For me, the "What is computation?" reading was very helpful. It was really interesting to have a more in-depth and laid out view of even basic (input-processing-output) computing and the system that it operates in. Incorporating functions that act on functions gave me a good starting point for better understanding mental processes and phenomena that do not seem to follow a strict written-out procedure, such as creativity and decision-making. Additionally, it was really interesting that the reading pointed out that the representation of information has a massive impact on what we do with it. My question is, is this a hard limit? In other words, are there representations that completely rule out one or more types of manipulation? Or is it just that certain representations are more conducive to certain types of manipulation?

    ReplyDelete
    Replies
    1. See the reply to Charlene above: To disentangle computation, cognition, and computationalism you first need to understand what computation is: what is it?

      Delete
    2. Computation is rule-based symbol manipulation. Also, I've realized that if a Turing machine can do anything that any computer is capable of, then representation can likely only make one operation easier or more obvious than another, rather than preventing any operation entirely.

      Delete
    3. Elena, I couldn't understand your second sentence.

      Delete
    4. I apologize for the lack of explanation. My initial reasoning was that if the Turing machine is limited to using one type of representation and is capable of any type of computing, then representation does not pose an absolute restriction on the types of computation that a machine can do. However, to my understanding, it is possible for a Turing machine to represent the same concept in two different ways.

      Delete
  21. The “What is Computation” article discussed the evolving meanings and understanding(s) of computation that I personally have not previously considered. As the article stated, I initially believed that computation was simply tied to numbers and symbol manipulation, when the reality of the definition is much less straightforward. What was particularly interesting was the comparison between thought and computation through the following statement: “thought is computation” which is later followed by “all computation is thought”. This statement creates difficulty in establishing what is considered to be computation, as the majority (of at least my own thoughts) are often arbitrary, jumbled, and lack any sort of systematic approach. The link of thoughts as computations resonated with me, as I still somewhat view computations as systematic, despite not always dealing with numbers, as demonstrated through computation being a form of information processing. However, information processing still seems to be relatively systematic in our (limited) understanding of it. Another point I found interest in was found near the end of the article, where the author posed the idea of a computer simulating a brain. Since modern-day science has a limited grasp on how our brain truly works, surely a computer cannot be programmed to do so, as its intelligence relies on ours (more specifically, on those who program the hardware of computers).

    ReplyDelete
    Replies
    1. See reply to Charlene above. "Thought is computation" is just computationalism. What is computation?

      And what cogsci is trying to do ("easy problem") is to explain how the brain actually does all the things we can do (not just how we think we're doing it when we introspect). Computationalism says the brain does it through computation.
      (But, again, what is computation?)

      ("Simulating" the brain -- or anything -- computationally is not the same as doing everything that the brain can do. Simulating a chemical reaction computationally is not a chemical reaction: it's just symbol manipulation that can be interpreted (by us) as if it were a chemical reaction.)

      Delete
  22. A quote from this week’s reading ‘What is a Turing machine?’ stood out to me regarding the distinction between Turing machines and physical computers: “since a Turing machine is an idealised device, it has no real-world constraints on its speed of operation [whereas] a physical computer's speed of operation is limited by various real-world constraints”. We are dubbing the Turing machine as an ‘idealised computational model’ because it reduces a device’s complex structure into its simplest form. In computationalism, as we had discussed in class, it reduces and defines human thought and cognition to our ability to compute, taking in an input, processing it, and providing an output, just as a physical machine would do. But I have now understood that this manner of approaching cognition is too limited, and does not encompass what we are trying to get at when we are discussing human cognition and what ‘thinking’ is. Upon reflection, the overall theme of this week’s readings regarding computation and physical symbol systems, and the ongoing discourse about the human mind and machines is shaping the trajectory of our understanding of cognition as we progress through the Information Age.

    ReplyDelete
    Replies
    1. The goal of cogsci is to solve the easy problem: discover how and why we can do all the (cognitive – not vegetative) things that we can do, by reverse-engineering the capacities of our brain to do them. The test of whether we have succeeded is whether we can design a model that can pass the Turing Test (i.e., do all the things we can do). Computationalism is the theory that the only thing the model needs to do to pass the TT is computation (symbol manipulation). The first three weeks of this course are about what computation is, and whether computation alone can pass the TT.

      Delete
  23. These readings really helped me grasp the concept of the Turing machine and how it contributed to cognitive science. It brought a lot of insight into how computations work and how it is possible for machines to perform computation through functions. I particularly enjoyed the “What is computation?” reading as I found it very insightful in defining the terms in the evolving definition of computation. I also found the “Representations of symbols” reading very interesting as it directly touched on the role of computation in computer science. I also understood better symbol manipulation.

    This quote defines intelligence as the manipulation of physical symbols (which could be the nucleotides in the case of DNA translation). However, I am guessing that not every causal mechanism is a Turing Machine. The aim of the Turing Machine is to simulate any machine; as a result, not every machine is intelligent. Knowing how much value is put on intelligence today, it got me thinking about how much intelligence can also be attributed to adaptability and sensing the world around us. Indeed, before this reading, I always associated intelligence with a thought process and even consciousness of oneself, not necessarily a physical mechanism.

    “This is a strong hypothesis. It means that any intelligent agent is necessarily a physical symbol system. It also means that a physical symbol system is all that is needed for intelligent action; there is no magic or an as-yet-to-be-discovered quantum phenomenon required. It does not imply that a physical symbol system does not need a body to sense and act in the world. The physical symbol system hypothesis is an empirical hypothesis that, like other scientific hypotheses, is to be judged by how well it fits the evidence, and what alternative hypotheses exist. Indeed, it could be false.”

    ReplyDelete
    Replies
    1. A system is “intelligent” (= thinking, cognizing) if it can do the things (cognitive, not just vegetative) that we can do: recognize and manipulate objects in the world, learn, remember, reason, communicate, speak.

      The “Strong” Church-Turing Thesis is that a computer (a Universal Turing Machine) can “simulate” any object or process in the world (including a rock, a rocket, a body or a brain).

      But simulating an object is not the same as being that object. A computer-simulated glass of water is not wet; you cannot drink it. It is just a computer, manipulating symbols that can be interpreted as the properties of glass and water.

      The computation can also be piped to a Virtual Reality machine (gloves and goggles, not a computer) that can fool our senses into feeling as if they are seeing and touching a glass of water. But you can’t drink the glass of water; and if you take off your goggles and gloves, you can’t see it any more either. Liquidity is not a computational property.

      Does a simulated brain think?

      Delete
  24. The reading "What is a Turing Machine?" not only helps me to recap the concepts of computational theories I learned from other courses but also lets me think about an interesting question: the author mentioned that "none of these computers can outdo a Turing machine" and "not every real number is computable,” which means there is a huge number of meaningful symbols (uncomputable numbers) that Turing-machine-based computers cannot deal with, as they are not even in the computer's “reasoning field.” In other words, computers cannot even imagine an uncomputable number. However, humans can define these numbers and even generate theories about them. Does it mean we probably contain a totally different reasoning system from computers, or at least have a wider reasoning scope than current computers?

    ReplyDelete
    Replies
    1. Neither computers nor people can produce a number with infinite decimal places. But they can produce and execute a finite and short algorithm that can generate infinite decimal places for as long and as far as we like. (Please see the replies about this on this thread.)
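      A minimal sketch of that point (my own illustration, not from the readings): a few lines of code are a finite recipe, yet executing them yields as many decimal digits as you like.

```python
from itertools import islice

def decimal_digits(p, q):
    """Yield the decimal digits of p/q (with 0 <= p < q) forever:
    a finite, short algorithm generating an unbounded expansion
    by ordinary long division."""
    while True:
        p *= 10
        yield p // q   # next digit
        p %= q         # carry the remainder forward

# First 12 digits of 1/7 = 0.142857142857...
digits = list(islice(decimal_digits(1, 7), 12))
```

      Neither the program nor a person ever holds the infinite expansion; both just execute the recipe as far as needed.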

      Delete
  25. I found the "What is a Turing Machine?" reading insightful because, after taking an introductory level computer science course last semester, it reminded me about the immense computational power of a Turing Machine. I am astonished that a Turing Machine can compute more than any physical computer. For me, this raises the question: can a computer ever be conscious if the most powerful computer has only six operations? Surely consciousness and sentience, even in their most basic natural manifestations, involve more than six operative elements. Furthermore, it is easy, or at least possible, to explain how a Turing Machine works, but it is currently contentious to explain how an organism feels. If something is explainable, is it conscious/sentient? I'd love to hear others' thoughts on this during class on Friday.

    ReplyDelete
    Replies
    1. Your point regarding consciousness and sentience is an interesting debate. Although I agree that my instinct is also to believe that six operative elements is nowhere near enough to result in consciousness, it is difficult to come up with an exact estimation that would be believable. It is also possible that it is not the number of elements, but rather the complexity of their interactions that could result in the development of these states. Within the human mind, it would be very difficult to quantify how many elements are required to produce certain mental and physical behaviours, as they are heavily reliant on interaction and interconnected relationships.

      Delete
    2. Before asking whether computation can produce consciousness (FEELING, which is a hard problem whether or not computation can produce it), can computation even solve the easy problem of DOING all the things we can do?

      On the other hand, the Weak Church-Turing Thesis (that computation can do anything a mathematician can do) and the Strong Church-Turing Thesis (that computation can model or simulate just about any object or property in the world) are both probably true.
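      To make the "handful of primitive operations" point concrete, here is a toy Turing-machine interpreter (a sketch of my own, not from the readings): a finite rule table plus an unbounded tape is all the machinery there is, yet it already exhibits the read/write/move/branch/halt repertoire.

```python
def run_tm(table, tape, state="start"):
    """Run a Turing machine given as a dict mapping
    (state, symbol) -> (write, move, next_state)."""
    cells = dict(enumerate(tape))  # sparse dict = unbounded tape
    head = 0
    while state != "halt":
        symbol = cells.get(head, "_")               # "_" is the blank cell
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells) if cells[i] != "_")

# A one-state machine that flips every bit, then halts on blank:
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
out = run_tm(flip, "1011")  # flips each bit: "1011" -> "0100"
```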

      Delete
  26. The reading “What is a Turing Machine?” was particularly interesting to me as I already had some background knowledge on Alan Turing and his career. At the beginning of the article, it is mentioned that “The input that is inscribed on the tape before the computation starts must consist of a finite number of symbols. However, the tape is of unbounded length--for Turing's aim was to show that there are tasks that these machines are unable to perform, even given unlimited working memory and unlimited time.”

    This passage was especially interesting to me as I wondered if his intention with this machine could be reflected in the ability of the human mind. Although no human mind has unlimited time, there are certainly tasks that exceed its capabilities and that will never be accomplished. I would love to know more about the symbolism that Turing was trying to convey with this choice.

    ReplyDelete
    Replies
    1. Read the other replies about the potential to produce unlimited numbers of decimals using a finite (and even short) algorithm (and as much time as you like).

      These aspects of computation are probably not relevant to cogsci.

      Delete
  27. Re: Levels of abstraction in “What is a Physical Symbol System” reading, 1a

    I found the discussion on levels of abstraction in relation to biological and computational entities particularly interesting in terms of how to model intelligence. I had never considered that these levels could be specified, almost separate from one another, in the things they would account for in terms of modeling biological entities. However, once thinking about the ways in which you can account for an infinite number of variables (phenomena) in computational programs, it makes sense to me now that there is an increasing amount of complexity that computational systems can account for in terms of biological systems, as there can always be more detail added as long as the system works* -- whether that be the specifics of the sensory input it is absorbing, transmitting, and outputting (these can be described at the biochemical, chemical, physical level, etc.). In this way, this idea of the complexity of models and the level of abstraction chosen also relates back to the notion of behavioral equivalence as referenced in the “What is Computation” reading for 1a. As long as the system reliably produces the right answer*, whatever level of complexity implemented in the computational system tested using the TT would be sufficient, as it would exhibit behavioral equivalence (produce intelligent behavior).

    *”works”and “right answer” in these cases means that the system accounts for intelligence, in a way that matches the level of T2, T3, T4, T5 chosen as measured by the Turing Test (TT).

    ReplyDelete
    Replies
    1. What does "levels of abstraction" mean? What is abstraction?

      Delete
  28. In "What is a Turing Machine", it is assumed that any mathematical function that can be computed by human beings can actually be translated into a procedure like this, with many, many symbols and instructions. I am a little confused about the relationship between functions and programs. Is it because some real numbers are uncomputable that only, for example, addition over computable numbers is defined as a function? In class we discussed the T2, T3 and T4 hierarchical models, which came to mind while reading through the six fundamental operations; this connection led me to spend time thinking about up to what point the distinguishability between us humans and machines fades.

    It interests me tremendously that the term "behavioural output" has been mentioned a lot in the "What is computation" article. I think it is limited to consider human-like behavioural outputs the gold standard of simulation; as in Horswill's discussion of intelligence and downloading ourselves onto a hard drive, there are many insights, emotions and internal feelings that are not expressed at all. This discussion takes a step away from that and extends to software predictions and computational sciences. After reading this article and looking at the question "What does it mean to say (or deny) cognition is computation", I have a thought that it seems like computation is predicting with symbol manipulations; however, this process or model does not necessarily simulate cognition.

    While reading the piece on representation in "What is a Physical Symbol", I have realized that perhaps most (if not all) models are wrong since they are all abstractions of reality, which makes sense to evaluate them on effectiveness of factor manipulation and not on accuracy. In light of this, the knowledge level that seems to be common to both biological and computational entities brings a question of how robots are considered to be on this level, i.e., what does it mean to know and believe in goals?

    ReplyDelete
    Replies
    1. 1. All causal systems are “machines”: asteroids, autos and animals (including humans). Cogsci is just trying to reverse-engineer what kind of machine animals that can feel and think are: how and why can they do what they can do?

      2. You wrote “computation is predicting with symbol manipulations, however this process or model does not necessarily simulate cognition” That’s almost right:

      3. A computational model of anything (whether a brain or a bus) is just simulating, with symbols and rules, what a brain or a bus is, and what they can do. It is not really a brain or a bus; it does not think or move. Its symbols are just interpretable by us as a brain, thinking, or a bus, moving. The computational model is predicting what buses and brains do, and maybe even predicting how they can do what they can do; but it is not doing what they are doing. It is just manipulating symbols that are interpretable, by us, as doing what they are doing.
      4. A computational model of knowing and believing faces a special problem, because it feels like something to believe (or to [think you] know); and my feelings are not observable to anyone else, just my doings. That’s the “other-minds problem.”

      5. The only data a computational model can simulate and predict are observable data. We can infer from what a human does what the human probably thinks and feels, and we are probably right -- but unlike predicting and explaining what the human DOES, which we can test directly, by observation, there is no way to test directly what or whether the human FEELS.

      6. That’s why explaining how and why humans (or any other organism) feel is called the ”hard problem” – and that’s also why all TTs are just based on what the model can DO, not on whether or what it FEELS.

      Delete
    2. Thank you so much for the reply! I believe now I understand the hard problem and computational models. Can I then say the hard problem is really hard to solve in that the observable data we base the evaluation on may be true, but not the whole truth, which adds difficulty to answering the hard problem?

      Delete
    3. A stronger reason than that feeling is unobservable (except by the feeler) is that it seems to be causally superfluous: DOING capacity alone seems enough...

      Delete
  29. Like other commenters, I particularly enjoyed the "What is Computation?" and physical symbol systems readings. The former I found to be very effective in bringing together a number of concepts I'd encountered partly disparately in a natural and engaging way, while asking interesting questions besides. While I'd of course had a notion that the meanings of "computing" and "a computer" have shifted significantly over time, I hadn't considered the view that the concept of computation is inherently resistant to being assigned a static definition, more so than many coined technical terms. For one, I mentally identify Turing's approach more with the functional view, and so it is interesting to consider that computation is now taken to refer to a much broader range of capacities or phenomena.

    ReplyDelete
    Replies
    1. (I cut this comment so as not to stray too far above 100 words, but for the sake of publishing it here is the rest of what I wrote though I don't expect a response to it:

      As for the second reading, the idea that most stuck with me was, as also with some others here, the point about varying levels of description/analysis. While reading, I took the notion of a "symbol" to vary at each level of analysis; thus a "thought" (which is a messy concept but a persistent one in these sorts of discussions, so maybe helpful here) is a symbol, as is a "reverberating" neural circuit (again more a conceptual aid than a reality), as is an action potential, or a particular molecule of serotonin binding to a receptor.

      One issue here is that it would seem to allow for one symbol at one level to be decomposable into many symbols at a lower level, thus calling into question the meaning and utility of "symbol" overall. For example, a single action-potential is a discrete unit involved in (reputedly) computing something meaningful, but means little on its own much like a 0. If we take a thought to be a symbol itself, however, this will be associated with many action-potentials and related processes. I suppose that "symbol grounding" as we see mentioned in the next reading might help to clarify questions like this, and so I look forward to it.)

      Delete
    2. See other replies. What is a "symbol"? Lots of ideas may have changed regarding computation, but the Church-Turing-Kleene-Post-Gödel definition of computation remains the same, simple (in the few operations of a Turing Machine), and with no exceptions so far.

      Delete
  30. As a cognitive science student, I have always had an interest in the main goal of this field and in the question of whether our brains can also act as computers. The “Computation is Symbol Manipulation?” and “What is a Turing Machine?” readings really changed my perspective about the brain. As the reading states, “a computation is a sequence of state transitions”; wouldn’t this make our brains similar to computers? Furthermore, aren’t our brains actually similar to Turing machines? Eventually, the inputs to the Turing Machine can be our thoughts and ideas. With state transitions and computations, we either end up in another state (our thoughts may change), or we end up taking actions (which can be thought of as the output). So along with Turing’s question “can machines think”, I also like to think about it the other way around and focus on whether our brains are a type of computing device.

    ReplyDelete
    Replies
    1. I would like to redirect you to Gabriel Rodrigues' comment below that "Many cognitive scientists and philosophers (e.g., Hilary Putnam) have argued that the human brain is equivalent... to a Turing Machine. Some (e.g. David Deutsch) even add the qualification that the human brain approximates a Universal Turing Machine, in the sense that it can compute anything that can be physically computed". If our brains can be approximated to a Universal Turing Machine, I wonder about the implications of reprogramming our brains like just another machine. If our brains are conscious, what would it mean for our brains to be reprogrammed? What would happen to the 'consciousness'?

      Delete
    2. Alara, don’t be too impressed by the analogies between a computer and a brain. There have been analogies between a brain and a clock too, and between a brain and a thermostat. And multistage rockets soaring into space have state transitions too. The readings are meant so that we can understand what computation is (hence what and how computers do what they do). We’ll get to whether that’s what brains do – hence whether cognition is computation – soon.

      Shopheara, see reply to Gabriel below: don’t mix up the possibility to model just about anything computationally (the Strong Church-Turing Thesis) with the notion that everything is just a computer, computing. Humans can compute (execute algorithms) (e.g., factor quadratic equations, do payroll salary equations), therefore brains can compute. But it does not follow that brains are just computers, computing (manipulating symbols).

      Delete
  31. The reading “What is Computation?” opened my eyes to how ambiguous the term “computation” is. The text highlighted that computation remains an idea in flux, and thus computationalism, which models cognition as computation, is not fully cemented. Computationalism is a more recently developed theory, but it seems to have circled back to share elements of the older behaviourist perspective. Both computationalism and behaviourism emphasize the importance of external behaviour; all that matters is the input and output. They remain detached from the “in-between” factors that are unobservable. However, I think that all factors, even (and perhaps especially) the unobservable ones, are important to consider. The computational model holds that “as long as the procedures are behaviorally equivalent, they’re in some sense interchangeable.” This hints that anything producing the same behaviour as another thing can replace it. Could this be extrapolated to imply that a machine producing the same behaviour as a human could replace that human?

    ReplyDelete
    Replies
    1. Please see other replies in the 1a and 1b threads about not conflating computation with computationalism (“cognition is computation”). The readings on computation are meant to convey what computation is. Computation is Turing computation, and it is not in flux. What is in flux is whether cognition is computation.
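      The "behaviourally equivalent" idea quoted in the comment above can be sketched in a few lines (an illustration of mine, not from the reading): two procedures that reach the same outputs by different means are, in this input-output sense, interchangeable.

```python
def double_by_add(n):
    return n + n        # one recipe

def double_by_shift(n):
    return n << 1       # a different mechanism, same input-output behaviour

# Behaviourally equivalent over these inputs, hence interchangeable
# in the input-output sense (which says nothing about how each works):
equivalent = all(double_by_add(n) == double_by_shift(n) for n in range(1000))
```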

      Delete
  32. “The models are judged not by whether they are correct, but by whether they are useful.” (What is a Physical Symbol System?)
    This idea of taking the focus off finding an objectively “correct” process is refreshing to me. To produce an action or behavior (or output) to navigate their world, an agent’s use of strings of information bits (the input; or, in the article's definition, a “symbol”) can be used to model the relevant environment the agent is exposed to. I think this processing is the essence of computation. As ‘thought’ is commonly associated with computation (What is Computation?), the function of a machine is also often associated with computation (Computation is Symbol Manipulation). It seems as though the models animals make, or machines are coded with, need not be universally correct, but they must be useful to them. For example, a mathematical computer (calculator) stores a model of the arithmetic world, which is useful to it because it is used to navigate the world of mathematics, just as a student in philosophy stores models of the philosophical world. I believe this notion of modeling useful representations at different levels of abstraction is a great place to start when weighing different definitions of computation. It seems to me that machines and animals are both performing computations, just with varied levels of abstraction that make it useful for them to navigate their world.

    ReplyDelete
  33. As a psychology major, the area of cognitive science and computation is not exactly familiar to me, and I am looking forward to having a better grasp on the subject. I found the assigned readings interesting, and they made me ponder the topic of computers (the Turing machine) vs. brains, more specifically when it comes to programming. In the 'What Is a Turing Machine?' reading, programming is explained as ‘altering the head's internal wiring’, which led to my question: to what extent is computer programming similar to the brain’s learning/programming functions? As humans we take in information from our environment, mirror what we experience around us, and are taught certain theories and concepts in school and whatnot, whereas computers are given an algorithm and a memory in order to wire the device to perform certain functions.

    ReplyDelete
    Replies
    1. What is computation? We'll get to thinking (computation) later...

      Delete
  34. Hi, everyone! I'm sorry for posting so late in the week. I've just moved to a new apartment, so things are a bit chaotic at the moment!

    Anyway, I really enjoyed reading Horswill's "What is Computation?", as it helped me consolidate some basic concepts I had heard in passing in previous Psychology courses, such as "information processing", "Turing machines", "programmable machines", etc. I don't have any questions about the text per se, so I will use this comment to share some thoughts I had while reading it.

    Many cognitive scientists and philosophers (e.g., Hilary Putnam) have argued that the human brain is equivalent (approximately speaking) to a Turing Machine. Some (e.g. David Deutsch) even add the qualification that the human brain approximates a Universal Turing Machine, in the sense that it can compute anything that can be physically computed. However, I have yet to see similar arguments being made about non-human animals (admittedly, this may simply be due to a lack of knowledge on my part). Why is that? Is the idea here that all brains, regardless of species, are computers, but only the brains of some species are Turing-complete? If so, what species? Only humans? Or could, say, a chimpanzee's brain be in principle programmed to compute everything a human brain can compute? (Leaving aside the fact that it would be immoral to do so.) These questions are worth asking, I think, insofar as computational universality may (or may not) relate to intelligence, and how far apart humans are in that department from other species.

    ReplyDelete
    Replies
    1. 1. What is “information”?

      2. What is the difference between being a Turing Machine and being equivalent to a Turing Machine (see previous replies).

      3. What is a computer simulation? (see previous replies)

      4. What is the difference between a Turing Machine and a Universal Turing Machine? (see previous replies).

      5. Yes, humans (hence their brains) can do what a computer does (add, subtract, follow recipes [execute algorithms], manipulate symbols). And humans (hence their brains) can also do a lot of other things. How? Since they can do what computers do, does it follow that they do everything the way computers do what they do? Can computers do everything humans can do? (See 2 and 3 above.)

      6. Can nonhuman animals do everything humans can do? Can all animals species (e.g., jellyfish and chimpanzees) do all the same things?

      First show you understand what computation is, then we can start asking whether that’s what thinking (computation) is too.

      Delete
  35. I think all the readings came into play to give us a better understanding of what computation really is and how one can define it. I was particularly convinced by the explanations given in the reading “Computation is Symbol Manipulation,” as I found them detailed yet simple to understand. I understood the main idea behind abstraction; however, I still feel it is a little subjective. Indeed, in the article, he implies that most computations can be described as the result of more detailed sub-computations and that eventually the certainty of the operations must be taken for granted: “we eventually reach a point where the most primitive operations are best described as the result of continuous processes.”
    I am wondering where the fine limit stands between what is considered primitive and what needs explanations? In other words, when do we consider that we need not go further into details because the explanation of these lower levels would be futile?
    A side note could also be made about certainty that we have encountered in class. How can we be so sure that these lower states of computation are correct and valid? Can “the result of continuous processes” assure certainty?

    ReplyDelete
    Replies
    1. What is computation? If it’s symbol manipulation, then it’s not “abstraction.” (What is abstraction?)

      Symbol manipulation is following a recipe. The important thing is the recipe (the algorithm, the software) not what the symbol-manipulator is built out of (the hardware) and how it’s getting the recipe to be followed.

      If you have a 3-D printer that can follow a recipe to bake a (vegan) cake, manipulating the symbols in Montreal, and then combining and baking the ingredients remotely, in a factory in Vancouver, the computation is done in Montreal and the cake is baked in Vancouver. You can’t eat the symbols in Montreal. And what you eat in Vancouver is not symbols.

      (Now see 2, above, in the reply again. [It’s hard to imagine that Putnam did not know this too.] Two things that are Turing-equivalent are not two things that are identical. One might be just a recipe for manipulating symbols, in Montreal, and the other might be the execution of the recipe, using real flour and vegan eggs, in Vancouver. One is the recipe (symbols and manipulation rules); the other is the “interpretation of the symbols,” in the form of mixing flour and vegan eggs and baking them in an oven.)

      (About David Deutsch, and “quantum computing,” it’s too early to say; the knowledge about that is still too “entangled”...)
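      A toy illustration (mine, not from the reading) of the shape-based rule-following described above: the rules below mention only symbol shapes, never meanings; any interpretation of the output is supplied by us.

```python
# Purely syntactic rewriting: replace each symbol by its expansion,
# with no regard to what (if anything) the symbols mean.
rules = {"A": "AB", "B": "A"}

def step(s):
    return "".join(rules.get(ch, ch) for ch in s)

s = "A"
for _ in range(5):
    s = step(s)
# The string lengths grow 1, 2, 3, 5, 8, 13 -- interpretable (by us)
# as Fibonacci numbers, but the rules never "knew" that.
```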

      Delete
  36. A common recurring notion is the comparison/analogy between the human brain and a computer. But, before I go any further, in all transparency, I'd like to remark that the concept of computation and other themes explored in this week's readings is something I'm still trying to grasp and fully understand. So I could be completely misunderstanding this whole thing.

    That aside, based on the reading "Computation is Symbol Manipulation": if computation is defined as the physical “manipulation of symbols and sequence of symbolic states” without regard for meaning, why, then, do we constantly refer to the brain and computer as being similar when they appear to me to be rather distinct?

    I say this because computers will compute based on rule-based symbol manipulation and leave it at that, with no meaning regarded. But as for humans, yes, I guess one can argue we do, in a way, possess some computing capacity (to an extent). However, the symbols in our lives, and how we perceive everything around us, are based on the meanings we assign to them. Therefore, wouldn't this technically demonstrate that we are very distinct from computers, operating at a higher level due to our higher abilities, thus suggesting that this analogy is purposeless?

    ReplyDelete
    Replies
    1. Yes, symbols are arbitrary, meaningless shapes, manipulated on the basis of syntactic – i.e., shape-based – rules. So even if it were true that thinking in the brain is just computation (symbol manipulation), that still does not explain where meaning comes from. (We will get to the symbol grounding problem in Chapter 5, but that can’t be the whole story either.)

      But be careful: even if it were true that thinking is just symbol manipulation (it’s not true!), it would not follow that we know the meaning of all the symbols our brain is manipulating when our brain is manipulating them, nor that we can feel that our brain is manipulating them. We already know that’s not true from having sat in an armchair and introspecting about what’s going on in our brains when we are asked, and answer correctly, who our 3rd-grade teacher was. We “find” the answer, but not how we find it. If the brain does that computationally, cogsci will eventually tell us how, and will tell us the algorithm too, and we will understand what it means. But that doesn’t mean we knew the algorithm when our brain was executing it, before cogsci told us what it was!

      Delete
  37. Sorry for the late response, was experiencing technical issues with Blogger.
    It was very interesting to read the assigned readings, being rather new to cognitive science, because throughout my psychology degree I’ve always heard the metaphor comparing the brain to computers, and the Turing machine brought it into reality. I am familiar with terms regarding information processing and basic information about computational psychology.
    An aspect I found intriguing about the Turing machine was that it does force us to think about how much we really know about being sentient creatures, because AI is aimed at being able to “execute core mental tasks such as reasoning, decision-making, problem solving, and so on”. Furthermore, it aims to be able to replicate any algorithms humans execute in their daily life.
    Horst addresses a major issue with early AI: the over-emphasis on logic, which didn’t address how people tend to think with a degree of uncertainty when making decisions. For example, many people, as they grow up and age, experience a change in morals and sociopolitical views. Will AI reach a point where it can adopt ethics and change positions as people do? From my understanding, this has been addressed to a degree, but when will there be enough AI training to replicate the nuanced views of humans and ethics?

    ReplyDelete
    Replies
    1. Computation is the manipulation of symbols according to rules (algorithms), and that’s what logicians and mathematicians do. It does not follow that humans only think logically, even if computationalism is true (which it isn’t). They can also think probabilistically (although probability, too, has algorithms) as well as illogically (and often do!).

      But don’t mix up what computation is with the question of whether thinking is just computation.
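      To illustrate the definition above, here is a toy sketch in Python (the rule table and names are invented for illustration): symbols are rewritten purely on the basis of their shape, and the rules never consult what any symbol might mean.

```python
# A toy rule table: each symbol is rewritten purely by its shape;
# the rules never consult what "a" or "b" might mean.
RULES = {"a": "b", "b": "aa"}

def rewrite(s: str) -> str:
    """Apply the rule table to every symbol in the string."""
    return "".join(RULES[ch] for ch in s)

print(rewrite("ab"))   # -> "baa"
print(rewrite("baa"))  # -> "aabb"
```

      Whatever "a" and "b" are taken to stand for (if anything), the manipulation proceeds identically: that shape-based, meaning-blind character is what makes it computation.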

      Delete
  38. What is a Turing Machine?
    The concept of computation and thus Turing machines often comes up in the discussions of Cognitive Science. However, regardless of its familiarity, different features of this concept are always revealed each time it is brought up. For instance, it's interesting to observe the limitless nature of a Turing machine. The article writes "how can x be input, given the restriction that the input inscribed on the tape must consist of a finite number of symbols? The solution is to input x in the form of a program". The fact that this is a computable possibility indicates that you can integrate programs within each other despite having infinite decimal representations. While this illustrates a level of complexity that Turing machines are able to express, it also illustrates the level of simplicity it can provide given any possible computable number, even if represented by an infinite sequence of symbols. I'm curious to see more applications/examples of integrated programs to observe some of the complex representations a universal Turing machine is capable of.

    ReplyDelete
    Replies
    1. See the other replies in this thread about infinite decimals. But how algorithms can provide the means of approximating infinite decimals as closely as desired is not what humans often do; in that sense it is more about computation than about cognition.
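      As a sketch of the point about infinite decimals (Newton's method; the function name is just for this illustration), an algorithm with finitely many symbols can approximate an infinite decimal like √2 as closely as desired:

```python
def sqrt2(tolerance: float) -> float:
    """Approximate the infinite decimal sqrt(2) to within `tolerance`
    using Newton's method for f(x) = x^2 - 2."""
    x = 1.0
    while abs(x * x - 2.0) > tolerance:
        x = (x + 2.0 / x) / 2.0  # Newton update step
    return x

print(sqrt2(1e-12))  # ~1.41421356...
```

      The finite program stands in for the infinite decimal: run it longer (smaller tolerance) and you get more digits.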

      Delete
  39. Representations
    We know that the human mind is complex and contains an indescribable (as of right now) feature that continues to separate us from robots and computers. This article explores symbols as representations, stating that "A symbol is a meaningful pattern that can be manipulated." With that being said, I find it interesting that the article specifies a symbol as a "meaningful pattern"; in stating that, is it saying that a symbol has to hold some physical rendition? It cites examples such as "written words, sentences, gestures, marks on paper, or sequences of bits", but could there be symbols that didn't hold a physical rendition? For instance, emotions are subject to manipulation. Consider individual A, who is upset over a situation observed through their point of view; had another individual, B, presented a differing point of view (symbol system) and explained it, causing A to change their emotion/reaction to the situation (an internal change), would that not act as a form of manipulation? What I mean to say in all this is: what if the distinctive internal quality between humans and computers/robots is the fact that humans can manipulate non-physical patterns or symbols? After all, the article writes "They may also need to physically affect action or motor control," which would be the case here. Thus, would it stand to be true that "A physical symbol system has the necessary and sufficient means for general intelligent action."?

    The idea of low vs high level descriptions indicates an explanation of the search strategies we use in real life. Rather than representing it in terms of the mode of action itself (utilize every path) it represents it in terms of the information present (like a modification of a heuristic search strategy). I feel like this may be a better way of describing how we actually might reason as it leaves room for the errors that we do in fact make.

    ReplyDelete
    Replies
    1. In computation, symbols are not “representations.”

      In computation, symbols are arbitrary shapes, like 0 and 1, manipulated on the basis of their shape, not their meaning.

      “Representation” has a formal meaning in mathematics (e.g., in group theory), but in ordinary language it’s a weasel-word that can be interpreted as meaning a lot of different things. It’s especially weaselly in cogsci: What do you take it to mean?
      But first tell me what is computation?

      And separate your answer from what you or anyone else might think cognition is. We’ll get to that after we’re clear on what computation is.

      And please read the other replies in the 1a and 1b threads.

      Delete
  40. I have found the “What is a Physical Symbol System?” reading very interesting. Every system I have read about seems to be a symbol system; so what is a non-symbol system? I tried to think of examples, but I cannot find one. (I saw some saying that neural networks are non-symbol systems, while I think they still fit into the definition of symbol systems - they are just not representing it in a very intuitive way.)

    If the human brain is a symbol system, is it possible that the only systems capable of being perceived by humans are symbol systems? That is, we cannot even conceive of any system that is out of the scope of symbol systems, let alone understand it? This reminds me of the movie Arrival, which questions the border of human thought; is it limited by human language (or are the limits of human language caused by the human capacity of cognition via the Universal Grammar?), so limited that humans cannot perceive languages that use a completely different system, not even a symbolic one, to process information?

    (I don’t know if it makes sense.)

    ReplyDelete
    Replies
    1. I agree with your first paragraph. I'm bad at computer science, but from what I learned about neural networks, they also take input that's represented by matrices and number embeddings, namely, symbols. Then they produce an output after the numerical features that represent things are classified. I think it becomes more intuitive to see them as symbol systems if one tries to play with some predictive models, but again I could be wrong.

      Delete
    2. What is a symbol? What is a symbol system? What is computation? We'll get to neural nets later.

      Please read the other replies in 1a and 1b

      Delete
    3. The following is my attempt:
      If I understand correctly, symbols are arbitrary objects and ungrounded: defining one symbol eventually just leads out to other symbols. E.g., “apple” refers to those round red things. What’s an apple? It’s round and red; but what about “round” and “red”?
      I think we are definitely understanding and manipulating symbols as we are all machines, but we can also do more than just symbol manipulation (therefore proving computationalism is wrong). Let me know what you think!

      Delete
  41. The theory that “a physical symbol system has the necessary and sufficient means for general intelligent action” in the reading is deeply intriguing to me, because it implies that the definition of intelligence that the majority of the world holds is incorrect. Intelligence is often associated with consciousness and conscious thought, but Newell and Simon’s theory postulates that these poorly understood phenomena are not involved, and instead agents with intelligence just manipulate symbols in order to produce actions. What I am curious to learn more about is the knowledge level of abstraction, since it is clear that the knowledge level of humans for example is at a much lower level of description than the knowledge level of a mouse.

    ReplyDelete
    Replies
    1. What is computation? (And please read the other replies in 1a and 1b.)

      Delete
  42. 1a. What is a Physical Symbol System?

    "Although no level of description is more important than any other, we conjecture that you do not have to emulate every level of a human to build an AI agent but rather you can emulate the higher levels and build them on the foundation of modern computers."

    I had to think profoundly for a while after reading this because I had always for some reason thought that to build real AI you had to model the human brain piece by piece. After reading about the different levels of abstraction I suppose it seems right to focus on modeling the higher level systems. This naturally raises questions such as how would one test that the higher level systems have in fact been modeled correctly, and would the AI be able to help explain consciousness in humans if the AI does not include the lower level systems that humans do? Both of these questions it seems would not arise if we were to emulate every level of a human.

    ReplyDelete
    Replies
    1. I also was always under the impression that in order to build a sophisticated AI, a comprehensive model of the human brain had to first be achieved, not only relating to the location of different brain regions but also their functions in relation to each other. I suppose it makes sense that one only needs to study the higher-level brain systems because computers exist which we already know are adept at performing lower level computational/motor skills. Since we can already trust these systems, we focus on understanding high cognitive processes when trying to build a sophisticated A.I system

      Delete
    2. Please read the other replies in 1a and 1b.

      What is computation?

      Delete
  43. Wouldn't the consideration of the human mind as a symbol system suggest a very broad approach into how we process symbols individually? Would cognition then have to be evaluated on an individual and cultural basis? When reading the definition of a model being a “representation of the specifics of what is true in the world”, the first topic that came to mind is perception and how it may alter the possible computations of the mind, what is considered to be “true”. This ties a little into Charlene Thauvin’s point above on Computation differing outside of a western context. Assuming the infinite possibilities of models, I guess each of those would be classified to a varying degree of abstraction but this would be based on utility to each individual rather than humans as a collective? The use of the term “useful” to further define models also speaks to the possible variations of symbol systems. I very much look forward to our discussion on the influence of language on perception to further explore whether this potentially links to differing manners of computations

    ReplyDelete

  44. I am very intrigued by the “What is physical symbol system” article, and it helped me to understand many different terms and what it actually means by physical symbol system. However, there are some questions/sentences that I did not quite understand. What does it mean by “The models are judged not by whether they are correct, but by whether they are useful.”? Is it that there is no such thing as wrong models, instead, it is all about choosing the appropriate level of abstraction, which makes them useful? Another question is that since an intelligent agent manipulates symbols to produce action ( -> model the world), if two different agents produce the same action, does it make them have the same model? My last question is, how do we decide how many levels of abstraction we need to model an environment? Also, are the boundaries between different levels always very clear?

    ReplyDelete
    Replies
    1. What is computation? If it’s symbol manipulation, then it’s not “abstraction.” (What is abstraction?)

      Delete
  45. My interpretation of the line “the models are judged not by whether they are correct…” is that in reference to the delivery robot, it is not whether the model is accurate for every specific situation that the robot will encounter; if the model is for the robot to travel without there being wind and there is a gust of wind that blows the drone over, the model that the robot had for the environment isn’t necessarily useless, it just isn’t correct for that situation. Admittedly, it would be difficult for that robot to have a model to deal with every situation it might encounter, hence why building sophisticated A.I is difficult.

    ReplyDelete
  46. The "What is Computation?" reading supplemented my understanding of computation from the previous Turing reading. The definition I have learned previously is that computation is the manipulation of symbols according to rules based on their shapes. In this reading, I learned to think about computation as a sort of "question and answering". I learned that (at least in one model) it doesn't matter how you get to the answer as long as you reliably get the correct one (from the same inputs); this is known as behavioural equivalence. In that case, is behavioural equivalence the same thing as simulation?

    ReplyDelete
    Replies
    1. Computation is the execution of algorithms that give the right output (O) for a given input (I).

      More than one algorithm could produce the right output. All would be valid, as computation, if they give the same I/O.

      Among the computationalists who seek to explain cognition as computation, some (e.g., Pylyshyn) require it to be done with the same algorithm the brain uses. (Remember: they are computationalists, so they think the brain is just computing.)

      The equivalence between two computers, computing, can be either “weak” equivalence or “strong” equivalence.

      Weak equivalence (also called input/output (I/O) equivalence, or “behavioral” equivalence) does not require the same algorithm, just the same I/O.

      Strong equivalence requires the same algorithm and that’s what computationalists like Zenon want.

      But of course if computationalism is wrong, and the brain is not just doing computing, there may be no algorithm in the brain, so nothing to be equivalent to.

      Do you think Turing was a computationalist? In any case, the Turing Test only requires weak equivalence for T2 (and T3 can’t be just computational).
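      A minimal sketch of weak (I/O) equivalence, in Python (the function names are invented for illustration): two different algorithms, same input/output for every input, hence weakly but not strongly equivalent.

```python
def sum_iterative(n: int) -> int:
    """Add 1..n one step at a time."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n: int) -> int:
    """Gauss's closed form: same I/O, different algorithm."""
    return n * (n + 1) // 2

# Weakly (I/O-) equivalent: identical outputs for all inputs tested,
# but not strongly equivalent: the internal steps differ.
assert all(sum_iterative(n) == sum_formula(n) for n in range(100))
```

      A strong-equivalence requirement would demand not just matching outputs but the very same sequence of steps.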

      Delete
    2. I think Turing was a computationalist because the Turing tests themselves were developed to determine whether or not a computer is capable of thinking like a human being, so he has already attempted to make this comparison that a computationalist would. The way I understand it is that passing all of the Turing Tests would provide evidence for computationalism.

      Delete
    3. Passing all of which TT? T2, if passed by just a computer, computing, would be vulnerable to Searle’s Periscope (what’s that?), and it would be ungrounded. T3 (Kayla) would not: why not?

      Delete
    4. T3 wouldn't for the same reason that T4 would? T4 involves being indistinguishable in terms of its physiology and its internal workings, as well as what's apparent on the outside. Passing T3 is simply being indistinguishable in its verbalizations and its ability to interact with the reference of its words, but wouldn't have similar internal workings like a T4-passing machine would.

      Delete
  47. The description of the knowledge and symbol levels in the “Artificial Intelligence” article is very straightforward and clarified my understanding of the problem of cognition. Our knowledge level is known to us, meaning that we can retrieve facts about the world and use them. However, the symbol level is the hard problem of consciousness that has yet to be solved: we aren’t aware of how we manipulate the shape of external symbols (computation) to produce our thoughts and feelings. This article is referring to AI, so it doesn’t cover the cognitive processes that occur beyond machine computation. AI utilizes algorithms, but could it be that consciousness occurs on the basis of some sort of non-algorithmic symbol manipulation?

    I apologize for the late comment.

    ReplyDelete
    Replies
    1. Depends what you mean by “nonalgorithmic.”

      According to the Strong Church-Turing Thesis, just about everything in the universe can be simulated with an algorithm. But that does not mean that everything in the universe is just a computer, computing that algorithm (i.e., manipulating symbols). (See other replies about simulation and the Strong CTT.)
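      To illustrate the simulation/simulated distinction with a toy sketch (the function and step size are invented for illustration): a program can compute what a falling object would do, but nothing in the computer is actually falling.

```python
# Simulating free fall: the program computes what a falling object
# would do, but nothing in the computer actually falls.
def simulate_fall(t_end: float, dt: float = 0.001) -> float:
    """Return the simulated distance fallen after t_end seconds
    (g = 9.81 m/s^2), using simple Euler integration."""
    g, v, d, t = 9.81, 0.0, 0.0, 0.0
    while t < t_end:
        v += g * dt   # Euler step: update velocity...
        d += v * dt   # ...then position
        t += dt
    return d

print(round(simulate_fall(1.0), 2))  # close to g/2, i.e. about 4.91 m
```

      The computation is Turing-equivalent to (a discretized model of) the falling object, but it is symbol manipulation, not falling.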

      What Searle is calling “Strong AI” is not the Strong CTT. It’s computationalism (“cognition is just computation”).

      And, as I explained in class (and will explain many times again!), what Searle calls “Weak AI” is the same as the Strong CTT.

      (Please try to do your skies early in the week, and keep up week by week!)

      Delete
    2. Hello! I believe your question is tied to the second reading by prof. Harnad! In particular, Searle's experiment rejected the idea that all brain processes are computations (computationalism), and held that we should study the brain processes themselves instead to understand cognition. Whereas prof. Harnad's view is that we should study the computer metaphor of the brain, AND also the dynamic sensorimotor processes (these are unconscious), to understand cognition (if by consciousness you mean cognition).

      Delete
    3. Computing and computers are not a metaphor. They can be useful in modelling or simulating the brain and its capacities (that’s “Weak AI” = the “Strong Church-Turing Thesis” that computation can model or simulate just about anything): but when you computationally model a plane, flying, or a brain, thinking, your computational model is just a computer computing (manipulating symbols); it is neither a plane nor a brain, nor is it flying or thinking.

      Delete
    4. What I meant by nonalgorithmic: I meant mind processes that were too "complicated" to be simulated by AI, that could be identified but never replicated. This is similar to the logic used by people bringing forth concepts too "abstract" to be grounded in symbols, such as morality or justice. Obviously, we have seen in later weeks that this sort of hypothetical argument strays away from the problem of consciousness we are discussing, and tends to look like Turing's argument from various disabilities (a machine could never do X because it seems complicated, but without support for the statement). So, if we adhere to computationalism, then no unreplicable ("nonalgorithmic") processes of the brain exist.

      Delete
  48. Computation means the manipulation of symbols according to rules. Symbols are also referred to as states. Symbols are meaningless in themselves, and connections must be made with external things for them to have meanings. In other words, symbols represent external things (symbols are representations). The rules are instructions that must be followed in computation; these rules are programs or algorithms (a program is a set of algorithms). I hope my understanding is correct despite finishing the readings late.

    ps: Sorry about posting this late! I joined the course late in the semester. pps: I believe the second reading that appears in the syllabus (above) is not the same as the second reading that appears here.

    ReplyDelete
    Replies
    1. Anything can be used as a symbol by a Turing Machine (computer, symbol-manipulator), including pebbles or boulders, electrical states, chemical states, or 0’s and 1’s on a tape.

      But in Turing’s Machine definition of what computation is, the computer is a machine that has a finite number of states, and it reads a tape that has symbols on it, and its current state determines, mechanically, what it must do with each symbol as they pass one after the other on its tape, and into what state it should transition next.

      The machine states are mechanically implementing an algorithm or recipe for manipulating the symbols and changing states. So there’s a difference between the state the Turing Machine is in and the symbols on its input tape.

      (But since the hardware details are irrelevant and symbol shapes are arbitrary, the series of symbols on the input tape could be implemented as a series of electronic states too, rather than the shapes “0” and “1”. And, in fact, in a digital computer, which is a Universal Turing Machine, it’s all implemented as electronic states, even when the input is scanned one by one from the letters on a printed page.)
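      The machine-and-tape description above can be sketched as a toy Turing Machine in Python (the state names and transition table are invented for illustration): the current state and the scanned symbol mechanically determine what to write, which way to move, and the next state.

```python
# Toy Turing Machine: this one flips bits until it reads a blank
# ("_"), then halts.
# transitions: (state, scanned symbol) -> (write, move, next state)
TRANSITIONS = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),
}

def run(tape: list, state: str = "flip", head: int = 0) -> str:
    """Mechanically apply the transition table until the halt state."""
    while state != "halt":
        write, move, state = TRANSITIONS[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape)

print(run(list("0110_")))  # -> "1001_"
```

      Note the difference the reply insists on: "flip" and "halt" are machine states; "0", "1", and "_" are symbols on the tape.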

      (I’m not sure what you mean by the difference between 1b in the syllabus and 1b in Week 1: are they not both the “Cohabitation” paper about Pylyshyn?)

      Delete
  49. My understanding of the computationalist claim is that the human mind is a computational machine, i.e. a system which performs algorithmic manipulations of arbitrary symbols from one state to another. Computation thus defined provides a framework within which computationalists explain various phenomena, for example, our ability to move from lower levels of abstraction to higher ones, presumably using useful tools like analogy through images and language to form models of the world at degrees of resolution appropriate to varying situations. Mental computations would proceed by registering symbols through sense perceptions (or other mechanisms analogous to the Turing Machine’s reading capabilities) and manipulating them algorithmically to decide solutions to certain basic problems; those solutions would then become the input states for further computations which eventually lead to decided-upon actions (analogous to halting?). This view proposes an answer to the easy problem of consciousness, which inquires about the mechanisms that make our observable behaviours possible; as I understand it, the attribution of “meaning” to symbols and symbol states, our overall conscious awareness of feeling things, and the conscious experience of our mental representations remain to be explained.

    This view brings up questions for me about whether the part of our body beyond the CNS that moves and acts in the world is seen as an integrated part of a single Turing-equivalent computational system, or if it is seen as a separate, non-Turing-equivalent computational system taking for input solutions to the mind’s computational problems. In class, we alluded to the fact that actions such as flinching which result from cognitions are also cognitive phenomena; from the computational perspective, would cognitions then be computations which take as input the solution state of previous cognitive computations (as opposed to vegetative computations such as heartbeat or DNA transcription which do not take cognitive input)? I recognize this as a rather useless definition, a recursive clause without a base case, so is there a relevant base case which would create a useful definition?
    (I will keep next comments shorter)

    ReplyDelete
    Replies
    1. I think you still haven’t quite understood what “symbol” means in computation.

      And there is no “easy problem” of consciousness. Cogsci has an easy problem and a hard problem. The hard problem is explaining consciousness (feeling).

      Remember that the first objective here is to understand what computation is, forgetting about cognition. Then we ask whether cognition could be just that.

      See other replies about the difference between being a physical object and being Turing-equivalent to that physical object. See “simulation” and “computational modeling.”

      **And please make sure to read all the prior comments, and especially my replies. They’re written for everyone, not just the one I’m replying to.**

      (Yes, please keep skywritings short: I will be getting over 100 every week! But make sure I can tell you’ve read and understood each reading.)

      Delete
    2. As I understood it a symbol is a discrete shape that is recognized by a computer as signifying a certain operation specified by the computer's program.

      Delete
  50. In an attempt to learn to properly use the terms in the readings, would it be a fair statement to say that since the value of the Universal Turing Machine is determined by its ability to be behaviourally equivalent to all other machines, its value was derived through the functional model of cognition?

    ReplyDelete
    Replies
    1. Maybe the discovery and design of computers was motivated partly by trying to understand how the human mind works (hence it was the first stirrings of cogsci). But it was mostly motivated by trying to get technology to do things for us (whether or not we could do it for ourselves). This was called the distinction between “cognitive modelling” and “artificial intelligence.”

      Delete
  51. I found all three readings extremely interesting. As a computer science and psychology double major, I found the ‘what is computation’ article extremely well written: it gives a very understandable explanation of what algorithms are and how they are useful. Furthermore, I found the ‘symbol manipulation’ reading very intriguing… More specifically, I was fascinated by the description of the knowledge and symbol levels and how that is the base of AI algorithms.
    PS: I joined the course late and that is why I was only able to catch up to the readings now

    ReplyDelete
  52. I decided to go back and consider how these readings integrate with one another to review my own understanding of computation.

    First, why understand computation? The goal of cognitive science is to understand how we do what we (cognitively) do. By reverse engineering our cognitive abilities, we can get closer to understanding how this works. Computationalism is the idea that computation is the only thing necessary in order to become a cognizer.

    The second reading, "Computation is Symbol Manipulation", answers this most directly by saying "A computation is a sequence of simple, well-defined steps that lead to the solution of a problem". The reading goes on to define different aspects of computation, but one important quality to note is that a problem and its solution must be in the form of symbols, under the same symbol system (in order for symbol manipulation to occur). It doesn’t matter whether these computations are terminating or indefinite, part of physical or biological systems, or even part of algorithms; as long as the symbol system and rules of state manipulation are consistent, computations can be performed.

    The third reading, "What is a Physical Symbol System?", recognizes that these symbol systems are abstractions of the world they represent. (It's interesting to note that this reading considers symbols to have meaning, where others consider symbols within the context of computation to be meaningless.) When something is abstracted, it means the representation only includes a subsection of the details (the level of detail can vary) from the original thing it is trying to represent. Even the steps of a computation are typically abstracted, because as one digs deeper, it's nearly always possible to describe them with another group of state transitions.

    The first reading, "What is a Turing Machine?", provided an overview of the different components of a Turing Machine and how it performs computations. In this case, the control head of the Turing Machine is the symbol system manipulating the set of a finite number of symbols. This reading additionally defined the Universal Turing Machine, which can simulate any other Turing Machine. Given that the brain could be abstracted to a Turing machine, one could theoretically simulate the brain using a Universal Turing Machine (program it to act like a brain). This isn't enough to understand the brain, however: a simulation of something does not mean it is that something. Just because something provides the correct outputs to the given inputs (manipulates symbols) does not mean this machine is able to cognize or understand what it is working with.

    ReplyDelete
  53. Going back on the readings for the midterm, I’ve wanted to gain a bit of clarification on a few core themes brought on:

    Firstly, I understand the Turing test really is just about reverse-engineering our capacity to do anything we can do. However, I just wanted to clarify this notion further. Reverse engineering is figuring out what we can do regarding cognitive abilities (seeing, talking, remembering, etc.). Consequently, this can be connected to the easy problem of figuring out HOW and WHY humans can do what we can do.

    From this comes the goal of figuring out how we can build a mechanism that encompasses what we can cognitively do. This is where the Turing machine hierarchy comes in. As mentioned, the Turing test is about building something that helps us figure out whether we have the correct theory on how cognition works, i.e., building a system that has the capacities of a human being to the degree that we no longer can tell a human and machine apart. Here come the levels of the testing hierarchy. By the testing hierarchy, we want to determine what can pass the Turing test (aka the “best level”/what defines the Turing test). In other terms, figure out which level best represents what we can do cognitively.

    Currently, the debate is whether T3 really is the best level to represent us/create a mechanism that is indistinguishable. This is because T3 encompasses much of what we can do, unlike T2, since we can do more than email: we can also communicate in non-verbal ways.

    Could you possibly confirm whether I’ve understood this correctly? Apologies for the extra skyblogs recently. I realized I’m a few skyblog posts short of 18 posts to receive full marks, hence the last minute new skyblogs.

    ReplyDelete
    Replies
    1. Maira, you’ve understood some of it, but not deeply enough to put it together.

      The easy problem of cogsci is reverse-engineering cognitive capacity: HOW and WHY humans can do what we can do. Success is tested by building T2, T3, T4. What are those? T2 could perhaps be passed by just a computer. But T3 and T4 cannot. Why not?

      There is no correct answer for which TT level is the “right” one: T2, T3, T4. See replies in Week 2a, 2b, 3a and 3b. The hierarchy is the Turing Test hierarchy, not a Turing Machine hierarchy. (What is a Turing Machine?) (What is Reverse-Engineering?)

      Skywriting needs to be done as we go along, and reading also my replies to others. All the points you raised have been discussed many times in the skywritings.

      Delete

    2. T2 - this is a candidate that can do tasks a human could do verbally. For example, because humans can communicate via email, a candidate here would be able to communicate via email and could be mistaken for a human.

      T3 - this is a candidate that can do what T2 can do, however, can also do so robotically
      -> However, I’m still confused about what it means to do stuff “robotically”?

      T4 - This is basically a candidate that can do anything T2 and T3 can do. However, the internal structures and abilities a human can do (neurologically) would be the exact same.

      T2 could be passed by a computer, but not T3 and T4, because these involve higher aspects of human behaviour. For example, meaning is taken into consideration at these levels - especially T3. Specifically, in T2 a computer would be able to do verbal actions such as email. However, it would not truly understand what it was saying. Rather, the computer is just acting on the basis of the symbols that make up the words it is using to communicate. Thus, when it comes to T3 and T4, there is more to these symbols, namely their meaning.
      -> Though, could you further clarify what it means by saying “success is tested by building T2, T3, T4”?

      Lastly, a turing machine is just a machine that works on executing an algorithm - aka symbol manipulation recipe.

      Delete
    3. Maira, "Robotic" means sensorimotor -- whatever we can do with our bodies, in the world.

      What is symbol grounding? What is meaning?

      If you think you have reverse-engineered a mechanism that can produce our DOing capacity, you have to Turing-Test to see whether it can really do the job.

      Delete

PSYC 538 Syllabus

Categorization, Communication and Consciousness 2022 Time : FRIDAYS 8:30-11:25  Place : BIRKS 203 Instructor : Stevan Harnad Office : Zoom E...