How do I know what you are thinking right now without having direct access to your thoughts? Is it because I have memorised a theory, a framework of propositions that guides me, or do I run a simulation in my mind of what I would think if I were you? The first answer is called theory-theory (TT), the second simulation theory (ST). ST can be further divided into explicit accounts and those that use mirror neurones to explain how simulation takes place. Here I take a stance against strong versions of ST, especially those that conflate mirror neurones with explicit accounts, and conclude that although explicit ST is problematic, a hybrid between ST and TT will be required, one that may be able to incorporate the empirical data in favour of both.
The most prominent mindreading approach to the question of how we understand other people’s behaviour is TT. Here an agent uses a simple theory of how the mind works to attribute mental states to others. One way this could happen is by entertaining propositions of the form: “If A exhibits behaviour x, then A’s mental states, i.e. her beliefs and desires, are usually y.” (Note that the format of these propositions is not undisputed amongst TTists.) Together with many other such rules, this builds a body of beliefs that governs how agents interact. In contrast, ST claims that we do not have a theory-like system of propositions for understanding others, but rather that we use our own mental states as a template to infer those of others. For the purposes of this essay, let me employ Gallagher’s division within ST: he claims that there are 1) implicit accounts that centre on mirror neurones in their explanation of the simulation process; and 2) explicit accounts wherein the agent engages in a “conscious or introspective process” of imaginatively running a simulation (2007: 355). This essay shall be mainly concerned with the latter.
To give an example: if I try to understand A, I run a simulation of the type “If I exhibited behaviour x, then my mental states would be y. A and I are similar in the relevant respects, therefore A’s mental states should be y also.” By providing our own decision-making system with these “pretend” inputs, we calculate our own actions; but instead of executing them, we take the action system “off-line” and leave it at producing “anticipation” rather than action (Gallagher, 2007: 355; Gordon, 1986: 164, 170; Stich and Nichols, 1992: 40). It follows that simulation is more immune to systematic errors, since any unexpected variables can be compensated for, provided I am similar enough to the agent whose behaviour I am trying to understand. In TT, an unexpected occurrence that matches nothing in your belief system leaves you helpless, while a simulation, if fed the right inputs, should give you the unexpected behaviour as an output (Stich and Nichols, 1992: 58). Let me give some experimental evidence in favour of this theory. In the Maxi and the Chocolate experiment (for a description see Wimmer and Perner, 1983, as well as Gordon, 1986: 168), a child is asked about the belief of an agent who is ignorant of the changed location of some chocolate. Instead of indicating just that, the child makes “egocentric errors” and uses her own knowledge to answer the question for the agent (Gordon, 1986: 169). Hence, the child simulated what she would answer in the agent’s place, with knowledge of where the chocolate actually is.
The question that needs answering is to what extent ST is central to mindreading. Strong versions claim that all mindreading is simulation-based. If this is right, then folk psychology and TT must be wrong (Stich and Nichols, 1992: 38; Gordon, 1986: 170; Goldman, 1989: 182). Let me rather discuss a more moderate view wherein simulation is the “default” mode of mindreading, with exceptions (Gallagher, 2007: 356; Goldman, 2002: 7-8).
Saxe has forcefully argued that these versions of ST are wrong by showing that the systematic errors ST claims to be immune to in fact occur frequently; and if they occur frequently enough, then it cannot be the case that ST is central to all mindreading. Let me pick out but one such systematic error from her argument. In the bead experiment (for a description see Ruffman, 1996), children observe an occurrence of either type red or green that is unknown to an adult participant. When asked which type the adult thinks occurred, instead of answering that the participant does not know, the child picks whichever answer is wrong; i.e. the child knows that it is green, yet answers that the participant thinks it is red, precisely because that is the incorrect option. Saxe concludes that this mistake arises out of an incomplete theory of mind, or the aforementioned body of beliefs, which includes a proposition like “when she does not know, she will get it wrong” (Saxe, 2005: 175). A simulation could not have been used, since the child has both beliefs and desires and should therefore be able to feed the right inputs into a simulation. There are many more experiments of this kind that point towards children using “beliefs about how the mind works”, i.e. a theory, when calculating their answers.
Furthermore, she backs this point with evidence from brain imaging, which shows that the area responsible for thinking about others’ beliefs, allegedly used by the children in the experiment, is not the same as the mirror system where simulation takes place. Let me explain what she means by briefly returning to the implicit version of ST, according to which mirror-neuronal systems are parts of the brain that fire not just when performing an action, but also when observing another agent perform actions that I have performed before. In essence, this discovery has been linked to ST because it gives neurological support to the idea that we can simulate “off-line” without acting. Saxe’s point is hence that if ST were true, then the part of the brain employed by the children in the bead experiment should be the mirror system; but it is not.
There are two ways of attacking her argument. First, Saxe conflates explicit and mirror-neuronal ST by presuming that they are the same. Her critique works if mirror systems are the “unifying basis” for all mindreading (Gallese, 2004), since she shows that they are uninvolved in many experiments. However, if we maintain the distinction between explicit and mirror-neuronal ST, then her argument from brain imaging merely attacks the latter. This is because explicit ST, as introspective imagining, is independent of mirror systems and is a more conceptual theory of social cognition. Therefore, her argument from brain imaging fails against ST in general.
The second, yet weaker, response is against the remaining argument from error against explicit ST. Goldman claims that the bead experiment itself was set up improperly. Let me reconstruct his argument: ST allows for mistakes in inputs, which are bound to occur in the experiment, since the children are interviewed in a manner that makes them feel they have to choose one colour. Therefore, if their misconstrued simulation inputs are 1) someone is ignorant and 2) a colour must be chosen, then “this could easily steer them toward an opposite pretence” (Goldman, 2006: 84; see also Gordon, 2005). Saxe would be quick to point out that this does not explain the systematicity of the error across experiments. In fact, I did not reconstruct an argument here, but rather a straw man.
Let me go even further than Saxe by phrasing her concerns more conceptually. In our first example of ST, “If I exhibited behaviour x, then my mental states would be y. A and I are similar in the relevant respects, therefore A’s mental states should be y also.”, must I not have some theory-like understanding of social cognition to make these inferences in the first place? Without appealing to mirror neurones, the explicit STist must provide some explanation of how an agent figures out her own mental states, as well as how she is similar to other agents. Gallagher would agree with me in claiming that one’s own “narrow experiences” cannot give “a reliable sense” of the workings of other minds (2007: 355); hence pure ST, without theory-like propositions embedded, will not be able to do the job. Stich and Nichols share this fear. They argue that such propositions are always “lurking in the background” when interpreting experimental data in favour of ST (Stich and Nichols, 1992: 46).
This is not necessarily a problem. What it points towards is a hybrid between ST and TT. What I hope to have shown is that explicit versions of ST are problematic. Still, there is empirical data, like “Maxi”, that TT cannot easily handle. Even Goldman agrees that neither theory on its own has the explanatory power required, and that an ST embedding a few theory-like propositions is needed (Goldman, 2005; 2006); or was it a TT that in some circumstances utilises simulation (Saxe, 2005: 178)? The question of extent, and of how to formulate such a hybrid theory, is the next big step in the debate, but shall be left for another essay.
Gallagher, S. 2007. Simulation trouble. Social Neuroscience 2, no.3: 353-365.
Gallese, V. et al. 2004. A unifying view of the basis of social cognition. Trends in Cognitive Sciences 8: 396-403.
Goldman, A. 1989. Interpretation Psychologized. Mind and Language 4: 161-185.
Goldman, A. 2002. Simulation Theory and Mental Concepts. In Dokic, J. and J. Proust, eds. Simulation and Knowledge of Action. Amsterdam: John Benjamins. 1-19.
Goldman, A. and N. Sebanz. 2005. Simulation, mirroring, and a different argument from error. Trends in Cognitive Sciences 9, no.7: 320.
Goldman, A. 2006. Simulating Minds: The Philosophy, Psychology and Neuroscience of Mindreading. OUP.
Gordon, R. 1986. Folk Psychology as Simulation. Mind and Language 1: 158-171.
Ruffman, T. 1996. Do children understand the mind by means of simulation or a theory? Evidence from their understanding of inference. Mind and Language 11: 387-414.
Saxe, R. 2005. Against simulation: the argument from error. Trends in Cognitive Sciences 9: 174-179.
Stich, S. and S. Nichols. 1992. Folk Psychology: Simulation or Tacit Theory? Mind and Language 7, no.1: 35-71.
Wimmer, H. and J. Perner. 1983. Beliefs about Beliefs: Representation and Constraining Function of Wrong Beliefs in Children’s Understanding of Deception. Cognition 13, no.1: 103-128.