The Architecture of Reason – Recommended Reading

Book Review: Robert Audi, The Architecture of Reason: The Structure and Substance of Rationality. Oxford: Oxford University Press, 2001.

Audi develops analogies between theoretical and practical reason. He takes them to have a similar foundational structure based in experience. He indicates how recognition of this similarity allows insights about one sort of reason to be applied to the other sort. In particular, it becomes clear that egoism is not a reasonable account of practical reason. In addition to these main themes, Audi distinguishes and assesses various sorts of relativism about reason and develops an account of what it is to be a reasonable and rational person. There are numerous interesting points along the way, many more than can be addressed in a short review.

Audi supposes that beliefs may be (defeasibly) grounded in other beliefs or in certain foundational experiences, such as perceptual or sensory experiences, whereas desires may be grounded in other desires and beliefs or in certain foundational experiences that are liked or disliked. In both cases the foundations are experiences rather than beliefs about experiences. (He argues that Sellars’ attack on “the myth of the given” does not apply to his version of foundationalism.) He allows for other foundational sources of justification for beliefs in addition to sensory experience, namely, introspection, remembering (both remembering the experience and remembering that something is or was the case), and reason (or reflection or intuition).

Justified non-inferential beliefs are based on foundational sources. Justified inferential beliefs are based on justified beliefs. The same foundational structure holds for desires or wants, where intrinsic wants are analogous to non-inferential beliefs. Foundational experiences for wants are intrinsically liked or disliked because they are pleasant or unpleasant, but this does not mean, say, that wanting to swim for pleasure is wanting to swim as a means to pleasure: one envisions the pleasure as in the swimming. So wanting to swim can be an intrinsic want.

Audi distinguishes objectual wanting (wanting the pain to stop), behavioral wanting (wanting to swim), and propositional wanting (wanting there to be no more war). Wanting the pain to stop is objectual wanting (wanting this to stop) rather than propositional (wanting that I am free from the pain), so in an important sense it is not egoistic. He says that a similar point holds for belief. “Many philosophers have conflated the question of what justifies a belief with the problem of how it can be defended . . . The basis of my justification for believing that there is a tree before me is a particular visual experience . . . I am not part of the object of the experience” (102) and “self-referential beliefs need not be taken as primary in perception” (103). We should reject epistemic egocentrism just as we reject egoism.

There is at least one disanalogy between theoretical and practical reason. Suppose that P and Q are incompatible propositions, that you have conclusive reasons to believe their disjunction, P or Q, and that your total evidence does not favor one alternative over the other. Then from the point of view of theoretical reason, it would seem you must suspend judgment and not believe one or the other. On the other hand, if A and B are incompatible courses of action (for example, representing two ways of getting to a place you need to get to), if you have conclusive reasons to take one of these courses of action, and if you have no more reason to take one rather than the other, then from the point of view of practical reason you really must decide and must not fail to make a decision. Audi notes this difference but describes it in the following way. In the practical case, you are justified in choosing course of action A and you are justified in choosing course of action B, whereas in the theoretical case you are neither justified in believing P nor justified in believing Q; however, he thinks you might be rational in believing P and rational in believing Q in the case described, which seems absolutely wrong to me.

One worry about Audi’s approach lies in the particular way in which it depends on the notion of a “ground” or “basis” of a belief or desire, whether ultimate or not. He says at one point that one’s grounds for a belief are the sorts of things that “a successful justification for it would provide” where a justification is a process of arguing for the belief by supplying premises that support the belief. But, as already mentioned, one’s ultimate grounds can include experiences, where experiences of the relevant sort are not premises and are not the sort of thing that a successful argument provides. He allows that a justificatory argument might include premises about certain experiences, but it is the experiences, not the claims about them, which are the ultimate source of justification. He allows that the current grounds of a belief need not be the original grounds. The current grounds of a belief support it in the way that pillars support a porch (where one pillar might replace another in bearing the weight). Support is a “kind of psychologically unobtrusive evidential sustenance relation.” However, we cannot identify supporting beliefs with those one would offer if asked to justify one’s belief, because “it can be difficult to tell when we are discovering a new ground . . . and when we are articulating one that was already a tacit basis of that belief.”

Sometimes a belief is “simply retained in memory” with no record of the original reasons and no new grounds. It is not completely clear to me what status Audi attributes to such a belief. Suppose the belief is not currently on one’s mind; it is just sitting there in memory. It therefore cannot be grounded in a present memory experience, because there is no such experience. Is such a belief grounded in memory simply by virtue of being retained in memory? Or is it ungrounded? Some theorists would say that the justification of such a belief depends on its earlier history. But Audi appears to accept a kind of internalism of the present moment which rules out such an appeal to history.

He does say: “An important exception to the view that intentional actions are grounded in intrinsic desires is this. One might have forgotten why one is going into the kitchen, hence have a desire to do so that is merely non-intrinsic: neither intrinsic nor instrumental.” He calls this a “residual desire” (footnote 8, p. 246). So presumably a belief for which one has forgotten one’s reasons without having acquired any others is also ungrounded, a “residual” belief. At an earlier point Audi says that a “merely non-inferential belief will be neither justified nor capable of conferring justification on any belief grounded in it; for similar reasons I suggest that a merely noninstrumental desire will tend to be neither rational (though it need not be irrational) nor capable of conferring rationality on any desire or action grounded on it” (128).

What worries me about this is that a typical reader of this review may have a million separate beliefs—about what words mean, about the names and phone numbers of acquaintances, about historical dates and various other useful and useless facts. In relation to that vast store of beliefs, the reader’s current experiences are quite limited, including current sensory experiences, current memory experiences, experiences of introspection, and experiences of reflection and intuition. A moment’s reflection suffices to indicate that the actual experiences of the reader at this time are insufficient to serve as any sort of foundational justification of all but a few of those million beliefs.

Furthermore, it is quite possible, even likely, that almost all of those million beliefs are “simply retained in memory” with no record of the original reasons and no new grounds in relevant support relations, although Audi denies it.

Once it is clear that a belief can be inferential by virtue of standing to one or more other beliefs in the kind of psychologically unobtrusive evidential sustenance relation just illustrated, it also becomes apparent that a great many of our beliefs are inferential. They are based on one or more other, evidential beliefs of ours, as opposed to being non-inferentially grounded in a current experience or mental state or simply retained in memory (35).

As we have seen, in a typical case, a person will have a foundation of non-inferential beliefs rational on the basis of experience or reason and a vast superstructure of beliefs based on them (205).

On the contrary, this is far from “apparent”. It rests on a very strong and dubious psychological hypothesis about support relations. To be sure, people do indeed provide justifications for their beliefs when asked, but the justifications often have little to do with why they originally held the views or why they hold them now. People tend to rationalize: they fabricate reasons when asked what their reasons are. When the original reasons for belief are undermined, or when actually cited reasons for belief are undermined, people tend to continue believing as they do. If challenged, instead of giving up a belief, they come up with new reasons for it (Ross and Anderson, 1982; Haidt, 2001). Audi notes that “it is philosophically prudent to try to account for rationality without multiplying beliefs, inferences, or thought processes of any kind beyond necessity” (34). It is also philosophically prudent to avoid unnecessarily speculative psychological hypotheses!

The problem I worry about here applies to many other foundational theories, and I am not sure what the best response to it is. Audi ought to be able to adapt that best response to the attractive theory he presents here.


Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108, 814–834.

Ross, Lee, and Anderson, Craig A. (1982). Shortcomings in the attribution process: On the origins and maintenance of erroneous social assessments. In Judgment under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.

Reviewed by Gilbert Harman, Princeton University, 2002.


Conspiracies & Rationality

A conspiracy theory usually attributes the ultimate cause of an event or chain of events (usually political, social, pop-cultural or historical events), or the concealment of such causes from public knowledge, to a secret, and often deceptive, plot by a group of powerful or influential people or organizations. Many conspiracy theories imply that major events in history have been dominated by conspirators who manipulate political happenings from behind the scenes. Historians often take a conspiracy theory as the actual theory, i.e., the viewpoint with the greatest explanatory value and the greatest utility as a starting point for further investigation, explanation and problem solving.

There are no fewer than 10,000 sites on the internet that explore or further conspiracy theories. Amongst the leading theories are the following (Wired Magazine, Issue 15.11):

NASA Faked the Moon Landings
And Arthur C. Clarke wrote the script, at least in one version of the story. Space skeptics point to holes in the Apollo archive (like missing transcripts and blueprints) or oddities in the mission photos (misplaced crosshairs, funny shadows). A third of respondents to a 1970 poll thought something was fishy about mankind’s giant leap. Today, 94 percent accept the official version… Saps!

The US Government Was Behind 9/11
Or Jews. Or Jews in the US government. The documentary Loose Change claimed to find major flaws in the official story — like the dearth of plane debris at the site of the Pentagon blast, and the claim that jet fuel alone could never vaporize a whole 757. Judge for yourself: After Popular Mechanics debunked the theory, the magazine’s editors faced off with proponents in a debate, available on YouTube.

Princess Diana Was Murdered
Rumors ran wild after Princess Diana’s fatal 1997 car crash, and they haven’t stopped yet. Reigning theories: She faked her death to escape the media’s glare, or the royals snuffed her out (via MI6) to keep her from marrying her Muslim boyfriend. For the latest scenarios, check out the Web site of her boyfriend’s dad, Mohamed Al Fayed.

The Jews Run Hollywood and Wall Street
A forged 19th-century Russian manuscript called “The Protocols of the Elders of Zion” (virtually required reading in Nazi Germany) purports to lay out a Jewish plot to control media and finance, and thus the world. Several studies have exposed the text as a hoax, but it’s still available in numerous languages and editions.

The Scientologists Run Hollywood
The long list of celebrities who have had Dianetics on their nightstands fuels rumors that the Church of Scientology pulls the strings in Tinseltown — vetting deals, arranging marriages, and spying on stars. The much older theory is that Jews run Hollywood, and the Scientologists have to settle for running Tom Cruise.

Paul Is Dead
Maybe you’re amazed, but in 1969 major news outlets reported on rumors of the cute Beatle’s death and replacement by a look-alike. True believers pointed to a series of clues buried in the Fab Four’s songs and album covers. Even for skeptics, McCartney’s later solo career lent credibility to the theory.

AIDS Is a Man-Made Disease
A number of scientists have argued that HIV was cooked up in a lab, either for bioweapons research or in a genocidal plot to wipe out gays and/or minorities. Who supposedly did the cooking? US Army scientists, Russian scientists, or the CIA. Mainstream researchers point to substantial evidence that HIV jumped species from African monkeys to humans.

Lizard-People Run the World
If a science fiction-based religion isn’t exotic enough, followers of onetime BBC reporter David Icke believe that certain powerful people — like George W. Bush and the British royals — actually belong to an alien race of shape-shifting lizard-people. Icke claims Princess Diana confirmed this to one of her close friends; other lizard theories (there are several) point to reptilian themes in ancient mythology. And let’s not forget the ’80s TV show V.

The Illuminati Run the World
The ur-conspiracy theory holds that the world’s corporate and political leaders are all members of an ancient cabal: Illuminati, Rosicrucians, Freemasons — take your pick. It doesn’t help that those secret societies really existed (George Washington was a Mason). Newer variations implicate the Trilateral Commission, the New World Order, and Yale’s Skull and Bones society.


The expression “conspiracy theory” has strongly negative connotations; it is almost invariably used in a way which implies that the theory in question is not to be taken seriously. However careful consideration of what a conspiracy theory is reveals that this dismissive attitude is not justified.

A “conspiracy” is simply a secret plan on the part of a group of people to bring about some shared goal, and a “conspiracy theory” is simply a theory according to which such a plan has occurred or is occurring. Most people can cite numerous examples of conspiracies from history, current affairs, or their own personal experience. Hence most people are conspiracy theorists. The problem is that when people think of particular examples of conspiracy theories they tend to think of theories that are clearly irrational.

When asked to cite examples of typical conspiracy theories, many people will refer to theories involving conspirators who are virtually all-powerful or virtually omniscient.

Others will mention theories involving alleged conspiracies that have been going on for so long, or that involve so many people, that it is implausible to suppose they could have remained undetected (by anyone other than the conspiracy theorists).

Still others refer to theories involving conspirators who appear to have no motive to conspire (unless perhaps the desire to do evil for its own sake can be thought of as a motive).

Such theories are conspiracy theories and they are irrational, but it does not follow, nor is it true, that they are irrational because they are conspiracy theories. Thinking of such irrational conspiracy theories as paradigms of conspiracy theories is like thinking of numerology as a paradigm of number theory, or astrology as a paradigm of a theory of planetary motion. The subject matter of a theory does not in general determine whether belief in it is rational or not.

People do conspire. Indeed almost everyone conspires some of the time (think of surprise birthday parties) and some people conspire almost all the time (think of CIA agents). Many things (for example, September 11) cannot be explained without reference to a conspiracy. The only question in such cases is “Which conspiracy theory is true?”.

The official version of events (which in this case I accept) is that the conspirators were members of al-Qaida. This explanation is, however, unlikely to attract the label “conspiracy theory”. Why not? Because it is also the “official story”.

Although it is common to contrast conspiracy theories with the official non-conspiratorial version of events, quite often the official version of events is just as conspiratorial as its rivals. When this is the case, it is the rivals to the official version of events that will inevitably be labelled “conspiracy theories” with all the associated negative connotations. So, “conspiracy theory” has become, in effect, a synonym for a belief which conflicts with an official story.

This should make it clear how dangerous the expressions “conspiracy theory” and “conspiracy theorist” have become. These expressions are regularly used by politicians and other officials, and more generally by defenders of officialdom in the media, as terms of abuse and ridicule.

Yet it is vital to any open society that there are respected sources of information which are independent of official sources of information, and which can contradict them without fear. The widespread view that conspiracy theories are always, or even typically, irrational is not only wrongheaded, it is a threat to our freedom.

Of course, no one should deny that there are people who have an irrational tendency to see conspiracies everywhere, and it would, of course, be possible to restrict the expression “conspiracy theorist” in such a way that it only referred to such people. But if we do this, we should also remember that there is another form of irrationality, namely the failure to see conspiracy, even when one is confronted with clear evidence of it, which is at least as widespread, and which is far more insidious.

We need a name for people who irrationally reject evidence of conspiracy, to give our political discourse some much needed balance.

I think the expression “coincidence theorist”, which has gained a certain currency on the Internet, is a suitable candidate. A coincidence theorist fails to connect the dots, no matter how suggestive of an underlying pattern they are.

A hardened coincidence theorist may watch a plane crash into the second tower of the World Trade Centre without thinking that there is any connection between this event and the plane which crashed into the other tower of the World Trade Centre less than an hour earlier.

Similarly, a coincidence theorist can observe the current American administration’s policies in oil-rich countries from Iraq and Iran to Venezuela, and see no connection between those policies and oil.

A coincidence theorist is just as irrational as a conspiracy theorist (in the sense of someone excessively prone to conspiracy theorising). They are equally prone to error, though their errors are of different and opposing kinds. The errors of the conspiracy theorist, however, are much less dangerous than the errors of the coincidence theorist. The conspiracy theorist usually only harms himself. The coincidence theorist may harm us all by making it easier for conspirators to get away with it.

Also see: Conspiracy Theories: The Philosophical Debate, David Coady, Ashgate, 2006.

Seeing and Knowing

The present paper has two major goals, one of which is to argue that seeing is not always perceiving and the other of which is to argue that visual perception alone leads to knowledge of the world. Let me immediately try to make these two cryptic claims more transparent. Not all human vision has been designed to allow visual perception. Seeing can and often does make us visually aware of objects, properties and facts in the world. But it need not. Often enough, seeing allows us to act efficiently on objects of which we are dimly  aware, if at all. While moving at high speed, for example, experienced drivers are sometimes capable of avoiding an interfering obstacle of whose visual attributes they become fully aware afterwards. One may efficiently either catch or avoid being hit by a flying tennis ball without being aware of either its color or texture. This is the sense in which seeing is not always perceiving. If so, then the question arises as to the nature, function and cognitive role of non-perceptual vision. Here, I will make two joint claims. First of all, I will try to argue that the main job of human visual perception is to provide visual information for what functionalist philosophers have called  the “belief box”. In other words, visual percepts are inputs to further conceptual processing whose output can be stored in the belief box. Secondly, I will try to argue that the function of that part of the visual system that produces what I shall call “non-perceptual” or more often “visuomotor” representations is to provide visual guidance to the “intention box”. More specifically, I will argue that, unlike visual percepts, visuomotor representations — which, I shall claim, are genuine representations — present visual information to motor intentions and serve as inputs to “causally indexical” concepts. 
On the joint assumptions (which I accept) that, in the relevant propositional sense, only facts can be known, and that one cannot know a fact unless one believes that this very fact (or state of affairs) holds, it follows from my distinction between perceptual and visuomotor processing that only visual perception can give rise to “detached” knowledge of the mind-independent world.

I. Not all seeing is perceiving
I.1. The dualistic model of the human visual system
            In their (1982) paper “Two Cortical Visual Systems”, the cognitive neuroscientists Leslie Ungerleider and Mortimer Mishkin posited an anatomical distinction between the ventral pathway and the dorsal pathway in the primate visual system (see Figure 1). The former projects the primary visual cortex onto inferotemporal areas. The latter projects the primary visual cortex onto parietal areas, which serve as a relay between the primary visual cortex, the premotor and the motor cortex. Ungerleider and Mishkin based their anatomical distinction on neurophysiological and behavioral evidence gathered from the study of macaque monkeys. They performed intrusive lesions respectively in the ventral and in the dorsal pathway of the visual system of macaque monkeys, and they found the following double dissociation. Animals with a lesion in the ventral pathway were impaired in the identification and recognition of the colors, textures and shapes of objects. But they were relatively unimpaired in tasks of spatial orientation. In tasks of spatial orientation, they were presented with two wells, one of which contained food and the other of which was empty: the former was closer to a landmark than the latter (see Figure 2). Animals with a ventral lesion could accurately use the presence of the landmark in order to discriminate the well with food from the well without. By contrast, animals with a dorsal lesion were severely disoriented, but their capacity to identify and recognize the shapes, colors and textures of objects was well-preserved. On this basis, Ungerleider and Mishkin (1982) concluded that the ventral pathway of the primate visual system is the What system and the dorsal pathway of the primate visual system is the Where system.

            In their (1995) book, The Visual Brain in Action, the cognitive neuroscientists David Milner and Mel Goodale presented a number of arguments in favor of a new interpretation of the dualistic model of the human visual system. On their view, the ventral stream of the human visual system serves what they call “vision-for-perception” and the dorsal stream serves what they call “vision-for-action”. The important idea underlying Milner and Goodale’s dualistic model of human vision is that one and the same visual stimulus can be processed in two fundamentally different ways. Now, two caveats are important here. First of all, it is quite clear, I think, that, as Austin (1962) emphasized, humans can see a great variety of things: they can see e.g., tables, trees, rivers, substances, gases, vapors, mountains, flames, clouds, smoke, shadows, holes, pictures, movies, events and actions. Here, I will not examine the ontological status of all the various things that human beings can see and I shall restrict myself to seeing ordinary middle-sized objects that can also happen to be targets of human actions. Secondly, it is no objection to the dualistic model of the human visual system to acknowledge that, in the real life of normal human subjects, the two distinct modes of visual processing are constantly collaborating. Indeed, the very idea that they collaborate — if and when they do — presupposes that they are distinct. The trick of course is to find experimental conditions in which the two modes of visual processing can be dissociated. In the following, I will provide some examples drawn first from the psychophysical study of normal human subjects and then from the neuropsychological study of brain-lesioned human patients.

I.2. Psychophysical evidence
            Bridgeman et al. (1975) and Goodale et al. (1986) found that normal subjects can point accurately to a target on the screen of a computer whose motion they could not consciously notice because it coincided with one of their saccadic eye movements (see Jeannerod, 1997: 82). Castiello et al. (1991) found that subjects are able to correct the trajectory of their hand movement directed towards a moving target some 300 milliseconds before they become conscious of the target’s change of location. Pisella et al. (2000) and Rossetti & Pisella (2000) performed experiments involving a pointing task in which subjects were presented with a green target towards which they were requested to point their index finger. Some of them were instructed to stop their pointing movement towards the target when and only when it changed location by jumping either to the left or to the right. Pisella et al. (2000) and Rossetti & Pisella (2000) found a significant percentage of very fast unwilled correction movements generated by what they called the “automatic pilot” for hand movement. In a second experiment, Pisella et al. (2000) presented subjects simultaneously with pairs of a green and a red target. Subjects were instructed to point to the green target, but the color of the two targets could be interchanged unexpectedly at movement onset. Unlike a change of target location, a change of color did not elicit fast unwilled corrective movements by the “automatic pilot”. On this basis, Pisella et al. (2000) drew a contrast between the fast visuomotor processing of the location of a target in egocentric coordinates and the slower visual processing of the color of an object.

            One psychophysical area of particular interest is the study of visual size-contrast illusions. One particularly well-known such illusion is the Titchener or Ebbinghaus illusion. The standard version of the illusion consists of the display of two circles of equal diameter, one surrounded by an annulus of circles greater than it, and the other surrounded by an annulus of circles smaller than it. Although they are equal, the former looks smaller than the latter (see Figure 3). One plausible account of the Titchener illusion is that the array of smaller circles is judged to be more distant than the array of larger circles. Visually based perceptual judgments of distance and size are typically relative judgments: in a perceptual task, one cannot fail to see some things as smaller (or larger) and closer (or further away) than other neighboring things that are parts of a single visual array. In perceptual tasks, the output of obligatory comparisons of sizes, distances and positions of constituents of a visual array serves as input to perceptual constancy mechanisms. As a result, of two physically equal objects, if one is perceived as more distant from the observer than the other, the former will be perceived as larger than the latter. A non-standard version of the illusion consists in the display of two circles of unequal diameter: the larger of the two is surrounded by an annulus of circles larger than it, while the smaller of the two is surrounded by an annulus of circles smaller than it, so that the two unequal circles look equal.

            Aglioti et al. (1995) designed an experiment in which they replaced the two central circles by two graspable three-dimensional plastic disks, which they displayed within a horizontal plane. In a first series of experiments with pairs of unequal disks whose diameters ranged from 27 mm to 33 mm, they found that on average the disk in the annulus of larger circles had to be 2.5 mm wider than the disk in the annulus of smaller circles in order for both to look equal. These numbers provide a measure of the sensitivity of the human visual system. Finally, Aglioti et al. (1995) alternated presentations of physically unequal disks, which looked equal, and presentations of physically equal disks, which looked unequal. Both kinds of trials were presented randomly, and so were the left vs. right positions of either kind of stimuli. Subjects were instructed to pick up the disk on the left between the thumb and index finger of their right hand if they thought the two disks to be equal, or to pick up the disk on the right if they judged them to be unequal.

            The sequence of subjects’ choices of the disk on the right or the disk on the left provided a measure of the magnitude of the illusion prompted by the perceptual comparison between two disks surrounded by two distinct annuli. In the visuomotor task, the measure of grip size was based on the unfolding of the natural grasping movement performed by subjects while their hand approached the object. During a prehension movement, fingers progressively stretch to a maximal aperture before they close down until contact with the object. It has been found that the maximum grip aperture (MGA) takes place at a relatively fixed point, i.e., at about 60% of the duration of the movement (cf. Jeannerod, 1984). In non-illusory contexts, MGA has been found to be reliably correlated with the object’s physical size. Although much larger, it is directly proportional to the actual physical size of the object. MGA cannot depend on a conscious visual comparison between the size of the object and subjects’ hand during the prehension movement since the correlation between MGA and object’s size is reliable even when subjects have no visual access to their own hand. Rather, MGA is assumed to result from an early anticipatory automatic visual process of calibration. Thus, Aglioti et al. (1995) measured MGA in flight using optoelectronic recording.
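The way MGA might be extracted from a recorded aperture profile can be sketched as follows. This is a hypothetical illustration only: the function name and the synthetic movement trace are invented, not taken from Aglioti et al. (1995) or Jeannerod (1984).

```python
# Hypothetical sketch: extract maximum grip aperture (MGA) and its timing
# from a sampled thumb-index aperture profile. Data below are invented.
def max_grip_aperture(times, apertures):
    """Return (MGA, fraction of movement duration at which MGA occurs)."""
    peak_idx = max(range(len(apertures)), key=apertures.__getitem__)
    duration = times[-1] - times[0]
    fraction = (times[peak_idx] - times[0]) / duration
    return apertures[peak_idx], fraction

# Synthetic prehension movement: the fingers stretch to a peak aperture,
# then close down until contact with the object.
times = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]  # seconds
apertures = [20, 35, 55, 75, 90, 98, 100, 92, 78, 62, 50]        # millimeters
mga, frac = max_grip_aperture(times, apertures)
```

On this synthetic trace the peak aperture (100 mm) falls at 60% of movement duration, which is the regularity the text reports from Jeannerod (1984); a real analysis of optoelectronic recordings would of course work with noisy, irregularly sampled marker data.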

            What Aglioti et al. (1995) found was that, unlike comparative perceptual judgment expressed by the sequence of choices of either the disk on the left or the disk on the right, the grip was not significantly affected by the illusion. The influence of the illusion was significantly stronger on perceptual judgment than on the grasping task. This experiment, however, raises a number of methodological problems. The main issue, raised by Pavani et al. (1999) and Franz et al. (2000), is the asymmetry between the two tasks. In the perceptual task, subjects are asked to compare two distinct disks surrounded by two different annuli. But in the grasping task, subjects focus on a single disk surrounded by an annulus. So the question arises whether, from the observation that the comparative perceptual judgment is more affected by the illusion than the grasping task, one may conclude that perception and action are based on two distinct representational systems.

            Aware of this problem, Haffenden & Goodale (1998) performed the same experiment, but they designed one more task: in addition to instructing subjects to pick up the disk on the left if they judged the two disks to be equal in size or to pick up the disk on the right if they judged them to be unequal, they required subjects to manually estimate between the thumb and index finger of their right hand the size of the disk on the left if they judged the disks to be equal in size and to manually estimate the size of the disk on the right if they judged them to be unequal (see Figure 4). Haffenden & Goodale (1998) found that the effect of the illusion on the manual estimation of the size of a disk (after comparison) was intermediary between comparative judgment and grasping.

            Furthermore, Haffenden & Goodale (1998) found that the presence of an annulus had a selective effect on grasping. They contrasted the presentation of pairs of disks either against a blank background or surrounded by an annulus of circles of intermediate size, i.e., of size intermediary between the size of the smaller circles and the size of the larger circles involved in the contrasting pair of illusory annuli. The circles of intermediate size in the annulus were slightly larger than the disks of equal size. When a pair of physically different disks was presented against either a blank background or a pair of annuli made of intermediate-size circles, both grip scaling and manual estimates reflected the physical difference in size between the disks. When physically equal disks were displayed against either a blank background or a pair of annuli made of circles of intermediate size, no significant difference was found between grasping and manual estimate. The following dissociation, however, turned up: when physically equal disks were presented with a middle-sized annulus, overall MGA was smaller than when physically equal disks were presented against a blank background. Thus, the presence of an annulus of middle-sized circles prompted a smaller MGA than a blank background. Conversely, overall manual estimate was larger when physically equal disks were presented against a background with a middle-sized annulus than when they were presented against a blank background. The illusory effect of the middle-sized annulus presumably arises from the fact that the circles in the annulus were slightly larger than the equal disks. Thus, whereas the presence of a middle-sized annulus contributes to increasing manual estimation, it contributes to decreasing grip scaling. This dissociation shows that the presence of an annulus may have conflicting effects on perceptual estimate and on grip aperture.

            Finally, Haffenden, Schiff & Goodale (2001) went one step further. They presented subjects with three distinct Titchener circle displays one at a time, two of which are traditional Titchener displays: a central disk surrounded by an annulus of circles either smaller or larger than the disk. In the former case, the gap between the edge of the disk and the annulus is 3 mm. In the latter case, the gap between the edge of the disk and the annulus is 11 mm. In the third display, the annulus is made of small circles (of the same size as in the first display), but the gap between the edge of the disk and the annulus is 11 mm (like the gap in the second display with an annulus of larger circles) (see Figure 5). What Haffenden, Schiff and Goodale (2001) found was the following dissociation: in the perceptual task, subjects estimated the third display very much like the first display and unlike the second display. In the visuomotor task, subjects’ grasping in the third condition was much more similar to grasping in the second than in the first condition (see Figure 6). Thus, perceptual estimate was far more sensitive to the size of the circles in the annulus than to the distance between target and annulus. Conversely, grasping was far more sensitive to the distance between target and annulus than to the size of the circles in the annulus. The idea here is that the annulus is processed by the visuomotor system as a potential obstacle for the position of the fingers on the target disk.

            From this selective review of evidence on size-contrast illusions, I would like to draw two provisional conclusions. First, visual perception and visually guided hand actions directed towards objects impose different computational requirements on the human visual system. As I said above, visually based perceptual judgments of distance and size are typically relative comparative judgments. By contrast, visually guided actions directed towards objects are typically based on the computation of the absolute size and the egocentric representation of the location of objects on which to act. In order to successfully grab a branch or a rung, one must presumably compute the distance and the metrical properties of the object to be grabbed quite independently of pictorial contextual features in the visual array.

            Second, what the above experiments suggest is not that, unlike perceptual judgments, the visuomotor control of grasping is immune to illusions. Rather, both perceptual judgment and the visuomotor control of action can be fooled by the environment; they are fooled, however, by different features of the visual display. The effect of the Titchener size-contrast illusion on perceptual judgment arises mostly from the comparison between the diameter of the disk and the diameter of the circles in the surrounding annulus. The visuomotor processing, which delivers a visual representation of the absolute size of a target of prehension, is so sensitive to the distance between the edge of the target and its immediate environment that it can be led to process two-dimensional cues as if they were three-dimensional obstacles. I take this last point quite seriously because I claim that it is evidence that the output of the visuomotor processing of the target of an action can misrepresent features of the distal stimulus and is thus a genuine mental representation.

I.3. Neuropsychological evidence
            In the 1970s, Weiskrantz and others discovered a neuropsychological condition called “blindsight” (see Weiskrantz, 1986, 1997). Since then, the phenomenon has been extensively studied and discussed by philosophers. Blindsight results from a lesion in the primary visual cortex, anatomically located prior to the bifurcation between the ventral and the dorsal streams. The significance of the discovery of this phenomenon lies in the fact that although blindsight patients have no phenomenal subjective visual experience of the world in their blind field, they nonetheless retain striking residual visuomotor capacities. In situations of forced choice, they can do such remarkable things as grasp quadrangular blocks and insert a hand-held card into an oriented slot. According to most neuropsychologists who have studied such cases, in blindsight patients the visual information is processed by subcortical pathways that bypass the visual cortex and relay visual information to the motor cortex.

            In the early 1990s, DF, a British woman, suffered an extensive lesion in the ventral stream of her visual system as a result of carbon monoxide poisoning. She thus became an apperceptive agnosic, i.e., a visual form agnosic patient (see Farah, 1990 for the distinction between apperceptive and associative agnosia). Following the discovery of blindsight, the main novelty of the neuropsychological description of patient DF’s condition — first examined by Goodale and his colleagues (Goodale et al., 1991) — lies in the fact that DF’s examination did not focus exclusively on what she could not do as a result of her lesion. Rather, she was investigated in depth for what she was still able to do.

            Careful sensory testing of DF revealed subnormal performance for color perception and for visual acuity with high spatial frequencies, though detection of low spatial frequencies was impaired. Her motion perception was poor. DF’s perception of shape and patterns was very poor. She was unable to report the size of an object by matching it with the appropriate distance between the index finger and the thumb of her right hand. Her line orientation detection (revealed either by verbal report or by turning a hand-held card until it matched the orientation presented) was highly variable: although she was above chance for large angular orientation differences between two objects, she fell to chance level for smaller angles. DF was unable to recognize the shape of objects. Interestingly, however, her visual imagery was preserved. For example, although she could hardly draw copies of seen objects, she could draw copies of objects from memory — which she could then hardly recognize later.

            By contrast with her impairment in object recognition, DF was normally accurate when object orientation or size had to be processed, not in view of a perceptual judgment, but in the context of a goal-directed hand movement. When reaching for and grasping between her index finger and thumb the very same objects that she could not recognize, she performed accurate prehension movements. Similarly, while transporting a hand-held card towards a slit as part of the process of inserting the former into the latter, she could normally orient her hand to the slit at different orientations (Goodale et al., 1991, Carey et al., 1996). When presented with a pair of rectangular blocks of either the same or different dimensions and asked whether they were the same or different, she failed. When she was asked to reach out and pick up a block, the measure of her (maximal) grip aperture between thumb and index finger revealed that her grip was calibrated to the physical size of the objects, like that of normal subjects. When shown a pair of objects selected from twelve objects of different shapes for same/different judgment, she failed. When asked to grasp them using a “precision grip” between thumb and index finger, she succeeded.

            Conversely, optic ataxia is a syndrome produced by lesions in the dorsal stream. An optic ataxic patient, AT, examined by Jeannerod et al. (1994), shows the reverse dissociation. While she can recognize and identify the shape of visually presented objects, she has serious visuomotor deficits: her reach is misdirected and her finger grip is improperly adjusted to the size and shape of the target of her movements.

            At bottom, DF turns out to be able to visually process size, orientation and shape required for grasping objects, i.e., in the context of a reaching and grasping action, but not in the context of a perceptual judgment. Other experimental results with DF, however, indicate that her visuomotor abilities are restricted in at least two respects. First, in the context of an action, she turns out to be able to visually process simple sizes, shapes and orientations. But she fails to visually process more complex shapes. For example, she can insert a hand-held card into a slot at different orientations. But when asked to insert a T-shaped object (as opposed to a rectangular card) into a T-shaped aperture (as opposed to a simple oriented slit), her performance deteriorated sharply. Inserting a T-shaped object into a T-shaped aperture requires the ability to combine the computations of the orientation of the stem with the orientation of the top of the object together with the computation of the corresponding parts of the aperture. There are good reasons to think that, unlike the quick visuomotor processing of simple shapes, sizes and orientations, the computations of complex contours, sizes and orientations require the contribution of visual perceptual processes performed by the ventral stream — which, we know, has been severely damaged in DF.

            Secondly, the contours of an object can be, and often are, computed by a process of extraction from differences in color and luminance cues. But normal humans can also extract the contours or boundaries of an object from other cues — such as differences in brightness, texture, shading and complex Gestalt principles of grouping and organization by similarity and good form. Now, when asked to insert a hand-held card into a slot defined by Gestalt principles of good form or by textural information, DF failed (see e.g., Goodale, 1995).

            Apperceptive agnosic patients like DF raise the question: What is it like to see with an intact dorsal system alone? I presently want to emphasize what I take to be a crucial characteristic of the content of visuomotor representations, drawing jointly on the examination of DF’s condition and on the visuomotor representations of normal subjects engaged in tasks of grasping illusory displays such as Titchener circles. As I said above, a visual percept yields a representation of the relative size and distance of various neighboring elements within a visual array. I take it that it is of the essence of a percept that the processing of such visual attributes of an object as its size, shape and position or distance must be available for comparative judgment. By contrast, a visuomotor representation of a target in a task of reaching and grasping provides information about the absolute size of the object to be grasped. Crucially, the spatial position of any object can be coded in at least two major coordinate systems or frames of reference: it may be coded in an egocentric frame of reference centered on the agent’s body or it may be coded in an allocentric frame of reference centered on some object present in the visual array. The former is required for allowing an agent to reach and grasp an object. The latter is required in order to locate an object relative to some other object in the visual display.

            Consider e.g., a visual percept of a glass to the left of a telephone. In the visual percept, the location of the glass relative to the location of the telephone is coded in allocentric coordinates. The visual percept has a pictorial content that, I shall argue momentarily, is both informationally richer and more fine-grained than the verbally expressible conceptual content of a different representation of the same fact or state of affairs. For example, unlike the sentence ‘The glass is to the left of the telephone’, the visual percept cannot depict the location of the glass relative to the telephone without depicting ipso facto the orientation, shape, texture, size and color of both the glass and the telephone. Conceptual processing of the pictorial content of the visual percept may yield a representation whose conceptual content can be expressed by the English sentence ‘The glass is to the left of the telephone’. Now the visuomotor representation of the glass as a target of a prehension action requires that information about the size and shape of the glass be contained within a representation of the position of the glass in egocentric coordinates. Unless the telephone interferes with the trajectory of the reaching part of the action of grasping the glass, when one intends to grasp the glass, one does not need to represent the spatial position of the glass relative to the telephone.

            We know that patient DF cannot match the orientation of her wrist to the orientation of a slot in the context of a perceptual task, i.e., when she is not involved in the action of inserting a hand-held card into the slot. She can, however, successfully insert a card into an oriented slot. She cannot perceptually represent the size, shape and orientation of an object. However, she can successfully grasp an object between her thumb and index finger. So the main relevant contrast revealed by the examination of DF is that while she can use an effector (e.g., the distance between her thumb and index finger or the rotation of her wrist) in order to grasp an object or to insert a card into a slot, i.e., in the context of an action, she cannot use the same effector to express a perceptual judgment. What is the main difference between the perceptual and the visuomotor tasks? Both tasks require that visual information about the size and shape of objects be provided. But in the visuomotor task, this information is contained in a representation of the spatial position of the target coded in an egocentric frame of reference. In the perceptual task, information about the size and shape of objects is contained in a representation of the spatial position of the object coded in an allocentric frame of reference. Normal subjects can easily switch from one spatial frame of reference to the other. Such fast transformations may be required when e.g., one switches from counting items lying on a table or from drawing a copy of items lying on a table to grasping one of them. However, DF’s visual system cannot make the very same visual information about the size, shape and orientation of an object available for perceptual comparisons. In DF, information about the size and the shape of an object is trapped within a visuomotor representation of its location coded in egocentric coordinates. It is not available for recoding in an allocentric frame of reference. 
Coding spatial relationships among different constituents of a visual scene is crucial to forming a visual percept. By contrast, locating a target in egocentric coordinates is crucial to forming a visuomotor representation on the basis of which to act on the target.
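The contrast between the two frames of reference can be made concrete with a small sketch. This is a 2-D simplification of my own (the function name and the ahead/right convention are assumptions, not anything in the vision literature): recoding an allocentric position into body-centred coordinates is a translation by the agent's position followed by a rotation by the agent's heading.

```python
import math

def to_egocentric(obj_xy, agent_xy, heading):
    """Re-code an allocentric (scene-centred) position into an egocentric,
    body-centred frame: distance ahead of and to the right of the agent.
    `heading` is the direction the agent faces, in radians."""
    dx, dy = obj_xy[0] - agent_xy[0], obj_xy[1] - agent_xy[1]
    ahead = dx * math.cos(heading) + dy * math.sin(heading)
    right = dx * math.sin(heading) - dy * math.cos(heading)
    return ahead, right

# The glass lies at (2, 3) in scene coordinates; the agent stands at the
# origin facing along the y-axis: egocentrically, 3 units ahead, 2 to the right.
ahead, right = to_egocentric((2.0, 3.0), (0.0, 0.0), math.pi / 2)
```

On the picture sketched in the text, a patient like DF would still compute something like `to_egocentric` for the target of a grasp, while the information so coded would remain unavailable for recoding in allocentric, scene-centred terms.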

II. Visual knowledge of the world

            Although, if the above is on the right track, not all human vision has been designed for visual perception, one crucial function of human vision is nonetheless visual perception. Like many psychological words, ‘perception’ can be used to refer both to a process and to its product. There are two complementary sides to visual perception: an objective side and a subjective side. On the objective side, visual perception is a fundamental source of knowledge about the world. Visual perception is indeed a — if not “the” — paradigmatic process by means of which human beings gather knowledge about objects, events and facts in their environment. On the subjective side, visual perception yields a peculiar kind of awareness of the world, namely sight. Sight has a special kind of phenomenal character (which is lacking in blindsight patients). The phenomenology of human visual experience is unlike the phenomenology of human experience in sensory modalities other than vision, e.g., touch, olfaction or audition.

            On my representationalist view (close to Dretske, 1995 and Tye, 1995), much of the distinctive phenomenology of visual experience derives from the fact that the human visual system has been selected in the course of evolution to respond to a specific set of properties. Visual perception makes us aware of such fundamental properties of objects as their size, orientation, shape, color, texture, spatial position, distance and motion, all at once. One of the puzzles that arises from neuroscientific research into the visual system (and which I will not discuss here) is the question of how these various visual attributes are perceived as bound together, given the fact that neuroscience has discovered that they are processed in different areas of the human visual system (see Zeki, 1993). Unlike vision, audition makes us aware of sounds. Olfaction makes us aware of smells and odors. Touch makes us aware of pressure and temperature. Although shape can be both seen and felt, what it is like to see a shape is clearly different from what it is like to touch it. Part of the reason for the difference lies in the fact that a normally sighted person cannot see e.g., the shape of a cube without seeing its color. But by feeling the shape of a cube, one does not thereby feel its color.

            I will presently argue that visual perception is a fundamental source of knowledge about the world: visual knowledge. I assume that propositional knowledge is knowledge of facts and that one cannot know a fact unless one believes that this fact obtains. I accept something like Dretske’s (1969) distinction between two levels of visual perception: nonepistemic perception (of objects) and epistemic perception (of facts). Importantly, on my view, the nonepistemic perception of objects gives rise to visual percepts and visual percepts are different from what I earlier called visuomotor representations of the targets of one’s action. What Dretske (1969) calls nonepistemic seeing is part of the perceptual processing of visual information. In the previous section, I gave empirical reasons why visual percepts differ from visuomotor representations. Unlike the visuomotor representation of a target, a visual percept makes visual information about colors, shapes, sizes, orientations of constituents of a visual display available for contrastive identification and recognition. This is why visual percepts can serve as input to a conceptual process that can lead to a peculiar kind of knowledge of the world — visual knowledge. Visual percepts serve as inputs to conceptual processes, but percepts are not concepts: perceptual contrasts are not conceptual contrasts. My present task then will be to show that the claim that visual perception can give rise to visual knowledge of the world is consistent with the claim that visual percepts are different from thoughts and beliefs. Visual percepts lead to thoughts and beliefs, but it would be a mistake to confuse the nonconceptual contents of visual percepts with the conceptual contents of beliefs and thoughts.

II. 1. Percepts and thoughts

         As many philosophers of mind and language have argued, what is characteristic of conceptual representations is that they are both productive and systematic. Like sentences of natural languages, thoughts are productive in the sense that they form an open-ended, infinite set. Although the lexicon of a natural language is made up of finitely many words, thanks to its syntactic rules a language contains indefinitely many well-formed sentences. Similarly, an individual may entertain indefinitely many conceptual thoughts. In particular, both sentences of public languages and conceptual thoughts contain such devices as negation, conjunction and disjunction. So one can form indefinitely many new thoughts by prefixing a thought with a negation operator, by forming a disjunctive or a conjunctive thought out of two simpler thoughts, or by generalizing a singular thought by means of quantifiers. Sentences of natural languages are systematic in the sense that if a language contains a sentence S with a syntactic structure, e.g., Rab, then it must contain a syntactically related sentence, e.g., Rba. An individual’s conceptual thoughts are supposed to be systematic too: if a person has the ability to entertain the thought that e.g., John loves Mary, then she must have the ability to entertain the thought that Mary loves John. If a person can form the thought that Fa, then she can form both the thought that Fb and the thought that Ga (where “a” and “b” stand for individuals and “F” and “G” stand for properties). Both Fodor’s (1975, 1987) Language of Thought hypothesis and Evans’ (1982) Generality constraint are designed to account for the productivity and the systematicity of thoughts, i.e., conceptual representations. It is constitutive of thoughts that they are structured and that they involve conceptual constituents that can be combined and recombined to generate indefinitely many new structured thoughts.
Thus, concepts are building blocks with inferential roles.
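Productivity and systematicity admit of a toy illustration. In the sketch below, strings stand in for thoughts; the representation is mine and carries no commitment about the actual format of the language of thought:

```python
from itertools import product

individuals = ["John", "Mary"]
relations = ["loves"]

# Systematicity: any relation combines with any ordered pair of
# individuals, so a mind that can token Rab can also token Rba.
atomic = [f"{a} {r} {b}" for r in relations
          for a, b in product(individuals, repeat=2)]

def negate(t):
    return f"not ({t})"

def conjoin(t1, t2):
    return f"({t1}) and ({t2})"

# Productivity: finitely many constituents, indefinitely many thoughts,
# since negate and conjoin can be applied again to their own outputs.
complex_thought = conjoin("John loves Mary", negate("Mary loves John"))
```

The combinatorics, not the strings, carry the point: a finite stock of recombinable constituents generates an unbounded set of structured thoughts.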

            Because they are productive and systematic, conceptual thoughts can rise above the limitations imposed on perceptual representations by the constraints inherent in perception. Unlike thought, visual perception requires some causal interaction between a source of information and some sensory organs. For example, by combining the concepts horse and horn, one may form the complex concept unicorn, even though no unicorn has ever been or ever will be visually perceived (except in visual works of art). Although no unicorn has ever been perceived, within a fictional context, on the basis of the inferential role of its constituents, one can draw the inference that if something is a unicorn, then it has four legs, it eats grass and it is a mammal.

            Hence, to possess concepts is to master inferential relations: only a creature with conceptual abilities can draw consequences from her perceptual processing of a visual stimulus. Thought and visual perception are clearly different cognitive processes. One can think about numbers and one can form negative, disjunctive, conjunctive and general thoughts involving multiple quantifiers. Although one can visually perceive numerals, one cannot visually perceive numbers. Nor can one visually perceive negative, disjunctive, conjunctive or general facts (corresponding to e.g., universally quantified thoughts).

            As Crane (1992: 152) puts it, “there is no such thing as deductive inference between perceptions”. Upon seeing a brown dog, one can see at once that the animal one faces is a dog and that it is brown. If one perceives a brown animal and one is told that it is a dog, then one can certainly come to believe that the brown animal is a dog or that the dog is brown. But on this hybrid epistemic basis, one can think or believe, but one cannot see, that the brown animal is a dog. One came to know that the dog is brown by seeing it. But one did not come to know that what is brown is a dog by seeing it. Unlike the content of concepts, the content of visual percepts is not a matter of inferential role. As emphasized by Crane (ibid.), this is not to say that the content of visual percepts is amorphous or unstructured. One proposal for capturing the nonconceptual structure of visual percepts is Peacocke’s (1992) notion of a scenario content, i.e., a visual way of filling in space. As we shall see momentarily, one can think or believe of an animal that it is a dog without thinking or believing that it has a particular color. But one cannot see a dog in good daylight conditions without seeing its particular color (or colors). I shall momentarily discuss this feature of the content of visual percepts, which is part of their distinctive informational richness, as an analog encoding of information.

            In section I.3, I considered the contrast between the pictorial content of a visual percept of a glass to the left of a telephone and the conceptual content expressible by means of the English sentence: ‘The glass is to the left of the telephone’. I noticed that, unlike the English sentence, the visual percept cannot represent the glass to the left of the telephone unless it depicts the shape, size, texture, color and orientation of both the glass and the telephone. I concluded that an utterance of this sentence conveys only part of the pictorial content of the visual percept since the utterance is mute about any visual attribute of the pair of objects other than their relative locations. But, further conceptual processing of the conceptual content conveyed by the utterance of the sentence may yield a more complex representation involving, not just a two-place relation, but a three-place relation also expressible by the English predicate ‘left of’. Thus, one may think that the glass is to the left of the telephone for someone standing in front of the window, not for someone sitting at the opposite side of the table. In other words, one can think that the glass is to the left of the telephone from one’s own egocentric perspective and that the same glass is to the right of the telephone from a different perspective. Although one can form the thought involving the ternary relation ‘left of’, one cannot see the glass as being to the left of the telephone from one’s own egocentric perspective because one cannot see one’s own egocentric perspective. Perspectives are not things that one can see. This is an example of a conceptual contrast that could not be drawn by visual perception. Thus, unlike a thought, a visual percept is, in one sense of the word, “informationally encapsulated”. Thought, not perception, can, as Perry (1993) puts it, increase the arity of a predicate. Notice that percepts can cause thoughts. This is one way thoughts arise. 
Thoughts can also cause other thoughts. But presumably, thoughts do not cause percepts.

II. 2. The finegrainedness and informational richness of visual percepts

            Visual perception has a spatial, perspectival, iconic and/or pictorial structure not shared by conceptual thought. The content of visual perception has a spatial perspectival structure that pure thoughts lack. In order to apply the concept of a dog, one does not have to occupy a particular spatial perspective relative to any dog. But one cannot see a dog unless one occupies some spatial standpoint or other relative to it: one cannot e.g., see a dog simultaneously from the top and from below, from the front and from the back. The concept of a dog applies indiscriminately to poodles, alsatians, dalmatians or bulldogs. One can think that all dogs bark. But one cannot see all dogs bark. Nor can one see a generic dog bark. One must see some particular dog: a poodle, an alsatian, a dalmatian or a bulldog, as it might be. Although one and the same concept — the concept of a dog — may apply to a poodle, an alsatian, a dalmatian or a bulldog, seeing one of them is a very different visual experience from seeing another. One can think that a dog barks without thinking of any other properties of the dog. One cannot, however, see a dog unless one sees its shape and the colors and texture of its hairs.

            Thus, the content of visual perceptual representations turns out to be both more finegrained and informationally richer than the conceptual contents of thoughts. There are three paradigmatic cases in which the need to distinguish between conceptual content and the nonconceptual content of visual perceptions may arise. First, a creature may be perceptually sensitive to objective differences for which she has no concepts. Secondly, two creatures may enjoy one and the same visual experience, which they may be inclined to conceptualize differently. Finally, two different persons may enjoy two distinct visual experiences in the presence of one and the same distal stimulus to which they may be inclined to apply one and the same concept.

            Peacocke (1992: 67-8) considers, for example, a person’s visual experience of a range of mountains. As he notices, one might want to conceptualize one’s visual experience with the help of concepts of shapes expressible in English with such predicates as ‘round’ and ‘jagged’. But these concepts of shapes could apply to the nonconceptual contents of several different visual experiences prompted by the distinct shapes of several distinct mountains. Arguably, although a human being might not possess any concept of shape whose finegrainedness could match that of her visual experience of the shape of the mountain, her visual experience of the shape is nonetheless distinctive and it may differ from the visual experience of the distinct shape of a different mountain to which she would apply the very same concept. Similarly, human beings are perceptually sensitive to far more colors than they have color concepts and color names to apply. Although a human being might lack two distinct concepts for two distinct shades of color, she might well enjoy a visual experience of one shade that is distinct from her visual experience of the other shade. As Raffman (1995: 295) puts it, “discrimination along perceptual dimensions surpasses identification […] our ability to judge whether two or more stimuli are the same or different surpasses our ability to type-identify them”.
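Raffman's point can be given a small sketch. The hue scale, the toy discrimination threshold and the `identify` function below are all hypothetical: the point is only that two shades may be discriminably different while falling under the same stored colour concept.

```python
def identify(hue, categories):
    """Map a hue (on a toy 0..1 scale) to the nearest coarse colour concept."""
    return min(categories, key=lambda name: abs(categories[name] - hue))

categories = {"red": 0.0, "orange": 0.08, "yellow": 0.17}
jnd = 0.005                      # toy just-noticeable difference

a, b = 0.01, 0.03                # two shades of red
discriminable = abs(a - b) > jnd                       # the pair can be told apart...
same_label = identify(a, categories) == identify(b, categories)
# ...yet both type-identify under the same coarse concept:
# discrimination outruns identification
```

Whatever the right model of colour categorization, any mapping from finely discriminated shades to a small stock of concepts must be many-to-one, which is all the argument needs.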

            Against this kind of argument in favor of the nonconceptual content of visual experiences, McDowell (1994, 1998) has argued that demonstrative concepts expressible by, e.g., ‘that shade of color’ are perfectly suited to capture the fine-grainedness of the visual percept of color. I am willing to concede to McDowell that such demonstrative concepts do exist. But I agree with Bermudez (1998: 55-7) and Dokic & Pacherie (2000) that such demonstrative concepts would seem to be too weak to perform one of the fundamental jobs that color concepts and shape concepts must be able to perform — namely recognition. Color concepts and shape concepts stored in a creature’s memory must allow recognition and reidentification of colors and shapes over long periods of time. Although pure demonstrative color concepts may allow comparison of simultaneously presented samples of color, it is unlikely that they can be used to reliably reidentify one and the same sample over time. Nor presumably could pairs of demonstrative color concepts be used to reliably discriminate pairs of color samples over time. Just as one can track the spatio-temporal evolution of a perceived object, one can store in a temporary object file information about its visual properties in a purely indexical or demonstrative format. If, however, information about an object’s visual properties is to be stored in episodic memory, for future reidentification, then it cannot be stored in a purely demonstrative or indexical format, which is linked to a particular perceptual context. Presumably, the demonstrative must be fleshed out with some descriptive content. One can refer to a perceptible object as ‘that sofa’ or even as ‘that’ (with no sortal). But presumably, when one does not stand in a perceptual relation to the object, information about it cannot be stored in episodic memory in such a purely demonstrative format. Rather, it must be stored using a more descriptive symbol such as ‘the (or that) red sofa that used to face the fire-place’. This is presumably part of what Raffman (1995: 297) calls “the memory constraint”. As Raffman (1995: 296) puts it:

the coarse grained character of perceptual memory explains why we can recognize ‘determinable’ colors like red and blue and even scarlet and indigo as such, but not ‘determinate’ shades of those determinables […] Because we cannot recognize determinate shades as such, ostension is our only means of communicating our knowledge of them. If I want to convey to you the precise shade of an object I see, I must point to it, or perhaps paint you a picture of it […] I must present you with an instance of that shade. You must have the experience yourself.

            Two persons might enjoy one and the same kind of visual experience prompted by one and the same shape or one and the same color, to which they would be inclined to apply pairs of distinct concepts, such as ‘red’ vs ‘crimson’ or ‘polygon’ vs ‘square’. If so, one would be justified in distinguishing the nonconceptual content of their common visual experience from the different concepts that each would be willing to apply. Conversely, as argued by Peacocke (1998), presented with one and the same geometrical object, two persons might be inclined to apply one and the same generic shape concept, e.g., ‘that polygon’, and still enjoy different perceptual experiences or see the same object as having different shapes. For example, as Peacocke (1998: 381) points out, “one and the same shape may be perceived as square, or as diamond-shaped […] the difference between these ways is a matter of which symmetries of the shape are perceived; though of course the subject himself does not need to know that this is the nature of the difference”. If one mentally partitions a square by bisecting its right angles, one sees it as a diamond. If one mentally partitions it by bisecting its sides, one sees it as a square. Presumably, one does not need to master the concept of an axis of symmetry to perform these two mental bisections and enjoy two distinct visual experiences.

            The distinctive informational richness of the content of visual percepts has been discussed by Dretske (1981) in terms of what he calls the analogical coding of information.[1] One and the same piece of information — one and the same fact — may be coded analogically or digitally. In Dretske’s sense, a signal carries the information that, e.g., a is F in a digital form iff the signal carries no additional information about a that is not already nested in the fact that a is F. If the signal does carry additional information about a that is not nested in the fact that a is F, then the information that a is F is carried by the signal in an analogical (or analog) form. For example, the information that a designated cup contains coffee may be carried in a digital form by the utterance of the English sentence ‘There is some coffee in the cup’. The same information can also be carried in an analog form by a picture or by a photograph. Unlike the utterance of the sentence, the picture cannot carry the information that the cup contains coffee without carrying additional information about the shape, size, and orientation of the cup and the color and the amount of coffee in it. As I pointed out above, unlike the concept of a dog, the visual percept of a dog carries information about which dog one sees, its spatial position, the color and texture of its hair, etc. The contents of visual percepts are informationally rich in the sense of being analog. A thought involving several concepts in a hierarchically structured order might carry the same informational richness as a visual percept. But it does not have to. As the slogan goes, a picture is worth a thousand words. Unlike a thought, a visual percept of a cup cannot convey the information that the cup contains coffee without conveying additional information about several visual attributes of the cup.
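Dretske states these definitions in prose; they can be rendered schematically as follows. The notation is mine, not Dretske's, and is only a sketch: Info(s, Fa) abbreviates "signal s carries the information that a is F", and Nested(Ga, Fa) abbreviates "the information that a is G is nested in the fact that a is F".

```latex
% A sketch of Dretske's (1981) digital/analog distinction (notation mine).
\mathrm{Digital}(s,\,Fa) \iff \mathrm{Info}(s,\,Fa) \;\land\;
  \neg\exists G \,\bigl[\,\mathrm{Info}(s,\,Ga) \land \neg\,\mathrm{Nested}(Ga,\,Fa)\,\bigr]

\mathrm{Analog}(s,\,Fa) \iff \mathrm{Info}(s,\,Fa) \;\land\;
  \exists G \,\bigl[\,\mathrm{Info}(s,\,Ga) \land \neg\,\mathrm{Nested}(Ga,\,Fa)\,\bigr]
```

On this rendering, the utterance of ‘There is some coffee in the cup’ satisfies the first schema, while the photograph satisfies the second: it cannot avoid carrying further information about the cup (its shape, size, orientation) that is not nested in the fact that the cup contains coffee.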

            The arguments by philosophers of mind and by perceptual psychologists in favor of the distinction between the conceptual content of thought and the nonconceptual content of visual percepts are based on the fine-grainedness and the informational richness of visual percepts. Thus, they turn on the phenomenology of visual experience. In section I, I provided some evidence from psychophysical experiments performed on normal human subjects and from the neuropsychological examination of brain-lesioned human patients that points to a different kind of nonconceptual content, which I labelled “visuomotor” content. Unlike the arguments in favor of the nonconceptual content of visual percepts, the arguments for the distinction between the nonconceptual content of visual percepts and the nonconceptual content of visuomotor representations do not rely on phenomenology at all. Rather, they rely on the need to postulate mental representations with visuomotor content in order to provide a causal explanation of visually guided actions towards objects. Thus, on the assumption that such behaviors as grasping objects can be actions (based on mental representations), I submit that the nonconceptual content of visual representations ought to be bifurcated into perceptual and visuomotor content as in Figure 7:

                            content of visual representations
                             /                              \
              conceptual content                   nonconceptual content
                                                   /                    \
                                      perceptual content       visuomotor content

Figure 7

II. 3. The interaction between visual and non-visual knowledge

            Traditional epistemology has focused on the problem of sorting out genuine instances of propositional knowledge from cases of mere opinion or guessing. Propositional factual knowledge is to be distinguished from both nonpropositional knowledge of individual objects (or what Russell called “knowledge by acquaintance”) and from tacit knowledge of the kind illustrated by a native speaker’s implicit knowledge of the grammatical rules of her language. According to epistemologists, in the relevant propositional sense, what one knows are facts. In the propositional sense, one cannot know a fact unless one believes that the corresponding proposition is true, one’s belief is indeed true, and the belief was not formed by mere fantasy. On the one hand, one cannot know that the cup contains coffee unless one believes it. One cannot have this belief unless one knows what a cup is and what coffee is. On the other hand, one cannot know what is not the case: one can falsely believe that e.g., the cup contains coffee. But one cannot know it, unless a designated cup does indeed contain some coffee. True belief, however, is not sufficient for knowledge. If a true belief happens to be a mere guess or whim, then it will not qualify as knowledge. What else must be added to true belief to turn it into knowledge?

            Broadly speaking, epistemologists divide into two groups. According to externalists, a true belief counts as knowledge if it results from a reliable process, i.e., a process that generates counterfactual-supporting connexions between states of a believer and facts in her environment. According to internalists, for a true belief to count as knowledge, it must be justified and the believer must in addition justifiably believe that her first-order belief is justified. Since I am willing to claim that, in appropriate conditions, the way a red triangle visually looks to a person having the relevant concepts and located at a suitable distance from it provides grounds for the person to know that the object in front of her is a red triangle, I am attracted to an externalist reliabilist view of perceptual knowledge.

            Although the issue is controversial and is by no means settled in the philosophical literature, externalist intuitions suit my purposes better than internalist intuitions. Arguably, it is one thing to be justified or to have a reason for believing something; it is another to use a reason in order to offer a justification for one’s beliefs. Arguably, if a perceptual (e.g., visual) process is reliable, then the visual appearances of things may constitute a reason for forming a belief. However, one cannot use a reason unless one can explicitly engage in a reasoning process of justification, i.e., unless one can distinguish one’s premisses from one’s conclusion. Presumably, a creature with perceptual abilities and relevant conceptual resources can have reasons and form justified beliefs even if she lacks the concept of reason or justification. However, she could not use her reasons and provide justifications unless she had language and metarepresentational resources. Internalism derives most of its appeal from reflection on instances of mathematical and scientific knowledge that result from the conscious application of explicit principles of inquiry by teams of individuals in the context of special institutions. In such special settings, it can be safely assumed that the justification of a believer’s higher-order beliefs does indeed contribute to the formation and reliability of his or her first-order beliefs. Externalism fits perceptual knowledge better than internalism and, unlike internalism, it does not rule out the possibility of crediting non-human animals and human infants with knowledge of the world — a possibility made more and more vivid by the development of cognitive science.

            On my view, human visual perceptual abilities are at the service of thought and conceptualisation. At the most elementary level, by seeing an object (or a sequence of objects) one can see a fact involving that object (or sequence of objects). By seeing my neighbor’s car in her driveway, I can see the fact that my neighbor’s car is parked in her driveway. I thereby come to believe that my neighbor’s car is parked in her driveway, and this belief, which is a conceptually loaded mental state, is arrived at by visual perception. Hence, my term “visual knowledge”. If my visual system is, as I claimed, reliable, then by seeing my neighbor’s car — an object — in her driveway, I thereby come to know that my neighbor’s car is parked in her driveway — a fact. Hence, I come to know a fact involving an object that I actually see. This is a fundamental epistemic situation, which Dretske (1969) labels “primary epistemic seeing”: one’s visual ability allows one to know a fact about an object one perceives.

            However, if my neighbor’s car happens to be parked in her driveway if and only if she is at home (and I know this), then I can come to know a different fact: I can come to know that my neighbor is at home. “Seeing” that my neighbor is at home by seeing that her car is parked in her driveway is something different from seeing my neighbor at home (e.g., seeing her in her living-room). Certainly, I can come to know that my neighbor is at home by seeing her car parked in her driveway, i.e., without seeing her. “Seeing” that my neighbor is at home by seeing that her car is parked in her driveway is precisely what Dretske (1969) calls “secondary epistemic seeing”. Secondary epistemic seeing lies at the interface between pure visual knowledge of facts involving a perceived object and non-visual knowledge that can be derived from it.

            This transition from seeing one fact to seeing another displays the hierarchical structure of visual knowledge. In primary epistemic seeing, one sees a fact involving a perceived object. But in moving from primary epistemic seeing to secondary epistemic seeing, one moves from a fact involving a perceived car to a fact involving one’s unperceived neighbor (who happens to own the perceived car). This epistemological hierarchical structure is expressed by the “by” relation: one sees that y is G by seeing that x is F where x ≠ y. Although it may be more or less natural to say that one “sees” a fact involving an unperceived object by seeing a different fact involving a perceived object, the hierarchical structure that gives rise to this possibility is ubiquitous in human knowledge.
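The “by” relation that structures the two grades of epistemic seeing can be given a rough schema; again, the notation is mine and merely illustrative. Let S be the perceiver, x the perceived object (the car), and y the unperceived object (the neighbor):

```latex
% A rough schema for Dretske's (1969) primary and secondary epistemic seeing.
% Notation is mine; 'by' marks epistemic dependence, not a rule of inference.
\text{Primary:}\quad \mathrm{Sees}(S,\,x) \;\land\; \mathrm{SeesThat}(S,\,Fx)

\text{Secondary:}\quad \mathrm{SeesThat}(S,\,Gy) \text{ by } \mathrm{SeesThat}(S,\,Fx),
  \quad \text{where } x \neq y \text{ and } S \text{ knows that } Fx \leftrightarrow Gy
```

In the running example, Fx is the fact that the car is parked in the driveway and Gy the fact that the neighbor is at home; S's knowledge of the correlation carries the epistemic weight of the transition.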

            One can see that a horse has walked on the snow by seeing hoof prints in the snow. One sees the hoof prints, not the horse. But if hoof prints would not be visible in the snow at time t unless a horse had walked on that very snow at time t – 1, then one can see that a horse has walked on the snow just by seeing hoof prints in the snow. One can see that a tennis player has just hit an ace at Flushing Meadows by seeing images on a television screen located in Paris. Now, does one really see the tennis player hit an ace at Flushing Meadows while sitting in Paris and watching television? Does one see a person on a television screen? Or does one see an electronic image of a person relayed by a television? Whether one sees a tennis player or her image on a television screen, it is quite natural to say that one “sees that” a tennis player hit an ace by seeing her (or her image) do it on a television screen. Even though, strictly speaking, one perhaps did not see her do it — one merely saw pictures of her doing it —, nonetheless seeing the pictures comes quite close to seeing the real thing. By contrast, one can “see” that the gas-tank in one’s car is half-full by seeing, not the tank itself, but the dial of the gas-gauge on the dashboard of the car. If one is sitting by the steering wheel inside one’s car so that one can comfortably see the gas-gauge, then one cannot see the gas-tank. Nonetheless, if the gauge is reliable and properly connected to the gas-tank, then one can (perhaps in some loose sense) “see” what the condition of the gas-tank is by seeing the dial of the gauge.

            One could wonder whether secondary epistemic seeing is really seeing at all. Suppose that one learns that the New York Twin Towers collapsed by reading about it in a French newspaper in Paris. One could not see the New York Twin Towers — let alone their collapse — from Paris. What one sees when one reads a newspaper are letters printed in black ink on a white sheet of paper. But if the French newspaper would not report the collapse of the New York Twin Towers unless the New York Twin Towers had indeed collapsed, then one can come to know that the New York Twin Towers have collapsed by reading about it in a French newspaper. There is a significant difference between seeing that the New York Twin Towers have collapsed by seeing it happen on a television screen and by reading about it in a newspaper. Even if seeing an electronic picture of the New York Twin Towers is not seeing the Twin Towers themselves, still the visual experience of seeing an electronic picture of them and the visual experience of seeing them have a lot in common. The pictorial content of the experience of seeing an electronically produced color-picture of the Towers is very similar to the pictorial content of the experience of seeing them. Unlike a picture, however, a verbal description of an event has conceptual content, not pictorial content. The visual experience of reading an article reporting the collapse of the New York Twin Towers in a French newspaper is very different from the experience of seeing them collapse. This is the reason why it may be a little awkward to say that one “saw” that the New York Twin Towers collapsed if one read about it in a French newspaper in Paris as opposed to seeing it happen on a television screen.

            Certainly, ordinary usage of the English word ‘see’ is not sacrosanct. We say that we “see” a number of things in circumstances in which what we do owes little — if anything — to our visual abilities. “I see what you mean”, “I see what the problem is” or “I finally saw the solution” report achievements quite independent of visual perception. Such uses of the verb ‘to see’ are loose uses: they do not report epistemic accomplishments that depend significantly on one’s visual endowments. By contrast, cases of what Dretske (1969) calls secondary epistemic seeing are epistemic achievements that do depend on one’s visual endowments. True, in cases of secondary epistemic seeing, one comes to know a fact without seeing some of its constituent elements. True, one could not come to learn that one’s neighbor is at home by seeing her car parked in her driveway unless one knew that her car is indeed parked in her driveway when and only when she is at home. Nor could one see that the gas-tank in one’s car is half-full by seeing the dial of the gas-gauge unless one knew that the latter is reliably correlated with the former. So secondary epistemic seeing could not possibly arise in a creature that lacked knowledge of reliable correlations or that lacked the cognitive resources required to come to know them altogether.

            Nonetheless, secondary epistemic seeing does indeed have a crucial visual component in the sense that visual perception plays a critical role in the context of justifying such an epistemic claim. When one claims to be able to see that one’s neighbor is at home by seeing her car parked in her driveway or when one claims to be able to see that the gas-tank in one’s car is almost empty by seeing the gas-gauge, one relies on one’s visual powers in order to ground one’s state of knowledge. The fact that one claims to know is not seen. But the grounds upon which the knowledge is claimed to rest are visual grounds: the justification for knowing an unseen fact is seeing another fact correlated with the former. Of course, in explaining how one can come to know a fact about one thing by knowing a different fact about a different thing, one cannot hope to meet the philosophical challenge of scepticism. From the standpoint of scepticism, as Stroud (1989) points out, the explanation may seem to beg the question since it takes for granted one’s knowledge of one fact in order to explain one’s knowledge of another fact. But the important thing for present purposes is that — scepticism notwithstanding — one offers a perfectly good explanation of how one comes to know a fact about an object one does not perceive by knowing a different fact about an object one does perceive. The point is that much — if not all — of the burden of the explanation lies in visual perception: seeing one’s neighbor’s car is the crucial step in justifying one’s belief that one’s neighbor is at home. Seeing the gas-gauge is the crucial step in justifying one’s belief that one’s tank is almost empty. The reliability of visual perception is thus critically involved in the justification of one’s knowledge claim.
In cases of primary epistemic seeing, the reliability of one’s visual system provides justifications for one’s visual knowledge in the sense that it provides one with reasons for believing that the fact involving an object one perceives obtains. In secondary epistemic seeing, one claims to know a fact that does not involve a perceived object. Still, the reliability of one’s visual system plays an indirect role in cases of secondary epistemic seeing in the sense that it provides grounds for one’s visual knowledge about a fact involving a perceived object, upon which one’s knowledge of a fact not involving a perceived object rests.

            Thus, secondary epistemic seeing lies at the interface between an individual’s visual knowledge (i.e., knowledge formed by visual means) and the rest of her knowledge. In moving from primary epistemic seeing to secondary epistemic seeing, an individual exploits her knowledge of regular connections. Although it is true that, unless one knows the relevant correlation, one could not come to know the fact that the gas-tank in one’s car is empty by seeing the gas-gauge (or that one’s neighbor is at home by seeing her car), nonetheless one does not consciously or explicitly reason from the perceptually accessible premiss that one’s neighbor’s car is parked in her driveway together with the premiss that one’s neighbor’s car is parked in her driveway when and only when one’s neighbor is at home to the conclusion that one’s neighbor is at home. Arguably, the process from primary to secondary epistemic seeing is inferential. But if it is, then the inference is unconscious and it takes place at the “sub-personal” level.

            What the above discussion of secondary epistemic seeing so far reveals is that the very description and understanding of the hierarchical structure of visual knowledge and its integration with non-visual knowledge requires an epistemological and/or psychological distinction between seeing objects and seeing facts — a point much emphasized in Dretske’s writings on the subject — or between nonepistemic and epistemic seeing. The neurophysiology of human vision is such that some objects are simply not accessible to human vision. They may be too small or too remote in space and time for a normally sighted person to see them. For more mundane reasons, a human being may be temporarily so positioned as not to be able to see one object — be it her neighbor or the gas-tank in her car. Given the correlations between facts, by seeing a perceptible object, one can get crucial information about a different unseen object. Given the epistemic importance of visual perception in the hierarchical structure of human knowledge, it is important to understand how, by seeing one object, one can provide decisive reasons for knowing facts about objects one does not see.


II. 4. The scope and limits of visual knowledge

            I now turn my attention again from what Dretske calls secondary epistemic seeing (i.e., visually based knowledge of facts about objects one does not perceive) back to what he calls primary epistemic seeing, i.e., visual knowledge of facts about objects one does perceive. When one purports to ground one’s claim to know that one’s neighbor is at home by mentioning the fact that one can see that her car is parked in her driveway, clearly one is claiming to be able to see a car, not one’s neighbor herself. Now, let us concentrate on the scope of knowledge claims in primary epistemic seeing, i.e., knowledge about facts involving a perceived object. Let us suppose that someone claims to be able to see that the apple on the table is green. Let us suppose that the person’s visual system is working properly, the table and what is lying on it are visible from where the person stands, and the lighting is suitable for the person to see them from where she stands. In other words, there is a distinctive way the green apple on the table looks to the person who sees it. Under those circumstances, when the person claims that she can see that the apple on the table is green, what are the scope and limits of her epistemic claims?

            Presumably, in so doing, she is claiming that she knows that there is an apple on the table in front of her and that she knows that this apple is green. If she knows both of these things, then presumably she also knows that there is a table under the apple in front of her and that there is a fruit on the table. Hence, she knows what the fruit on the table is (or what is on the table), she knows where the apple is, she knows the color of the apple, and so on. Arguably, the person would then be in a position to make all such claims in response to the following queries: Is there anything on the table? What is on the table? What kind of fruit is on the table? Where is the green apple? What color is the apple on the table? If the person can see that the apple on the table is green, then presumably she is in a position to know all these facts.

            However, when she claims that she can see that the apple on the table is green, she is not thereby claiming that she can see that all of these facts obtain. What she is claiming is more restricted and specific than that: She is indeed claiming that she knows that there is an apple on the table and that the apple in question is green. Furthermore, she is claiming that she learnt the latter fact — the fact about the apple’s color — through visual perception: if someone claims that she can see that the apple on the table is green, then she is claiming that she has achieved her knowledge of the apple’s color by visual means, and not otherwise. But she is not thereby claiming that her knowledge of the location of the apple or her knowledge of what is on the table was acquired by the very perceptual act (or the very perceptual process) that gave rise to her knowledge of the apple’s color. Of course, the person’s alleged epistemic achievement does not rule out the possibility that she came to know that what is on the table is an apple by seeing it earlier. But if she did, this is not part of the claim that she can see that the apple on the table is green. It is consistent with this claim that the person came to know that what is on the table is an apple by being told, by tasting it, or by smelling it. All she is claiming and all we are entitled to conclude from her claim is that the way she learnt about the apple’s color is by visual perception.

            The investigation into the scope and limits of primary visual knowledge is important because it is relevant to the challenge of scepticism. As I already said, my discussion of visual knowledge does not purport to meet the full challenge of scepticism. In discussing secondary epistemic seeing, I noted that in explaining how one comes to know a fact about an unperceived object by seeing a different fact involving a perceived object, one takes for granted the possibility of knowing the latter fact by perceiving one of its constituent objects. Presumably, in so doing, one cannot hope to meet the full challenge of scepticism that would question the very possibility of coming to know anything by perception. I now briefly turn to the sceptical challenge to which claims of primary epistemic seeing are exposed. By scrutinizing the scope and limits of claims of primary visual knowledge, I want to examine briefly the extent to which such claims are indeed vulnerable to the sceptical challenge. Claims of primary visual knowledge are vulnerable to sceptical queries that can be directed backwards and forwards. They are directed backwards when they apply to background knowledge, i.e., knowledge presupposed by a claim of primary visual knowledge. They are directed forwards when they apply to consequences of a claim of primary visual knowledge. I turn to the former first.

            Suppose a sceptic were to challenge a person’s commonsensical claim that she can see (and hence know by perception) that the apple on the table in front of her is green by questioning her grounds for knowing that what is on the table is an apple. The sceptic might point out that, given the limits of human visual acuity and given the distance of the apple, the person could not distinguish by visual means alone a genuine green apple — a green fruit — from a fake green apple (e.g., a wax copy of a green apple or a green toy). Perhaps, the person is hallucinating an apple when there is in fact nothing at all on the table. If one cannot visually discriminate a genuine apple from a fake apple, then, it seems, one is not entitled to claim that one can see that the apple on the table is green. Nor is one entitled to claim that one can see that the apple on the table is green if one cannot make sure by visual perception that one is not undergoing a hallucination. Thus, the sceptical challenge is the following: if visual perception itself cannot rule out a number of alternative possibilities to one’s epistemic claim, then the epistemic claim cannot be sustained.

            The proper response to the sceptical challenge here is precisely to appeal to the distinction between claims of visual knowledge and other knowledge claims. When the person claims that she can see that the apple on the table is green, she is claiming that she learnt something new by visual perception: she is claiming that she just gained new knowledge by visual means. This new perceptually-based knowledge is about the apple’s color. The perceiver’s new knowledge — her epistemic “increment”, as Dretske (1969) calls it — must be set against what he calls her “proto-knowledge”, i.e., what the person knew about the perceived object prior to her perceptual experience. The reason it is important to distinguish between a person’s prior knowledge and her knowledge gained by visual perception is that primary epistemic seeing (or primary visual knowledge) is a dynamic process. In order to determine the scope and limits of what has been achieved in a perceptual process, we ought to determine a person’s initial epistemic state (the person’s prior knowledge about an object) and her final epistemic state (what the person learnt by perception about the object). Thus, the question raised by the sceptical challenge (directed backwards) is a question in cognitive dynamics: How much new knowledge could a person’s visual resources yield, given her prior knowledge? How much has been learnt by visual perception, i.e., in an act of visual perception? What new information has been gained by visual perception?

            So when the person claims that she can see that the apple on the table is green, she no doubt reports that she knows both that there is an apple on the table and that it is green. She commits herself to a number of epistemic claims: she knows what is on the table, she knows that there is a fruit on the table, she knows where the apple is, and so on. But she merely reports one increment of knowledge: she merely claims that she just learnt by visual perception that the apple is green. She is not thereby reporting how she acquired the rest of her knowledge about the object, e.g., that it is an apple and that it is on the table. She claims that she can see of the apple that it is green, not that what is green is an apple, nor that what is on the table is an apple. The claim of primary visual knowledge bears on the object’s color, not on some of its other properties (its being, e.g., an apple or a fruit, or its location). All her epistemic claim entails is that, prior to her perceptual experience, she assumed (as part of her “proto-knowledge” in Dretske’s sense) that there was an apple on the table and then she discovered by visual perception that the apple was green.

            I now turn my attention to the sceptical challenge directed forward — towards the consequences of one’s claims of visual knowledge. The sceptic is right to point out that the person who claims to be able to see the color of an apple is not thereby in a position to see that the object whose color she is seeing is a genuine apple — a fruit — and not a wax apple. Nor is the person able to see that she is not hallucinating. However, since she is neither claiming that she is able to see of the green object that it is a genuine apple nor that she is not hallucinating an apple, it follows that the sceptical challenge cannot hope to defeat the person’s perceptual claim that she can see what she claims that she can see, namely that the apple is green. On the externalist picture of perceptual knowledge which I accept, a person knows a fact when and only when she is appropriately connected to the fact. Visual perception provides a paradigmatic case of such a connexion. Hence, visual knowledge arises from regular correlations between states of the visual system and environmental facts. Given the intricate relationship between a person’s visual knowledge and her higher cognitive functions, she will be able to draw many inferences from her visual knowledge. If a person knows that the apple in front of her is green, then she may infer that there is a colored fruit on the table in front of her. Given that fruits are plants and that plants are physical objects, she may further infer that there are at least some physical objects. Again, the sceptic may direct his challenge forward: the person claims to know by visual means that the apple in front of her is green. But what she claims she knows entails that there are physical objects. Now, the sceptic argues, a person cannot know that there are physical objects — at least, she cannot see that there are. 
According to the sceptic, failure to see that there are physical objects entails failure to see that the apple on the table is green.

            A person claims that she can know proposition p by visual perception. Logically, proposition p entails proposition q. There could not be a green apple on the table unless there exists at least one physical object. Hence, the proposition that the apple on the table is green could not be true unless there were physical objects. According to the sceptic, a person could not know the former without knowing the latter. Now the sceptic offers grounds for questioning the claim that the person knows proposition q at all — let alone by visual perception. Since it is dubious that she does know the latter, then, according to scepticism, she fails to know the former. Along with Dretske (1969) and Nozick (1981), I think that the sceptic relies on the questionable assumption that visual knowledge is deductively closed. From the fact that a person has perceptual grounds for knowing that p, it does not follow that she has the same grounds for knowing that q, even if q logically follows from p. If visual perception allows one to get connected in the right way to the fact corresponding to proposition p, it does not follow that visual perception ipso facto allows one to get connected in the same way to the fact corresponding to proposition q even if q follows logically from p.
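The closure assumption that Dretske and Nozick reject can be stated schematically. The notation below (with K_v for “knows by visual perception”) is mine, not the sceptic’s, and is only meant to display the structure of the dispute:

```latex
% The sceptic's closure principle for visual knowledge:
% if one visually knows p, and p logically entails q,
% then one thereby visually knows q.
\[
  \bigl( K_v\,p \wedge (p \rightarrow q) \bigr) \Rightarrow K_v\,q
\]
% The Dretske-Nozick line denies this: visual knowledge of p,
% together with the entailment from p to q, does not transfer
% visual knowledge to q.
\[
  K_v\,p \wedge (p \rightarrow q) \not\Rightarrow K_v\,q
\]
```

On this view, the person may still come to know q (e.g., that there are physical objects) by inference from p, but her grounds for q are then inferential, not visual.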

            A person comes to know a fact by visual perception. What she learns by visual perception implies a number of propositions (such as there are physical objects). Although such propositions are logically implied by what the person learnt by visual perception, she does not come to know by visual perception all the consequences of what she learnt by visual perception. She does not know by visual perception that there are physical objects — if she knows it at all. Seeing a green apple in front of one has a distinctive visual phenomenology. Seeing that the apple in front of one is green too has a distinctive visual phenomenology. There is something distinctively visual about what it is like for one to see that the apple in front of one is green. If an apple is green, then it is colored. However, it is dubious whether there is a visual phenomenology to thinking of the apple in front of one that it is colored. A fortiori, it is dubious whether there is a visual phenomenology to thinking that there are physical objects. Hence, contrary to what the sceptic assumes, I want to claim, as Dretske (1969) and Nozick (1981) have, that visual knowledge is not deductively closed.

III. The role of visuomotor representations in the human cognitive architecture

            In the present section, I shall sketch my reasons for thinking that visuomotor representations do not lead to detached knowledge of the world. Rather, they serve as input to intentions in at least two respects: on the one hand, they provide visual guidance to what I shall call “motor intentions”. On the other hand, they provide visual information for “causally indexical” concepts. I will start by laying out the basic distinction between two different kinds of “direction of fit” that can be exemplified by mental representations.

III.1. Direction of fit

            Whereas visual percepts serve as inputs to the “belief box”, visuomotor representations, I now want to argue, serve as inputs to a different kind of mental representation, i.e., intentions. As emphasized by Anscombe (1957) and Searle (1983, 2001), perceptions, beliefs, desires and intentions each have a distinctive kind of intentionality. Beliefs and desires have what Searle calls “opposite directions of fit”. Beliefs have a mind-to-world direction of fit: they can be true or false. A belief is true if and only if the world is as the belief represents it to be. It is the function of beliefs to match facts or actual states of affairs. In forming a belief, it is up to the mind to meet the demands of the world. Unlike beliefs, desires have a world-to-mind direction of fit. Desires are neither true nor false: they are fulfilled or frustrated. The job of a desire is not to represent the world as it is, but rather as the agent would like it to be. Desires are representations of goals, i.e., possible nonactual states of affairs. In entertaining a desire, it is, so to speak, up to the world to meet the demands of the mind. The agent’s action is supposed to bridge the gap between the mind’s goal and the world.

            As Searle (1983, 2001) has noticed, perceptual experiences and intentions have opposite directions of fit. Perceptual experiences have the same mind-to-world direction of fit as beliefs. Intentions have the same world-to-mind direction of fit as desires. In addition, perceptual experiences and intentions have opposite directions of causation: whereas a perceptual experience represents the state of affairs that causes it, an intention causes the state of affairs that it represents.

            Although intentions and desires share the same world-to-mind direction of fit, intentions differ from desires in a number of important respects, all of which flow from the peculiar commitment to action of intentions. Broadly speaking, desires are relevant to the process of deliberation that precedes one’s engagement in a course of action. Once an intention is formed, however, the process of deliberation comes to an end: to intend is to have made up one’s mind about whether to act. I shall mention four main differences between desires and intentions.

             First, although desires may be about anything or anybody, intentions are always about the self. One can only intend oneself to do something. Second, unlike desires, intentions are tied to the present or the future: one cannot intend to do something in the past. Third, unlike the contents of desires, the contents of intentions must be about possible nonactual states of affairs. An agent cannot intend to achieve a state of affairs that she knows to be impossible at the time when she forms her intention. Finally, although one may entertain desires whose contents are inconsistent, one cannot have two intentions whose contents are inconsistent.

            Reaching and grasping objects are visually guided actions directed towards objects. I assume that all actions are caused by intentions. Intentions are psychological states with a distinctive intentionality. As I said earlier, intentions derive their peculiar commitment to action from the combination of their distinctive world-to-mind direction of fit and their distinctive mind-to-world direction of causation. I shall now argue that visuomotor representations have a dual function in the human cognitive architecture: they serve as inputs to “motor intentions” and as inputs to a special class of indexical concepts, the “causally indexical” concepts.

III.2. Visuomotor representations serve as inputs to motor intentions

            Not all actions, I assume, are caused by what Searle (1983, 2001) calls prior intentions, but all actions are caused by what he calls intentions in action, which, following Jeannerod (1994), I will call motor intentions. Unlike prior intentions, motor intentions are directed towards immediately accessible goals. Hence, they play a crucial role, not so much in the planning of an action as in the execution, the monitoring and the control of the ongoing action. Arguably, prior intentions may have conceptual content. Motor intentions do not. For example, one intends to climb a visually perceptible mountain. The content of this prior intention involves, e.g., the action concept of climbing and a visual percept of the distance, shape and color of the mountain. In order to climb the mountain, however, one must intentionally perform an enormous variety of postural and limb movements in response to the slant, orientation and shape of the surface of the slope. Human beings automatically assume the right postures and perform the required flexions and extensions of their feet and legs. Since they do not possess concepts matching each and every such movement, their non-deliberate intentional behavioral responses to the slant, orientation and shape of the surface of the slope are monitored by the nonconceptual nonperceptual content of motor intentions.

            Not just any sensory representation can match the peculiar commitment to action of motor intentions. Visuomotor representations can. Percepts are informationally richer and more fine-grained than either concepts or visuomotor representations. As I claimed above, visual percepts have the same mind-to-world direction of fit as beliefs. This is why visual percepts are suitable inputs to a process of selective elimination of information, whose ultimate conceptual output can be stored in the belief box.

            I shall presently argue that visuomotor representations have a different function: they provide the relevant visual information about the properties of a target to an agent’s motor intentions. Indeed, I want to think of the role of the visuomotor representation of a target for action as Gibson (1979) thought of an affordance. However, unlike Gibson (1979), who did not make a distinction between perceptual and visuomotor processing, I do not think of the visuomotor processing of a target as a “direct pick up of information”. I think that visuomotor representations are genuine representations. My main reason for thinking of the output of the visuomotor processing of a target as a genuine mental representation — and for thinking of grasping as a genuine action, not a behavioral reflex — is that Haffenden, Schiff & Goodale’s (2001) experiment suggests that the visuomotor processing of a target can be fooled by features of the visual display: it can be led to process two-dimensional cues as if they were three-dimensional obstacles. If the output of the visuomotor processing of a display can misrepresent it, then it represents it.

            Unlike visual percepts, whose single role is to present visual information for further processing the output of which will be stored in the belief box, visuomotor representations are hybrid: as Millikan (1996), who calls them “pushmi-pullyu representations”, has perceptively recognized, they have a dual role. I slightly depart from Millikan (1996), however, in that, unlike her, I assume that visuomotor representations, not motor intentions, have a double direction of fit. Visuomotor representations present states of affairs as both facts and goals for immediate action. On the one hand, they provide visual information for the benefit of motor intentions. On the other hand, their content can be conceptualized with the help of a special class of indexical concepts: causal indexicals. Whereas visual percepts must be stripped of much of their informational richness to be conceptualized, visuomotor representations can directly provide relevant visual information about the target of an action to motor intentions. To put it crudely, it follows from the work summarized in Jeannerod (1994, 1997) that the content of a motor intention has two sides: a subjective side and an objective side. On the subjective side, a motor intention represents the agent’s body in action. On the objective side, it represents the target of the action. Visuomotor representations contribute to the latter. Their ‘motoric’ informational encapsulation makes them suitable for this role. The nonconceptual nonperceptual content of a visuomotor representation matches that of a motor intention.

            Borrowing from the study of language processing, Jeannerod (1994, 1997) has drawn a distinction between the semantic and the pragmatic processing of visual stimuli. The view I want to put forward has been well expressed by Jeannerod (1997: 77): “at variance with the […] semantic processing, the representation involved in sensorimotor transformation has a predominantly ‘pragmatic’ function, in that it relates to the object as a goal for action, not as a member of a perceptual category. The object attributes are represented therein to the extent that they trigger specific motor patterns for the hand to achieve the proper grasp”. Thus, the crucial feature of the pragmatic processing of visual information is that its output is a suitable input to the nonconceptual content of motor intentions.

III.3. Visuomotor representations serve as inputs to causal indexicals

            I have just argued that what underlies the contrast between the pragmatic and the semantic processing of visual information is that, whereas the output of the latter is designed to serve as input to further conceptual processing with a mind-to-world direction of fit, the output of the former is designed to match the nonconceptual content of motor intentions with a world-to-mind direction of fit and a mind-to-world direction of causation. The special features of the nonconceptual contents of visuomotor representations can be inferred from the behavioral responses which they underlie, as in patient DF. They can also be deduced from the structure and content of elementary action concepts with the help of which they can be categorized.

            I shall presently consider a subset of elementary action concepts, which, following Campbell (1994), I shall call “causally indexical” concepts. Indexical concepts are shallow but indispensable concepts, whose references change as the perceptual context changes and whose function is to encode temporary information. Indexical concepts respectively expressed by ‘I’, ‘today’ and ‘here’ are personal, temporal and spatial indexicals. Arguably, their highly contextual content cannot be replaced by pure definite descriptions without loss. Campbell (1994: 41-51) recognizes the existence of causally indexical concepts whose references may vary according to the causal powers of the agent who uses them. Such concepts are involved in judgments having, as Campbell (1994: 43) puts it, “immediate implications for [the agent’s] action”. Concepts such as “too heavy”, “out of reach”, “within my reach”, “too large”, “fit for grasping between index and thumb” are causally indexical concepts in Campbell’s sense.

            Campbell’s idea of causal indexicality does capture a kind of judgment that is characteristically based upon the output of the pragmatic (or motor) processing of visual stimuli in Jeannerod’s (1994, 1997) sense. Unlike the content of the direct output of the pragmatic processing of visual stimuli or that of motor intentions, the contents of judgments involving causal indexicals are conceptual. Judgments involving causally indexical concepts have low conceptual content, but they have conceptual content nonetheless. For example, if something is categorized as “too heavy”, then it follows that it is not light enough. The nonconceptual contents of either visuomotor representations or motor intentions are better compared with that of an affordance in Gibson’s sense.

            Causally indexical concepts differ in one crucial respect from other indexical concepts, i.e., personal, temporal and spatial indexical concepts. Thoughts involving personal, temporal and spatial indexical concepts are “egocentric” thoughts in the sense that they are perception-based thoughts. This is obvious enough for thoughts expressible with either the first-person pronoun ‘I’ or the second-person pronoun ‘you’. To refer to a location as ‘here’ or ‘there’ and to refer to a day as ‘today’, ‘yesterday’ or ‘tomorrow’ is to refer respectively to a spatial and a temporal region from within some egocentric perspective: a location can only be referred to as ‘here’ or ‘there’ from some particular spatial egocentric perspective. A temporal region can only be referred to by ‘today’, ‘yesterday’ or ‘tomorrow’ from some particular temporal egocentric perspective. In this sense, personal, temporal and spatial indexical concepts are egocentric concepts.[2] Arguably, egocentric indexicals lie at the interface between visual percepts and an individual’s conceptual repertoire about objects, times and locations.

            Many philosophers (see e.g., Kaplan, 1989 and Perry, 1993) have argued that personal, temporal and spatial indexical and/or demonstrative concepts play a special “essential” and ineliminable role in the explanation of action. And so they do. As Perry (1993: 33) insightfully writes: “I once followed a trail of sugar on a supermarket floor, pushing my cart down the aisle on one side of a tall counter and back the aisle on the other, seeking the shopper with the torn sack to tell him he was making a mess. With each trip around the counter, the trail became thicker. But I seemed unable to catch up. Finally it dawned on me. I was the shopper I was trying to catch”. To believe that the shopper with a torn sack is making a mess is one thing. To believe that oneself is making a mess is something else. Only upon forming the thought expressible by ‘I am making a mess’ is it at all likely that one may take appropriate measures to change one’s course of action. It is one thing to believe that the meeting starts at 10:00 AM. It is another thing to believe that the meeting starts now, even if now is 10:00 AM. Not until one thinks that the meeting starts now will one get up and run. Consider someone standing still at an intersection, lost in a foreign city. One thing is for that person to intend to go to her hotel. Something else is to intend to go this way, not that way. Only after she has formed the latter intention with a demonstrative locational content, will she get up and walk.

            Thus, such egocentric concepts as personal, temporal and spatial indexicals and/or demonstratives derive their ineliminable role in the explanation of action from the fact that their recognitional role cannot be played by any purely descriptive concept. Recognition involves a contrast but it can be achieved without recourse to a uniquely specifying definite description. Indexicals and demonstratives are mental pointers that can be used to refer to objects, places and times. Personal indexicals are involved in the recognition of persons. Temporal indexicals are involved in the recognition of temporal regions or instants. Spatial indexicals are involved in the recognition of locations. To recognize oneself as the reference of ‘I’ is to make a contrast with the recognition of the person one addresses in verbal communication as ‘you’. To identify a day as ‘today’ is to contrast it with other days that might be identified as ‘yesterday’, ‘the day before yesterday’, ‘tomorrow’, etc. To identify a place as ‘here’ is to contrast it with other places referred to as ‘there’.

            Although indexicals and demonstratives are concepts, they have non-descriptive conceptual content. The conceptual system needs such indexical concepts because it lacks the resources to supply a purely descriptive symbol, i.e., a symbol that could uniquely identify a person, a time or a place. A purely descriptive concept would be a concept that a unique person, a unique time or a unique place would satisfy by uniquely exemplifying each and every one of its constituent features. We cannot specify the references of our concepts all the way down by using uniquely identifying descriptions on pain of circularity. If, as Pylyshyn (2000: 129) points out, concepts need to be “grounded”, then on pain of circularity, “the grounding [must] begin at the point where something is picked out directly by a mechanism that works like a demonstrative” (or an indexical). If concepts are to be hooked to or locked onto objects, times and places, then on pain of circularity, definite descriptions will not supply the locking mechanism.

            Personal, temporal and spatial indexicals owe their special explanatory role to the fact that they cannot be replaced by purely descriptive concepts. Although they allow recognition by nondescriptive means, their direction of application is mind-to-world. Causally indexical concepts, however, play a different role altogether. Unlike personal, temporal and spatial indexical concepts, causally indexical concepts have a distinctive quasi-deontic or quasi-evaluative content. I want to say that, unlike that of other indexicals, the direction of fit of causal indexicals is hybrid: it is partly mind-to-world, partly world-to-mind. To categorize a target as “too heavy”, “within reach” or “fit for grasping between index and thumb” is to judge or evaluate the parameters of the target as conducive to a successful action upon the target. Unlike the contents of other indexicals, the content of a causally indexical concept results from the combination of an action predicate and an evaluative operator. What makes it indexical is that the result of the application of the latter onto the former is relative to the agent who makes the application. Thus, the job of causally indexical concepts is not just to match the world but to play an action-guiding role. If so, then presumably causal indexicals have at best a hybrid direction of fit, not a pure mind-to-world direction of fit.

            In the previous section, I argued that, unlike visual percepts, visuomotor representations provide visual information to motor intentions, which have nonconceptual content, a world-to-mind direction of fit and a mind-to-world direction of causation. I am presently arguing that the visual information of visuomotor representations can also serve as input to causally indexical concepts, which are elementary contextually dependent action concepts. Judgments involving causally indexical concepts have at best a hybrid direction of fit. When an agent makes such a judgment, he is not merely stating a fact: he is not thereby coming to know a fact that holds independently of his causal powers. Rather, he is settling on, accepting or making up his mind about an action plan. The function of causally indexical concepts is precisely to allow an agent to make action plans. Whereas personal, temporal and spatial indexicals lie at the interface between visual percepts and an individual’s conceptual repertoire about objects, times and places, causally indexical concepts lie at the interface between visuomotor representations, motor intentions and what Searle calls prior intentions. Prior intentions have conceptual content: they involve action concepts. Thus, after conceptual processing via the channel of causally indexical concepts, the visual information contained in visuomotor representations can be stored in a conceptual format adapted to the content and the direction of fit of one’s intentions — if not one’s motor intentions, then perhaps one’s prior intentions. Hence, the output of the motor processing of visual inputs can serve as input to further conceptual processing whose output will be stored in the ‘intention box’.


  1. Aglioti, S., De Souza, J.F.X. and Goodale, M.A. (1995) “Size-contrast illusions deceive the eye but not the hand”, Current Biology, 5, 6, 679-85.
  2. Anscombe, G.E.M. (1957) Intention, Ithaca: Cornell University Press.
  3. Austin, J. L. (1962) Sense and Sensibilia, Oxford: Clarendon Press.
  4. Bermudez, J. (1998) The Paradox of Self-Consciousness, Cambridge, Mass.: MIT Press.
  5. Bridgeman, B., Hendry, D. & Stark, L. (1975) “Failure to detect displacement of the visual world during saccadic eye movement”, Vision Research, 15, 719-22.
  6. Campbell, J. (1994) Past, Space and Self, Cambridge, Mass.: MIT Press.
  7. Carey, D.P., Harvey, M. & Milner, A.D. (1996) “Visuomotor sensitivity for shape and orientation in a patient with visual form agnosia”, Neuropsychologia, 34, 329-37.
  8. Castiello, U., Paulignan, Y. & Jeannerod, M. (1991) “Temporal dissociation of motor responses and subjective awareness. A study in normal subjects”, Brain, 114, 2639-2655.
  9. Crane, T. (1992) “The nonconceptual content of experience” in Crane, T. (ed.)(1992) The Contents of Experience, Cambridge: Cambridge University Press.
  10. Dokic, J. & Pacherie, E. (2001) “Shades and concepts”, Analysis, 61, 3, 193-202.
  11. Dretske, F. (1969) Seeing and Knowing, Chicago: University of Chicago Press.
  11. Dretske, F. (1981) Knowledge and the Flow of Information, Cambridge, Mass.: MIT Press.
  12. Dretske, F. (1995) Naturalizing the Mind, Cambridge, Mass.: MIT Press.
  13. Evans, G. (1982) The Varieties of Reference, Oxford: Oxford University Press.
  14. Farah, M. (1990) Visual Agnosia: Disorders of Object Recognition and What They Tell Us About Normal Vision, Cambridge, Mass.: MIT Press.
  15. Fodor, J.A. (1987) Psychosemantics, Cambridge, Mass.: MIT Press.
  16. Franz, V.H., Gegenfurtner, K.R., Bülthoff, H.H. and Fahle, M. (2000) “Grasping visual illusions: no evidence for a dissociation between perception and action”, Psychological Science, 11, 1, 20-25.
  17. Gibson, J.J. (1979) The Ecological Approach to Visual Perception, Boston: Houghton Mifflin.
  18. Goodale, M. A. (1995) “The cortical organization of visual perception and visuomotor control”, in Osherson, D. (1995)(ed.) An Invitation to Cognitive Science, Visual Cognition, vol. 2, Cambridge, Mass.: MIT Press.
  19. Goodale, M.A., Pélisson, D., Prablanc, C. (1986) “Large adjustments in visually guided reaching do not depend on vision of the hand or perception of target displacement”, Nature, 320, 748-50.
  20. Goodale, M.A., Milner, A.D., Jakobson, L.S. and Carey, D.P. (1991) “A neurological dissociation between perceiving objects and grasping them”, Nature, 349, 154-56.
  21. Haffenden, A. M. & Goodale, M. (1998) “The effect of pictorial illusion on prehension and perception”, Journal of Cognitive Neuroscience, 10, 1, 122-36.
  22. Haffenden, A.M., Schiff, K.C. & Goodale, M.A. (2001) “The dissociation between perception and action in the Ebbinghaus illusion: non-illusory effects of pictorial cues on grasp”, Current Biology, 11, 177-181.
  23. Jacob, P. (1997) What minds can do, Cambridge: Cambridge University Press.
  24. Jeannerod, M. (1984) “The timing of natural prehension movements”, Journal of Motor Behavior, 16, 235-54.
  25. Jeannerod, M. (1994) “The representing brain: neural correlates of motor intentions”, Behavioral and Brain Sciences,
  26. Jeannerod, M. (1997) The Cognitive Neuroscience of Action, Oxford: Blackwell.
  27. Jeannerod, M., Decety, J. and Michel, F. (1994) “Impairment of grasping movements following bilateral posterior parietal lesions”, Neuropsychologia, 32, 369-80.
  28. Kaplan, D. (1989) “Demonstratives”, in Almog, J., Perry, J. & Wettstein, H. (eds.)(1989) Themes from Kaplan, New York: Oxford University Press.
  29. McDowell, J. (1994) Mind and World, Cambridge, Mass.: Harvard University Press.
  30. McDowell, J. (1998) “Précis of Mind and World” and “Reply to Commentators”, Philosophy and Phenomenological Research, LVIII, 2, 365-68, 403-31.
  31. Millikan, R.G. (1996) “Pushmi-pullyu Representations”, in Tomberlin, J. (ed.) Philosophical Perspectives, vol. IX, Atascadero, CA: Ridgeview.
  32. Milner, D. & Goodale, M.A. (1995) The Visual Brain in Action, Oxford: Oxford University Press.
  33. Milner, D., Paulignan, Y., Dijkerman, H.C., Michel, F. and Jeannerod, M. (1999) “A paradoxical improvement of misreaching in optic ataxia: new evidence for two separate neural systems for visual localization”, Proc. of the Royal Society, 266, 2225-9.
  34. Nozick, R. (1981) “Knowledge and scepticism”, in Bernecker, S. & Dretske, F. (eds.)(2000) Knowledge, Readings in Contemporary Epistemology, Oxford: Oxford University Press.
  35. Pavani, F., Boscagli, I., Benvenuti, F., Rabuffetti, M. & Farnè, A. (1999) “Are perception and action affected differently by the Titchener circles illusion?”, Experimental Brain Research, 127, 95-101.
  36. Peacocke, C. (1992) A Study of Concepts, Cambridge, Mass.: MIT Press.
  37. Peacocke, C. (1998) “Nonconceptual content defended”, Philosophy and Phenomenological Research, LVIII, 2, 381-88.
  38. Perry, J. (1979) “The essential indexical”, in Perry, J. (1993).
  39. Perry, J. (1986a) “Perception, action and the structure of believing”, in Perry, J. (1993).
  40. Perry, J. (1986b) “Thought without representation”, in Perry, J. (1993).
  41. Perry, J. (1993) The Problem of the Essential Indexical and Other Essays, Oxford: Oxford University Press.
  42. Pisella, L. et al. (2000) “An ‘automatic pilot’ for the hand in human posterior parietal cortex: toward reinterpreting optic ataxia”, Nature Neuroscience, 3, 7, 729-36.
  43. Pylyshyn, Z. (2000) “Visual indexes, preconceptual objects and situated vision”, Cognition, 80, 127-58.
  44. Rossetti, Y. & Pisella, L. (2000) “Common mechanisms in perception and action”, in Prinz, W. & Hommel, B. (eds.)(2000) Attention and Performance, XIX, Oxford: Oxford University Press.
  45. Searle, J. (1983) Intentionality, Cambridge: Cambridge University Press.
  46. Searle, J. (2001) Rationality in Action, Cambridge, Mass.: MIT Press.
  47. Stroud, B. (1989) “Understanding human knowledge in general”, in Bernecker, S. & Dretske, F. (eds.)(2000) Knowledge, Readings in Contemporary Epistemology, Oxford: Oxford University Press.
  48. Tye, M. (1995) Ten Problems about Consciousness, Cambridge, Mass.: MIT Press.
  49. Ungerleider, L.G. & Mishkin, M. (1982) “Two cortical visual systems”, in Ingle, D.J., Goodale, M.A. & Mansfield, R.J.W. (eds.) Analysis of Visual Behavior, Cambridge, Mass.: MIT Press.
  50. Weiskrantz, L. (1986) Blindsight. A Case Study and Implications, Oxford: Oxford University Press.
  51. Weiskrantz, L. (1997) Consciousness Lost and Found, Oxford: Oxford University Press.
  52. Zeki, S. (1993) A Vision of the Brain, Oxford: Blackwell.


[1]  For discussion, see Jacob (1997, ch. 2).

[2] The egocentricity of indexical concepts should not be confused with the egocentricity of an egocentric frame of reference in which the visual system codes, e.g., the location of a target. The former is a property of concepts. The latter is a property of visual representations. One crucial difference between the egocentricity of indexical concepts and the egocentricity of an egocentric frame of reference for coding the spatial location of a target is that, unlike the latter, the former involves a contrast: if, e.g., something is here, it is not there.


Paper for the Summer School in Analytic Philosophy on Knowledge and Cognition, July 1-7, 2002: Seeing, Perceiving and Knowing, Pierre Jacob.


What Good is Consciousness?

If consciousness is good for something, conscious things must differ in some causally relevant way from unconscious things. If they do not, then, as Davies and Humphreys (1993: 4-5) conclude, too bad for consciousness: “psychological theory need not be concerned with this topic.”

Davies and Humphreys are applying a respectable metaphysical idea–the idea, namely, that if X’s having C makes no difference to what X does, if X’s causal powers are in no way altered by its possession of C, then nothing X does can be explained by its being C. A science dedicated to explaining the behavior of X need not, therefore, concern itself with C. That is why being an uncle is of no concern to the psychology (let alone the physics) of uncles. I am an uncle, yes, but my being so does not (causally speaking[1]) enable me to do anything I would not otherwise be able to do. The fact that I am an uncle (to be distinguished, of course, from my believing I am an uncle) does not explain anything I do. From the point of view of understanding human behavior, then, the fact that some humans are uncles is epiphenomenal. If consciousness is like that–if it is like being an uncle–then, for the same reason, psychological theory need not be concerned with it. It has no purpose, no function. No good comes from being conscious.

Is this really a worry? Should it be a worry? The journals and books, I know, are full of concern these days about the role of consciousness.[2] Much of this concern is generated by startling results in neuropsychology (more of this later). But is there a real problem here? Can there be a serious question about the advantages, the benefits, the good, of being conscious? I don’t think so. It seems to me that the flurry of interest in the biological function of consciousness betrays a confusion about several quite elementary distinctions. Once the distinctions are in place–and there is nothing especially arcane or tricky about them–the advantages (and, therefore, the good) of consciousness are obvious.

1. The First Distinction: Conscious Beings vs. Conscious States.

Stones are not conscious, but we are.[3] And so are many animals. We are not only conscious (full stop), we are conscious of things–of objects (the bug in my soup), events (the commotion in the hall), properties (the color of his tie), and facts (that he is following me). Following Rosenthal (1990), I call all these creature consciousness. In this sense the word is applied to beings who can lose and regain consciousness and be conscious of things and that things are so.
Creature consciousness is to be distinguished from what Rosenthal calls state consciousness–the sense in which certain mental states, processes, events and activities (in or of conscious beings) are said to be either conscious or unconscious. When we describe desires, fears, and experiences as being conscious or unconscious we attribute or deny consciousness, not to a being, but to some state, condition or process in that being. States (processes, etc.), unlike the creatures in whom they occur, are not conscious of anything or that anything is so, although we can be conscious of them, and their occurrence in a creature may make that creature conscious of something.

That is the distinction. How does it help with our question? I’ll say how in a moment, but before I do, I need to make a few things explicit about my use of relevant terms. Not everyone (I’ve discovered) talks the way I do when they talk about consciousness. So let me say how I talk. My language is, I think, entirely standard (I use no technical terms), but just in case my readers talk funny, I want them to know how ordinary folk talk about these matters.

For purposes of this discussion and in accordance with most dictionaries I regard “conscious” and “aware” as synonyms. Being conscious of a thing (or fact) is being aware of it. Alan White (1964) describes interesting differences between the ordinary use of “aware” and “conscious”. He also describes the different liaisons they have to noticing, attending, and realizing. Though my use of these expressions as synonymous for present purposes blurs some of these ordinary distinctions, I think nothing essential to this topic is lost by ignoring the nuances.

I assume, furthermore, that seeing, hearing, smelling, tasting and feeling are specific forms–sensory forms–of consciousness. Consciousness is the genus; seeing, hearing, and smelling are species (the traditional five sense modalities are not, of course, the only species of consciousness). Seeing is visual awareness. Hearing is auditory awareness. Smelling burning toast is becoming aware–in an olfactory way–of burning toast. One might also see the burning toast. And feel it. These are other modalities of awareness, other ways of being conscious of the toast.[4] You may not pay much attention to what you see, smell, or hear, but if you see, smell or hear it, you are conscious of it.

This is important. I say that if you see (hear, etc.) it, you are conscious of it. The “it” refers to what you are aware of (the burning toast), not that you are aware of it. There are two ways one might, while being aware of burning toast, fail to be aware that one is aware of it. First, one might know one is aware of something, but not know what it is. “What is that I smell?” is the remark of a person who might well be aware of (i.e., smell) burning toast without being aware that he is aware of burning toast. Second, even if one knows what it is one is aware of–knows that it is burning toast–one might not understand what it means to be aware of it, might not, therefore, be aware that one is aware of it. A small child or an animal–creatures who lack the concept of awareness–can be conscious of (i.e., smell) burning toast without ever being aware that they are aware of something. Even if they happen to know that what they are aware of is burning toast, they do not know–are not, therefore, aware–that they are aware of it.

The language here is a bit tricky, so let me give another example. One can be aware of (hear) a french horn without being aware that that is what it is. One might think it is a trombone or (deeply absorbed in one’s work) not be paying much attention at all (but later remember hearing it). If asked whether you hear a french horn, you might well think and say (falsely) that you are not. Not being aware that you are aware of a french horn does not mean you are not aware of a french horn. Hearing a french horn is being conscious of a french horn. It is not–not necessarily anyway–to be aware that it is a french horn or aware that you are aware of it (or, indeed, anything). Mice who hear–and thereby become auditorily aware of–french horns never become aware that they are aware of anything–much less of french horns.[5]

So, once again, when I say that if you see, hear, or smell something you must be conscious of it, the “it” refers to what you are aware of (burning toast, a french horn), not what it is you are aware of or that you are aware of it. To be conscious of an F is not the same as being conscious that it is an F and certainly not the same as being conscious that one is conscious of an F. Animals (not to mention human infants) are presumably aware of a great many things (they see, smell, and feel the things around them). Nonetheless, without the concept of awareness, and without concepts for most of the things they are aware of, they are not aware of what they are aware of nor that they are aware of it. What they are conscious of is burning toast. They are not aware that it is burning toast nor that they are aware of it.

So much for terminological preliminaries. I have not yet said anything that is controversial. Still, with only these meagre resources, we are in a position to usefully divide our original question into two more manageable parts. Questions about the good of consciousness, about its purpose or function, can either be questions about creature consciousness or about state consciousness. I will, for the rest of this section, take them to be questions about creature consciousness. I return to state consciousness in the next section.

If, then, we take our question about the purpose of consciousness as a question about creature consciousness, about the benefits that consciousness affords the animals who are conscious, the answer would appear to be obvious. If animals could not see, hear, smell and taste the objects in their environment–if they were not (in these ways) conscious–how could they find food and mates, avoid predators, build nests, spin webs, get around obstacles, and, in general, do the thousand things that have to be done in order to survive and reproduce?

Let an animal–a gazelle, say–who is aware of prowling lions–where they are and what they are doing–compete with one who is not, and the outcome is predictable. The one who is conscious will win hands down. Reproductive prospects, needless to say, are greatly enhanced by being able to see and smell predators. That, surely, is an evolutionary answer to questions about the benefits of creature consciousness.[6] Take away perception–as you do, when you remove consciousness–and you are left with a vegetable. You are left with an eatee, not an eater. That is why the eaters of the world (most of them anyway) are conscious.

This answer is so easy I expect to be told that I’m not really answering the question everyone is asking. I will surely be told that questions about the function of consciousness are not questions about why we–conscious beings–are conscious. It is not a question about the biological advantage of being able to see, hear, smell, and feel (thus, being conscious of) the things around us. It is, rather, a question about state consciousness, a question about why there are conscious states, processes, and activities in conscious creatures. Why, for instance, do conscious beings have conscious experiences and thoughts?

2. The Second Distinction: Objects vs. Acts of Awareness.

If our question is a question about the benefits of state consciousness, then, of course, we have preliminary work to do before we start answering it. We have to get clear about what a conscious state (process, activity) is. What, for instance, makes an experience, a thought, a desire, conscious? We all have a pretty good grip on what a conscious animal is. It is one that is–via some perceptual modality–aware of things going on around (or in) it. There are, no doubt, modes of awareness, ways of being conscious, which we do not know about and will never ourselves experience. We do not, perhaps, understand bat phenomenology or what it is like for dogfish to electrically sense their prey. But we do understand the familiar modalities–seeing, hearing, tasting and so on–and these, surely, qualify as ways of being conscious. So I understand, at a rough and ready level, what someone is talking about when they talk about a creature’s being conscious in one of these ways. But what does it mean to speak, not of an animal being conscious in one of these ways, but of some state, process, or activity in the animal as being conscious? States, remember, aren’t conscious of anything. They are just conscious (or unconscious) full stop. So what kind of property is this? And what makes a state conscious? Until we understand this, we won’t be in a position to even speculate about what the function of a conscious state is.
There are, as far as I can see, only two options for making sense out of state consciousness. Either a state is made conscious by its being an object or by its being an act of creature consciousness. A state of creature S is an object of creature consciousness by S being conscious of it. A state of creature S is an act of creature consciousness, on the other hand, not by S being aware of it, but by S being made aware (so to speak) with it–by its occurrence in S making (i.e., constituting) S’s awareness and, therefore, if there is an object that stands in the appropriate relation to this awareness, S’s awareness of some object. When state-consciousness is identified with a creature’s acts of awareness, the creature need not be aware of these states for them to be conscious. What makes them conscious is not S’s awareness of them, but their role in making S conscious–typically (in the case of sense perception), of some (external) object.

Consider the second possibility first. On this option, a conscious state (e.g., an experience) is one that makes an animal conscious. When a gazelle sees a lion, its visual experience of the lion qualifies as a conscious experience, a conscious state, because it makes the gazelle visually conscious of the lion. Without this experience, the gazelle would not be visually aware of anything–much less a lion.

There are, to be sure, states of (processes and activities in) the gazelle which are not themselves conscious but which are necessary to make the animal (visually) aware of the lion. Without eyes and the assorted events occurring therein, the animal would not see anything–would not, therefore, be visually conscious of lions or any other external object. This is true enough, but it is irrelevant to the act conception of state-consciousness. According to the act conception of state-consciousness, a conscious visual state is one without which the creature would not be visually conscious of anything–not just external objects. The eyes may be necessary for the gazelle to be conscious of (i.e., to see) the lion, but they are not necessary for the animal to be conscious, to have the sort of visual experiences that, when things are working right, are normally caused by lions and are, therefore, experiences of lions. A conscious visual state is one that is essential not just to a creature’s visual awareness of this or that kind of thing (e.g., external objects), but to its visual awareness of anything–including the sorts of “things” (properties) one is aware of in hallucinations and dreams. That is why, on an act account of state consciousness, the processes in early vision, those occurring in the retina and optic nerve, are not conscious. They may be necessary to a creature’s visual awareness of external objects, but they are not essential to visual awareness. Even without them, the creature can still dream about or hallucinate the things it can no longer see. The same acts of awareness can still occur. They just don’t have the same (according to some, they don’t have any) objects.

If we agree about this–agree, that is, that conscious states are states that constitute creature consciousness (typically, of things), then the function, the good, of state consciousness is evident. It is to make creatures conscious, and if (see above) there is no problem about why animals are conscious, then, on the act conception of what a conscious state is, there is no problem about why states are conscious. Their function is to make creatures conscious. Without state consciousness, there is no creature consciousness. If there is a biological advantage in gazelles being aware of prowling lions, then there is a purpose in gazelles having conscious experiences. The experiences are necessary to make the gazelle conscious of the lions.

I do not expect many people to be impressed with this result. I expect to be told that the states, activities, and processes occurring in an animal are conscious not (as I have suggested) if the animal is conscious with them, but, rather, if the animal (in whom they occur) is conscious of them. A conscious state is conscious in virtue of being an object, not an act, of creature awareness. A state becomes conscious, according to this orthodox line of thinking, when it becomes the object of some higher-order thought or experience. Conscious states are not states that make the creatures in whom they occur conscious; it is the other way around: creatures make the states that occur in them conscious by becoming conscious of them.

Since the only way states can become an object of consciousness is if there are higher-order acts which have them as their objects, this account of state consciousness has come to be called a HO (for Higher Order) theory of consciousness. It has several distinct forms, but all versions agree that an animal’s experience (of lions, say) remains unconscious (or, perhaps, non-conscious) until the animal becomes aware of it. A higher-order awareness of one’s lion-experience can take the form of a thought (a HOT theory)–in which case one is aware that (i.e., one thinks that) one is experiencing a lion–or the form of an experience (a HOE theory)–in which case one is aware of the lion-experience in something like the way one is aware of the lion: one experiences one’s lion-experience (thus becoming aware of one’s lion-experience) in the way one is aware of (experiences) the lion.

I have elsewhere (Dretske 1993, 1995) criticized HO theories of consciousness, and I will not repeat myself here. I am more concerned with what HO theories have to say–if, indeed, they have anything to say–about the good of consciousness. If conscious states are states we are, in some way, conscious of, why have conscious states? What do conscious states do that unconscious states don’t do? According to HO theory, we (i.e., creatures) could be conscious of (i.e., see, hear, and smell) most of the objects and events we are now conscious of (and this includes whatever bodily conditions we are proprioceptively aware of) without ever occupying a conscious state. To be in a conscious state is to be conscious of the state, and since the gazelle, for example, can be conscious of a lion without being conscious of the internal states that make it conscious of the lion, it can be conscious of the lion–i.e., see, smell, feel and hear the lion–while occupying no conscious states at all. This being so, what is the purpose, the biological point, of conscious states? It is awareness of the lion that is useful, not awareness of one’s lion experiences. It is the lions, not the lion-experiences, that are dangerous.

On an object conception of state-consciousness, it is difficult to imagine how conscious states could have a function. To suppose that conscious states have a function would be like supposing that conscious ball bearings–i.e., ball bearings we are conscious of–have a function. If a conscious ball bearing is a ball bearing we are conscious of, then conscious ball bearings have exactly the same causal powers as do the unconscious ones. The causal powers of a ball bearing (as opposed to the causal powers of the observer of the ball bearing) are in no way altered by being observed or thought about. The same is true of mental states like thoughts and experiences. If what makes an experience or a thought conscious is the fact that S (the person in whom it occurs) is, somehow, aware of it, then it is clear that the causal powers of the thought or experience (as opposed to the causal powers of the thinker or experiencer) are unaffected by its being conscious. Mental states and processes would be no less effective in doing their job–whatever, exactly, we take that job to be–if they were all unconscious. According to HO theories of consciousness, then, asking about the function of conscious states in mental affairs would be like asking about the function of conscious ball bearings in mechanical affairs.

David Rosenthal (a practising HOT theorist) has pointed out to me in correspondence that though experiences do not acquire causal powers by being conscious, there may nonetheless be a purpose served by their being conscious. The purpose might be served, not by the beneficial effects of a conscious experience (conscious and unconscious experiences have exactly the same effects according to HO theories), but by the effects of the higher-order thoughts that make the experience conscious. Although the conscious experiences don’t do anything the unconscious experiences don’t do, the creatures in which conscious experiences occur are different as a result of having the higher-order thoughts that make their (lower-order) experiences conscious. Animals having conscious experiences are therefore in a position to do things that animals having unconscious experiences are not. They can, for instance, run from the lion they (consciously) experience–something they might not do by having an unconscious experience of the lion. They can do this because they are (let us say) aware that they are aware of a lion–aware that they are having a lion experience.[7] Animals in which the experience of the lion is unconscious, animals in which there is no higher-order awareness that they are aware of a lion, will not do this (at least not deliberately). This, then, is an advantage of conscious experience; perhaps–who knows?–it is the function of conscious experiences.

I concede the point. But I concede it about ball bearings too. I cannot imagine conscious ball bearings having a function–simply because conscious ball bearings don’t do anything non-conscious ball bearings don’t do–but I can imagine there being some purpose served by our being aware of ball bearings. If we are aware of them, we can, for instance, point at them, refer to them, talk about them. Perhaps, then, we can replace defective ones, something we wouldn’t do if we were not aware of them, and this sounds like a useful thing to do. But this is something we can do by being aware of them, not something they can do by our being aware of them. If a conscious experience were an experience we were aware of, then there would be no difference between conscious and unconscious experiences–any more than there would be a difference between conscious and unconscious ball bearings. There would simply be a difference in the creatures in whom such experiences occurred, a difference in what they were aware of.

The fact that some people who have cancer are aware of having it while others who have it are not aware of having it does not mean there are two types of cancer–conscious and unconscious cancers. For exactly the same reason, the fact that some people (you and me, for instance) are conscious of having visual and auditory experiences of lions while others (parrots and gazelles, for example) are not, does not mean that there are two sorts of visual and auditory experiences–conscious and unconscious. It just means that we are different from parrots and gazelles. We know things about ourselves that they don’t, and it is sometimes useful to know these things. It does not show that what we know about–our conscious experiences–is any different from theirs. We both have experiences–conscious experiences–only we are aware of having them; they are not. Both experiences–those of the gazelle and those of a human–are conscious because, I submit, they make the creature in which they occur aware of things–whatever objects and conditions are perceived (lions, for instance). Being aware that you are having such experiences is as relevant–which is to say, totally irrelevant–to the nature of the experiences you have as it is to the nature of observed ball bearings.[8]

3. The Third Distinction: Object vs. Fact Awareness.

Once again, I expect to hear that this is all too quick. Even if one should grant that conscious states are to be identified with acts, not objects, of creature awareness, the question is not what the evolutionary advantage of perceptual belief is, but what the advantage of perceptual (i.e., phenomenal) experience is. What is the point of having conscious experiences of lions (lion-qualia) as well as conscious beliefs about lions? Why are we aware of objects (lions) as well as various facts about them (that they are lions, that they are headed this way)? After all, in the business of avoiding predators and finding mates, what is important is not experiencing (e.g., seeing, hearing) objects, but knowing certain facts about these objects. What is important is not seeing a hungry lion but knowing (seeing) that it is a lion, hungry, or whatever (with all that this entails about the appropriate response on the part of lion-edible objects). Being aware of (i.e., seeing) hungry lions while being aware of them simply as tawny objects or as large shaggy cats (something a two-year-old child might do) isn’t much use to someone on the lion’s dinner menu. It isn’t the objects you are aware of, the objects you see–and, therefore, the qualia you experience–that are important in the struggle for survival; it is the facts you are aware of, what you know about what you see. Being aware of (seeing) poisonous mushrooms (these objects) is no help to an animal who is not aware of the fact that they are poisonous. It is the representation of the fact that another animal is a receptive mate, not simply the perception of a receptive mate, that is important in the game of reproduction. As we all know from long experience, it is no trick at all to see sexually willing (or, as the case may be, unwilling) members of the opposite sex. The trick is to see which is which–to know that the willing are willing and the others are not.
That is the skill–and it is a cognitive skill, a skill involving knowledge of facts–that gives one a competitive edge in sexual affairs. Good eyesight, a discriminating ear, and a sensitive nose (and the qualia associated with these sense modalities) are of no help in the struggle for survival if such experiences always (or often) yield false beliefs about the objects perceived. It is the conclusions, the beliefs, the knowledge, that is important, not the qualia-laden experiences that normally give rise to such knowledge. So why do we have phenomenal experience of objects as well as beliefs about them? Or, to put the same question differently: Why are we conscious of the objects we have knowledge about?
Still another way of putting this question is to ask why we aren’t all, in each sense modality, the equivalent of blindsighters, who appear able to get information about nearby objects without experiencing (seeing) the objects.[9] In one way of describing this baffling phenomenon, blindsighters seem able to “see” the facts (at least they receive information about what the facts are–that there is, say, an X, not an O, on the right) without being able to see the objects (the X’s) on the right. No qualia. No phenomenal experience. If, therefore, a person can receive the information needed to determine appropriate action without experience, why don’t we?[10] Of what use is phenomenal experience in the game of cognition if the job can be done without it?

These are respectable questions. They deserve answers–scientific, not philosophical, answers. But the answers–at least in a preliminary way–would appear to be available. There are a great many important facts that we cannot be made aware of unless we are, via phenomenal experience, made aware of the objects these facts are about. There are also striking behavioral deficits–e.g., an inability to initiate intentional action with respect to those parts of the world one does not experience (Marcel 1988a). Humphrey (1970, 1972, 1974) worked for many years with a single monkey, Helen, whose capacity for normal vision was destroyed by surgical removal of her entire visual cortex. Although Helen originally gave up even looking at things, she regained certain visual capacities.

She improved so greatly over the next few years that eventually she could move deftly through a room full of obstacles and pick up tiny currants from the floor. She could even reach out and catch a passing fly. Her 3-D spatial vision and her ability to discriminate between objects that differed in size or brightness became almost perfect. (Humphrey 1992: 88).
Nonetheless, after six years she remained unable to identify even those things most familiar to her (e.g., a carrot). She did not recover the ability to recognize shapes or colors. As Humphrey described Helen in 1977 (Humphrey 1992: 89),

She never regained what we–you and I–would call the sensations of sight. I am not suggesting that Helen did not eventually discover that she could after all use her eyes to obtain information about the environment. She was a clever monkey and I have little doubt that, as her training progressed, it began to dawn on her that she was indeed picking up ‘visual’ information from somewhere–and that her eyes had something to do with it. But I do want to suggest that, even if she did come to realize that she could use her eyes to obtain visual information, she no longer knew how that information came to her: if there was a currant before her eyes she would find that she knew its position but, lacking visual sensation, she no longer saw it as being there. . . . The information she obtained through her eyes was ‘pure perceptual knowledge’ for which she was aware of no substantiating evidence in the form of visual sensation . . .
If we follow Humphrey and suppose that Helen, though still able to see where objects were (conceptually represent them as there), was unable to see them there, had no (visual) experience of them, we have a suggestion (at least) of what the function of phenomenal experience is: we experience (i.e., see, hear, and smell) objects to help in our identification and recognition of them. Remove visual sensations of X and S might still be able to tell where X is, but S will not be able to tell what X is. Helen couldn’t. That is–or may be–a reasonable empirical conjecture about the purpose of experience–about why animals (including humans) are, via perceptual experience, made aware of objects. It seems to be the only way–or at least a way–of being made aware of pertinent facts about them.
Despite the attention generated by dissociation phenomena, it remains clear that people afflicted with these syndromes are always “deeply disabled” (Weiskrantz 1991: 8). Human patients never recover their vision to anything like the degree that Helen did. Though they do much better than they “should” be able to do, they are still not very good (Humphrey 1992: 89). Blindsight subjects cannot avoid bumping into lamp-posts, even if they can guess their presence or absence in a forced-choice situation. Furthermore,

All these subjects lack the ability to think about or to image the objects that they can respond to in another mode, or to inter-relate them in space and in time; and this deficiency can be crippling (Weiskrantz, 1991: 8).
This being so, there seems to be no real empirical problem about the function (or at least a function) of phenomenal experience. The function of experience, the reason animals are conscious of objects and their properties, is to enable them to do all those things that those who do not have it cannot do. This is a great deal indeed. If we assume (as it seems clear from these studies we have a right to assume) that there are many things people with experience can do that people without experience cannot do, then that is a perfectly good answer to questions about what the function of experience is. That is why we, and a great many other animals, are conscious of things and, thus, why, on an act conception of state consciousness, we have conscious experiences. Maybe something else besides experience would enable us to do the same things, but this would not show that experience didn’t have a function. All it would show is that there was more than one way to skin a cat–more than one way to get the job done. It would not show that the mechanism that did the job wasn’t good for something.


Davies, M. and G. W. Humphreys (1993). Introduction. In Davies and Humphreys (1993), eds. Consciousness. Oxford; Blackwell, 1-39.

Dretske, F. (1993). Conscious experience. Mind, vol 102.406, 1-21.

Dretske, F. (1995). Naturalizing the Mind. Cambridge, Ma.; MIT Press, A Bradford Book.

Humphrey, N. (1970). What the frog’s eye tells the monkey’s brain. Brain, Behavior and Evolution 3: 324-37.

Humphrey, N. (1972). Seeing and nothingness. New Scientist 53: 682-4.

Humphrey, N. (1974). Vision in a monkey without striate cortex: a case study. Perception 3: 241-55.

Humphrey, N. (1992). A History of the Mind: Evolution and the Birth of Consciousness. New York: Simon and Schuster.

Milner, A. D. (1992). Disorders of perceptual awareness: commentary. In Milner and Rugg (1992), 139-158.

Milner, A. D. & M. D. Rugg, eds. (1992). The Neuropsychology of Consciousness. London: Academic Press.

Rey, G. (1988). A question about consciousness. In H. Otto and J. Tuedio, eds., Perspectives on Mind. Dordrecht: Reidel.

Rosenthal, D. (1990). A theory of consciousness. Report No. 40, Research Group on Mind and Brain, ZiF, University of Bielefeld.

Rosenthal, D. (1991). The independence of consciousness and sensory quality. In Villanueva (1991), 15-36.

van Gulick, R. (1985). Conscious wants and self awareness. Behavioral and Brain Sciences 8 (4): 555-556.

van Gulick, R. (1989). What difference does consciousness make? Philosophical Topics 17: 211-30.

Velmans, M. (1991). Is human information processing conscious? Behavioral and Brain Sciences 14 (4): 651-668.

Villanueva, E., ed. (1991). Consciousness. Atascadero, CA: Ridgeview Publishing Co.

Walker, S. (1983). Animal Thought. London: Routledge and Kegan Paul.

Weiskrantz, L. (1986). Blindsight: A Case Study and Implications. Oxford: Oxford University Press.

Weiskrantz, L. (1991). Introduction: Dissociated Issues. In Milner and Rugg (1992), 1-10.

White, A. R. (1964). Attention. Oxford: Basil Blackwell.

1. There is a sense in which it enables me to do things I would not otherwise be able to do–e.g., bequeath my books to my nephews and nieces–but this, clearly, is a constitutive, not a causal, sense of “enable.” Spelling out this difference in a precise way is difficult. I will not try to do it. I’m not sure I can. I hope the intuitive distinction will be enough for my purposes.
2. For recent expressions of interest, see Velmans 1991, Rey 1988, and Van Gulick 1989.

3. I here ignore dispositional senses of the relevant terms–the sense in which we say of someone or something that it is a conscious being even if, at the time we describe it this way, it is not (in any occurrent sense) conscious. So, for example, in the dispositional sense, I am a conscious being even during dreamless sleep.

4. I here ignore disputes about whether, in some strict sense, we are really aware of objects or only (in smell) odors emanating from them or (in hearing) voices or noises they make. I shall always take the perceptual object–what it is we see, hear, or smell (if there is such an object)–to be some external physical object or condition. I will not be concerned with just what object or condition this is.

5. In saying this I assume two things, both of which strike me as reasonably obvious: (1) to be aware that you are aware of a french horn requires some understanding of what awareness is (not to mention an understanding of what a french horn is); and (2) mice (even if we give them some understanding of french horns) do not understand what awareness is (they do not have this concept).

6. This is not to say that consciousness is always advantageous. As Georges Rey reminds me, some tasks–playing the piano, pronouncing language, and playing sports–are best performed when the agent is largely unaware of the performatory details. Nonetheless, even when one is unconscious of the means, consciousness of the end (e.g., the basket into which one is trying to put the ball, the net into which one is trying to hit the puck, the teammate to whom one is trying to throw the ball) is essential. You don’t have to be aware of just how you manage to backhand the shot to do it skillfully, but, if you are going to be successful in backhanding the puck into the net, you have to be aware of where the net is.

7. I assume here that, according to HOT theories, the higher order thought one has about a lion experience that makes that experience conscious is that it is a lion experience (an experience of a lion). This needn’t be so (Rosenthal 1991 denies that it is so), but if it isn’t so, it is even harder to see what the good of conscious experiences might be. What good would be a thought about a lion experience that it was . . . what? . . . a (generic) experience?

8. I’m skipping over a difficulty that I should at least acknowledge here. There are a variety of mental states–urges, desires, intentions, purposes, etc.–which we speak of as conscious (and unconscious) whose consciousness cannot be analyzed in terms of their being acts (instead of objects) of awareness since, unlike the sensory states associated with perceptual awareness (seeing, hearing, and smelling), they are not, or do not seem to be, states of awareness. If these states are conscious, they seem to be made so by being objects, not acts, of consciousness (see, e.g., Van Gulick 1985). I don’t here have the space to discuss this alleged difference with the care it deserves. I nonetheless acknowledge its relevance to my present thesis by restricting my claims about state-consciousness to experiences–more particularly, perceptual experiences. Whatever it is that makes a desire for an apple, or an intention to eat one, conscious, experiences of apples are made conscious not by the creature in whom they occur being conscious of them, but by making the creature in whom they occur conscious (of apples).

9. For more on blindsight see Weiskrantz 1986 and Milner & Rugg 1992. I here assume that a subject’s (professed) absence of visual experience is tantamount to a claim that they cannot see objects. The question that blindsight raises is why one has to see objects (or anything else, for that matter) in order to see facts pertaining to those objects–what (who, where, etc.) they are. If blindsighters can see where an object is, the fact that it is there (where they point), without seeing it (the object at which they point), what purpose is served by seeing it?

10. There are a good many reflexive “sensings” (Walker 1983: 240) that involve no awareness of the stimulus that is controlling behavior–e.g., accommodation of the lens of the eye to objects at different distances, reactions of the digestive system to internal forms of stimulation, direction of gaze toward peripherally seen objects. Milner (1992:143) suggests that these “perceptions” are probably accomplished by the same midbrain visuomotor systems as mediate prey catching in frogs and orienting reactions in rats and monkeys. What is puzzling about blindsight is not that we get information we are not aware of (these reflexive sensings are all instances of that), but that in the case of blindsight one appears able to use this information in the control and guidance of deliberate, intentional, action (when put in certain forced choice situations)–the sort of action which normally requires awareness.


Ontology and Perception

The ontological question of what there is, from the perspective of common sense, is intricately bound to what can be perceived. This observation, when combined with the fact that nouns within language can be divided between nouns that admit counting, such as ‘pen’ or ‘human’, and those that do not, such as ‘water’ or ‘gold’, provides the starting point for the following investigation into the foundations of our linguistic and conceptual phenomena. The purpose of this paper is to claim that such phenomena are facilitated by, on the one hand, an intricate cognitive capacity, and on the other by the complex environment within which we live. We are, in a sense, cognitively equipped to perceive discrete instances of matter such as bodies of water. This equipment is related to, but also differs from, that devoted to the perception of objects such as this computer. Behind this difference in cognitive equipment lies a rich ontology, the beginnings of which lie in the distinction between matter and objects. The following paper is an attempt to make explicit the relationship between matter and objects and also to provide a window onto our cognition of such entities.

General Introduction

Lying at the center of this article is the claim that the study of ontology ought to begin with what is perceived rather than what is said. Researchers who are interested in ontology should take as their starting point what is given in the perceptual field rather than what nouns are present in a given language. Some ontological research begins and ends with an analysis of the relationship between a language and its speakers (see, for example, Lutz, Riedemann, and Probst (2003), Kayed and Colomb (2002), and Wielinga, Schreiber, Wielemaker, and Sandberg (2001)). There are, however, general problems associated with language that should warn us against investing too much in the implications of which nouns appear in a given language. One such general problem is found in the seemingly simple distinction between mass nouns, such as ‘water’ or ‘gold’, and count nouns, such as ‘human’ or ‘pen’. Some nouns are difficult to place on one side or the other of this distinction. One such noun is ‘glass’, which can be used to refer to an object capable of containing liquids or to a material that composes several such objects. The mass-count distinction is the subject of the section that immediately follows. Fortunately, for those who are interested in the ultimate source of the distinctions drawn within any ontology, we have recourse to some provocative research into infant object perception. Such research indicates that there is a primitive distinction to be made between objects, entities which are coherent wholes, and materials, entities which lack coherence. This is a primitive distinction precisely because the infants who seem to make it do so without any significant understanding of the linguistic distinction between mass and count nouns. Next, attention will be given to what is needed on the side of human perceivers in order not only to draw such a distinction, but to use it during daily interaction with the world.
It will be argued that we need two types of concepts in order to negotiate our way through the world. On the one hand we need concepts in order to track real-world instances, such as particular buses, walls, and drops of water. On the other hand, we need concepts that are general in the sense that they can be used to recognize that a completely new instance, as occurs when I am introduced to a new person, belongs to the same class as previously encountered instances. In addition to concepts, we also need to investigate what sorts of rules must be present in order for human subjects to so readily discriminate between objects and materials at such an early age. Such rules must be receptive to surface properties such as color, shape, and texture in order to begin to explain the discriminatory behavior of the infants in the psychological tests cited below. Subsequent to this discussion, the relations between rules and concepts will be explored.

The Mass-Count Distinction

It is important to keep in mind that the mass-count distinction is first and foremost a linguistic one. Quite simply, there are mass nouns, such as ‘water’, which refer to matter or, more colloquially, stuff, while count nouns, such as ‘car’, refer to objects. We may be asked to count the number of cars in the parking lot and understand just what this task means. But can we count the number of waters on the table? Are we to count the water in the glass as a unified whole and the small area of water that collected beside it as another? In order to make linguistic sense of the task of counting waters, we would have to add some sort of count term in front of the mass noun ‘water’: we may be asked to count the areas or puddles of water on the table, and these do admit counting.
However, there is a series of issues which can result in the dissolution of the mass-count distinction. First of all, how are we to tell mass from count nouns? Whether a noun can be made plural and still make grammatical sense is clearly not an adequate criterion of differentiation. Consider words such as ‘news’ or ‘woods’ and one immediately grasps the difficulty of maintaining the distinction. Despite the ‘s’ at the end of these words, in English they function as singular nouns. For example, to relay bad news we say, “The news is bad.” Further, words that at first glance appear to be of the mass variety also seem readily countable. For instance, I may pass two separate ‘woods’ on my way to grandma’s house. In addition, in a restaurant I may easily order two ‘waters’ and have my order understood by the server. One may claim, with respect to the latter example, that ‘waters’ is an abbreviated form of ‘glasses of water.’ Ware (1979) suggests that we define the distinction according to the type of quantifiers and determiners that are used in front of the two types of noun. This seems plausible but undesirable, in that we want the distinction to be applicable to nouns, not to noun phrases. The issue is whether mass nouns divide their reference in a different way than count nouns do. In order to attempt to answer this question we need to set aside quantifiers and determiners and deal with them separately.
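The determiner-based criterion that Ware suggests can be made concrete with a toy sketch. The determiner lists and the noun lexicon below are illustrative assumptions of mine, not a linguistic analysis; the point is only to show how a noun like ‘glass’, which accepts both kinds of determiner, resists classification:

```python
# Toy sketch of a determiner-compatibility test for the mass-count
# distinction. The determiner sets and lexicon are illustrative
# assumptions, not linguistic data.
COUNT_DETERMINERS = {"a", "each", "every", "many", "few"}
MASS_DETERMINERS = {"much", "little"}

# Hypothetical acceptability judgments: which determiners a speaker
# would accept in front of each noun.
LEXICON = {
    "pen": {"a", "each", "every", "many", "few"},
    "water": {"much", "little"},
    # 'glass' accepts both kinds of determiner, so the criterion
    # cannot place it cleanly on either side of the distinction.
    "glass": {"a", "each", "many", "much"},
}

def classify(noun):
    """Classify a noun as 'count', 'mass', or 'ambiguous' according
    to the determiners it accepts."""
    accepted = LEXICON[noun]
    takes_count = bool(accepted & COUNT_DETERMINERS)
    takes_mass = bool(accepted & MASS_DETERMINERS)
    if takes_count and takes_mass:
        return "ambiguous"
    return "count" if takes_count else "mass"
```

Note that the sketch classifies noun-plus-determiner acceptability, not bare nouns, which is exactly the objection raised above: the criterion applies to noun phrases rather than to nouns themselves.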

Another possible criterion that could be used to maintain the mass-count distinction is to hold that the two differ according to what they refer to. So, as Cartwright (1979) explains, count nouns refer to individuals while mass nouns such as ‘water’ refer to stuff. This suggests that there is a genuine ontological distinction that corresponds to the linguistic one. But we may certainly ask whether mass nouns use a different referring mechanism than count nouns. In other words, when we say ‘milk is a good source of calcium’, it appears as though we are not intending to refer to a discrete mass of milk but rather to a type of matter. The question seems to be whether ‘milk’ in this sentence refers to a different kind of entity than ‘man’ does in the sentence ‘man is an animal.’ And if so, where does this difference rest? Is it an ontological difference having to do with the nature of being of that which is referred to? Or is the difference found in the way we think about what they name?

In addition, what can we say of cases where it is not clear whether we are referring to a type of matter or a particular collection of matter? When a person who is gasping for breath says quite simply, “water”, is he referring to a type of matter or an instance?

Quine’s Body-Mindedness

There are several issues that could be raised based on the observation that mass nouns can be classified according to whether they refer to an instance of matter or they refer to a type of matter. First and foremost, we may ask how we come to form types of anything. A concomitant topic consists in whether types exist in the world or only in the minds of human observers.
Perhaps all we have is a world of individuals. If so, it is unclear whether such a world includes discrete instances of matter and what individuating criteria can be applied to matter in such a way as to result in discrete instances. Moreover, matter has the additional problem of not being a body in the sense of Quine (1974). At stake is the relationship between the cognitive representation of matter and Quine’s observation that human beings are instinctively body-minded. If we accept the claim that there are representational advantages bestowed upon bodies, what does this mean for the representation of matter?

Let us begin to address this latter question by first attempting to describe the presence of the matter concept and its role within human cognition. What I suggest is that we look upon the matter concept in much the same way as the object concept. However, there is a fundamental difference between the two. This difference is first noticed in psychological experiments conducted on infants, and it is appropriate for us to note the results from such experiments as they shed light upon how basic the distinction between matter and object actually is.

There are many experiments (for example, Baillargeon et al. 1985; Chiang and Wynn, 1997; and Huntley-Fenner et al., 2001) which show that while infants readily track objects, such as toy cars and rubber ducks, they fail to track discrete instances of matter, such as sand or gel. The literature regarding infant object recognition (Spelke 1994) suggests that the reason for this is that instances of matter lack certain principles or properties that objects possess. According to Carey and Xu (2001, p. 207) infant experiments on object recognition point to the following conclusion:

These infant studies suggest that the object tracking system is just that: an object tracking system, where object means 3D, bounded, coherent physical object. It fails to track perceptually specified figures that have a history of non-cohesion. (Emphasis in original work).
An example of this is found in two experiments (Baillargeon et al. 1985, Chiang and Wynn 1997). In each experiment, infants were presented with one of two trials. In one, a coherent, bounded object was dropped behind a screen that was placed in front of the infant (object trial). In the other, sand was poured behind a screen (material trial). In both cases the screen was removed after the initial presentation to test the subject’s response to the disappearance of the item in question. The results were that the infant subjects showed surprise (as measured by the amount of time the infant spent gazing at the area where the object or material was supposed to lie) at the outcome of the object trial, but did not show surprise in the material trial. These experiments lend support to there being an object tracking system within our cognitive repertoire, but not a material tracking system.
What are we to make of the fact that infants routinely fail to track instances of matter? Furthermore we can ask a more fundamental question: when presented with a non-solid instance of matter, does the infant perceive a non-solid instance of matter? I want to claim that this is indeed what the infant perceives. Notice how this is a more detailed claim than that found in the literature pertaining to infant object perception. The claim made by Carey and Xu above seems to suggest that what the infant perceives is primarily a non-object in the standard sense of objects being three-dimensional, coherent entities. This is an important claim, but it does not tell us much with respect to the infant’s perception of matter instances. What I would like to do below is to attempt to fill in the details with respect to the perception of discrete instances of matter.

The broader point to be made is that perception alone cannot account for the fact that there is a fundamental difference between objects and non-objects, the latter of which includes materials such as clay or sand. Rather, perception must be linked to more advanced cognitive systems which are both flexible and specific enough to be sensitive to incoming perceptual information, yet rigid and general such that the information that comes in is properly classified and tracked (or not tracked in the case of a discrete instance of matter). Seen in this way, perception is not a lower level cognitive activity that is divorced from higher level activities such as categorization. It is, instead, embedded within cognition. Furthermore, without perception, there would be little need for categories or concepts as well.

The Matter Concept

Here, let us take stock of what kinds of entities we need in order to recognize, re-identify, and track collections of matter. The view taken up here is that there is a hierarchy of concepts which we must describe in order to begin to speak about recognizing matter. We will begin by describing our more general concept of matter.
First, there is matter, which is a broad superordinate category that stands as a contrast-class to object. When enumerating the principles that determine whether an entity falls under the matter category, we want those principles to be sufficiently flexible to handle a wide variety of types of matter. In addition, we must keep in mind that, at least in infancy, the matter concept seems to be underdeveloped in contrast to the object concept.

What distinguishes matter from objects are certain irregularities of shape present in instances of the former. The matter concept is attuned to discontinuities in shape, which objects, as a general rule, do not present, much as our perceptual system is attuned to perceiving objects and matter instances rather than molecules. However, there are always exceptions to general rules, as we will observe in a moment. The fact that matter corresponds to shape irregularities means that different criteria are used to recognize, re-identify, and track matter instances.

This is an important point which Keil, Kim, and Greif develop in their chapter in Forde and Humphreys (eds., 2002). There, Keil et al. speak of the perceptual shunt as key to the cognitive processing of low-level perceptual information. The shunt is claimed to be a mechanism that channels perceptual information to different parts of the brain for subsequent higher-level cognitive processing. The idea is that in order for this process to work, our cognitive structure must be sensitive to salient perceptual information. In Keil et al.’s (2002, p. 13) words, “data can only enter the system if it sets off primary perceptual triggers.” Our task here is to apply these ideas to the perception of instances of matter.

The experiments conducted on infants point to the conclusion that objects, in the standard sense, are assimilated according to shape, whereas matter instances are assimilated according to the material which composes them. Soja, Carey, and Spelke’s (1991) experiment involved presenting infants (2-year-olds) with a named object with a T-shape and a named non-solid matter instance of a novel shape. After the infants were habituated to the two items, they were shown two more sets of items. Having been shown the T-shaped object, infants tended to apply the stimulus name to a T-shaped object made of a different material rather than to a collection of separate objects made of the same material but having a non-T shape. However, when presented with a matter instance of a novel shape, the infants applied the stimulus name to a differently shaped entity made of the same material rather than a similarly shaped entity made of a different material. This experiment shows that there is an interesting dynamic between shape and material which is applied to differentiate matter instances from objects.

There are several questions involved in interpreting the results of this experiment. First and foremost is the question of how the subjects are receptive to the fact that material composition is salient in one trial and not the other. In other words, just what are they using to apply the stimulus word and how are they using it?

I would suggest that what the subjects perceive are discrete matter instances. However, in order to assimilate the stimulus with the target, the child must perform two different tasks at two different levels of cognitive processing. First, she must somehow mentally extract the material composition of the named stimulus. Second, she must have some notion that material composition is salient in the material trial but not in the object trial. At issue is how this is performed. The answer must be found at the top level of concept formation rather than in bottom-level perceptual experience. There are constraints which guide the mind in perceiving instances of matter. One such constraint was mentioned above: shape irregularity seems to be a good candidate for matter instance recognition. Allow me to elaborate on why I single out shape irregularity as being salient to the classification of matter.

There is, according to my interpretation of the above experiment, an important distinction to be made between perceiving shape irregularity and using it to assimilate two matter instances to a single name. The recognition of the fact that the named stimulus is of a novel shape signals to the infant that the substance of which it is composed is of primary import. (Of course there are other such signals found in surface properties, for instance texture and color distribution, which we will set aside here.) This leads the infant to overlook differences in shape upon being asked to assimilate names to different instances of matter. There are obvious objections to this interpretation. For example, just how do we define a novel or irregular shape? A man is irregularly shaped in a sense. Are we to classify a man as an instance of matter?

My first response would be to say that shape irregularity refers to asymmetry in shape. But this will not do, for we can certainly think of counterexamples. For instance, a symmetrical portion of gold is still called gold, irrespective of its symmetrical shape. It is interesting to consider the following. If a semi-solid matter instance such as clay were molded into a T-shape and presented as a named stimulus, would infants subsequently assimilate the name to targets of the same shape, as in the object trial? The current literature on infant perception (Huntley-Fenner 2001) seems to predict that the result of such an experiment would depend upon how the named stimulus was presented. If we formed the semi-solid material into a T-shape prior to presentation, then infants would assimilate the name according to shape. However, if we instead fashioned the material into a T-shape before the infant’s eyes, then assimilation would take place based upon material composition. In any event, it seems to me entirely possible, even very likely, that an irregular shape marks a matter instance, however difficult that notion is to define theoretically.

There are two more constraints placed on the cognitive processing of matter instances which I want to touch upon. The first is what I refer to as the uniformity constraint which tells us that there is something peculiar about perceiving matter instances. Uniformity says that instances of matter are in general composed of a uniform material throughout the instance. This applies especially to solid, opaque masses such as a nugget of gold. Of course, this assumption could be dead wrong. There could be a mass of some other mineral or metal concentrated in the center or scattered throughout. Nevertheless, we tend to apply matter instance uniformity based upon surface uniformity. This applies equally to translucent, non-solids such as water, even when it has been mixed with, say, salt. The tendency is to view the mixture as being uniform throughout the instance.

The last constraint delineating our perception of matter instances is that in general they do not present to us any significant surface divisions. Objects, by contrast, present us with parts at the mesoscopic level, which is the level at which we perceive: cups have handles and humans have arms. Of course, there is not a clear boundary between the handle and the remainder of the cup, or between the arm and the remainder of the human. Nevertheless, instances of matter lack this phenomenon altogether. As a result, we seem much more able to mentally parse the cup into parts than the contents that the cup contains.

What we are left with then are three constraining principles – the principle of irregular shape, uniformity, and lack of perceptible surface divisions – which interact to give us the matter concept. There are significant, outstanding questions that one could ask of these constraints. For instance, are we to look upon them as necessary or sufficient conditions for the matter concept? And further, how do they interact?

What I would like to do is to briefly articulate some of the relationships that exist among these principles. First, let me make this observation regarding the uniformity principle. Whether an instance of matter is uniform is not discoverable upon immediate visual perception. This sets uniformity apart from shape irregularity and the lack of perceptible surface divisions, which are perceived upon immediate visual inspection. What this means is that uniformity is something which the subject derives on the basis of the other two principles. This derivation is important to the perception of an instance of matter, because the perception of matter instances includes depth information: information about the physical properties of the instance at points that are hidden from visual inspection. This is what distinguishes it from the perception of animals or artifacts, whose inner physical properties are much more intricate and are available only to those with specialized knowledge, such as biologists. So, the perception of portions of matter proceeds from shape irregularities and a lack of perceptible surface divisions to material uniformity throughout the portion in question. Of course, a lack of perceptible surface divisions is more strongly connected to material uniformity than shape irregularity is. When perceiving an instance of matter, a lack of surface divisions implies uniformity of material throughout the instance.
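The interaction just described, in which uniformity is derived from the two immediately perceptible constraints, can be caricatured as a small rule system. The feature names and the inference rule here are illustrative assumptions of mine, a sketch of the proposal rather than a model of it:

```python
def perceive(irregular_shape, surface_divisions):
    """Toy rule system: classify a percept as a matter instance or an
    object from the two immediately perceptible constraints, then
    derive the uniformity assumption for matter (uniformity itself is
    not directly perceived). Both inputs are booleans standing in for
    the outcome of immediate visual inspection."""
    if irregular_shape and not surface_divisions:
        # Both perceptible cues point to matter; uniformity of
        # material throughout the instance is then assumed.
        return {"kind": "matter", "assumed_uniform": True}
    # A regular shape or perceptible parts (handles, arms) point to
    # an object; no uniformity assumption is derived.
    return {"kind": "object", "assumed_uniform": False}
```

The sketch also makes one relationship explicit: uniformity never appears as an input, only as an output, mirroring the claim that it is derived rather than perceived.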

What I would like to do now is to contemplate whether we have left something out of our analysis of perceiving portions of matter. Perhaps the three principles weighed above are aspects of a more fundamental principle that explains the perception of matter instances. I offer the following for consideration. There is a difference of degree in what I call the three-dimensional definiteness between instances of matter and standard objects. When perceiving a particular portion of matter from a particular angle, it is much more difficult for the subject to mentally construct the perceptual properties of the part that is obstructed from visual inspection than is the case with standard objects. This is due in large part to the three principles described above. Irregular shape and the lack of perceptible surface divisions make the determination of what is on the other side difficult. Yet, with, for example, a T-shape object, it is not very challenging to surmise what the object would look like if rotated. However, the uniformity principle does at least tell us what type of matter the occluded side is made of. So, in a sense, uniformity reduces the amount of indefiniteness we have concerning the three-dimensional view of the matter instance in question. It counteracts to a certain extent the uncertainty with which irregular shape and lack of perceptible surface divisions leave us. For instance, when presented with a large portion of gold, we cannot properly imagine what the occluded section looks like based on visual perception alone. However, we do assume that gold composes the unseen section whatever particular shape it may have.

Of course there are problems with these remarks as well. The lesser degree of three-dimensional definiteness seems to apply well to certain types of matter instances. But can the same be said of a portion of water, which, given its transparency, is perhaps more three-dimensionally definite than a T-shaped object? I leave this objection for the reader to consider. But it contains a point on which I would like to focus next: the notion of types of matter.

Types of Matter

There are different types of matter, many of which we lack specific names for. First of all, there is the heap or collection of objects at close spatial quarters, such as an archipelago. This type can be further divided into heaps whose parts are of uniform shape and size, such as piles of sand, and those composed of parts of all shapes and sizes, for instance a heap of garbage. Next we have semi-solid types of matter, which include peanut butter and clay. There are also fluids such as gases and, in physical-geographic parlance, parcels of air. Furthermore, we have liquids such as water, and finally solid kinds, for instance, gold.
What I want to suggest is that the reason we need a type classification of matter is that types of matter are inductively rich. That is, classification by type facilitates important inferences about how instances of matter behave. This point is worth emphasizing. Knowing that an instance of matter belongs to a particular type tells us something about its physical composition. So a semi-solid mass, or, to stick with our terminology, a semi-solid instance of matter, can be divided into smaller portions that are composed of the same semi-solid material. In addition, knowing that a matter instance belongs to a particular type tells us something about its behavior. For example, a semi-solid instance would provide a certain amount of resistance upon surface contact, and if we placed a semi-solid instance at the top of an inclined plane we would not expect it to move toward the bottom, whereas we would expect a liquid such as water to do so. There are two implications to be drawn from this observation.

I am intrigued by Pascal Boyer's argument, in his commentary on Millikan (1998), for category-specific tracking processes. Tracking instances of different matter types would, I theorize, involve different processes, owing to their different motions. Tracking a cloud of gas moving through the air is very different from tracking a slab of bronze as it is being fashioned into a statue. I want to say it is different because there seems to be a difference in the degree, but not the kind, of cohesion among the respective types of matter to which these instances belong. In fact, we can establish a continuum of cohesion among kinds of matter, ranging from the least cohesive fluids, such as smoke, to the most cohesive solid masses, such as gold, with semi-solids in the middle.

I also think that, since different kinds of matter behave differently, it can be claimed that discrete matter instances are more than just collections of parts. For instance, an area of water is more than a collection of water molecules. This is because, although significant chunks of material can be added to or taken away from an instance of matter, we still view that matter instance as the same as it was before the change. This seems to imply that instances of matter bear a significant resemblance to Aristotelian substances, at least as they are described in Book VIII of the Metaphysics.

What bothers me about this claim is that there is a problem with respect to Aristotelian substantial change. Aristotle recognized that although an entity may change, we still acknowledge the changed entity as identical to the entity that existed before the change. There is something in the world, and in the entity in question, which grounds this phenomenon. For Aristotle, this something is matter. In Book VIII, Chapter 1 of the Metaphysics, line 1042 a 32, we read the following: “But clearly matter also is substance; for in all the opposite changes that occur there is something which underlies the changes.”

The problem with applying this principle to an instance of matter is the following. Imagine a scenario in which we have a quantity of water in a glass. We take the glass and pour some of the water out, leaving a smaller quantity of water (designated Q1) in the glass. Next, some new water is poured into the same glass. Finally, we pour another amount of water out, leaving a quantity of water (designated Q2) that is exactly the same amount as (Q1). The question arises whether the water that remains in the glass at (Q2) is the same as that at (Q1). We cannot be certain that the water at (Q1) stayed at the bottom during the second out-pouring, or that none of its molecules moved into the amount that was poured out. In sum, we cannot determine whether the whole or any part of (Q2) is identical to (Q1), in which case there seems to be no causal foundation for calling the (Q2) water the same as the (Q1) water.
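The indeterminacy at issue can be made vivid with a toy simulation (this is my illustrative sketch, not part of the original argument; all names and quantities are hypothetical). Molecules are modeled as integers, and each out-pouring removes a random subset that we cannot control:

```python
import random

def pour_out(glass, amount, rng):
    """Remove `amount` randomly chosen molecules; which ones leave is not up to us."""
    removed = set(rng.sample(sorted(glass), amount))
    return glass - removed

rng = random.Random(0)

# Toy model: the glass starts with 100 "water molecules".
glass = set(range(100))

q1 = pour_out(glass, 40, rng)          # first out-pouring: Q1 has 60 molecules
glass = q1 | set(range(100, 140))      # new water poured in (40 fresh molecules)
q2 = pour_out(glass, 40, rng)          # second out-pouring: Q2 has 60 molecules

overlap = len(q1 & q2)                 # how many Q1 molecules survive into Q2
print(len(q1), len(q2), overlap)
```

The two quantities always match in amount (60 molecules each), but the molecular overlap between them varies with the random choice of what left the glass, which is precisely why sameness of quantity alone cannot ground the identity claim.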

But we would be correct in calling the (Q2) water identical to the (Q1) water, and this is not merely because they have identical quantities. Rather, their identity rests on two considerations. First, they are uniform wholes, and this is true regardless of how many molecules of (Q2) differ from those of (Q1). Second, the two quantities display identical behaviors; for example, each reacts in the same way upon coming into contact with something else, and the glass holds both in the same way. This is in addition to the other perceptible physical properties, such as color, which are used in identification. They are the same type of matter, but are the instances the same? I want to claim that they are, because they are of the same type in addition to being the same quantity. Whether or not they have exactly the same molecules we leave to the scientist to determine.

But one obvious objection to these observations is that we could be completely wrong in calling the two quantities the same. Suppose it was not water that was poured into the glass but some chemical that bears a striking resemblance to it. In that case, at (Q2) we have some sort of water-chemical mixture that differs from (Q1), which was entirely composed of water. Here we have two different types without even knowing it.

This is certainly a serious objection. But the key is to notice that it misses the point I wanted to make. The point is to discover how we re-identify matter instances across change. The objection cites the fallibility of our knowledge, and the fact that our knowledge can be wrong is a separate issue from how, in fact, we do come to identify and re-identify. It is the case that we do identify entities as being the same; we do track entities across time, and we need to do so in order to survive. How we perform these tasks is a different question from whether we could be wrong. Besides, I could be equally wrong in believing that I am a philosopher; perhaps I am the victim of a deception. To this I reply that perhaps it is true, but it is highly unlikely.

The Material Object Concept and Material Objects

Before we proceed, I call attention to the fact that thus far we have been careful to speak of matter instances. This is because it is important to notice the differences between discrete instances of matter and what is known in the literature on infant perception as standard objects. These differences are ontological. What I wish to do now is to speak of the conceptualization of matter instances. For this reason, I will use the term material object instead of matter instance. This is meant to reflect that just as we need an object concept to track objects, we also need a material object concept to track material objects. However, let me make clear that a material object refers to a discrete instance of matter. From now on I will use the terms ‘material object’ and ‘matter instance’ or some variation of the latter interchangeably to refer to particular portions of matter.
With this in mind what remains is a description of the material object concept and material objects. Specifically, we should ask just what are material object concepts, why do we need them, and how are they formed?

Basically, what is meant by the material object concept is the cognitive representation of material objects that is utilized during cognitive processing. A material object concept is a collection of special properties that distinguish material objects from standard objects. Surface texture and color seem to be two such properties to which material object concepts must be receptive. We need material object concepts in order to re-identify and track an enormous range of possible individual material objects.

A particular difficulty in devising a theory of matter is the amazing array of possible material objects that could exist in the world. A comprehensive theory must be able, not to explain them all, but to accommodate a large majority of them. Take, for example, a heap of similar objects such as tennis balls. According to what was said above, this would be a material object; after all, it resembles a heap of sand. But even if it is a material object, how can we know this? Texture and color do not seem to distinguish a heap of tennis balls from a single tennis ball.

First, we know it is a different object from a single tennis ball not because of color or texture. The salient feature seems to be the irregular pattern of edges which the heap presents to the observer. As the visual system builds the primal sketch of the entity in the sense of Marr (1982), the viewer is presented with an irregular collection of edges which outline it. Contrast this with a single tennis ball, which presents a comparatively regular set of edges. Second, we know the heap is a material object because it has a certain behavior; it reacts a certain way upon surface contact. We know, for instance, that taking a ball from the bottom of the heap will probably have consequences for those balls above it, even those balls that are not in immediate contact with it.

But why do we need material object concepts at all? There are two reasons. First, our material object concepts must be able to capture and preserve properties that distinguish among the different types of matter. Second, a material object concept must also capture specific properties of particular matter instances.


What I would like to do now is to connect the entities that we have discussed in order to form a coherent explanation of material object perception. The strategy is what I call convergence, which is one that attempts to combine top-down and bottom-up approaches to explaining the mystery of material cognition. The mystery is how people can easily recognize and re-identify material objects given the infinite variety of shapes and sizes they may have. To begin, I will attempt to clarify the explanatory strategy of convergence and attempt to situate such a strategy within the literature on cognition and perception.
Traditionally, there are two approaches to human conceptual development, referred to above by the terms ‘top-down’ and ‘bottom-up’. Top-down approaches are committed to the assumption that our concepts are formed independently of human-world interaction. One variant of a top-down approach is conceptual nativism, which holds that we are born with at least some, if not all, of the concepts we have during the course of our lifetimes. Bottom-up approaches, on the other hand, are committed to the assumption that our concepts develop out of human-world interaction; the term ‘bottom-up approach’ is an umbrella term for all varieties of empiricism. The strategy of convergence is meant to acknowledge such assumptions and also to set them aside. Setting these assumptions aside is important for two reasons. First, the debate between empiricism and nativism, despite its rich philosophical history, which is too long to report in this article, may in fact be a diversion from what we should be seeking an explanation for: namely, human-world interaction itself, for without such an explanation the debate between empiricists and nativists would be incomprehensible. Secondly, and along the same lines, we should be working toward building an ontology that is independent of any assumptions like those made by the empiricists and nativists. The strategy offered below, that of convergence, marks the very start of such an endeavor.

I will focus on our formation of material object concepts, which are used to track specific instances of matter. We may ask how material object concepts are formed. My view is that they are formed through a union of conceptual constraints and perceptual information.

First of all, the matter concept is a collection of properties to which the perception of matter instances must be attuned. We have tried to find these properties above. These properties play a major role in determining which objects are to be placed into the material object class. However important these properties are in the recognition of material objects, they do not provide us with the ability to single out specific instances of matter.

In order to have specific material object concepts that track specific matter instances we also need lower level perceptual information. What is meant here by lower level perceptual information is information about shape, size, location, etc. that is specific to this area of water, for example.

Thus the claim is that our material object concepts, which are used to track specific instances of matter, are formed where the properties of the matter concept and lower-level perceptual information converge.
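As an illustrative sketch only (mine, not the author's; every name, property, and data structure here is hypothetical), the convergence of type-level constraints with instance-level perceptual information might be modeled like this:

```python
from dataclasses import dataclass

def satisfies_matter_concept(percept):
    """Top-down check against the general constraints proposed above:
    uniformity, irregular shape, no perceptible mesoscopic divisions."""
    return (percept["uniform_composition"]
            and percept["irregular_shape"]
            and not percept["mesoscopic_parts"])

@dataclass
class MaterialObjectConcept:
    """A specific concept: a type classification plus the lower-level
    perceptual information (location, size) that singles out *this* instance."""
    matter_type: str
    location: tuple
    approx_size: float

def form_concept(percept):
    """Convergence: a material object concept is formed only where the
    matter-concept constraints and perceptual information meet."""
    if not satisfies_matter_concept(percept):
        return None  # the entity is treated as a standard object instead
    return MaterialObjectConcept(percept["type"], percept["location"], percept["size"])

percept = {"uniform_composition": True, "irregular_shape": True,
           "mesoscopic_parts": False, "type": "liquid",
           "location": (3.0, 1.5), "size": 0.2}
print(form_concept(percept))
```

The design point of the sketch is that neither half suffices alone: the constraint check classifies but cannot individuate, while the perceptual fields individuate but cannot classify.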

The Material-Object Concept – A Test Case

Now let us test some of the ideas developed above against a test case. The case is designed to be difficult, so that we can see how well those ideas withstand significant pressure.
Let us imagine we have in front of us two entities. One entity is a rock of a particular shape and color. The second entity has exactly the same shape and size as the first, but it is obvious from its distinct physical properties that it is gold. So we refer to the first object with a count noun, ‘a rock’, while we refer to the second with a mass noun, ‘gold’.

What is the true difference between an object and a material object? In this instance we have two entities with the same surface properties of shape and texture. Further, they can be said to have the same behaviors in that both react identically to surface contact and both are coherent in the same way. In addition, if one is irregularly shaped, then so is the other. Moreover, let us suppose that their shape is such that it does not indicate any distinct mesoscopic parts. In sum, the constraints meant to distinguish between matter and object seem to apply to both entities. So, the distinction seems to break down at this point.

However, I argue that there is indeed a difference between the material object, gold, and the object, rock. The difference rests in the composition of the entities and the uniformity assumption discussed above. In the case of gold, we assume that the color gives us information about what the entity is composed of throughout its extension in space. In fact, we would be truly shocked to find out that the gold on the surface was just a patina. Notice that if we did make this discovery, the piece of gold would instead become a gold-covered rock. This applies even to materials that are not precious metals but for which uniformity of composition is expected, such as coal.

Further, it is uninteresting to apply the uniformity constraint to the rock. Instead, we would be more interested to be told that the rock was not uniform throughout and contained sections of gold scattered within it; in that case it would still be called a rock, but one with scattered portions of gold within it. The point is that speculating about, and coming to know, the innards of the rock does not affect its status as a rock as much as in the case of gold. Also, we have different names for the two types of object: we do not call gold a ‘gold rock’, but we do have special names, such as ‘gold nugget’ or ‘piece of gold’, for a significant portion of gold.

Next, we may inquire as to what degree this object/material object distinction is captured by the linguistic mass/count distinction. The problem is that language does not encapsulate this distinction in its entirety. The difficulty is that ‘gold’ can be used to refer to a type of matter which is defined by the properties that gold has. However, it can also be used to refer to a particular material object composed of gold. Somehow we know that when a miner shouts, “I’ve found gold”, he is referring to some particular portion of gold with boundaries as yet to be discovered. Yet, similarly, we know that when a milk-drinker says that, “milk is a good source of calcium”, he is referring to a type of matter and not to some particular material object. It is my contention that a major part of how we can decipher these different referents is that we have an order or hierarchy of concepts which help us to understand our world. And just as there are higher-order constraints that aid our understanding, these same constraints help us to decode and make sense of our language.

A Final Objection

Before we conclude, let us consider yet one more objection. The objection is provided by Millikan in her 1998 article “A Common Structure for Concepts of Individuals, Stuffs and Real Kinds: More Mama, More Milk, and More Mouse.” Millikan’s argument challenges some of the core assumptions underlying the claims found in this paper.

First, according to Millikan, concepts are not constructed by attending to properties. Millikan’s aim is to provide a nondescriptionist account of concept formation. Concepts are not formed through the listing of specific properties. This is because properties cannot be used as the basis of individuation. Rather, the extensions of concepts, the instances that fall under the concept, are determined much more primitively through a process along the lines of what philosophers of language would call rigid designation. In other words, concepts do not describe entities; instead, they point to or enumerate entities. Indeed Millikan’s analysis of concepts proceeds along the lines of an analysis of how the nouns we use in our language refer. This relates to her view on how the use of language comes to influence our concepts. She claims, “Having substance concepts need not depend on knowing words, but language interacts with substance concepts, completely transforming the conceptual repertoire” (Millikan 1998, p. 55).

Secondly, it is important to realize what Millikan classifies as substances. Substances include “stuffs” such as milk and gold, individuals such as Bill Clinton, Mama, and the Empire State Building, and real kinds. Examples of real kinds include Rosch’s (1975) basic level categories such as mouse and house which children learn first (Millikan 1998).

There is a reason why Millikan includes such various items under the substance category. Specifically she wants to claim that there is not a genuine ontological distinction to be made between material objects, or in her terminology, stuff, such as milk, and objects, such as mouse. Here is Millikan (1998, p.56) describing the relationship between concepts and ontology:

My claim will be that these apparently quite different types of concepts have an identical root structure and that this is possible because the various kinds of “substances” I have listed have an identical ontological structure when considered at a suitably abstract level.

The concepts mouse and milk have the same structure, so Millikan claims, as concepts of individuals like Mama and Bill Clinton. The claim is that stuff concepts, such as gold, are rooted in our cognitive structure because they are conceptually and ontologically similar to individual objects. Millikan makes another point that is worth mentioning here. She claims that there is a distinction to be made between a substance concept and the properties that a substance is known to possess. She states:

It is because knowledge of the properties of substances is often used in the process of identifying them that it is easy to confuse having a concept of a substance with having knowledge of properties that would identify it (Millikan 1998, p. 63).

So, in sum, the acquisition of substance concepts involves storing information about substances and associating this information with the correct set of properties.

A Brief Response

Allow me to respond to Millikan by noting some of the consequences of her position. First of all her position seems to be much more complex than the one offered in the body of this paper. Further, this complexity is found in the way the mind perceives the world, not in the world itself.

Millikan’s view seems to contradict the empirical findings on infant perception cited above. This is a point that Paul Bloom emphasizes in his Open Peer Commentary on Millikan (1998). In the experiment conducted by Soja et al. (1991), young children applied names for objects very differently from names for stuff, that is, for material objects.

Secondly, it is not clear to me how we are to link our information about substances to the correct list of properties. To do so, it seems, we would have to posit an additional cognitive mechanism, beyond the perceptual shunt discussed above, which is necessary to pick out the salient properties of objects and material objects alike. On Millikan’s view we would need some sort of structure to connect the important properties to our information about substances. Further, this structure, it would seem, must translate our perception of properties and our information about substances into a uniform format, or perhaps language. It appears as though Millikan is committed to some form of the view that the mind is a general processor, that is, that the mind employs a general strategy and/or language across tasks.

The consequence of this view is that online processing, the kind of cognitive processing that operates on perceptual information, becomes inordinately difficult and slow, because perceptual information, our knowledge of properties, and our substance concepts must all be joined together and then processed. Again, this contradicts the fact that infants readily and easily distinguish between objects and material objects. In addition, if perceiving substances works the way Millikan describes, it is hard to see how we could make the rapid distinctions that are relevant to our survival. When I cross a street and notice a bus moving rapidly toward me, I do not link bus properties with bus substance. Rather, I know quite early in my perception of the bus that it is an object, and furthermore that it has a likely trajectory which, if I do not take immediate action, will threaten my survival. Millikan overlooks the fact that perception, to be of any use to us, must not only be accurate and consistent more often than not, but also agile and quick enough to deliver real-time information to more sophisticated cognitive systems.

A simpler explanation is available if we recognize that there are different entities in the world, ontologically speaking. Two such entities include objects and material objects. The world is complex. However, the way ordinary people conceptualize the world is much less so.


In sum, we have attempted to notice what place perception has, not only within our cognitive capacity, but within our daily interaction with the world around us. Our concepts must be amenable to perceptual information in order for us to make sense of the world in which we live.

Thus, the claim is that we must utilize both top-down and bottom-up processing in order to classify matter into types. We also need this account in order to build material object concepts, which are used to track particular material objects located within the visual field. We have proposed three general constraints: uniformity, shape irregularity, and the absence of perceptible surface divisions at the mesoscopic scale. We also considered whether these three may be aspects of another general constraint, which we referred to as three-dimensional definiteness, concerning how difficult it is to mentally construct the occluded portions of a matter instance. These constraints filter down to the material object concept level and facilitate the classification of matter into types. However, they do not by themselves yield specific types; rather, specificity comes from the perception of material objects. The representation of material objects is sensitive to the texture, color, and irregularities in shape which material objects possess.

There are two reasons for positing material object concepts. First, material objects are inductively rich and their processing is correspondingly complex: different types of material object behave differently upon surface contact, and there seems to be a continuum of cohesion that explains this. Second, this richness is not entirely captured by language, specifically by the mass-count distinction.

I would like to thank Roberto Casati, Randall Dipert, Gerald Erion, Barry Smith, and an anonymous reviewer for helpful comments. All remaining errors belong to the author. I would also like to acknowledge the National Science Foundation for supporting this research under the IGERT program at the State University of New York at Buffalo under award number DGE-9870668.



  1. Ayers, Michael. (1997). Is Physical Object a Sortal Concept? A Reply to Xu. Mind & Language, 3/4, pp. 393-405.
  2. Baillargeon, Renee, Spelke, Elizabeth S., and Wasserman, Stanley. (1985). Object Permanence in Five-Month-Old Infants. Cognition, 20, pp. 191-208.
  3. Barker, Roger G. (1968). Ecological Psychology: Concepts and Methods for Studying the Environment of Human Behavior. Stanford: Stanford University Press.
  4. Bunt, H.C. (1979). ‘Ensembles and the Formal Semantic Properties of Mass Terms.’ In F.J. Pelletier (Ed.), Mass Terms: Some Philosophical Problems. Dordrecht, Holland: D. Reidel Publishing Company, pp. 249-277.
  5. Cartwright, H. (1979). ‘Some Remarks about Mass Nouns and Plurality.’ In F.J. Pelletier (Ed.), Mass Terms: Some Philosophical Problems. Dordrecht, Holland: D. Reidel Publishing Company, pp. 249-277.
  6. Carey, Susan, and Xu, Fei. (2001). Infants’ Knowledge of Objects: Beyond Object Files and Object Tracking. Cognition, 80, pp. 179-213.
  7. Chiang, W.-C., and Wynn, K. (1997). Eight-Month-Olds Reasoning about Collections. Poster presented at the meeting of the Society for Research in Child Development, Washington, D.C., April 4.
  8. Forde, E.M.E., and Humphreys, G.W. (Eds.). (2002). Category Specificity in Brain and Mind. New York: Psychology Press.
  9. Frege, Gottlob. (1892/1966). On Sense and Reference. In Translations from the Philosophical Writings of Gottlob Frege, P. Geach and M. Black (Eds.), Blackwell: Harvard University Press.
  10. Gathercole, Virginia C. (1986). Evaluating Competing Linguistic Theories with Child Language Data: The Case of the Mass-Count Distinction. Linguistics and Philosophy, 9, pp. 151-190.
  11. Gibson, James J. (1979). The Ecological Approach to Visual Perception. Boston: Houghton Mifflin Company.
  12. Harnad, S. (1990). The Symbol Grounding Problem. Physica D, 42, pp. 335-346.
  13. Hirschfeld, Lawrence A. (1996). Race in the Making: Cognition, Culture, and the Child’s Construction of Human Kinds. Cambridge, Massachusetts: MIT Press.
  14. Huntley-Fenner, Gavin. (2001). Children’s Understanding of Number is Similar to Adults’ and Rats’: Numerical Estimation by 5-7 Year Olds. Cognition, 78 (3) B27-B40.
  15. Kayed, Ahmad, and Colomb, Robert M. (2002). Using Ontologies to Index Conceptual Structures for Tendering Automation. Australian Computer Science Communications, 24 (2), pp. 95-101.
  16. Kripke, Saul A. (1980). Naming and Necessity. Blackwell: Harvard University Press.
  17. Laycock, H. (1979). ‘Theories of Matter.’ In F.J. Pelletier (Ed.), Mass Terms: Some Philosophical Problems. Dordrecht, Holland: D. Reidel Publishing Company, pp. 89-120.
  18. Lutz, Michael, Riedemann, Catharina, and Probst, Florian. (2003). ‘A Classification Framework for Approaches to Achieving Semantic Interoperability between GI Web Services.’ In W. Kuhn, M.F Worboys, and S. Timpf (Eds.), Conference on Spatial Information Theory, LNCS 2825. Berlin, Heidelberg: Springer-Verlag, pp. 186-203.
  19. Marr, David. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco: W.H. Freeman and Company.
  20. McKeon, Richard. (1941). The Basic Works of Aristotle. New York: Random House.
  21. Millikan, Ruth Garrett. (1998). A Common Structure for Concepts of Individuals, Stuffs, and Real Kinds: More Mama, More Milk, and More Mouse. Behavioral and Brain Sciences, 21, pp. 55-100.
  22. Pelletier, Francis Jeffry. (1979). ‘Non-Singular Reference: Some Preliminaries.’ In F.J. Pelletier (Ed.), Mass Terms: Some Philosophical Problems. Dordrecht, Holland: D. Reidel Publishing Company, pp. 1-14.
  23. Putnam, Hilary. (1975). The Meaning of “Meaning.” In: Language, Mind and Knowledge, Keith Gunderson (Ed.) Vol. 7 of Minnesota Studies in the Philosophy of Science. Minnesota: University of Minnesota Press.
  24. Pylyshyn, Z.W., and Storm, R.W. (1988). Tracking of Multiple Independent Targets: Evidence for a Parallel Tracking Mechanism. Spatial Vision, 3, pp. 179-197.
  25. Quine, W.V.O. (1974). Methods of Logic. London: Routledge.
  26. Robinson, Denis. (1982). Re-Identifying Matter. The Philosophical Review, 91 (3), pp. 317-341.
  27. Soja, N.N., Carey, S., and Spelke, E.S. (1991). Ontological Categories Guide Young Children’s Inductions of Word Meaning: Object Terms and Substance Terms. Cognition, 38, pp. 179-211.
  28. Talmy, Leonard. (2000). Toward a Cognitive Semantics: Volume I: Concept Structuring Systems. Cambridge, Massachusetts: MIT Press.
  29. Ware, R. (1979). ‘Some Bits and Pieces.’ In F.J. Pelletier (Ed.), Mass Terms: Some Philosophical Problems. Dordrecht, Holland: D. Reidel Publishing Company, pp. 15-29.
  30. Wielinga, B.J., Schreiber, A.T., Wielemaker, J., and Sandberg, J.A.C. (2001). From Thesaurus to Ontology. Proceedings of the International Conference on Knowledge Capture. New York: ACM Press, pp. 194-201.
  31. Zemach, E. (1979). ‘Four Ontologies.’ In F.J. Pelletier (Ed.), Mass Terms: Some Philosophical Problems. Dordrecht, Holland: D. Reidel Publishing Company, pp. 63-80.



Jeffrey S. Galko,
State University of New York at Buffalo, Ontology and Perception, Essays in Philosophy, A Biannual Journal, Vol. 5 No. 1, January 2004

Knowledge Argument (KA)

Frank Jackson first presented the Knowledge Argument (henceforth KA) in “Epiphenomenal Qualia” (1982). The KA is an argument against physicalism, the doctrine that (very roughly put) everything is physical. The general thrust of the KA is that physicalism errs by misconstruing or denying the existence of the subjective features of experience. Physicalists have given numerous responses, and the debate continues about whether the KA ultimately succeeds in refuting any or all forms of physicalism. Jackson himself has recently (1998a) recanted: he now rejects the KA and endorses physicalism. One point should be acknowledged by all sides: in formulating the KA, Jackson clearly and forcefully articulated a deep-seated, intuitive reason why even some scientifically-minded analytic philosophers have resisted physicalism. The KA has been sufficiently influential that it is even discussed in writings aimed at general audiences that include non-philosophers; for example, it is discussed in E. O. Wilson’s Consilience (1998).

What follows is an overview of the literature on the KA. I will begin in section 1 with a sketch of the KA, followed by a discussion of the historical background in section 2. In section 3, the third and longest section, I will provide a taxonomy of objections to the KA, along with brief descriptions of them. In section 4 I will compare the KA to related arguments. I will close in section 5 by briefly raising a question about the extent to which the KA can be generalized.

1. The Basic Idea of the KA

The basic idea of the KA can be put abstractly as follows: one might know all the objective, physical facts about human conscious experiences, and yet fail to know certain facts about what human conscious experiences are like subjectively; therefore, there are facts about human conscious experiences that are left out of the physicalist’s story, and so physicalism is false.

The KA’s persuasive force derives chiefly from Jackson’s clever thought experiment involving Mary, the super-scientist. (The Mary case is one of two of Jackson’s original cases. The other involved Fred, who could see more colors than normal humans can. The Mary case is simpler and thus more often discussed.) Mary spends her life in a black-and-white room and has no color sensations. She watches science lectures on black-and-white television and learns everything about seeing in color that can in that way be learned. This includes mastering the completed science of human color vision. If physicalism were true, she would know all the facts about color experiences, because physicalism entails that all such facts can be expressed in the colorless language of science. But, one thinks intuitively, when she ventures into the colorful outside world and has color experiences for the first time, she learns something: she learns what it’s like to see in color. Therefore, Jackson concludes, physicalism is false.

In short:

1. Before Mary leaves the room, she knows all the physical facts about color experiences.

2. When Mary leaves the room, she learns new facts (i.e., facts she did not know previously) about color experiences — facts about what it’s like to see in color.

3. Therefore, there are non-physical facts about color experiences.

4. Therefore, physicalism is false.

The argument can be formulated using the term ‘information’ instead of ‘facts’; Jackson uses both locutions.

The preceding compressed summary of the KA is convenient for conveying the basic idea, and it is accurate insofar as it goes. But it hides some implicit assumptions, as will be made clear below.

Before proceeding, however, it should be noted that the Mary case involves at least some idealization and possibly oversimplification. First, the assumption that science is completable, even in a limited realm like the science of color vision, is not trivial. Second, several special implicit stipulations must be made in order to ensure that Mary’s pre-release visual experiences are not in color. For example, we must assume that she never presses on her eyes in such a way as to produce flashes of yellow. Alternatively, we could dispense with the device of the black-and-white room, and assume instead that she is congenitally colorblind and, after completing her science lessons, acquires color vision. Third, it may be naive to assume that Mary’s visual experiences would be very much like watching black-and-white television; for (anecdotal) reasons against that assumption, see Sacks 1995. However, none of these three points should be mistaken for substantive objections to the KA. There are, of course, substantive objections, as we shall see in section 3 below.

2. Historical Background

Something close to the KA can be found in writings that preceded Jackson 1982. As Jackson himself acknowledges, “Epiphenomenal Qualia” owes a great deal to Nagel 1974. Indeed, David Lewis (1983) describes Jackson’s KA as a purified version of an argument in Nagel 1974, and there is much truth in Lewis’ description. Thus, authors sometimes employ phrases like ‘the Nagel-Jackson Knowledge Argument’, and some (e.g., Pereboom 1994) argue that Nagel’s and Jackson’s arguments are at root identical. The KA did not elicit an enormous response until Jackson 1986, in which Jackson presented the KA for a second time and defended it against objections raised in Churchland 1985.

For the reader unacquainted with the history of Twentieth Century analytic philosophy of mind, it is worth noting that the dominant theories have all been physicalist — or in the case of functionalism, compatible with physicalism — and they have usually taken a reductionist form (see Searle 1992). That is the principal reason why arguments like the KA are regarded as important: the KA is an intuitively forceful attack on the entire reductionist-physicalist approach, rather than on one particular form of reductionist-physicalism, such as philosophical behaviorism.

3. Objections to the KA

3.1. Outline of Objections

Objections to the KA have been many and varied, and I will describe them below, beginning with section 3.2. In this section, I will classify the objections by explaining how their proponents would respond to a series of questions about the KA (a similar taxonomy appears in van Gulick 1993). My basis for choosing these questions in particular is as follows. The KA is driven by the intuition that Mary learns something, i.e. that she acquires knowledge, when she leaves the room. Objections may thus be divided into two groups: (i) those that reject Jackson’s intuition that Mary gains knowledge when she leaves the room, and (ii) those that accept Jackson’s intuition, but reject the consequences that he infers from it. Group (i) is represented by a negative answer to Question 1 below, and group (ii) is represented by negative answers to Questions 2, 3, or 4. (Authors are sometimes listed more than once, either because they propose explicitly different views or because their views can be understood in different ways.)

Question 1: When Mary is released, does she acquire knowledge (in any sense)?

No. We think so only because we fail to appreciate how much the pre-release Mary knows. (Dennett, Churchland, Foss, Jackson?)

Question 2: But does she acquire factual (propositional) knowledge?

No. She gains only know-how, which is not propositional. (Nemirow, Lewis, Mellor)

No. She gains only acquaintance knowledge or indexical knowledge. (Conee, Bigelow and Pargetter, McMullen, Papineau, Yi)

Question 3: But does her new knowledge consist in learning new facts (facts she did not previously know)?

No. She represents old facts in a new way. (Horgan, Churchland, Tye, Lycan, Loar, Pereboom, Bigelow and Pargetter, van Gulick, McMullen, Papineau, Teller)

Question 4: But is physicalism thus refuted?

No. All the facts about qualia are, though inaccessible to the pre-release Mary, facts about the brain, and the existence of such facts is consistent with non-reductionist forms of physicalism. (Searle, Flanagan, Alter)

In the preceding chart, authors who accept the KA’s anti-physicalist conclusion are not listed. Such philosophers include Robinson (1996), Chalmers (1996), Gertler (1999) and (until recently; see section 3.2 below) Jackson (1982, 1986). Chalmers’ discussion is arguably the most thorough and vigorous defense of the KA as a refutation of physicalism.

Let us now consider each question in turn.

3.2. Does Mary Acquire Knowledge When Released?

Most of those discussing the KA are willing to grant that Mary learns something when released. But not everyone accepts that premise. Foss (1989) argues that the pre-release Mary lacks no knowledge about color experiences, because she could know everything that the color-sighted people who reside in the colorful outside world would (or even might) say about colors. Foss’s strategy has not been popular, presumably because, as Chalmers (1996) notes, it is far from clear that knowing everything about the verbal behavior of those who have had color experiences is sufficient for knowing what it’s like to see in color.

Other reasons for doubting that Mary learns anything when she is released may be found in Churchland 1985, Stemmer 1989, and Dennett 1991. The position is somewhat clearer in Dennett 1991, since Churchland concentrates on another objection to the KA (see section 3.5 below). Dennett argues that, prior to leaving the room, Mary is already capable of identifying the kinds of color experiences she is about to have by using technical instruments like cerebroscopes — she would be able to recognize the brain patterns stimulated by her first color experiences, and hence would not be fooled by a blue banana.

One problem with Dennett’s argument is that it seems to presuppose that having certain recognitional capacities is equivalent to knowing what it’s like to see in color. That presupposition clearly requires defense and would beg the question against Jackson if not defended on independent grounds; see below, section 3.3. For other criticisms of Dennett’s discussion of the KA, see Robinson 1993 and Jacquette 1995.

Nevertheless, there is an important moral to be drawn from Dennett’s discussion: we should take seriously the possibility that our intuitive judgment — that Mary learns something when released — is based on ignorance of what Mary’s vast knowledge would involve. After all, her pre-release knowledge includes everything in completed physics and neurobiology. Our confidence in any conclusions we draw from the Mary case should be limited accordingly.

Jackson himself has recently rejected the premise that Mary learns anything when she leaves the room, partly based on similar considerations. In Jackson 1998b he argues that we should be suspicious of giving “intuitions about possibilities [like the Mary case] too big a place in determining what the world is like” (43-4). And in Jackson 1998a he states that, in his view, Mary does not gain knowledge when she leaves the room. He thinks that the real puzzle is to explain why the intuition to the contrary is so strong. He suggests the following explanation. Learning a physical fact often involves making inferences; it is often a long and complex process. By contrast, when Mary leaves the room, her gain in knowledge is almost immediate. We therefore infer, wrongly but naturally, that the knowledge gained cannot be knowledge of physical facts.

3.3. Does Mary Gain Only Abilities When Released?

If it is granted that Mary gains knowledge when released, the question arises as to what kind of knowledge she gains. It is generally agreed that if she gains knowledge, the knowledge she gains is knowledge of what it’s like to see in color, in Nagel’s (1974) sense of the phrase. (Nida-Rumelin (1998) argues that the phrase ‘knowing what it’s like’ should not be used in formulating the KA, but I will ignore this complication.) But what kind of knowledge is knowing what it’s like?

Jackson assumes that knowing what it’s like is a kind of propositional knowledge, but others disagree. In his review of Thomas Nagel’s Mortal Questions, Laurence Nemirow proposed that knowing what it’s like is a kind of know-how — it consists only in the possession of abilities, such as the ability to identify red objects as red, to imagine or remember having a red experience, and so on. Nemirow (1980, 1990) and Lewis (1983, 1988) adopt the ability analysis of knowing what it’s like and use it to block the KA. Mellor (1993) also defends a version of the view. These philosophers argue as follows. When Mary is released, she learns what it’s like to see in color, just as Jackson says. But what this means is that she acquires new abilities, not new information: she learns no information or facts that she did not previously know. Therefore, they conclude, the Mary case provides no basis for doubting physicalism’s truth, even though the pre-release Mary does not know what it’s like to see in color.

Challenges to the ability analysis of knowing what it’s like are found in Conee 1994, Alter 1998, Loar 1990, Raymont 1999, and Lycan 1995 and 1996. Conee and Alter argue that having the abilities Nemirow and Lewis mention is neither necessary nor sufficient for knowing what it’s like. For example, against sufficiency Conee notes that one can have an ability without ever exercising it, and such could be Mary’s pre-release state. Both Conee and Alter argue that in principle one could know what it’s like to see red while seeing a red tomato and never possess the ability to imagine, remember, etc., such experiences; and that therefore possessing such abilities is not strictly necessary for knowing what it’s like. Raymont 1999 offers similar arguments and defends them against objections. The general strategy here is to argue that knowing what it’s like cannot be identified with having abilities because there are conceivable cases in which one can know what it’s like without having the relevant abilities and vice versa.

Lycan offers a barrage of criticisms of the ability analysis — ten in all, some of which involve close semantic analyses of the relevant linguistic expressions. One of these semantically-based criticisms was originally given by Loar, who in turn modeled his objection on one of Geach’s (1960) objections to ethical emotivism. Loar and Lycan argue that ordinary English claims expressing knowledge of what it’s like can be embedded in conditionals in a straightforward way, and that the same cannot be said of ordinary English expressions of the possession of the relevant abilities.

One aspect of the Lewis-Nemirow strategy that has not been much discussed is the assumption that know-how consists entirely in the possession of abilities, as opposed to propositional knowledge. That assumption has been forcefully challenged by Noam Chomsky (though not in connection to the KA). See, for example, Chomsky 1994. There Chomsky discusses brain injuries that result in a temporary loss of an ability, such as the ability to ride a bicycle, which is regained after recovery. As Chomsky writes, “[w]hat remained intact was the cognitive system that constitutes knowing how to ride a bicycle; this is not simply a matter of ability, disposition, habit, or skill” (Chomsky 1994, 11). Chomsky is concerned principally with linguistic know-how, of course, but (as his bicycle example indicates) his arguments are general and therefore apply to the Lewis-Nemirow ability analysis of knowing what it’s like: if there is more to know-how than possessing abilities, then one could question whether the Lewis-Nemirow strategy succeeds in preserving the intuition that Mary gains knowledge in any sense, even if it were granted that knowing what it’s like is a kind of know-how. This criticism is pursued in Alter n.d.

3.4. Does Mary Gain Only Acquaintance Knowledge or Indexical Knowledge When Released?

A variation of the Lewis-Nemirow strategy is to argue that, upon her release, Mary gains neither propositional knowledge nor know-how, but rather acquaintance knowledge — that she comes to know color experiences in the sense that one comes to know a person or a city (Conee 1994). Herbert Feigl (1967, esp. p. 68) once proposed such an account of knowing what it’s like, and other authors have made similar proposals. Conee (1994) applies the acquaintance analysis specifically to the KA.

Conee’s view is criticized in Alter 1998, where it is argued that, although Mary may acquire acquaintance knowledge upon her release, it is implausible that all she gains is acquaintance knowledge, and that this conclusion is supported by a careful examination of Conee’s analogy between becoming acquainted with a person or city and becoming acquainted with color qualia. However, the acquaintance knowledge analysis of knowing what it’s like cannot be easily dismissed; it should be regarded as a contender view.               

Some perceive a connection between Mary’s situation and a lack of indexical knowledge, and that connection forms the basis of an objection similar to the one based on the acquaintance knowledge analysis. These objectors (McMullen 1985, Bigelow and Pargetter 1990, Papineau 1993, Yi n.d.) concede that Mary gains knowledge when she leaves the room, but they argue that her gain is comparable to, and no more puzzling than, that of the absent-minded U.S. historian who learns that today is July 4th, America’s Independence Day. This strategy is criticized by Chalmers (1996).

One difference between the indexical knowledge strategy and Conee’s acquaintance knowledge strategy is that advocates of the former tend not to deny that knowing what it’s like consists (at least in part) in propositional knowledge. Their tendency is to argue that the comparison of the Mary case to other examples of gaining indexical knowledge shows that Mary’s apparent gain in factual knowledge does not indicate that color experiences are non-physical. See section 3.7 below for similar criticisms of the alleged anti-physicalist implications of the Mary case.

3.5. Does Mary Just Come To Know Old Facts Under New Guises?

Several of the KA’s critics admit that the knowledge Mary gains when released is propositional in kind, but deny that she learns any new facts — facts that were not known to her prior to her release. According to these critics, what happens is that Mary comes to represent differently facts she already knew. On their view, the facts about color experiences are captured completely and accurately by the completed science that the pre-release Mary learns. Those same facts can be represented under phenomenal guises, but the pre-release Mary does not so represent those facts. The pre-release Mary lacks knowledge about color experiences in something like the way that Jones, who is up on his sports history but has never heard the name ‘Cassius Clay’, lacks knowledge of Clay’s boxing talents. Jones does not lack any pugilistic knowledge; he simply fails to represent the relevant facts using the ‘Clay’-guise. Likewise, the objectors argue, the pre-release Mary knows all the facts about color experiences; she simply fails to represent them under the relevant phenomenal guises.

The old-fact/new-guise analysis was first used as a criticism of the KA by Terence Horgan (Horgan 1984). Versions of it have since been developed by several authors, including Churchland (1985), Pereboom (1994), Tye (1986), Bigelow and Pargetter (1990), van Gulick (1993), Lycan (1990, 1996), Loar (1990), McMullen (1985), Papineau (1993), and Teller (1992).                      

It is argued in Alter 1998 and Chalmers 1996 that the analogies drawn to cases like the Ali/Clay case do not support the old-fact/new-guise theory. Alter and Chalmers each argue that, even if Mary gains “only” new phenomenal guises when she leaves the room, she nevertheless learns new facts involving those new guises. That criticism is not sufficient to undermine Loar’s sophisticated version of the old-fact/new-guise analysis, as Chalmers points out; but Chalmers also argues that even Loar’s version of the theory does not stand up to further scrutiny. A criticism of Tye’s version of the theory is presented in Raymont 1995; a criticism of Pereboom’s version is presented in Alter 1995a; and at the 1999 Pacific Division APA meetings, A. Anchustegui argued that the theory succumbs to Kripke’s modal argument against type-identity theory. However, the old-fact/new-guise theory of Mary’s post-release knowledge remains the most widely held view among the KA’s critics.

3.6. A Semantic Objection to the KA

Some adduce considerations from the philosophy of language against the premise of the KA that Mary learns new facts when she leaves the room. Those objectors reason as follows. There is no reason why the pre-release Mary cannot communicate with color-sighted people who reside in the colorful world outside her black-and-white room. Those color-sighted people can express in language precisely the facts about knowing what color experiences are like that Mary is supposed not to know. For example, they might say or write, “Seeing red is like this”, intending the demonstrative to refer to color qualia. Indeed, such a sentence might appear in one of Mary’s science lectures. According to the objectors, some such communication would provide Mary with access to any facts about color experiences that she does not learn from her science lectures. After all, the objectors reason, contemporary theories of reference suggest that historical chains of communication enable those who know virtually nothing at all about Cicero to refer specifically to him (see, for example, Kripke 1972); why, then, wouldn’t Mary’s communication with those who have had color experiences provide her with cognitive access to the facts in question?

Versions of this semantically-based objection to the KA are presented in Tye 1986 and Conee 1994. Alter (1998) counters, however, by arguing that the objection depends on confusing different senses of ‘having access to a fact’ and on related mistakes. But the objection raises important issues about the extent to which language can enable one to grasp propositions about that with which one is unacquainted — a topic of particular interest to Bertrand Russell and other central figures in philosophical semantics. See Russell 1910-11. For more recent discussions of the issue, see Donnellan 1979 and Kaplan 1989.

3.7. Is Physicalism Thus Refuted?

The KA is sometimes portrayed as an argument for something akin to Cartesian Dualism. Whether or not the KA could be used for such a purpose, Jackson makes clear that he never had any such intention. In Jackson 1986, he suggests that the KA may be used to support property dualism, and David Chalmers (1996) concurs (see also Furash 1989 and Robinson 1996). Unfortunately, Jackson does not explain exactly what he means by ‘property dualism’. Minimally, property dualism implies that (certain) mental properties are not identical to any neural properties. But that non-identity thesis is consistent with the weaker physicalist thesis that mental states are constituted by or realized in brain states; and the non-identity thesis is also consistent with the thesis that disembodied minds are impossible.

If Mary learns new facts when released, does it follow that physicalism is false? That depends on what physicalism entails. Jackson (1986) defines physicalism as the doctrine that all facts are physical (in Jackson 1982 he formulated the physicalist thesis with ‘(correct) information’ instead of ‘facts’, but he treats the two formulations as equivalent). And he claims that if physicalism is true, then all the facts about color experiences would be known to the pre-release Mary — a claim that may seem trivial, but is not. What may be trivial, because it is stipulated and seems coherent, is the claim that the pre-release Mary knows everything that can be conveyed to and understood by a human being by black-and-white television lectures. But why should we believe that all physical facts can be conveyed to a human being — or any creature — through a black-and-white medium?

That question has not received a tremendous amount of attention, perhaps because Jackson formulates his stipulation by saying that Mary learns all of the physical facts while in the room. Yet one may legitimately wonder whether the latter stipulation is coherent — whether anyone, even a superscientist, could learn all of the physical facts about color experiences without having any color experiences herself. All Jackson does to defend the coherence of his stipulation is to offer the following quick reductio ad absurdum argument: if it were impossible to learn all of the physical facts about color experiences without having any color experiences, then the Open University would have to be broadcast in color, which is absurd. (The Open University is a British University in which classes are conducted almost entirely over television.) But Jackson’s reductio argument is not compelling. As odd as it may sound, perhaps some physical facts about color experiences cannot be conveyed accurately and completely in black-and-white — or perhaps some such facts cannot be understood if conveyed in black-and-white. Perhaps the Open University would have to be broadcast in color, if the goal is to convey all the facts about color vision. Owen Flanagan (1992) makes this point, arguing that the pre-release Mary “does not have complete physical knowledge” (100). Similar points are made in Alter 1998, Horgan 1984 and Searle 1992. Bealer (1994) also uses similar reasoning, along with a comparison of the KA to the paradox of analysis, to conclude that the KA poses no threat to the mental-state/brain-state identity thesis.

In a short postscript to Jackson 1986, Jackson (1995) elaborates on his view that the epistemological premises of the Knowledge Argument support substantial metaphysical conclusions. More specifically, he argues that “materialism is committed to the a priori deducibility of our psychological nature from our and our environment’s physical nature” (189). He elaborates on this point in Jackson 1998b. And some authors do take the KA to refute, or at least provide a serious challenge to, physicalism of any kind. But the question of what exactly follows from admitting that Mary learns new facts when released remains unresolved.

In Jackson’s original article, facts about functional roles were counted among the physical facts, and the implication seems to be that Jackson did regard the KA as refuting functionalism (for concurring opinions, see Vidal 1995 and Robinson 1993). Jackson (1982) could reasonably be read as implying that the KA leaves us with epiphenomenalism; in that article, immediately after presenting the KA, he defends epiphenomenalism against objections. But whether the KA implies epiphenomenalism is a substantive issue; see Searle 1992. Indeed, Watkins (1989) argues that Jackson cannot consistently accept epiphenomenalism and irreducibly non-physical qualia. More generally, there is no consensus about what, if any, substantial metaphysical theses follow from granting that Mary learns new facts when she is released.

4. The KA, Nagel’s Argument, and Kripke’s Modal Arguments

I noted in section 2 that the KA bears much similarity to arguments presented in Nagel 1974. Some differences between the KA and Nagel’s arguments are worth mentioning. First, Nagel’s argument involves claims about the essence of mental and physical processes. The KA involves no such claims. In fact, in a review of a book by Brian O’Shaughnessy, Jackson (1982b) suggests that although mental states have qualia, qualia may be inessential properties of those states — a view that directly contradicts Nagel’s opinion on this matter. Second, unlike Nagel’s arguments, the KA does not involve empirical theses about what humans can and cannot imagine (Jackson emphasizes this point in both Jackson 1982 and Jackson 1986). Third, unlike Jackson, Nagel does not purport to show that physicalism is false; Nagel’s conclusion is rather that physicalism is, though possibly true, presently unintelligible. This last point may, however, be a distinction without a difference. Nagel can plausibly be read as arguing for the falsity of reductionist forms of physicalism that deny the subjectivity of the phenomenological features of mental states, even though he regards non-reductionist forms (such as dual aspect theory) as possibly true though presently unintelligible. The KA could perhaps also be seen as directed only at such reductionist versions of physicalism.

It is also important to distinguish the KA from Kripke’s (1972) famous anti-physicalist arguments. As Jackson (1982) notes, the KA is not a modal argument in the sense that Kripke’s arguments are. Unlike Kripke’s arguments, the KA could consistently be given by Quinean skeptics about de re modality or by contingent identity theorists.

However, John Searle (1992) claims that Jackson’s KA, Nagel’s bat-arguments, Kripke’s modal arguments, and certain arguments of his (Searle’s) own can be seen as variations on a single theme: Twentieth Century materialist theories, from behaviorism to the identity thesis to functionalism to eliminative materialism, all err in denying the irreducible subjectivity of mental states (see Holman 1987). Put more positively, the KA could be seen as one variation of an argument for irreducible qualia. As Searle emphasizes, that conclusion is at least prima facie consistent with the claim that qualia are features of the brain.

5. Generalizing the KA

Jackson writes that the KA can be deployed “for the various mental states which are said to have (as it is variously put) raw feels, phenomenal features or qualia” (1982, 130). Surprisingly, the question of exactly how far the KA extends has not been seriously investigated. Virtually all of the published discussions of the KA follow Jackson in focusing exclusively on perceptual experiences and sensations. (Janet Levin’s (1985) paper on the KA, “Could Love Be Like A Heatwave?”, is no exception: love is mentioned nowhere in the text of her article.) But there is a substantive issue about whether Jackson’s reasoning, to the extent that it is sound, can be extended to other aspects of consciousness, such as emotions and propositional attitudes. A brief discussion of that issue occurs in Alter 1995b, and a more detailed account appears in an as yet unpublished paper by Alter, “What a Vulcan Couldn’t Know”.

A tempting conclusion to draw from the foregoing overview is that the issues surrounding the KA have been thoroughly canvassed. But who knows? As Yogi Berra allegedly said (according to Pinker 1997), it is hard to make predictions, especially about the future.

Torin Alter, University of Alabama

The Mysterious Mind

Review of Radiant Cool, Author: Dan Lloyd, MIT Press

“Nobody expects the Spanish Inquisition” says our philosopher hero, Miranda Sharpe, to her empty-headed cat, Holly Golightly. Indeed they don’t. And nobody expects to read an exciting crime thriller that is set in the world of cognitive science, peopled by real philosophers who care about the mysteries of existence, and claims to present a new theory of consciousness.

The author, Dan Lloyd, a neurophilosopher from Trinity College in Hartford, Connecticut, has burst onto the consciousness scene with a book that is both a gripping story and an intellectual challenge. He even appears in his own plot, portraying himself (or his namesake) as a polite, if dull, middle-aged cognitive scientist who creates the best-ever website on consciousness and lives in a barn with avocado kitchen appliances.

The story begins one day at 6 a.m. when graduate student Miranda slips unnoticed into her professor’s office and finds, to her horror, that the gruesome Max Grue is lying there, slumped over his keyboard. Asleep? Sick? Dead? She grabs what she has come for, a bright red folder labelled “Consciousness”, and runs.

The folder contains many surprises, as do the creepy female therapist, the sinister Russian forensic data scientist, and Gordon, the nerdy fellow grad student. So does Miranda’s research on “The Thrill of Phenomenology”.

It has to be said that not many people find phenomenology thrilling. This daunting philosophical tradition is based on the work of German philosopher Edmund Husserl (1859-1938), who advocated exploring consciousness by suspending judgement and looking directly into immediate experience. It has recently become very trendy in consciousness studies, perhaps because it offers the hope of reconciling private conscious experiences with the study of the brain — but phenomenology is notorious for its impenetrable language and slippery concepts.

Yet Miranda loves it. One morning, finding herself alone in charge of Grue’s class on “The Mystery of Consciousness”, she is confronted by a Barbie-faced sophomore who complains “I don’t get any of this at all” — an understandable reaction from a student faced with Husserl.

Miranda rises to the challenge. She explains to the class why appearance is reality, why meaning takes time, and why the mind is a text. She even explains that “superposition” has nothing to do with quantum physics and Schrödinger’s cat, but refers to Husserl’s idea that conscious experience is always heaped up with meanings — every moment of awareness is a pile of interpretations all in “superposition”. A single state of mind is layered with harmonics of meaning — yet somehow remains one experience.

You might dismiss all this as wordy waffle — and the book along with it. But there are good reasons not to. One is that Miranda really does understand why consciousness is such a mystery. Another is that Max Grue, and by implication Grue’s creator Lloyd, claims to have built a novel scientific theory that bridges the gulf between neuroscience and phenomenology.

So, first to the mystery. At the start of the twenty-first century, there are hundreds of books on consciousness and dozens of writers who claim to have solved it. One of the most frustrating things is how few of them appreciate just how deep the mystery is. Daniel Dennett, a philosopher at Tufts University in Massachusetts and author of Consciousness Explained, defines a mystery as something “that people don’t know how to think about — yet.” He calls consciousness “just about the last surviving mystery”.

The trouble lies with subjectivity, or “what it is like” being me now. For example, I am right now having the experience of sitting in this room, with all its many sights, sounds and feels. My experience seems to be private, fleeting, ungraspable and utterly undeniable, and this is what we mean by consciousness. But how can this subjective experience relate to the objective world around me and to the physical brain inside my head? Real, physical things like rooms and brains seem to be of a completely different order from subjective conscious experiences.

We face what Australian philosopher David Chalmers, of the University of Arizona, calls the “hard problem” — the impossibility of seeing how the activity of brain cells could give rise to subjective experience. In spite of dramatic developments in neuroscience, there is still this fathomless abyss between the objective and subjective worlds. We can study the conscious experiences of “mindspace” and the neural events of “brainspace” but we never seem able to map one onto the other.

Half-baked theories of the “consciousness is a fifth dimension” or “consciousness is a spiritual force” type miss this point completely. But so do the many scientific theories that locate consciousness in one part of the brain, equate it with a particular pattern of neural firing, or reduce it down to quantum levels inside minuscule cellular structures. In every case the mystery remains because it is impossible to see why that particular process or this particular brain area is conscious while all the rest are not. In fact just about every theory we have fails utterly in this way and leaves the mystery untouched.

So it is refreshing to find that Miranda and her fellow fictional characters do appreciate the problem. When Gordon, the neural-network-building nerd, makes the banal observation, “It’s all neurons. That’s all”, Miranda is quick to challenge him. How can a neuron be conscious? How can a physical piece of brain be a private experience? “What is it about neurons that makes them the medium of superposition?” she cries. “What is it about the neurons that makes seeing and smelling different?” Poor old Gordon cannot answer, but Max and Miranda have been secretly sketching out what a viable theory of consciousness would need to do.

They explain their aim like this: it may be true that “It’s all neurons” but that statement is opaque because no one can understand how it could be true. The ultimate theory of consciousness must be a transparent theory that makes it obvious how consciousness can be the activity of neurons. In such a theory every description in “mindspace” will have an equivalent description in “brainspace”. So they set about using phenomenology to map out the nature of conscious experiences and then trying to fit this with neuroscience — eventually creating the theory on which the plot depends.

Thought experiments

So how well does this novel theory fare? Lloyd describes the act of writing a novel as “the most thorough thought experiment” implying that the story might help reveal the implications of his theory. Yet the novel itself proves insufficient even to explain what the theory is. So Lloyd adds over a hundred pages of helpful appendix to do the job. Here we are given a tutorial on some difficult concepts — recurrent neural networks, high resolution brain scanning, multi-dimensional scaling, and the practice of phenomenology, all of which are necessary for understanding his theory.

Much of the explanation deals with the experience of time — a favourite preoccupation of phenomenology. Rather than assuming that consciousness consists of a sequence of distinct, separate moments happening one after the other, Husserl stressed that every Now carries within itself the shadow of what is just past and the expectation of what comes next. So not only is conscious experience always heaped up with superimposed meanings, but each Now is a confluence of retention, presence and protention. This is Husserl’s famous tripartite structure of time, and it was this phenomenological “Now” that Lloyd wanted to be able to describe in brain terms. He would then have achieved his mapping of mind to brain and thus his “transparent theory of consciousness”.

The task, then, is to find a way of describing what the brain does that fits phenomenology. Lloyd claims that conventional theories cannot do the job but that simple recurrent neural networks can. Neural networks are computer simulations of the way real brains might work. Typically they consist of several layers of highly interconnected units, each of which represents a neuron: an input layer, an output layer and some hidden layers in between. The strengths of the connections between the units change as the network is trained on new inputs, and the state of the whole complicated network determines what its output will be. These artificial neural networks have been highly influential in cognitive science.

In 1990, Jeffrey Elman, Professor of Cognitive Science at the University of California, San Diego, proposed a new sort of “recurrent network” that has an extra “context layer”. This copies the most recent state of a hidden layer and then presents it alongside the next input. So, in effect, the network enfolds both past and present information. If brains are like this, claims Lloyd, a description of how they work might just match the phenomenological description of “now”.
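The context-layer idea is simple enough to sketch. The toy network below (hypothetical sizes and random weights, not Elman’s or Lloyd’s actual models) shows the mechanism: the hidden layer receives the current input alongside a copy of its own previous state, so every output enfolds both past and present.

```python
import math
import random

random.seed(0)

def elman_step(x, context, W_in, W_ctx, W_out):
    """One forward pass of a toy Elman network.

    The hidden layer sees both the current input x and the context
    layer, which holds a copy of the previous hidden state, so past
    and present information are enfolded in every output.
    """
    hidden = []
    for j in range(len(W_in)):
        s = sum(w * xi for w, xi in zip(W_in[j], x))
        s += sum(w * ci for w, ci in zip(W_ctx[j], context))
        hidden.append(math.tanh(s))
    output = [sum(w * h for w, h in zip(row, hidden)) for row in W_out]
    return output, hidden  # the new hidden state becomes the next context

# Toy dimensions: 2 inputs, 3 hidden units, 1 output.
W_in  = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
W_ctx = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
W_out = [[random.uniform(-1, 1) for _ in range(3)]]

context = [0.0, 0.0, 0.0]
for x in ([1.0, 0.0], [0.0, 1.0], [1.0, 0.0]):
    out, context = elman_step(x, context, W_in, W_ctx, W_out)
```

Note that the same input produces a different hidden state depending on what came before — which is exactly the property that makes the network a candidate brain-analogue for Husserl’s “Now”.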

To find out, Lloyd uses networks like these and measures their activity, but their behaviour is so complicated that it can only be described in terms of a vast multidimensional space. So he uses a mathematical technique known as multi-dimensional scaling. This reduces the mathematical space to just two or three dimensions, making it much easier to understand what is going on.

Playing with multi-dimensional spaces like these provides fun for the novel. Through her humble laptop, Miranda is drawn into the fictional Dan Lloyd’s “best-ever” consciousness website, the Labyrinth of Cognition, swooping through a virtual world of spaces of the mind. “This is what I am,” she cries, “Miranda Sharpe, an ever-shifting pattern in the dark.” She falls into multiple states of consciousness until she becomes “the brain from the brain’s own point of view.” This is where brain-map and phenomenology finally fit together. This is where “brainspace is also mindspace.”

It’s a fun story, but does the theory really apply to the brain? To find out Lloyd would need to apply the same mathematical analysis to data from human brains — and he would need lots of data. Fortunately this is available. At the fMRI Data Center at Dartmouth College in Hanover, New Hampshire, researchers have been depositing vast amounts of data from brain-scanning experiments, and this is what he used. The simple artificial net was complicated enough, but this meant another leap in complexity — Lloyd compares it to swapping a pair of binoculars for the Hubble Space Telescope. He took the masses of fMRI data, applied the same multi-dimensional scaling techniques, and hoped that Husserl’s phenomenology would predict what he found.

Flowing with time

Simplifying drastically, the argument goes like this. According to phenomenology, temporality is in everything experienced. So if it can be observed in the fMRI data it should be seen in a wide variety of tasks and across many regions of the brain. Phenomenology also describes experience as in continuous flux and temporality as monotonic — that is, it always goes in one direction. So change in the brain should also be seen always going in one direction. Put more concretely, a person’s fMRI data should show that as their brain states change, the difference between successive brain states should increase continuously over time rather than the brain returning again and again to a similar state. This is, indeed, what Lloyd found. The brains, he said, were “flowing with time”.
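A rough sense of what this prediction amounts to can be had from a toy calculation on synthetic data. Here a random walk stands in for a sequence of brain states (this is only an illustration of the monotonic-drift idea, not Lloyd’s actual fMRI analysis): if states drift rather than recur, the mean distance between states grows with their temporal separation.

```python
import math
import random

random.seed(1)

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Synthetic "brain states": a random walk in a 10-dimensional state
# space, one point per imagined scan.  A drifting walk rarely revisits
# old states, so distance grows with temporal separation.
states = [[0.0] * 10]
for _ in range(200):
    prev = states[-1]
    states.append([v + random.gauss(0, 0.1) for v in prev])

def mean_distance_at_lag(lag):
    """Average distance between states separated by `lag` time steps."""
    pairs = [(states[i], states[i + lag]) for i in range(len(states) - lag)]
    return sum(euclidean(a, b) for a, b in pairs) / len(pairs)

drift = [mean_distance_at_lag(lag) for lag in (1, 10, 50, 100)]
```

A brain that kept returning to similar states would instead give a distance curve that flattens out or oscillates — the alternative that Lloyd’s “flowing with time” result was meant to rule out.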

Success? Maybe not. David Rose, a psychologist at the University of Surrey, says it’s “a nice idea but it’s not going to work in practice”. The fMRI data come from changes in blood flow, which have a resolution of a few seconds and so are much too slow to track the phenomenological structure of time. As Rose explains, “Lloyd has not shown the ‘tripartite’ nature of phenomenality reflected in the brain — he has just shown that taking an instant in the brain’s flow of activity enables you to predict the brain’s state 2 or 3 seconds into the future or the past — a trivial point to make.”

“Scientifically it’s nonsense” says Chris Frith, of the Functional Imaging Laboratory at London’s Institute of Neurology, because Lloyd has not done the necessary controls. “He has shown that the conscious person has a brain with certain temporal properties, but he hasn’t shown that an unconscious person has a brain that doesn’t have these temporal properties.” Without such a comparison we can draw no conclusions about consciousness.

For John McCrone, author of a book on consciousness called Going Inside, the ideas are not wrong, but just vague and naive. “The way Lloyd uses multi-dimensional scaling may be new and interesting but I don’t think it confirms Husserl’s phenomenology any more than many other ideas.”

This is the crunch. Lloyd may appear to have achieved a scientific success — that is, to have made a prediction from his theory and confirmed it with real data — but this same prediction could have been made in countless other ways without recourse to phenomenology. In essence Lloyd claims a symmetry between mind and brain — that you can’t step in the same stream of consciousness twice, and you can’t have exactly the same brain twice.

But surely both claims are obvious. On the mind side, ordinary introspection is enough to reveal this simple truth. On the brain side, the notion of continuous change is inherent in everything we know about this real live, wet, and growing organ. Its synapses are always changing and its tiny dendrites are always slightly growing and shrinking. Change in brains really is one-way. So Lloyd’s findings are not surprising.

The fictitious philosophers were after a transparent theory. “It’ll be about neurons,” explained Miranda, “but once we get it, you’ll be able to interpret the neurons as conscious states.” If the real Lloyd’s theory does this, it should be obvious how the ever-changing brain gives rise to ever-changing experience. Yet it isn’t. The transparent theory is as far off as ever.

Could it be a step in the right direction? Philosophers such as Dennett and Patricia Churchland of the University of California at San Diego believe that once we really understand the brain there will be no “mystery of consciousness” left over. On this view, Lloyd’s discoveries may be a useful step toward such a theory. But others disagree. For Chalmers all this detailed study of the workings of the brain is part of the “easy problems” and does not touch on the real mystery of consciousness — the “hard problem” of why there is experience at all.

Whichever side you are on, Radiant Cool is fun, and philosophers and neuroscientists will love the jokes and allusions. There are flitting bats and brains in vats. There are “really awful” simulations that, like the real world, appear convincing. And there is a terrifying brain-machine that produces “blindsight” to order. With one zap the helpless, strapped-in victim claims to be blind yet she can still guess what is in front of her.

If the in-jokes pass you by, don’t worry. Radiant Cool is a terrific story and a great read. So do enjoy it, but don’t expect that by the end you will understand the mystery of consciousness.

The case of the mysterious mind, Review of Radiant Cool, by Dan Lloyd, New Scientist, 13 December 2003, 36-39. Susan Blackmore is a psychologist, writer and lecturer based in Bristol. Her latest book, Consciousness: An Introduction, was published in June by Hodder Arnold.

How Could I Be Wrong? How Wrong Could I Be?

One of the striking, even amusing, spectacles to be enjoyed at the many workshops and conferences on consciousness these days is the breathtaking overconfidence with which laypeople hold forth about the nature of consciousness – their own in particular, but everybody’s by extrapolation. Everybody’s an expert on consciousness, it seems, and it doesn’t take any knowledge of experimental findings to secure the home truths these people enunciate with such conviction.

One of my goals over the years has been to shatter that complacency, and secure the scientific study of consciousness on a proper footing. There is no proposition about one’s own or anybody else’s conscious experience that is immune to error, unlikely as that error might be.  I have come to suspect that refusal to accept this really quite bland denial of what would be miraculous if true lies behind most if not all the elaboration of fantastical doctrines about consciousness recently defended. This refusal fuels the arguments about the conceivability of zombies, the importance of a first-person science of consciousness, intrinsic intentionality and various other hastily erected roadblocks to progress in the science of consciousness.

You can’t have infallibility about your own consciousness. Period. But you can get close – close enough to explain why it seems so powerfully as if you do. First of all, the intentional stance (Dennett, 1971, 1987) guarantees that any entity that is voluminously and reliably predictable as an intentional system will have a set of beliefs (including the most intimate beliefs about its personal experiences) that are mainly true.  So each of us can be confident that in general what we believe about our conscious experiences will have an interpretation according to which we are, in the main, right. How wrong could I be? Not that wrong. Not about most things. There has to be a way of nudging the interpretation of your manifold of beliefs about your experience so that it comes out largely innocent of error though this might not be an interpretation you yourself would be inclined to endorse.  This is not a metaphysical gift, a proof that we live in the best of all possible worlds. It is something that automatically falls out of the methodology: when adopting the intentional stance, one casts about for a maximally charitable (truth-rendering) interpretation, and there is bound to be one if the entity in question is hale and hearty in its way.

But it does not follow from this happy fact that there is a path or method we can follow to isolate some privileged set of guaranteed-true beliefs. No matter how certain you are that p, it may turn out that p is one of those relatively rare errors of yours, an illusion, even if not a grand illusion. But we can get closer, too.  Once you have an intentional system with a capacity for communicating in a natural language, it offers itself as a candidate for the rather special role of self-describer, not infallible but incorrigible in a limited way: it may be wrong, but there may be no way to correct it. There may be no truth-preserving interpretation of all of its expressed opinions (Dennett, 1978, 1991) about its mental life, but those expressed opinions may be the best source we could have about what it is like to be it. A version of this idea was made (in-)famous by Richard Rorty back in his earlier incarnation as an analytic philosopher, and has been defended by me more recently in The Case for Rorts (Dennett, 2000). There I argue that if, for instance, Cog, the humanoid robot being developed by Rodney Brooks and his colleagues at MIT, were ever to master English, its own declarations about its subjectivity would systematically tend to trump the third-person opinions of its makers, even though they would be armed, in the limit, with perfect information about the micro-mechanical implementation of that subjectivity. This, too, falls out of the methodology of the intentional stance, which is the only way (I claim) to attribute content to the states of anything.

The price we pay for this near-infallibility is that our heterophenomenological worlds may have to be immersed in a bath of metaphor in order to come out mainly true. That is, our sincere avowals may have to be rather drastically reconstrued in order to come out literally true. For instance, when we sincerely tell our interrogators about the mental images we’re manipulating, we may not think we’re talking about convolutions of data-structures in our brain — we may well think we’re talking about immaterial ectoplasmic composites, or intrinsic qualia, or quantum perturbations in our microtubules! But if the interrogators rudely override these ideological glosses and disclaimers of ours and forcibly re-interpret our propositions as actually being about such data-structure convolution, these propositions will turn out to be, in the main, almost all true, and moreover deeply informative about the ways we solve problems, think about the world, and fuel our subjective opinions in general. (In this regard, there is nothing special about the brain and its processes; if you tell the doctor that you have a certain sort of traveling pain in your gut, your doctor may well decide that you’re actually talking about your appendix, whatever you may think you’re talking about, and act accordingly.)

Since we are such reflective and reflexive creatures, we can participate in the adjustment of the attributions of our own beliefs, and a familiar philosophical move turns out to be just such reflective self-re-adjustment, but not a useful one. Suppose you say you know just what beer tastes like to you now, and you are quite sure you remember what beer tasted like to you the first time you tasted it, and you can compare, you say, the way it tastes now to the way it tasted then. Suppose you declare the taste to be the same. You are then asked: Does anything at all follow from this subjective similarity in the way of further, objectively detectable similarities? For instance, does this taste today have the same higher-order effects on you as it used to have? Does it make you as happy or as depressed, or does it enhance or diminish your capacity to discriminate colors, or retrieve synonyms, or remember the names of your childhood friends, or …? Or have your other, surrounding dispositions and habits changed so much in the interim that it is not to be expected that the very same taste (the same quale, one may venture to say, pretending to know what one is talking about) would have any of the same effects at this later date? You may very well express ignorance about all such implications. All you know, you declare, is that this beer now tastes just like that first beer did (at least in some ineffable, intrinsic regard) whether or not it has any of the same further effects or functions. But by explicitly jettisoning all such implications from your proposition, you manage to guarantee that it has been reduced to a vacuity. You have jealously guarded your infallibility by seeing to it that you’ve adjusted the content of your claim all the way down to zero. You can’t be wrong, because there’s nothing left to be right or wrong about.

This move is always available, but it availeth nought. It makes no difference, by the way, whether you said the beer tastes the same or different; the same point goes through if you insist it tastes different now. Once your declaration is stripped of all powers of implication, it is an empty assertion, a mere demonstration that this is how you fancy talking at this moment. Another version of this self-vacating move can be seen, somewhat more starkly, in a reaction some folks opt for when they have it demonstrated to them that their color vision doesn’t extend to the far peripheries of their visual fields: They declare that on the contrary, their color vision in the sense of color experience does indeed extend to the outer limits of their phenomenal fields; they just disavow any implications about what this color experience they enjoy might enable them to do — e.g., identify by name the colors of the objects there to be experienced! They are right, of course, that it does not follow from the proposition that one is having color experiences that one can identify the colors thus experienced, or do better than chance in answering ‘same or different?’ questions, or use color differences to detect shapes (as in a color-blindness test) — to take the most obvious further effects. But if nothing follows from the claim that their peripheral field is experienced as colored, their purported disagreement with the researchers’ claim that their peripheral field lacks color altogether evaporates.

O’Regan and Noë (2001) argue that my heterophenomenology makes the mistake of convicting naive subjects of succumbing to a grand illusion.

But is it true that normal perceivers think of their visual fields this way [as in sharp detail and uniform focus from the center out to the periphery]? Do normal perceivers really make this error? We think not. . . . normal perceivers do not have ideological commitments concerning the resolution of the visual field. Rather, they take the world to be solid, dense, detailed and present and they take themselves to be embedded in and thus to have access to the world. [pXXX]

 My response to this was:

Then why do normal perceivers express such surprise when their attention is drawn to facts about the low resolution (and loss of color vision, etc) of their visual peripheries? Surprise is a wonderful dependent variable, and should be used more often in experiments; it is easy to measure and is a telling betrayal of the subject’s having expected something else. These expectations are, indeed, an overshooting of the proper expectations of a normally embedded perceiver-agent; people shouldn’t have these expectations, but they do. People are shocked, incredulous, dismayed; they often laugh and shriek when I demonstrate the effects to them for the first time. (Dennett, 2001, pXXXX)

O’Regan and Noë (see also Noë, Pessoa and Thompson (2000), Noë (2001), and Noë and O’Regan, forthcoming) are right that it need not seem to people that they have a detailed picture of the world in their heads. But typically it does. It also need not seem to them that they are not zombies, but typically it does. People like to have ideological commitments. They are inveterate amateur theorizers about what is going on in their heads, and they can be mighty wrong when they set out on these paths.

For instance, quite a few theorizers are very, very sure that they have something that they sometimes call original intentionality. They are prepared to agree that interpretive adjustments can enhance the reliability of the so-called reports of the so-called content of the so-called mental states of a robot like Cog, because those internal states have only derived intentionality, but they are of the heartfelt opinion that we human beings, in contrast, have the real stuff: we are endowed with genuine mental states that have content quite independently of any such charitable scheme of interpretation.  That’s how it seems to them, but they are wrong.

How could they be wrong? They could be wrong about this because they could be wrong about anything because they are not gods. How wrong could they be?  Until we excuse them for their excesses and re-interpret their extravagant claims in the light of good third-person science, they can be utterly, bizarrely wrong. Once they relinquish their ill-considered grip on the myth of first-person authority and recognize that their limited incorrigibility depends on the liberal application of a principle of charity by third-person observers who know more than they do about what is going on in their own heads, they can become invaluable, irreplaceable informants in the investigation of human consciousness.


  • Dennett, 1971, Intentional Systems, Journal of Philosophy, 68, pp. 87-106.
  • Dennett, 1978, How to Change your Mind, in Brainstorms, Cambridge, MA: MIT Press.
  • Dennett, 1987, The Intentional Stance, Cambridge, MA: MIT Press.
  • Dennett, 1991, Consciousness Explained, Boston: Little, Brown, and London: Allen Lane, 1992.
  • Dennett, 2000, The Case for Rorts, in Robert Brandom, ed., Rorty and his Critics, Oxford: Blackwell.
  • Dennett, 2001, Surprise, surprise, commentary on O’Regan and Noë, 2001, BBS, 24, 5, pp. xxxx.
  • O’Regan and Noë, 2001, BBS, 24, 5, pp. xxxxx.
  • Noë, A., Pessoa, L. and Thompson, E. (2000) Beyond the grand illusion: what change blindness really teaches us about vision. Visual Cognition, 7, 93-106.
  • Noë, A. (2001) Experience and the active mind. Synthese, 129, 41-60.
  • Noë, A. and O’Regan, J. K. Perception, attention and the grand illusion. Psyche, 6 (15). URL:

Special issue of Journal of Consciousness Studies on The Grand Illusion, January 13, 2002, How could I be wrong? How wrong could I be? Daniel C. Dennett, Center for Cognitive Studies, Tufts University, Medford, MA 02155

Consciousness & Illusion

What is all this? What is all this stuff around me; this stream of experiences that I seem to be having all the time?

Throughout history there have been people who say it is all illusion. I think they may be right. But if they are right what could this mean? If you just say “It’s all an illusion” this gets you nowhere – except that a whole lot of other questions appear. Why should we all be victims of an illusion, instead of seeing things the way they really are? What sort of illusion is it anyway? Why is it like that and not some other way? Is it possible to see through the illusion? And if so, what happens next?

These are difficult questions, but if the stream of consciousness is an illusion we should be trying to answer them, rather than more conventional questions about consciousness. I shall explore these questions, though I cannot claim that I will answer them. In doing so I shall rely on two methods. First, there are the methods of science, based on theorising and hypothesis testing — on doing experiments to find out how the world works. Second, there is disciplined observation — watching experience as it happens to find out how it really seems. This sounds odd. You might say that your own experience is infallible — that if you say it is like this for you then no one can prove you wrong. I only suggest you look a bit more carefully. Perhaps then it won’t seem quite the way you thought it did before. I suggest that both these methods are helpful for penetrating the illusion — if illusion it is.

We must be clear what is meant by the word ‘illusion’. An illusion is not something that does not exist, like a phantom or phlogiston. Rather, it is something that is not what it appears to be, like a visual illusion or a mirage. When I say that consciousness is an illusion I do not mean that consciousness does not exist. I mean that consciousness is not what it appears to be. If it seems to be a continuous stream of rich and detailed experiences, happening one after the other to a conscious person, this is the illusion.

What’s the problem?

For a drastic solution like ‘it’s all an illusion’ even to be worth considering, there has to be a serious problem. There is. Essentially it is the ancient mind-body problem, which recurs in different guises in different times. Victorian thinkers referred to the gulf between mind and brain as the ‘great chasm’ or the ‘fathomless abyss’. Advances in neuroscience and artificial intelligence have changed the focus of the problem to what Chalmers (1995) calls the ‘hard problem’ – that is, to explain how subjective experience arises from the objective activity of brain cells.

Many people say that the hard problem does not exist, or that it is a pseudo-problem. I think they fall into two categories – those few who have seen the depths of the problem and come up with some insight into it, and those who just skate over the abyss. The latter group might heed Nagel’s advice when he says “Certain forms of perplexity—for example, about freedom, knowledge, and the meaning of life—seem to me to embody more insight than any of the supposed solutions to those problems.” (Nagel 1986 p 4).

This perplexity can easily be found. For example, pick up any object – a cup of tea or a pen will do – and just look, smell, and feel its texture. Do you believe there is a real objective cup there, with actual tea in it, made of atoms and molecules? Aren’t you also having a private subjective experience of the cup and the taste of the tea – the ‘what it is like’ for you? What is this experience made of? It seems to be something completely different from actual tea and molecules. When the objective world out there and our subjective experiences of it seem to be such different kinds of thing, how can one be caused by, or arise from, or even depend upon, the other?

The intractability and longevity of these problems suggests to me that we are making a fundamental mistake in the way we think about consciousness – perhaps right at the very beginning. So where is the beginning? For William James – whose 1890 Principles of Psychology is deservedly a classic – the beginning is our undeniable experience of the ‘stream of consciousness’; that unbroken, ever-changing flow of ideas, perceptions, feelings, and emotions that make up our lives.

In a famous passage he says “Consciousness … does not appear to itself chopped up in bits. … it flows. A ‘river’ or a ‘stream’ are the metaphors by which it is most naturally described. In talking of it hereafter, let us call it the stream of thought, of consciousness, or of subjective life.” (James, 1890, i, 239). He referred to the stream of consciousness as “… the ultimate fact for psychology.” (James 1890, i, p 360).

James took introspection as his starting method, and the stream of consciousness as its object. “Introspective Observation is what we have to rely on first and foremost and always. The word introspection need hardly be defined — it means, of course, the looking into our own minds and reporting what we there discover. Every one agrees that we there discover states of consciousness. …  I regard this belief as the most fundamental of all the postulates of Psychology, and shall discard all curious inquiries about its certainty as too metaphysical for the scope of this book.” (1890, i, p 185).

He quotes at length from Mr. Shadworth Hodgson, who says “What I find when I look at my consciousness at all is that what I cannot divest myself of, or not have in consciousness, if I have any consciousness at all, is a sequence of different feelings. I may shut my eyes and keep perfectly still, and try not to contribute anything of my own will; but whether I think or do not think, whether I perceive external things or not, I always have a succession of different feelings. … Not to have the succession of different feelings is not to be conscious at all.” (quoted in James 1890, i, p 230)

James adds “Such a description as this can awaken no possible protest from any one.” I am going to protest. I shall challenge two aspects of the traditional stream; first that it has rich and detailed contents, and second that there is one continuous sequence of contents.

But before we go any further it is worth considering how it seems to you. I say this because sometimes people propose novel solutions to difficult problems only to find that everyone else says – ‘Oh I knew that all along’. So it is helpful to decide what you do think first. Many people say that it feels something like this. I feel as though I am somewhere inside my head looking out. I can see and hear and feel and think. The impressions come along in an endless stream; pictures, sounds, feelings, mental images and thoughts appear in my consciousness and then disappear again. This is my ‘stream of consciousness’ and I am the continuous conscious self who experiences it.

If this is how it seems to you then you probably also believe that at any given time there have to be contents of your conscious stream – some things that are ‘in’ your consciousness and others that are not. So, if you ask the question ‘what am I conscious of now?’ or ‘what was I conscious of at time t?’ then there has to be an answer. You might like to consider at this point whether you think there does have to be an answer.

For many years now I have been getting my students to ask themselves, as many times as possible every day, “Am I conscious now?”. Typically they find the task unexpectedly hard to do, and hard to remember to do. But when they do it, it has some very odd effects. First they often report that they always seem to be conscious when they ask the question but become less and less sure about whether they were conscious a moment before. With more practice they say that asking the question itself makes them more conscious, and that they can extend this consciousness from a few seconds to perhaps a minute or two. What does this say about consciousness the rest of the time?

Just this starting exercise (we go on to various elaborations of it as the course progresses) begins to change many students’ assumptions about their own experience. In particular they become less sure that there are always contents in their stream of consciousness. How does it seem to you? It is worth deciding at the outset because this is what I am going to deny. I suggest that there is no stream of consciousness. And there is no definite answer to the question ‘What am I conscious of now?’. Being conscious is just not like that.

I shall try to explain why, using examples from two senses: vision and hearing.

The Stream of Vision

When we open our eyes and look around it seems as though we are experiencing a rich and ever-changing picture of the world; what I shall call our ‘stream of vision’. Probably many of us go further and develop some sort of theory about what is going on – something like this perhaps.

“When we look around the world, unconscious processes in the brain build up a more and more detailed representation of what is out there. Each glance provides a bit more information to add to the picture. This rich mental representation is what we see at any time. As long as we are looking around there is a continuous stream of such pictures. This is our visual experience.”

There are at least two threads of theory here. The first is the idea that there is a unified stream of conscious visual impressions to be explained, what Damasio (1999) calls ‘the movie-in-the-brain’. The second is the idea that seeing means having internal mental pictures – that the world is represented in our heads. People have thought this way at least for several centuries, perhaps since Leonardo da Vinci first described the eye as a camera obscura and Kepler explained the optics of the eye (Lindberg 1976). Descartes’ famous sketches showed how images of the outside world appear in the non-material mind and James, like his Victorian contemporaries, simply assumed that seeing involves creating mental representations. Similarly, conventional cognitive psychology has treated vision as a process of constructing representations.

Perhaps these assumptions seem unremarkable, but they land us in difficulty as soon as we appreciate that much of vision is unconscious. We seem forced to distinguish between conscious and unconscious processing; between representations that are ‘in’ the stream of consciousness and those that are ‘outside’ it. Processes seem to start out unconscious and then ‘enter consciousness’ or ‘become conscious’. But if all of them are representations built by the activity of neurons, what is the difference? What makes some into conscious representations and others not?

Almost every theory of consciousness we have confronts this problem and most try to solve it. For example, global workspace (GW) theories (e.g. Baars 1988) explicitly have a functional space, the workspace, which is a serial working memory in which the conscious processing occurs. According to Baars, information in the GW is made available (or displayed, or broadcast) to an unconscious audience in the rest of the brain. The ‘difference’ is that processing in the GW is conscious and that outside of it is not.

There are many varieties of GWT. In Dennett’s (2001) ‘fame in the brain’ metaphor, as in his previous multiple drafts theory (Dennett 1991 and see below), becoming conscious means contributing to some output or result (fame is the aftermath, not something additional to it). But in many versions of GWT being conscious is equated with being available, or on display, to the rest of the system (e.g. Baars 1988, Dehaene and Naccache 2001). The question remains: the experiences in the stream of consciousness are those that are available to the rest of the system. Why does this availability turn previously unconscious physical processes into subjective experiences?

As several authors have pointed out, there seems to be a consensus emerging in favour of GWTs. I believe the consensus is wrong. GWTs are doomed because they try to explain something that does not exist – a stream of conscious experiences emerging from the unconscious processes in the brain.

The same problem pervades the whole enterprise of searching for the neural correlates of consciousness. For example Kanwisher (2001) suggests that the neural correlates of the contents of visual awareness are represented in the ventral pathway – assuming, as do many others, that visual awareness has contents and that those contents are representations. Crick asks “What is the ‘neural correlate’ of visual awareness? Where are these ‘awareness neurons’ – are they in a few places or all over the brain – and do they behave in any special way?” One might think that these are rhetorical questions but he goes on “… this knowledge may help us to locate the awareness neurons we are looking for.” (Crick 1994, 204). Clearly he, like others, is searching for the neural correlates of that stream of conscious visual experiences. He admits that “… so far we can locate no single region in which the neural activity corresponds exactly to the vivid picture of the world we see in front of our eyes.” (Crick 1994, 159). Nevertheless he obviously assumes that there is such a “vivid picture”. What if there is not? In this case he, and others, are hunting for something that can never be found.

I suggest that there is no stream of vivid pictures that appear in consciousness. There is no movie-in-the-brain. There is no stream of vision. And if we think there is we are victims of the grand illusion.

Change blindness is the most obvious evidence against the stream of vision. In 1991 Dennett reported unpublished experiments by Grimes, who used a laser eye tracker to detect people’s eye movements and then changed the picture they were looking at just as they moved their eyes. The changes were so large and obvious that under normal circumstances they could hardly be missed, but when they were made during saccades, the changes went unnoticed. It subsequently turned out that expensive eye trackers are not necessary. I suggested moving the whole picture instead, and this produced the same effects (Blackmore, Brelstaff, Nelson & Troscianko 1995). Other, even simpler, methods have since been developed, and change blindness has been observed with brief blank flashes between pictures, with image flicker, during cuts in movies or during blinks (Simons 2000).

That the findings are genuinely surprising is confirmed in experiments in which people were asked to predict whether they or others would notice the changes. A large metacognitive error was found – that is, people grossly overestimated their own and others’ ability to detect change (Levin, Momen & Drivdahl 2000). James long ago noted something similar: that we fail to notice that we overlook things. “It is true that we may sometimes be tempted to exclaim, when once a lot of hitherto unnoticed details of the object lie before us, ‘How could we ever have been ignorant of these things and yet have felt the object, or drawn the conclusion, as if it were a continuum, a plenum? There would have been gaps – but we felt no gaps’” (p 488).

Change blindness is not confined to artificial laboratory conditions. Simons and Levin (1998) produced a comparable effect in the real world with some clever choreography. In one study an experimenter approached a pedestrian on the campus of Cornell University to ask for directions. While they talked, two men rudely carried a door between them. The first experimenter grabbed the back of the door and the person who had been carrying it let go and took over the conversation. Only half of the pedestrians noticed the substitution. Again, when people are asked whether they think they would detect such a change they are convinced that they would – but they are wrong.

Change blindness could also have serious consequences in ordinary life. For example, O’Regan, Rensink and Clark (1999) showed that dangerous mistakes can be made by drivers or pilots when change blindness is induced by mudsplashes on the windscreen.

Further experiments have shown that attention is required to notice a change. For example there is the related phenomenon of ‘inattentional blindness’ (Mack & Rock 1998) in which people attending to one item of a display fail to detect the appearance of unexpected new items, even when these are clearly visible or in the centre of the visual field. However, though attention is necessary to detect change, it is not sufficient. Levin and Simons (1997) created short movies in which various objects were changed, some in arbitrary locations and others in the centre of attention. In one case the sole actor in the movie went to answer the phone. There was a cut in which the camera angle changed and a different person picked up the phone. Only a third of the observers detected the change.

What do these results mean? They certainly suggest that from one saccade to the next we do not store nearly as much information as was previously thought. If the information were stored we would surely notice the change. So the ‘stream of vision’ theory I described at the start has to be false. The richness of our visual world is an illusion (Blackmore et al 1995). Yet obviously something is retained, otherwise there could be no sense of continuity and we would not even notice if the entire scene changed. Theorists vary in how much, and what sort of, information they claim is retained.

Perhaps the simplest interpretation is given by Simons and Levin (1997). During each visual fixation we experience a rich and detailed visual world. This picture is only detailed in the centre, but it is nevertheless a rich visual experience. From that we extract the meaning or gist of the scene. Then when we move our eyes the detailed picture is thrown away and a new one substituted, but if the gist remains the same our perceptual system assumes the details are the same and so we do not notice changes. This, they argue, makes sense in the rapidly changing and complex world we live in. We get a phenomenal experience of continuity without too much confusion.

Slightly more radical is Rensink’s (2000) view. He suggests that observers never form a complete representation of the world around them – not even during fixations. Rather, perception involves ‘virtual representation’: representations of objects are formed one at a time as needed, and they do not accumulate. The impression of more is given because a new object can always be made ‘just in time’. In this way an illusion of richness and continuity is created.

Finally, O’Regan (1992) goes even further in demolishing the ordinary view of seeing. He suggests that there is no need for internal representations at all because the world can be used as an external memory, or as its own best model – we can always look again. This interpretation fits with moves towards embodied cognition (e.g. Varela, Thompson and Rosch, 1991) and towards animate vision in artificial intelligence (Clark 1997), in which mind, body and world work together, and sensing is intertwined with acting. It is also related to the sensorimotor theory of perception proposed by O’Regan and Noë (in press). On this view seeing is a way of acting; of exploring the environment. Conscious visual experiences are generated not by building representations but by mastering sensorimotor contingencies. What remains between saccades is not a picture of the world, but the information needed for further exploration. A study by Karn and Hayhoe (2000) confirms that spatial information required to control eye movements is retained across saccades. This kind of theory is dramatically different from existing theories of perception. It entails no representation of the world at all.

It is not yet clear which of these interpretations, if any, is correct but there is no doubt about the basic phenomenon and its main implication. Theories that try to explain the contents of the stream of vision are misguided. There is no stable, rich visual representation in our minds that could be the contents of the stream of consciousness.

Yet it seems there is, doesn’t it? Well, does it? We return here to the problem of the supposed infallibility of our own private experiences. Each of us can glibly say ‘Well, I know what my experience is like, and it is a stream of visual pictures of the world, and nothing you say can take away my experience’. What then do we make of the experiments that suggest that anyone who says this is simply wrong?

I suggest that we all need to look again – and look very hard, with persistence and practice. Experimental scientists tend to eschew personal practice of this kind. Yet I suggest we should encourage it for two reasons. First, we cannot avoid bringing implicit theories to bear on how we view our own experiences and what we say about them. So perhaps we should do this explicitly. As we study theories of consciousness, we can try out the proposals against the way it seems to us. As we do so our own experience changes – I would say deepens. As an example, take theories about change blindness. Many people find the evidence surprising because they are sure that they have rich visual pictures in their mind whenever they are looking at something. If you ask “What am I conscious of now?” again and again, this certainty begins to fall apart, and the change blindness evidence seems less surprising. This must surely help us to become better critics. At the very least it will help us to avoid dismissing theories of consciousness because of false assumptions we make about our own experiences.

The second reason is that this kind of practice can give rise to completely new hypotheses about consciousness. And this in turn can lead to testable predictions and new experiments. If these are derived from a deeper understanding of one’s own awareness then they are more likely to be productive than those based on the mistake of believing in the stream of consciousness.

Note that what I am proposing here is first person practice – first person discipline – first person methods of inquiry. But the results of all this practice will be words and actions; saying things to oneself and others. This endeavour only becomes science when it is put to use in this way and it is then, of course, third person science.

How does one do it? There have been many methods developed for taking ‘the view from within’ (Varela and Shear 1999) but I am suggesting something quite simple here. Having learned about the results of the change blindness research we should look hard and persistently at our own visual experiences. Right now is there a rich picture here in my experience? If there seems to be, something must be wrong, so what is wrong? Look again, and again. After many years of doing this kind of practice, every day, it no longer seems to me that there is a stream of vision, as I described at the start. The research has changed not only my intellectual understanding of vision but the very experience of seeing itself.

The stream of sounds

Listening to what is going on, it might seem as though there is a stream of sounds to match the stream of pictures. Suppose we are listening to a conversation, then turn our attention to the music in the background, and then to the conversation again. We may say that at first the conversation was in the conscious stream while the music remained unconscious, then they reversed, and so on. If asked ‘what sounds were in your stream of consciousness at a particular time?’ you might be sure that there definitely was an answer, even if you can’t exactly remember what it was. This follows from the idea that there is a stream of consciousness, and sounds must either be in it or not.

Some simple everyday experiences cast doubt on this natural view. To take a much used favourite, imagine you are reading and just as you turn the page you become aware that the clock is striking. You hadn’t noticed it before but now you feel as though you were aware of it all along. You can even remember that it has struck four times already and you can now go on counting. What has happened here? Were the first three ‘dongs’ really outside the stream (unconscious) and have now been pulled out of memory and put in the stream? If so what was happening when the first one struck, while you were still reading? Was the sound out of the stream at the time, but after you turned the page it just felt as though it had been in there all along – with the contents of the previous page – even though it wasn’t really? Or have you gone back in time and changed the contents of the stream retrospectively? Or what? You might think up some other elaborations to make sense of it but I don’t think any will be very simple or convincing (in the same spirit Dennett (1991) contrasts Orwellian with Stalinesque revisions). The trouble all comes about because of the idea that there is a stream of consciousness and things are either in or out of it.

There are many other examples one could use to show the same thing. For example, in a noisy room full of people talking you may suddenly switch your attention because someone has said “Guess who I saw with Anya the other day – it was Bernard”. You prick up your ears – surely not – you think. At this point you seem to have been aware of the whole sentence as it was spoken. But were you really? The fact is that you would never have noticed it at all if she had concluded the sentence with a name that meant nothing to you.

Even simpler than this is the problem with all speech. You need to accumulate a lot of serial information before the meaning of a sentence becomes unambiguous. What was in the stream of consciousness while all this was happening? Was it just meaningless words? Gobbledegook? Did it switch from gobbledegook to words half way through? It doesn’t feel like that. It feels as though you listened and heard a meaningful sentence as it went along, but this is impossible.

Or take just one word, or listen to a blackbird trill its song. Only once the trill is complete, the word finished, can you know what it was that you heard. What was in the stream of consciousness before this point? Would it help to go even smaller? to try to break the stream down into its constituent bits? Perhaps there is a stream of raw feels, or indivisible bits of conscious stuff out of which the larger chunks are made. The introspectionists assumed this must be the case and tried – in vain – to find the units. James did a thorough job of disposing of such ideas in 1890, concluding “No one ever had a simple sensation by itself” (James 1890, i, 224) and there have been many objections since. There is no easy way to answer these questions about what really was in the stream of consciousness at a given time. Perhaps the idea of a stream of consciousness is itself the problem.

Of course we should have known all this. Dennett (1991) pointed out much the same using the colour phi phenomenon and the cutaneous rabbit. To produce colour phi a red light is flashed in one place and then a green light flashed a short distance away. Even on the first trial, observers do not see two distinct lights flashing, but one moving light that changes from red to green somewhere in the middle. But how could they have known what colour the light was going to turn into? If we think in terms of the stream of consciousness we are forced to wonder what was in the stream when the light seemed to be in the middle – before the second light came on.

There’s something backwards about all this. As though consciousness is somehow trailing along behind or making things up after the fact. Libet’s well-known experiments showed that about half a second of continuous cortical activity is required for consciousness, so consciousness cannot be instant. But we should not conclude that there is a stream of consciousness that runs along half a second behind the real world; this still wouldn’t solve the chiming clock problem. Instead I suggest that the problem lies with the whole idea of the stream.

Dennett (1991) formulated this in terms of the Cartesian Theatre – that non-existent place where consciousness happens – where everything comes together and I watch the private show (my stream of experiences) in my own theatre of the mind. He referred to those who believe in the existence of the Cartesian Theatre as Cartesian materialists. Most contemporary consciousness researchers deny being Cartesian materialists. Typically they say that they do not believe that ‘everything comes together’ at a point in the brain, or even a particular area in the brain. For example, in most GWTs the activity of the GW is widely distributed in the brain. In Edelman and Tononi’s (2000) theory the activity of groups of neurons in a widely distributed dynamic core underlies conscious experience.

However, many of these same theorists use phrases that imply a show in the non-existent theatre: such phrases as ‘the information in consciousness’, ‘items enter consciousness’, ‘representations become conscious’, or ‘the contents of consciousness’. But consciousness is not a container – whether distributed or not. And if there is no answer to the question ‘what is in my consciousness now?’, such phrases imply that people are assuming something that does not exist. Of course it is difficult to write clearly about consciousness, and people may write this way when they do not really mean to imply a show in a Cartesian Theatre. Nevertheless, we should beware these phrases. If there is an answer to the question ‘what is in my consciousness now?’ then it makes sense to speak of things ‘entering consciousness’ and so on. If there is no answer it does not.

How can there not be an answer? How can there not be a stream of consciousness or a show in the theatre of the mind? Baars claims that “all of our unified models of mental functioning today are theater metaphors; it is essentially all we have.” (1997, 7). But it is not. It is possible to think about consciousness in other ways – I would say not just possible but necessary.

Dennett’s own suggestion is the theory of multiple drafts. Put simply it is this. At any time there are multiple constructions of various sorts going on in the brain – multiple parallel descriptions of what’s going on. None of these is ‘in’ consciousness while others are ‘out’ of it. Rather, whenever a probe is put in – for example a question asked or a behaviour precipitated – a narrative is created. The rest of the time there are lots of contenders in various stages of revision in different parts of the brain, and no final version. As he puts it, “there are no fixed facts about the stream of consciousness independent of particular probes”. “Just what we are conscious of within any particular time duration is not defined independently of the probes we use to precipitate a narrative about that period. Since these narratives are under continual revision, there is no single narrative that counts as the canonical version, … the events that happened in the stream of consciousness of the subject.” (Dennett 1991 p 136)

I would put it slightly differently. I want to replace our familiar idea of a stream of consciousness with that of illusory backwards streams. At any time in the brain a whole lot of different things are going on. None of these is either ‘in’ or ‘out’ of consciousness, so we don’t need to explain the ‘difference’ between conscious and unconscious processing. Every so often something happens to create what seems to have been a stream. For example, we ask “Am I conscious now?”. At this point a retrospective story is concocted about what was in the stream of consciousness a moment before, together with a self who was apparently experiencing it. Of course there was neither a conscious self nor a stream, but it now seems as though there was. This process goes on all the time with new stories being concocted whenever required. At any time that we bother to look, or ask ourselves about it, it seems as though there is a stream of consciousness going on. When we don’t bother to ask, or to look, it doesn’t, but then we don’t notice so it doesn’t matter. This way the grand illusion is concocted.

There are some odd implications of this view. First, as far as neuroscience is concerned we should not expect always to find one global workspace, or other unified correlate of the contents of consciousness. With particular sorts of probes there may, for a time, be such a global unification but at other times there may be several integrated patterns going on simultaneously, any of which might end up being retrospectively counted as contents of a stream of consciousness. Second, the backwards streams may overlap with impunity. Information from one ongoing process may end up in one stream, while information from another parallel process ends up in a different stream precipitated a bit later but referring to things that were going on simultaneously. There is no requirement for there really to be only one conscious stream at a time – even though it ends up seeming that way.

This is particularly helpful for thinking about the stream of sounds because sounds only make sense when information is integrated over appreciable lengths of time. As an example, imagine you are sitting in the garden and can hear a passing car, a bird singing, and some children shouting in the distance, and that you switch attention rapidly between them. If there were one stream of consciousness then each time attention switched you would have to wait while enough information came into the stream to identify the sound – to hear it as a passing car. In fact attention can switch much faster than this. A new backwards stream can be created very quickly and the information it uses may overlap with that used in another stream a moment later, and another, and so on. So at time t was the bird song really in your stream of consciousness or was it the children’s shouting? There is no answer.

Is it really this way? Do you want to protest that it doesn’t seem this way? As with vision it is possible to look harder into one’s own experience of sound, and the results can be quite strange. Thinking about the chiming clock, and listening as sounds come and go, the once-obvious linear stream begins to disappear.

Looking harder

I have suggested that we need to look hard into our own experience, but what does this mean? How can we look? If the models sketched above are correct then looking means putting in a probe and this precipitates a backwards stream. So we cannot catch ourselves not seeming to be having a stream of consciousness. As William James so aptly put it “The attempt at introspective analysis in these cases is in fact like seizing a spinning top to catch its motion, or trying to turn up the gas quickly enough to see how the darkness looks.” (James, 1890, i, 244).

The modern equivalent is the metaphor of the fridge door. Is the light always on inside the fridge? You may keep opening the door, as quickly as you can, but you can never catch it out – every time you open it, the light is on.

Things, however, are not quite that bad for the stream of consciousness. We do, after all, have those obvious examples such as the chiming clock and the meaningless half a word to go on. And we can build on this. But it takes practice.

What kind of practice? A good start is calming the mind. There are many meditation traditions whose aim is to see the mind for what it really is, and all of these begin with calming the mind. You might say that at first it is more like a raging torrent or even a stormy ocean than a stream. To see whether there even is a stream we need to slow everything down. This is not easy. Indeed it can take many years of diligent practice, though some people seem to be able to do it much more easily than others. Nevertheless, with a calm mind it is easier to concentrate, and to concentrate for longer.

Now we can ask “What am I hearing now?”. At first there seems always to be an answer. “I am hearing the traffic” or “I am hearing myself ask the question in my head”. But with practice the answer becomes less obvious. It is possible to pick up the threads of various sounds (the clock ticking, the traffic, one’s own breathing, the people shouting across the road) and notice in each case that you seem to have been hearing it for some time. When you get good at this it seems obvious that you can give more than one answer to the question “What was I hearing at time t?”. When you can do this there no longer seems to be a single stream of sounds.

My purpose here is not to say that this new way of hearing is right, or even better than the previous way. After all, I might be inventing some idiosyncratic delusion of my own. My intention is to show that there are other ways of experiencing the world, and finding them can help us throw off the false assumptions that are holding back our study of consciousness. If we can find a personal way out of always believing we are experiencing a stream of consciousness, then we are less likely to keep getting stuck in the Cartesian Theatre.

I asked at the outset ‘What is all this? What is all this stuff – all this experience that I seem to be having, all the time?’. I have now arrived at the answer that all this stuff is a grand illusion. This has not solved the problems of consciousness, but at least it tells us that there is no point trying to explain the difference between things that are in consciousness and those that are not because there is no such difference. And it is a waste of time trying to explain the contents of the stream of consciousness because the stream of consciousness does not exist. 


  1. Baars, B.J. (1988) A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press.
  2. Baars, B.J. (1997) In the Theatre of Consciousness: The Workspace of the Mind. New York: Oxford University Press.
  3. Blackmore, S.J., Brelstaff, G., Nelson, K. and Troscianko, T. (1995) Is the richness of our visual world an illusion? Transsaccadic memory for complex scenes. Perception, 24, 1075-1081.
  4. Chalmers, D.J. (1995) Facing up to the problem of consciousness. Journal of Consciousness Studies, 2, 200-219.
  5. Clark, A. (1997) Being There: Putting brain, body, and world together again. Cambridge, MA: MIT Press.
  6. Crick, F. (1994) The Astonishing Hypothesis. New York: Scribner’s.
  7. Damasio, A. (1999) The Feeling of What Happens: Body, emotion and the making of consciousness. London: Heinemann.
  8. Dehaene, S. and Naccache, L. (2001) Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework. Cognition, 79, 1-37.
  9. Dennett, D.C. (1991) Consciousness Explained. London: Little, Brown & Co.
  10. Edelman, G.M. and Tononi, G. (2000) Consciousness: How matter becomes imagination. London: Penguin.
  11. James, W. (1890) The Principles of Psychology. London: MacMillan.
  12. Kanwisher, N. (2001) Neural events and perceptual awareness. Cognition, 79, 89-113.
  13. Karn, K. and Hayhoe, M. (2000) Memory representations guide targeting eye movements in a natural task. Visual Cognition, 7, 673-703.
  14. Levin, D.T., Momen, N. and Drivdahl, S.B. (2000) Change blindness blindness: The metacognitive error of overestimating change-detection ability. Visual Cognition, 7, 397-412.
  15. Levin, D.T. and Simons, D.J. (1997) Failure to detect changes to attended objects in motion pictures. Psychonomic Bulletin and Review, 4, 501-506.
  16. Levine, J. (1983) Materialism and qualia: The explanatory gap. Pacific Philosophical Quarterly, 64, 354-361.
  17. Lindberg, D.C. (1976) Theories of Vision from Al-Kindi to Kepler. Chicago: University of Chicago Press.
  18. Mack, A. and Rock, I. (1998) Inattentional Blindness. Cambridge, MA: MIT Press.
  19. Nagel, T. (1974) What is it like to be a bat? Philosophical Review, 83, 435-450.
  20. Nagel, T. (1986) The View from Nowhere. New York: Oxford University Press.
  21. O’Regan, J.K. (1992) Solving the “real” mysteries of visual perception: The world as an outside memory. Canadian Journal of Psychology, 46, 461-488.
  22. O’Regan, J.K. and Noë, A. (in press) A sensorimotor theory of vision. Behavioral and Brain Sciences.
  23. O’Regan, J.K., Rensink, R.A. and Clark, J.J. (1999) Change-blindness as a result of “mudsplashes”. Nature, 398, 34.
  24. Rensink, R.A. (2000) The dynamic representation of scenes. Visual Cognition, 7, 17-42.
  25. Simons, D.J. (2000) Current approaches to change blindness. Visual Cognition, 7, 1-15.
  26. Simons, D.J. and Levin, D.T. (1998) Failure to detect changes to people during real-world interaction. Psychonomic Bulletin and Review, 5, 644-649.
  27. Varela, F.J. and Shear, J. (1999) The View from Within: First-person approaches to the study of consciousness. Thorverton, Devon: Imprint Academic.
  28. Varela, F.J., Thompson, E. and Rosch, E. (1991) The Embodied Mind. London: MIT Press.

There is no stream of consciousness – This paper is published in the Journal of Consciousness Studies, Volume 9, Number 5-6, which is devoted to the Grand Illusion. It is based on a conference presentation by Dr Susan Blackmore at ‘Towards a Science of Consciousness 2001’, Skövde, Sweden, 7-11 August 2001.

Twelve Varieties of Subjectivity


Subjectivity is a theme common to many of those philosophers eager to deflate the ambitions of cognitive science. The claim is that persons differ from all other things in that they cannot be exhaustively described in the third person. Any attempt to do so will fail to capture something about every human being that is essentially subjective. This expression covers many things, and the word sounds all the more impressive for the fact that the things it purportedly designates are lumped into a very mixed bag. When lumped together as if they constituted one hugely complex problem, they tend to induce a sense of hopelessness. That is exactly what some of the champions of subjectivity count on to preserve its mystery and irreducibility.

Among the champions in question, some of the most famous are Taylor (1989), Nagel (1986), and Searle (1992, 1997).

Here is a sampling of some of their claims. First, Nagel:

“[T]he purely objective conception will leave something out [viz., the subjective content of “I am Thomas Nagel”] which is both true and remarkable” (Nagel 1986, 64).

Next, Charles Taylor:

There are certain things which are generally held true of objects of scientific study which don’t hold of the self:….
1. The object of study is to be taken “absolutely”, that is, not in its meaning for us or any other subject,… (“objectively”).
2. The object is what it is independent of any descriptions or interpretations offered by any subjects
3. The object can in principle be captured in explicit description;
4. The object can in principle be described without reference to its surroundings. (Taylor 1989, 33-34).

Next, Searle:

Conscious mental states and processes have a special feature not possessed by other natural phenomena, namely subjectivity….[M]uch of the bankruptcy of most work in the philosophy of mind and a great deal of the sterility of academic psychoanalysis over the past fifty years … have come from a persistent failure to recognize and come to terms with the fact that the ontology of the mental is an irreducibly first-person ontology. (Searle 1992, 93, 95)

Consider, for example, the statement ‘I now have a pain in my lower back.’ That statement is completely objective in the sense that it is made true by the existence of an objective fact…. However, the phenomenon itself, the actual pain, itself, has a subjective mode of existence. (Searle 1992, 94)

To these claims about the irreducibility of subjectivity, two forms of resistance are possible. One is to claim that the multifarious problems posed by consciousness and subjectivity actually all reduce to one. That one may or may not be currently soluble, but at least one has one problem and not many. That strategy is adverted to (though not adopted) in a recent book by Lycan, who defends “a weak version of Brentano’s doctrine that the mental and the intentional are one and the same…. It would follow that once representation itself is (eventually) understood, … I do not think there will be any ‘problem of consciousness’ left” (Lycan 1996, 11)[1]. A similar strategy may be implied in the recent books by Tye (1995) and Dretske (1995) defending a representationalist theory of consciousness.

The line I propose to pursue here is the opposite. It starts from the consideration that big mysteries are sometimes made of a lot of little tricks, and so might yield to a divide-and-conquer strategy. I suspect this is true of the mysteries of consciousness: if the “problem of consciousness” is not one, but many, and if each one can be successfully dismissed or solved along naturalistic lines, then by this different route we shall reach the same goal, of bringing it about that there not be any ‘problem of consciousness’ left.

I do not aim to demonstrate this large claim here. I concentrate only on the term “subjectivity”, and propose merely to make a start on the first phase, consisting in drawing up a list of ostensibly different problems of subjectivity. If some should turn out to be reducible to others, so much the better. But if not, then each variety of subjectivity might be tackled singly, and this might indeed contribute to a “natural history” of the human mind, in such a way as to bring it all under the aegis of science.

In his perceptive comments on the version of the present paper presented to the ISCC conference, Jean-Michel Roy urged that by concentrating on the diversity of claims using the word ‘subjective’ made by philosophers, I risked missing the point which only a proper conceptual or phenomenological analysis could reveal. But the concept lives in what people use it to mean. No conceptual analysis, therefore, can avoid taking into account what those who use the concept have used it to do. Phenomenological analysis suffers from similar problems[2]. Still, one might ask what the root word ‘subject’ itself suggests as to what the core of subjectivity might mean. Two sufficient conditions then suggest themselves: either that we are talking about items to which only the subject has epistemic access, or that we are talking about items that are ontologically distinct in somehow pertaining only to the subject, as claimed by Searle in the passage just cited. The former can happily be conceded by any materialist. The latter begs the essential question of whether it makes sense to speak of an ontological category which is essentially defined in terms of conditions on epistemic access to it. But these features do not exhaust the claims made for subjectivity and its consequences for our understanding of the mental. That is my justification for undertaking the sort of botanizing I propose in what follows.

To give the flavour of the strategy to which this botanizing is supposed to contribute, here is an example of how confusion between various senses of subjectivity can be misleading.

Berkeley argued against the distinction between primary and secondary qualities, on the ground that all qualities are equally “ideas existing only in the mind” (Berkeley 1957, 27, 30). The subjectivity of secondary qualities, in the sense of their relativity to the observer’s mind, can be shown to attach equally to primary qualities. If we resist the idealist conclusion, we can re-interpret this remark as implying that the perception of all qualities depends on the interaction between the external world and the state of the subject’s sense-organs. Berkeley’s argument assumes that if the appearances of things are relative to the sensory and conceptual apparatus of the perceiver, then their attribution to the outside world is mere projection, with no objective correlates beyond themselves. The argument conflates phenomenology (the quality of experience), relativity to an observer, and projection (the attribution to the outside world of a property which is actually entirely resident in or manufactured by the observer). This conflation is plausible in the extreme case in which some quality attributed by an observer to a target depends totally on the perceiver and not at all on the target: there is then nothing to the property in question except the observer’s experience of it, and relativity collapses into projection. But no lesser degree of relativity can effect this collapse. At most, Berkeley’s arguments show that the conflation of these different senses of subjectivity leads to idealism. This is not what the modern champions of subjectivity intend, but it may turn out to be the logical consequence of their strategy nevertheless. To some of us, this is reason enough for avoiding the conflation.

Phenomenology, relativity and projection are only three of the possibly distinct senses of subjectivity that have been adduced against materialism. In what follows I distinguish twelve basic varieties — senses, readings, interpretations, or aspects — of subjectivity or ‘the subjective’. Some, as we shall see, might easily be further divided. Moreover, I am not confident that they are exhaustive. But I remain unconvinced that any form of “irreducible subjectivity” presents an obstacle to physicalism, and I offer the hope that tackling each variety singly may make it easier to pre-empt their use as a medieval mace to whack wicked reductionists over the head with. Here, then, is my list.

1. Perspective.

An individual is somewhere in space-time, and not somewhere else. Except for God, of course, who was invented to instantiate all contradictions in blessed harmony. He’s everywhere and everywhen, though at the same time, as it were, not in time or space[3]. But the upshot of this is that every individual has a point of view, a perspective, and apprehends the world, so far as it can apprehend the world, from somewhere and not nowhere[4] (Nagel 1986). If taken in isolation, the feature of being somewhere in particular affects all kinds of individuals, not just humans. But only those individuals that can view something can presumably have a point of view. Thus Searle again:

Subjectivity has the further consequence that all of my conscious forms of intentionality that give me information about the world independent of myself are always from a special point of view. The world itself has no point of view, but my access to the world through my conscious states is always perspectival. (ibid. 95).[5]

In itself, however, that could be true of any other living thing. Nor is being alive a requirement: an artificial eye has a point of view. More generally, as shown in the excellent discussion of this subject in (Proust 1997), aspectuality can be seen as a consequence of mere differences of informational channels, and doesn’t therefore require any level of consciousness.

Perspective might itself be of two kinds. This can be seen by asking: does a still camera have a genuine point of view? One reason to deny this is that for a still camera there is nothing that corresponds to the difference between locality in time and locality in space. For a living individual, these pose slightly different problems, for there are different ways in which we might care about the effects of our actions in distant space and at different times. Time is asymmetrical in this sense (among others): we care more, or quite differently, about what happens in the future than about what happened in the past. But although the things we care about may, of course, be unevenly distributed, space has no uniformly privileged direction. So temporal perspectivity seems to constitute a more serious species of subjectivity than the spatial kind.

Now perspectivity is sometimes equated with subjectivity in general, as suggested in the last quotation from Searle above. Yet subjectivity is also associated with the self, and the temporal form of perspectivity actually causes problems for the view that my self is my subjectivity. This is because changes in perspective, especially in temporal perspective, change the relative value of different prospects. For example, as (Ainslie 1992) has pointed out, we seem to discount the future at a hyperbolic rate, so that the closer prospect can surpass the more distant in apparent value, rather as a low building can loom higher than a tall one when one is up close to the former. Where such changes occur, which perspective is the right one, that is, truly mine? Are there as many individual selves as there are perspectives? In a recent article, Galen Strawson answers in the affirmative: each of us is many brief, material, successive selves, he says, strung like pearls on a string (Strawson 1997). Before him, Derek Parfit (1971, 1984) is famous for advocating a similar view.
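Ainslie’s point can be made numerically. The following is only an illustrative sketch, with invented amounts, delays, and discount rate: it assumes the standard hyperbolic form v = A / (1 + kD), where A is a reward’s amount and D its delay, and shows how a smaller-sooner reward can overtake a larger-later one as it draws near.

```python
# Illustrative sketch of Ainslie-style hyperbolic discounting.
# All numbers (amounts, delays, k) are hypothetical, chosen only
# to exhibit the preference reversal described in the text.

def present_value(amount: float, delay: float, k: float = 1.0) -> float:
    """Hyperbolically discounted value: v = amount / (1 + k * delay)."""
    return amount / (1 + k * delay)

small_soon = (50, 1)    # smaller reward, available after 1 time unit
large_late = (100, 5)   # larger reward, available after 5 time units

# Viewed from far off (add 9 units of extra distance to both delays),
# the larger-later prospect is worth more...
far = [present_value(a, d + 9) for a, d in (small_soon, large_late)]

# ...but viewed up close, the smaller-sooner prospect looms larger,
# like the low building looming over the tall one when one is nearby.
near = [present_value(a, d) for a, d in (small_soon, large_late)]

print(far[0] < far[1])    # True: from a distance, prefer the larger reward
print(near[0] > near[1])  # True: up close, the preference has reversed
```

The reversal depends on the hyperbolic shape: with exponential discounting the ranking of the two prospects would stay fixed as time passes, which is why hyperbolic curves are invoked to explain the shifting perspectives at issue here.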

Suppose I get my friends solemnly to promise to put me gently to death when I become gaga, because I would rather die than be gaga. What if, once I become gaga, my priorities change? Now I don’t want to die: I would rather live and be gaga. Do my friends still “owe” me euthanasia, against my present wishes? On either view of the self, the answer is No, but for different reasons. If I’m a different self from the one they made their promise to, then they can’t be bound by him (i.e. me-then) to do anything to or for me (i.e. me-now). But if I’m the same person, then I can now relieve them of their obligation to me by changing my mind. The facts about perspective, then, appear to be neutral in practice between the Parfitean and the traditional concept of the self, though the two seem significantly different in theory.[6]

Note, however, that in articulating the problem of the asymmetry of time we have to introduce an additional factor: what’s involved is not just being at a certain place and time, but envisaging what is seen from that point of view as affording possibilities for agency. Make that our second form or aspect of subjectivity.

2. Agency

Agency is presumably not an aspect of human individual subjectivity that we concede to inanimate individuals. As a human, one experiences oneself as having the power to choose and act. The fact of being an agent, as has often been stressed,[7] is a form of subjectivity in the precise sense that I, the subject, and I alone can decide what I will do, although — depending on your own particular stance on the tricky antinomies of free will — all sorts of circumstances can determine what I in fact end up deciding. Whether I am free to decide to do or not to do A, compatibly with your ability to predict which I will do, is a conundrum that I won’t discuss. I’ll only assert the obvious, namely that whether or not you can predict what I will do does not change the fact that I do in fact experience myself as deciding.

Perhaps this apparent fact about the irreducibility of decision is really just an effect of perspective. From my own point of view as an agent, I can’t take my reason for action to causally determine my action without failing to decide; but failing to decide is just another decision. (Compare: I can never directly see my own face: would I be right in concluding that my face is different from everyone else’s in some crucial way that makes it invisible?)

The first locus of the claim that agency is a form of subjectivity is probably Descartes[8], though he didn’t say it in so many words. But it takes just a little teasing out to get from the claim that the will is infinite to the present thesis. The infinitude of the will is unfortunately compatible with complete powerlessness. So the measure of the will’s freedom has nothing to do with its effectiveness in bringing about any change in the world. Furthermore, this infinite freedom says nothing about the origins of our desires. An admittedly simplistic argument suggests that the infinitude of the will’s freedom is also compatible with there being absolutely none of my desires that originates in myself. For whatever my desire, I cannot deny that it might have been different had I had different genes or a different life. In other words, my desire must have come from causes ultimately traceable to my genes and to my environment. But since I am not in any sense the author of either my genes or my environment, it seems to follow that I’m not the author of my own desires either. Whatever one may think of this somewhat fishy argument, it remains true that the “freedom of the will,” which I’m equating with the subjectivity of agency, cannot be denied: whenever I am made conscious of a set of possible choices, choosing is not so much something I can do as something I cannot forbear to do, regardless of the origins of my grounds for choosing or of whether my choice makes any difference to what results.

3. Titularity or ownness.

One of the specific ways in which my power of agency is “essentially subjective” is that my actions are mine in a peculiar sense of the word. Sergio Moravia (1995) has labeled “titularity” the fact that my mental attributes (including but not limited to qualia) are my own in a unique sense of ownership. This sense of ownership is indeed peculiar. It is different from the sense in which I own my bicycle; different from the sense in which I own my hair; different from the sense in which, on some views, I own myself and no other person can logically own me; and different again from the sense in which I “own myself” and no one else can (ethically) own me. For if it is unethical for some person to own another, then it is not metaphysically impossible. But it would seem to be not merely unethical, but metaphysically (or logically) impossible for the slave-owner to own his slave’s experiences.[9] The point has been made by Tye (1995, 10-11, 71ff), who distinguishes two problems raised for materialism by this feature. We might call these the two conditions of special ownership. The first is that every mental state necessarily belongs to someone or other; the second is that every mental state necessarily belongs to whoever it belongs to and not to anyone else.

This is a particularly good example of the mystifying function of these declarations of subjectivity. For the special sense of ownership involved here is not, in fact, exclusive to mental states, but belongs to a large class of predicates. It was described long ago by Aristotle, in connection with what commentators have named “dependent particulars”. There are two senses in which we can talk about the whiteness of this paper: one refers to a specific shade of white, and in that sense the whiteness of this paper might also belong to, or characterize, some other surface. But in another sense it is logically impossible that the whiteness of this paper should belong to anything else. We can reidentify this paper, even if it has changed colour, but there is no way that we can reidentify its whiteness independently of it. The paper exists independently of its whiteness, but not vice versa[10] (Aristotle 1963, 1a25-27). Tye points out that non-mental actions, such as one person’s laughter, or her walk, also meet both the conditions of special ownership. Events, even those involving no agency at all, exhibit the same feature. My pen’s falling to the floor is not something that logically could pertain to nothing, nor is it something that could pertain to anything other than my pen.

Besides titularity, the legal notion of ownership involves two features that have figured prominently in characterizations of what it means for something to belong to me. One is that I have a special right to use it: I have, as the phrase goes, privileged access to it. The other is that I have the right to exclude others from my property. You might call this the right of privacy, and where it concerns my beliefs about myself, it amounts to their incorrigibility by anyone else. These two, then, constitute the next two forms of subjectivity. They are commonly confused, at least in the terms used to describe them. But if we keep in mind the difference between access and exclusion, it seems plain that they are indeed separable doctrines. I might enforce my right of access to my property while not excluding anybody else. The converse seems to make less psychological sense, but is not logically impossible.

4. Privileged access.

It was long a dogma of the philosophy of mind that one of the defining characteristics of mental states was their privacy, that is, their inaccessibility to other observers. Tye sees this as one of the aspects of ownership: “My pains, for example, are necessarily private to me. You could not feel any of my pains.” (Tye 1990, 71). But clearly privacy is distinct from that other feature of ownership, privileged access. This may well be one of the features that the champions of subjectivity have in mind, but nowadays the issue of access is not generally regarded as clear in either direction. This is partly due, no doubt, to the influence of Wittgenstein’s attack on private languages. On the one hand, we have gotten used to talking about mental states which are clearly enough my own, but to which I have no access, either because they are repressed in the “Freudian Unconscious” or because they pertain to the “Helmholtzian Unconscious” (Johnson-Laird 1988, 354). Conversely, in the light of some recent thought experiments, the impossibility of accessing another person’s mental states can no longer be asserted without begging just the sorts of question at issue in debates about materialism.[11]

5. The incorrigibility of appearance.

One of the political privileges of privacy, in the sense in which we speak of a right to privacy, is the right to keep others out. Under the last rubric I have focused on the subject’s access (which turns out to be dubious). What then of the subject’s converse right to exclude others?

Objective reality is more than meets the eye. No one subject, it seems, is ever in a position to exclude others from all facets of Reality. “Mere” appearances, as we call them, on the other hand, are subjective. We inherit from Plato one of the reasons for drawing this contrast: appearances change, while reality supposedly stays the same. But this is quite wrong-headed. If you were aware of an “appearance” which never changed, it would be a pretty sure sign that you were having a hallucination. Some of the changes in appearances are due to perspective, which I’ve already talked about; and if you didn’t see something in perspective and from your own point of view, that would prove that it was not objective. If I thought I saw a circle from the side and it appeared circular, then I’d have to conclude it wasn’t really a circle. The supposed subjectivity of appearance, then, lies in its incorrigibility. It is only I who am incorrigible about what appears to me; anyone else fails to have equal authority with mine. Conversely, what seems to be the case is the only thing on which (for example) Descartes allows that I am incorrigible (Descartes 1984-1985, 29). Incorrigibility thus emerges as a form of subjectivity independent of those already listed, because it is logically possible that propositions concerning perspective, agency, ownership, and even privacy (and also qualitative experience and seeing-as, which we shall get to in a moment) might all be corrigible on the basis of objective evidence. Incorrigibility has a low status these days: most philosophers agree that if any candidate presented itself it would turn out to be an illusion (Gopnik 1993). An inverse relationship holds between the empirical content of a claim and the degree of its certainty. It is a plausible principle, even if we do not cling to strict verificationist or falsificationist dogmas, that there is a direct correlation between corrigibility and content (Sellars 1963).

6. Proprioceptive sense.

Among the things I own in some peculiar sense, though not in the peculiar sense just discussed, is my own body. Herein lies one more trademark of subjectivity. In the most common case, the proprioceptive “sense” designates the awareness one has of the position of one’s limbs. Try this: close your eyes and touch your nose with your index finger. You may miss, but not by much. Ramachandran and Hirstein have described a delightful experiment in which I can actually feel the tip of my nose to be displaced to where the tip of your nose actually is:

[T]he subject sits in a chair blindfolded, with an accomplice sitting at his right side…. facing in the same direction. The experimenter then stands near the subject, and with his left hand takes hold of the subject’s left index finger and uses it to repeatedly and randomly tap and stroke the nose of the accomplice, while at the same time, using his right hand, he taps and strokes the subject’s nose in precisely the same manner, and in perfect synchrony. After a few seconds of this procedure, the subject develops the uncanny illusion that his nose has either been dislocated, or has been stretched out several feet…. (Ramachandran and Hirstein 1997, 452)

What I find particularly intriguing about this illusion is that this “sense” which guided your hand is not, in fact, a sense at all, insofar as it has no “organ”. What’s more, it is clearly a form of subjectivity, insofar as only the subject can make the relevant observation. We can’t have someone else’s phantom nose illusion. But it’s not just a quale or bundle of qualia.

Here again one might divide even more finely. For the special proprioceptive consciousness of one’s own face seems to form a distinct class by itself. It is not so easily explained as the nose-displacement illusion, and, more interestingly still, Jonathan Cole has described severe disturbances in self-concept and interaction with others in patients suffering from “Möbius syndrome”, which involves an inability to move any of the muscles of facial expression (Cole 1998). This inability is described as inhibiting the development of a sense of self, no doubt largely because it makes impossible the facial imitation which, from the earliest days of a baby’s life, establishes one’s sense of one’s own emotions in some sort of concert with the emotions of others. Cole cites (Meltzoff and Gopnik 1993)’s observation of imitation in infants as suggesting “that in early experience babies learn something of emotion, and how it is experienced, by taking the facial expressions of others and, by imitation, feeling their own faces to be like others” (Cole 1997, 481). More of this under a later heading (see “The subjectivity in intersubjectivity” below). At this point I wish only to point out that no such mechanism could make sense unless there existed the sort of pseudo-sixth sense that is proprioceptive perception of one’s own face.

But is this a problem comforting to the mysterians? No. On the contrary, all these proprioceptive phenomena, both common and exotic, are highly suggestive of the physical, neurological mechanisms that are likely to give rise to them.

7. Ipseity.

When I refer to myself, I am not just referring to the person who happens to be me. This is a point developed in a number of papers by H.-N. Castaneda (e.g. Castaneda 1988) and by John Perry (1979). The latter’s vivid example has him noticing a trail of sugar in the supermarket. He identifies its source as someone whose cart contains a leaking bag of sugar, who is unaware of it, and who has apparently been all over the supermarket. But for a long time he fails to identify the person thus “identified” with himself.[12]

Is knowing that I (Ronnie) am I a real piece of knowledge? God couldn’t know it, though he could know that the writer of the previous sentence is Ronnie, or any number of other statements identifying me with myself under two different descriptions.

A similar point could be made about perspective: since God is everywhere, he necessarily lacks perspective. Is this a limitation on God’s supposed omniscience? Whatever the answer, it is tempting to think that ipseity is merely a side-effect of perspective. Tempting, but wrong: for the facts of perspective are entailed by the existence of spatio-temporal particulars in space-time; not so ipseity, since it would be theoretically possible for all information I have about myself to be devoid of perspective, and for all my desires to be formulated in entirely general terms. Sober and Wilson have suggested that ipseity is an adaptive trait which allows a self-interested individual to channel benefits to itself without having to burden itself with large amounts of discriminatory information. “This speculation,” they add, “entails a small irony. People use the concept of “I” to formulate the thought that they are unique. Yet, part of the reason that people have this concept is that they are not unique….” (Sober and Wilson 1998, 214, 350). Sober and Wilson also correctly point out that ipseity (which they call “self-recognition”) differs from “self-awareness” in that “self-recognition does not require that the individual be a psychologist”, i.e. think of themselves as having beliefs and desires. (Sober and Wilson 1998, 216).

8. Tone or colour.

What is it like to be you? It’s not obvious that Descartes was right about the transparency of that consciousness, nor that there isn’t anything it’s like to be me, nor that what it’s like is somehow reducible to all the other varieties, or to some subset of qualia. An individual tone, or colour, is thus subjective in what seems to be yet another irreducible sense. Nevertheless, the colour of my life may supervene on many physical properties, just as the colour of a surface supervenes on a number of properties of texture, light, and relational properties computed by our visual system in ways determined by complex ecological factors (Thompson 1995). What is distinct about this form of subjectivity is that it concerns not sensory experience in general, but one’s experience of oneself in particular. It is precisely not reducible to ipseity, however, if the contrast I just borrowed from Sober and Wilson between ipseity and uniqueness is a real one. My feeling-of-being-me may well be different from anyone else’s analogous feeling, and indeed is likely to be so insofar as it supervenes on a number of factors that determine different aspects of our experience of ourselves.

9. The subjectivity in intersubjectivity.

My identity is, in part, intersubjective. I mean by this that it is causally constituted by my being able to gauge the state of my own mind, and particularly my own emotions, in interaction with others. I note three aspects of this interaction. First, grown-ups tell children what they feel, more or less effectively, resulting in adults who know more or less what they feel. I am not sure quite how to analyse the capacity to be so trained to recognize one’s own emotions; obviously it presupposes that there must be something that one is being trained to recognize, but it doesn’t follow that there must already be full-fledged emotions awaiting recognition. Second, it may be, as Sue Campbell has stressed, that expressing emotions contributes crucially to the determination of the emotion, both in the sense of bringing the emotion into being and in the sense of making it to be just this emotion and not another (Campbell 1998). Here again, there would be no such possibility were there not some subjective side to the interpersonal transaction which is an expression of emotion. One creates one’s subjective sense of one’s own emotions by comparing them to those of others. But this suggests something of a paradox, since subjectivity seems to be a precondition, or presupposition, of its own cause. For there surely can’t be inter-subjective engagement if there are no subjectivities between which the engagement takes place. The solution to this puzzle no doubt lies in a developmental and dynamic perspective, allowing us to see how the intersubjective and the subjective appreciation of one’s emotional self develop together. One piece of the puzzle may lie in the third observation: Meltzoff and Gopnik (1993) have shown that infants are able to imitate facial expressions in the very earliest days of their lives, at a stage when one would be disinclined to ascribe to them anything like a sense of self.
Nevertheless it would be surprising if this capacity didn’t play a role in the acquisition of emotional expressions and through them of a sense of self. Jonathan Cole puts it this way:

Through imitation a face can be assimilated from visual experience, through proprioception, into felt experience: something can be taken from being “out there” in another, to being “in me” (Cole 1998, 110).

If so, then certain forms of interaction would lie at the base of the acquisition of subjectivity in some of the other senses I have sketched.

10. Projection.

Though most mentions of subjectivity are intended to prove our species’ superiority, it does sometimes happen that the connotation of subjectivity is negative. One such case is where what is intended is projective illusion or “projection” in the Freudian sense. Projection is actually a pathological condition, to the extent that it represents a mistake: an illusion in which some unacknowledged characteristic of oneself is ascribed to others in whom it isn’t actually present. But there are closely related phenomena that carry no such stigma. Indeed, some people have suggested that simulation (a sort of systematic but non-self-deceptive projection) plays a crucial role in our understanding of other people (Gordon 1986, 1992; Goldman 1992; contra, see Stich and Nichols 1992).[13]

11. Seeing-as.

Among the subtleties of my perceptual point of view and of my experiences (to which I return in the next section), lies a facet of subjectivity which does not appear to be exhausted by the previous descriptions. This is the fact that what I perceive, I commonly perceive-as-something-or-other. When I see a duck-rabbit, how many qualia do I see? In the paper cited above, Ramachandran and Hirstein go so far as to propose the following unabashedly functional hypothesis: when I look at a duck-rabbit figure, I can “see-it-as” only one (at least one at a time), because that’s precisely among the functions of consciousness: to fix “irrevocably” what we see so as to make it possible for us to undertake unambiguous action. Though Hamlet says conscience makes cowards of us all, yet it is consciousness, on this hypothesis, that has the job of keeping us from the Hamlet syndrome.

To see something as a so-and-so, is to see it in a way that involves intentionality. Yet some sensations are sometimes held to be experienced non-intentionally: the typical example is pain.

Pains, however, are sensed as painful. Indeed, (Kripke 1980) used this fact to revive an old argument against the identity theory of sensations and brain states. That classical argument went as follows:

A painful sensation S is not just painful, but essentially painful. Any neural process N with which we might claim to identify it, however, could only be contingently painful. Therefore, by Leibniz’s law, N couldn’t be identical with S.
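Schematically, the classical argument can be set out as follows (a sketch only; here P(x) abbreviates “x is painful”, the box marks necessity, and Leibniz’s law is taken as the indiscernibility of identicals):

```latex
\begin{align*}
&\text{1. } \Box P(S) && \text{($S$ is essentially painful)}\\
&\text{2. } \neg\Box P(N) && \text{($N$ is only contingently painful)}\\
&\text{3. } S = N \rightarrow \bigl(\Box P(S) \leftrightarrow \Box P(N)\bigr) && \text{(Leibniz's law)}\\
&\text{4. } S \neq N && \text{(from 1--3)}
\end{align*}
```

Put this way, it is plain that everything turns on premise 2, which is exactly where Kripke’s treatment of necessary a posteriori identities bears.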

In order to appreciate the modifications that Kripke brought to this argument, recall that he loosened the bonds linking the analytic, the a priori, and the necessary, and their contraries, the synthetic, the a posteriori and the contingent. His analysis requires us to appreciate that some statements, such as the identity statement ‘Water is H2O’, or ‘heat is mean molecular kinetic energy’, might be both necessary and a posteriori. The necessity involved is ontological, not epistemic: Kripke allows that there is nothing wrong with the statement that for all we knew there might have been some different physical process responsible for the feeling of heat. But as it happens, there isn’t and there couldn’t be compatibly with the laws of nature (Kripke 1980, 333).

Kripke’s doctrine actually constitutes a minor puzzle in the history of contemporary philosophy. For to have thus relaxed the link between what can be known a priori and what is necessary opens the door to an identity theory immune to the old arguments about what can be imagined to be identical. Yet no sooner did Kripke open this door than he tried to push it shut: nothing is more certain, he argued, than the fact that to be painful is a necessary property of any pain. So if N is really identical to S, then it must be the case that N is necessarily painful too. And that Kripke finds incredible, on the ground that while in the case of heat there is something, namely the process of molecular motion, between the sensation and the heat, there is no analogue to this “something in the middle” in the case of pain. For the essence of pain is nothing but the quale felt by the sufferer (339).

But what’s wrong with the other possibility? Why not say that the neural process N is indeed necessarily painful? Since Kripke has duly shown that some necessary truths can be known a posteriori, the burden of proof is now on him to show that this isn’t one such necessary truth. All that we need grant his argument is the analyticity of ‘pain is painful’, and the fact that any physiological state’s painfulness, by contrast, is synthetic. Given that, however, it remains perfectly possible that some physiological state really is necessarily painful just as water is necessarily H2O, regardless of the fact that neither, in advance of scientific knowledge, can be expected to seem analytic.

12. Phenomenal experience.

I have left till last the most hotly contested of subjectivity’s battlefields. Chalmers puts the centrality of qualia in these terms:

The problem of explaining these phenomenal qualities is just the problem of consciousness. This is the really hard part of the mind-body problem (Chalmers 1996, 4).

But it seems to be characteristic of those who take this form of subjectivity as central that any attempt at explaining our talk about qualia in materialist terms is taken as a refusal to take the hard problem seriously. Dennett, for example, has repeatedly been accused of denying that we are conscious: “Dennett thinks there are no such things as qualia, subjective experiences, first-person phenomena, or any of the rest of it” (Searle 1997, 99). Apart from being admirably unambiguous about the “charge” against Dennett, this is a nice example of how the vast and unspecified “rest of it” is thrown into the same bag as qualia. Dennett has repeatedly denied denying that we have conscious experience, but since he has indeed also “quined qualia” (Dennett 1990), it’s clear without assessing the argument that Searle’s mere assertion of the intuition that qualia just are the “data” doesn’t settle the matter. Dennett doesn’t deny that we are conscious, or that we have experiences, “and the rest of it.” He just claims that the philosophical mystery made about qualia can be dispelled once it is kept separate from other theoretical issues that surround it. He has also argued that once one focuses on the functions of qualia — which can perfectly well be discussed in the third person — there is nothing left for the ineffably private qualia to be.

The issue of qualia seems to me open to a vice-grip strategy, which consists in squeezing the irreducibility of qualia between two complementary poles, the materialist identity theory and functionalism. Both are reductive in the sense that they propose third-person accounts of the qualia in question.

(1) Materialism: Churchland has recently argued that the identity theory of qualia can be rehabilitated. His strategy is illustrated by showing how the colour solid is isomorphic to the solid generated by the three-dimensional structure of the antagonistic receptors which receive input from the three types of cones. The essential strategy here consists in the challenge: “what more do you want than full formal coherence between the physiological mechanisms and the phenomenological structure of the colour solid (or, mutatis mutandis, of any other quale)?” (Churchland and Churchland 1998). If it is then objected that correlations don’t establish identity, the objector owes an account of what more is required. The answer can only be that two objects, however perfectly correlated, might differ in their causes and effects. The argument is then ready to be turned over to the functionalist.

(2) Functionalism: This too is best summed up in a challenge, the zombie challenge: if you can imagine some being whose reactions to a given scene (sound, sight, stimulus, or whatever) are like yours in every possible way (including synesthetic and associative reactions, recollections evoked, etc.), can you really also imagine that this being might differ from you merely in lacking qualitative experience? If you can, then subjective consciousness, as such, is strictly epiphenomenal in a sense so strong as to make it “a concept that has no utility whatsoever” (Dennett 1991, 402).

At this point, an objector might suggest that functional equivalence is not enough, since two functionally equivalent items might be substantially different. But this objector is one that can safely be left in the hands of the identity theorist, who can once again appeal to the structural correlations between qualia and their physiological underpinnings.

At the risk of giving this huge debate cavalierly short shrift, the vice-grip strategy seems to me sufficiently promising to suggest that the subjectivity of qualia does not actually constitute an insoluble “hard problem”.


Some of these “senses” or “aspects” of subjectivity may be redundant. I certainly am not confident that none could, by means of some ingenious argument, be reduced or assimilated to others. But to establish conclusively that they are all distinct would involve 66 pairwise comparisons: I leave this as a rich mine of thesis topics for future doctoral students to explore. All I will rest on now is the thought that there are plausible, non-mystifying avenues of research open on each of the twelve forms of subjectivity I have described, and that in such piecemeal solutions lies the hope of solving the so-called problem of subjective consciousness.
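The count of comparisons is simple combinatorics: each unordered pair drawn from the twelve varieties requires one comparison, so there are C(12, 2) = 12 × 11 / 2 = 66 of them. A one-line check (purely illustrative):

```python
from math import comb

# Unordered pairs among the twelve varieties of subjectivity:
# C(12, 2) = 12 * 11 / 2 = 66 pairwise comparisons.
print(comb(12, 2))  # prints 66
```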

Lycan’s own strategy, however, is much like the one I espouse, though he concentrates on anatomizing the term consciousness rather than the term subjectivity. See Lycan (1996, 2-7).
See “Contre la phénoménologie”, forthcoming, available at
Christians solved this problem with the usual combination of ingenuity and absurdity: God, while eternal, becomes incarnate, just to show that it’s not logically impossible for Him to say things like “Thank Goodness it’s Friday.”
Nagel, in The View from Nowhere, conflates several types of subjectivity, though I haven’t ascertained that all twelve can be found there. Not surprisingly, subjectivity turns out to be something of a mystery, and an irreducible one at that.
Note how the language here supports the idea that we are dealing with a single and mysterious “ontological mode”: by speaking of a “consequence” of subjectivity, rather than a component, type, or sense of subjectivity, Searle suggests that we can have a priori knowledge of this “ontological mode” and that we are merely drawing out its implications.
The dilemma I have just articulated leaves out what may appear to be the most poignant case: namely where I am already too gaga to express any opinion. In those cases, however, we might justify giving a “living will” authority by default even in a Parfitean world, as a sort of legal fiction, just as we grant authority by default to nearest relatives for certain other decisions affecting the welfare of the incompetent.
By the existentialists, by Stuart Hampshire in his Thought and Action (Hampshire 1983) and by Charles Taylor, in Sources of the Self (Taylor 1989) among others.
Descartes’s cogito can also be credited with stressing subjectivity in a sense I’m not sure how to classify. This emerges from the consideration that if we try to paraphrase his argument in Med. II as a syllogism with a universal major premise, “Whatever thinks, exists” the argument will collapse because the premise is false, since thinking is done admirably well by many a fictional character.
In the phenomenological literature, there is supposed to be a “second phenomenological reduction” that has to do with ownness; but since I have never understood what that is, I confine myself to the non-phenomenological literature.
Cf. Aristotle, Categories 2: “There is, lastly, a class of things which are neither present in a subject nor predicable of a subject, such as the individual man or the individual horse. But, to speak more generally, that which is individual and has the character of a unit is never predicable of a subject. Yet in some cases there is nothing to prevent such being present in a subject. Thus a certain point of grammatical knowledge is present in a subject.”
One set of thought experiments is attributed to Zuboff in (Tye 1995, 78 ff). (Ramachandran and Hirstein 1997) offer similar speculations, in which judicious rewiring of brain connections results in two people sharing or exchanging experiences. They infer that sensations are actually not in principle unobservable by others. The contrary appearance, they argue, is due simply to the fact that in ordinary situations I can only have access to the experiences of others through “translation”. But if my brain were wired in just the right way to yours I would have direct access to your mental states.
A novel the name of which I’ve forgotten tells the story of a detective who suffers from amnesia and gradually comes to the conclusion that the criminal he is tracking is he himself.
For further debate see (Davies and Stone 1995).

Ainslie, G. (1992). Picoeconomics: The Strategic Interaction of Successive Motivational States Within the Person. Cambridge: Cambridge University Press.

Aristotle. (1963). Categories and De Interpretatione (J. Ackrill, Trans. and notes). Oxford: Oxford University Press.

Berkeley, G. (1957). A Treatise Concerning the Principles of Human Knowledge (C. M. Turbayne, Ed.). Indianapolis/New York: Bobbs-Merrill (Liberal Arts Press).

Campbell, S. (1998). Interpreting the Personal: Expression and the Formation of Feeling. Ithaca: Cornell University Press.

Castaneda, H.-N. (1988). Self-consciousness, demonstrative reference, and self-ascription. In J. E. Tomberlin (Ed.), Philosophical Perspectives, pp. 405-454. Atascadero: Ridgeview.

Chalmers, D. J. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford; New York: Oxford University Press.

Churchland, P. M. & Churchland, P. S. (1998). On the Contrary: Critical Essays. Cambridge, MA: MIT Press.

Cole, J. (1997). On ‘being faceless’: Selfhood and facial embodiment. Journal of Consciousness Studies, 4(5-6), 467-484.

Cole, J. (1998). About Face. Cambridge, MA: MIT Press.

Davies, M. & Stone, T. (Eds.). (1995). Mental Simulation: Evaluations and Applications. Readings in Mind and Language. Oxford: Blackwell.

Dennett, D. (1990). Quining qualia. In W. Lycan (Ed.), Mind and Cognition: A Reader, pp. 519-547. Oxford: Blackwell.

Dennett, D. C. (1991). Consciousness Explained. Boston, Toronto, London: Little, Brown.

Descartes, R. (1984-85). The Philosophical Writings of Descartes (J. Cottingham, R. Stoothoff & D. Murdoch, Trans.). Cambridge: Cambridge University Press. Original work published 1649.

Dretske, F. (1995). Naturalizing the Mind. Cambridge, MA: MIT Press.

Goldman, A. I. (1992). In defense of the simulation theory. Mind and Language, 7, 104-119.

Gopnik, A. (1993) How we know our own minds: the illusion of first-person knowledge of intentionality. Behavioral and Brain Sciences, 16, 145-171.

Gordon, R. M. (1986). Folk psychology and simulation. Mind and Language, 1, 158-171.

Gordon, R. M. (1992). The simulation theory: Objections and misconceptions. Mind and Language, 7, 11-34.

Hampshire, S. (1983). Thought and Action, 2nd ed. Notre Dame, Indiana: University of Notre Dame Press.

Johnson-Laird, P. N. (1988). The Computer and the Mind: An Introduction to Cognitive Science. Cambridge, MA: Harvard University Press.

Kripke, S. A. (1980). Naming and Necessity. Cambridge, MA: Harvard University Press.

Lycan, W. (1996). Consciousness and Experience. Cambridge, MA: MIT Press.

Meltzoff, A. & Gopnik, A. (1993). The role of imitation in understanding persons and developing a theory of mind. In S. Baron-Cohen, H. Tager-Flusberg & D. Cohen (eds), Understanding Other Minds: Perspectives from Autism. Oxford: Oxford University Press.

Moravia, S. (1995). The Enigma of the Mind (Trans. from the Italian L’enigma della mente). Cambridge: Cambridge University Press. Original work published 1986.

Nagel, T. (1986). The View from Nowhere. Oxford: Oxford University Press.

Parfit, D. (1971). Personal identity. Philosophical Review, 80, 3-27.

Parfit, D. (1984). Reasons and Persons. Oxford: Oxford University Press.

Perry, J. (1979). The problem of the essential indexical. Nous, 13, 3-21.

Proust, J. (1997) Comment l’esprit vient aux bêtes : essai sur la représentation. Paris: NRF Essais.

Ramachandran, V. & Hirstein, W. (1997). Three laws of qualia: What neurology tells us about the biological functions of consciousness. Journal of Consciousness Studies, 4(5-6), 429-457.

Searle, J. R. (1992). The Rediscovery of the Mind. Cambridge, MA: MIT Press. A Bradford Book.

Searle, J. R. (1997). The Mystery of Consciousness, including exchanges with Daniel C. Dennett and David J. Chalmers. London: Granta.

Sellars, W. (1963). Science, Perception and Reality. New York: Humanities Press.

Sober, E. & Wilson, D. S. (1998). Unto Others: The Evolution and Psychology of Unselfish Behavior. Cambridge, MA: Harvard University Press.

Stich, S. & Nichols, S. (1992). Folk psychology: Simulation or tacit theory? Mind and Language, 7, 35-71.

Strawson, G. (1997). The self. Journal of Consciousness Studies, 4(5/6), 405-428.

Taylor, C. (1989). Sources of the Self. Cambridge, MA: Harvard University Press.

Thompson, E. (1995). Colour Vision: A Study in Cognitive Science and the Philosophy of Perception. London: Routledge.

Tye, M. (1990). A representational theory of pains and their phenomenal character. In J. Tomberlin (Ed.), Philosophical Perspectives, 9, pp. 223-239. Atascadero: Ridgeview.

Tye, M. (1995). Ten Problems of Consciousness. Cambridge, MA: MIT Press.

Twelve Varieties of Subjectivity: Dividing in Hopes of Conquest. © Ronald de Sousa, University of Toronto. Penultimate draft of a paper published in Knowledge, Language, and Representation: Proceedings of the ICCS Conference, San Sebastian, Spain, May 15, 1999, ed. J. M. Larrazabal and L. A. Pérez Miranda. Kluwer, 2002.