Mind & Brain

We all believe that we have minds – and that minds, whatever they may be, are not like other worldly things. What makes us think that thoughts are made of different stuff? Because, it seems, thoughts can’t be things; they have no weights or sounds or shapes, and cannot be touched or heard or seen. In order to explain all this, most thinkers of the past believed that feelings, concepts, and ideas must exist in a separate mental world. But this raises too many questions. What links our concept of, say, a cat with an actual cat in the physical world? How does a cause in either world affect what takes place in the other world? In the physical world we make new things by rearranging other things; is that how new ideas come to be, or were they somewhere all along? Are minds peculiar entities, possessed alone by brains like ours – or could such qualities be shared, to different degrees, by everything? It seems to me that the dual-world scheme creates a maze of mysteries that leads to problems worse than before.

We’ve heard a good deal of discussion about the idea that the brain is the bridge between those worlds. At first this seems appealing, but it soon leads to yet worse problems in philosophy. I maintain that all the trouble stems from making a single great mistake. Brains and minds are not different at all; they do not exist in separate worlds; they are simply different points of view – ways of describing the very same things. Once we see how this is so, that famous problem of mind and brain will scarcely seem a problem at all, because …

Minds are simply what brains do.

I don’t mean to say that brains or minds are simple; brains are immensely complex machines – and so are what they do. I merely mean to say that the nature of their relationship is simple. Whenever we speak about a mind, we’re referring to the processes that move our brains from state to state. Naturally, we cannot expect to find any compact description to cover every detail of all the processes in a human brain, because that would involve the details of the architectures of perhaps a hundred different sorts of computers, interconnected by thousands of specialized bundles of connections. It is an immensely complex matter of engineering. Nevertheless, when the mind is regarded, in principle, in terms of what the brain may do, many questions that are usually considered to be philosophical can now be recognized as merely psychological – because the long-sought connections between mind and brain do not involve two separate worlds, but merely relate two points of view.

Memory and Change

What do brains do? Doing means changing. Whenever we learn or ‘change our minds’, our brains are engaged in changing their states. To comprehend the relationship between mind and brain, we must understand the relationship between what things do and what things are; what something does is simply an aspect of that thing considered over some span of time. When we see a ball roll down a hill, we appreciate that the rolling is neither the ball itself, nor something apart in some other world – but merely an aspect of the ball’s extension in space-time; it is a description of the ball, over time, seen from the viewpoint of physical laws. Why is it so much harder to appreciate that thinking is an aspect of the brain, that also could be described, in principle, in terms of the self-same physical laws? The answer is that minds do not seem physical to us because we know so little of the processes inside brains.

We can only describe how something changes by contrast with what remains the same. Consider how we use expressions like “I remember X.” Memories must be involved with a record of changes in our brains, but such changes must be rather small because to undergo too large a change is to lose any sense of identity. This intrusion of a sense of self makes the subject of memory difficult; we like to think of ourselves as remaining unchanged – no matter how much we change what we think. For example, we tend to talk about remembering events (or learning facts, or acquiring skills) as though there were a clear separation between what we call the Self and what we regard as data that are separate from, but accessible to, the Self. However, it is hard to draw the boundary between a mind and what that mind may think about, and this is another aspect of brains that makes them seem different to us from machines. We are used to thinking about machines in terms of how they affect other materials. But it makes little sense to think of brains as though they manufacture thoughts the way that factories make cars, because brains, like computers, are largely engaged in processes that change themselves. Whenever a brain makes a memory, this alters what that brain may later do.

Our experience with computers over the past few decades has helped us to clarify our understanding of such matters. The early applications of computers usually maintained a rather clear distinction between the program and the data on which it operates. But once we started to develop programs that changed themselves, we also began to understand that there is no fundamental difference between acquiring new data and acquiring new processes. Such distinctions turned out to be not absolute, but relative to other issues of perspective and complexity. When we say that minds are what brains do, we must also ask whether every other process has some corresponding sort of mind. One reply might be that this is merely a matter of degree: people have well-developed minds, while bricks or stones have almost none. Another reply might try to insist that only a person can have a mind – and, maybe, certain animals. But neither side would be wrong or right; the issue is not about a fact, but about when to use a certain word. Those who wish to use the term “mind” only for certain processes should specify which processes. The problem with this is that we don’t yet have adequate ways to classify processes. Human brains are uniquely complex, and do things that no other things do – and we must try to learn how brains do those things.
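
The point about self-modifying programs can be made concrete. Here is a toy sketch (my own illustration, not from the text; the `Agent` class and all its names are invented) in which a single store holds both facts and procedures, so “acquiring new data” and “acquiring a new process” are literally the same operation:

```python
class Agent:
    """A toy learner whose data and procedures live in one store."""

    def __init__(self):
        self.store = {}  # one store holds both facts and skills

    def learn(self, name, thing):
        # a string, a number, or a function -- the store makes no distinction
        self.store[name] = thing

    def recall(self, name, *args):
        thing = self.store[name]
        # if what we "remember" is a process, exercise it; else just return it
        return thing(*args) if callable(thing) else thing


agent = Agent()
agent.learn("capital_of_france", "Paris")   # acquiring new data
agent.learn("double", lambda x: 2 * x)      # acquiring a new process
print(agent.recall("capital_of_france"))    # Paris
print(agent.recall("double", 21))           # 42
```

Whether a given entry counts as “knowledge” or “skill” here is, as the text says, a matter of perspective, not of mechanism.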

This brings us back to what it means to talk about what something does. Is that different from the thing itself? Again it is a matter of how we describe it. What complicates that problem for common sense psychology is that we feel compelled to think in terms of Selves, and of what those Selves proceed to think about. To make this into a useful technical distinction, we need some basis for dividing the brain into parts that change quickly and parts that change slowly. The trouble is that we don’t yet know enough about the brain to make such distinctions properly. In any case, if we agree that minds are simply what brains do, it makes no further sense to ask how minds do what they do.

Embodiments of Minds

One reason why the mind-brain problem has always seemed mysterious is that minds seem to us so separate from their physical embodiments. Why do we find it so easy to imagine the same mind being moved to a different body or brain – or even existing by itself? One reason could be that concerns about minds are mainly concerns about changes in states – and these do not often have much to do with the natures of those states themselves. From a functional or procedural viewpoint, we often care only about how each agent changes state in response to the actions upon it of other agents. This is why we so often can discuss the organization of a community without much concern for the physical constitution of its members. It is the same inside a computer; it is only signals representing changes that matter, whereas we have no reason to be concerned with properties that do not change. Consider that it is just those properties of physical objects that change the least – such as their colors, sizes, weights, or shapes – that, naturally, are the easiest to sense. Yet these, precisely because they don’t change, are the ones that matter least of all, in computational processes. So naturally minds seem detached from the physical. In regard to mental processes, it matters not what the parts of brains are; it only matters what they do–and what they are connected to.

A related reason why the mind-brain problem seems hard is that we all believe in having a Self – some sort of compact, pointlike entity that somehow knows what’s happening throughout a vast and complex mind. It seems to us that this entity persists through our lives in spite of change. This feeling manifests itself when we say “I think” rather than “thinking is happening”, or when we agree that “I think therefore I am,” instead of “I think, therefore I change”. Even when we recognize that memories must change our minds, we feel that something else stays fixed – the thing that has those memories. In chapter 4 of The Society of Mind[1] I argue that this sense of having a Self is an elaborately constructed illusion – albeit one of great practical value. Our brains are endowed with machinery destined to develop persistent self-images and to maintain their coherence in the face of continuous change. But those changes are substantial, too; your adult mind is not very like the mind you had in infancy. To be sure, you may have changed much since childhood – but if one manages, in later life, to avoid much growth, that poses no great mystery.

We tend to think about reasoning as though it were something quite apart from the knowledge and memories that it exploits. If we’re told that Tweety is a bird, and that any bird should be able to fly, then it seems to us quite evident that Tweety should be able to fly. This ability to draw conclusions seems (to adults) so separate from the things we learn that it seems inherent in having a mind. Yet over the past half century, research in child psychology has taught us to distrust such beliefs. Very young children do not find adult logic to be so self-evident. On the contrary, the experiments of Jean Piaget and others have shown that our reasoning abilities evolve through various stages. Perhaps it is because we forget how hard these were to learn that they now appear so obvious. Why do we have such an amnesia about learning to reason and to remember? Perhaps because those very processes are involved in how we remember in later life. Then, naturally, it would be hard to remember what it was like to be without reason – or what it was like to learn such things. Whether we learn them or are born with them, our reasoning processes somehow become embodied in the structures of our brains. We all know how our logic can fail when the brain is deranged by exhaustion, intoxication, or injury; in any case, the more complex situations get, the more we’re prone to making mistakes. If logic were somehow inherent in Mind, it would be hard to explain how things ever go wrong, but this is exactly what one would expect from what happens inside any real machine.

Freedom of Will

We all believe in possessing a self from which we choose what we shall do. But this conflicts with the scientific view that all events in the universe depend on either random chance or on deterministic laws. What makes us yearn for a third alternative? There are powerful social advantages in evolving such beliefs. They support our sense of personal responsibility, and thus help us justify moral codes that maintain order among the tribe. Unless we believed in choice-making entities, nothing would bear any credit or blame. Believing in the freedom of will also brings psychological advantages; it helps us to be satisfied with our limited abilities to make predictions about ourselves – without having to take into account all the unknown details of our complex machinery. Indeed, I maintain that our decisions seem “free” at just the times at which what we do depends upon unconscious lower level processes of which our higher levels are unaware – that is, when we do not sense, inside ourselves, any details of the processes that moved us in one direction or the other. We say that this is freedom of will, yet, really, when we make such a choice, it would be better to call it an act of won’t. This is because, as I’ll argue below, it amounts to terminating thought and letting stand whatever choice the rest of the mind already has made.

To see an example of how this works, imagine choosing between two homes, one of which offers a mountain-view, while the other is closer to where you work. There is no particularly natural way to compare such unrelated things. One of the mental processes likely to become engaged might construct a sort of hallucination of living in that house, and then react to that imaginary episode. Another process might imagine a long drive to work, and then react to that. Yet one more process might then attempt to compare those two reactions by exploiting some memory traces of those simulations. How, then, might you finally decide? In one type of scenario, the comparison of the two descriptions may seem sufficiently logical or rational that the decision seems to be no mystery. In such a case we might have the sense of having found a “compelling reason” – and feel no need to regard that choice as being peculiarly free.

In another type of scenario, no such compelling reason appears. Then the process can go on to engage more and more mechanisms at increasingly lower levels, until it engages processes involving billions of brain cells. Naturally, your higher level agencies – such as those involved with verbal expressions – will know virtually nothing about such activities, except that they are consuming time. If no compelling basis emerges upon which to base a definite choice, the process might threaten to go on forever. However, that doesn’t happen in a balanced mind because there will always be other, competing demands from other agencies. Eventually some other agency will intervene – perhaps one of a supervisory character[2] whose job it is to be concerned, not with the details of what is being decided, but with some other economic aspect of the other systems’ activities. When this is what terminates the decision process, and the rest is left to adopt whichever alternative presently emerges from their interrupted activities, our higher level agencies will have no reasonable explanation of how the decision was made. In such a case, if we are compelled to explain what was done, then, by default, we usually say something like “I decided to.”[3] This, I submit, is the type of situation in which we speak of freedom of choice. But such expressions refer less to the processes which actually make our decisions than to the systems which intervene to halt those processes. Freedom of will has less to do with how we think than with how we stop thinking.
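
The two scenarios can be caricatured in code. This is a deliberately crude sketch of my own (all names and numbers invented; nothing here is Minsky’s actual model): a deliberation accumulates imagined reactions to each alternative, returns early if a “compelling reason” emerges, and is otherwise halted by a supervisory time budget, adopting whichever alternative currently leads:

```python
import random


def deliberate(alternatives, score, budget=50, threshold=5.0):
    """Accumulate imagined reactions until one option clearly wins,
    or until a 'supervisory' time budget runs out."""
    totals = {a: 0.0 for a in alternatives}
    for _ in range(budget):
        for a in alternatives:
            totals[a] += score(a)  # one more imagined episode and reaction
        best, second = sorted(totals.values(), reverse=True)[:2]
        if best - second > threshold:
            # a "compelling reason" was found; no sense of free choice
            return max(totals, key=totals.get), "reasoned"
    # the supervisor intervenes: stop thinking, keep the current leader
    return max(totals, key=totals.get), "felt free"


random.seed(0)
choice, how = deliberate(
    ["mountain view", "short commute"],
    score=lambda a: random.gauss(0, 1),  # reactions nearly balanced
)
print(choice, "--", how)
```

On this caricature, “freedom” is just the label the top level applies when the run ends by interruption rather than by a decisive margin.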

Uncertainty and Stability

What connects the mind to the world? This problem has always caused conflicts between physics, psychology, and religion. In the world of Newton’s mechanical laws, every event was entirely caused by what had happened earlier. There was simply no room for anything else. Yet common sense psychology said that events in the world were affected by minds: people could decide what occurred by using their freedom of will. Most religions concurred in this, although some preferred to believe in schemes involving divine predestination. Most theories in psychology were designed to support deterministic schemes, but those theories were usually too weak to explain enough of what happens in brains. In any case, neither physical nor psychological determinism left a place for the freedom of will.

The situation appeared to change when, early in this century, some physicists began to speculate that the uncertainty principle of quantum mechanics left room for the freedom of will. What attracted those physicists to such views? As I see it, they still believed in freedom of will as well as in quantum uncertainty – and these subjects had one thing in common: they both confounded those scientists’ conceptions of causality. But I see no merit in that idea because probabilistic uncertainty offers no genuine freedom, but merely adds a capricious master to one that is based on lawful rules.

Nonetheless, quantum uncertainty does indeed play a critical role in the function of the brain. However, this role is neither concerned with trans-world connections nor with freedom of will. Instead, and paradoxically, it is just those quantized atomic states that enable us to have certainty! This may surprise those who have heard that Newton’s laws were replaced by ones in which such fundamental quantities as location, speed, and even time, are separately indeterminate. But although those statements are basically right, their implications are not what they seem – but almost exactly the opposite. For it was the planetary orbits of classical mechanics that were truly undependable – whereas the atomic orbits of quantum mechanics are much more predictably reliable. To explain this, let us compare a system of planets orbiting a star, in accord with the laws of classical mechanics, with a system of electrons orbiting an atomic nucleus, in accord with quantum mechanical laws. Each consists of a central mass with a number of orbiting satellites. However, there are fundamental differences. In a solar system, each planet could be initially placed at any point, and with any speed; then those orbits would proceed to change. Each planet would continually interact with all the others by exchanging momentum. Eventually, a large planet like Jupiter might even transfer enough energy to hurl the Earth into outer space. The situation is even less stable when two such systems interact; then all the orbits will be so disturbed that even the largest of planets may leave. It is a great irony that so much chaos was inherent in the old, deterministic laws. No stable structures could have evolved from a universe in which everything was constantly perturbed by everything else. If the particles of our universe were constrained only by Newton’s laws, there could exist no well-defined molecules, but only drifting, featureless clouds. Our parents would pass on no precious genes; our bodies would have no separate cells; there would not be any animals at all, with nerves, synapses, and memories.
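
The classical instability described above can be glimpsed numerically. The following toy simulation (my own illustration; the units, masses, and step sizes are arbitrary) runs two copies of a Newtonian star-and-two-planets system that differ only by a tiny nudge to one planet’s position, and measures how far the copies drift apart:

```python
import math

G = 1.0  # gravitational constant in toy units


def step(bodies, dt):
    """Advance [mass, x, y, vx, vy] records by one crude Euler-Cromer step."""
    for i, bi in enumerate(bodies):
        ax = ay = 0.0
        for j, bj in enumerate(bodies):
            if i != j:
                dx, dy = bj[1] - bi[1], bj[2] - bi[2]
                r3 = (dx * dx + dy * dy) ** 1.5
                ax += G * bj[0] * dx / r3
                ay += G * bj[0] * dy / r3
        bi[3] += ax * dt
        bi[4] += ay * dt
    for b in bodies:
        b[1] += b[3] * dt
        b[2] += b[4] * dt


def make_system(nudge=0.0):
    return [[1000.0, 0.0, 0.0, 0.0, 0.0],        # "star"
            [1.0, 10.0, 0.0, 0.0, 10.0],         # inner planet
            [5.0, 13.0 + nudge, 0.0, 0.0, 9.0]]  # outer planet, perhaps nudged


a, b = make_system(), make_system(nudge=1e-6)
for _ in range(40000):  # 40 time units: a handful of orbits
    step(a, 0.001)
    step(b, 0.001)
separation = math.hypot(a[2][1] - b[2][1], a[2][2] - b[2][2])
print(f"a position nudge of 1e-6 has grown to about {separation:.2g}")
```

Because both runs use the identical integrator, the growing separation reflects the dynamics itself rather than numerical noise; the atoms of the next paragraph, by contrast, offer no such continuum of nearby orbits to drift through.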

In contrast, chemical atoms are actually extremely stable because their electrons are constrained by quantum laws to occupy only certain separate levels of energy and momentum. Consequently, except when the temperature is very high, an atomic system can retain the same state for decillions of years, with no change whatever. Furthermore, atoms can combine to form configurations, called molecules, that are also confined to have definite states. Although those systems can change suddenly and unpredictably, those events may not happen for billions of years during which there is absolutely no change at all. Our stability comes from those quantum fields, by which everything is locked into place, except during moments of clean, sudden change. It is only because of quantum laws that what we call things exist at all, or that we have genes to specify brains in which memories can be maintained – so that we can have our illusions of will.[4]


Question: Can you discuss the possible relevance of artificial intelligence to the concerns of this conference?
Artificial intelligence and its predecessor, cybernetics, have given us a new view of the world in general and of machines in particular. In previous times, if someone said that a human brain is just a machine, what would that have meant to the average person? It would have seemed to imply that a person must be something like a locomotive or a typewriter. This is because, in earlier days, the word machine was applied only to things that were simple and completely comprehensible. Until the past half century – starting with the work of Kurt Goedel and Alan Turing in the 1930s and of Warren McCulloch and Walter Pitts a decade later – we had never conceived of the possible ranges of computational processes. The situation is different today, not only because of those new theories, but also because we now can actually build and use machines that have thousands of millions of parts. This experience has changed our view. It is only partly that artificial intelligence has produced machines that do things that resemble thinking. It is also that we can see that our old ideas about the limitations of machines were not well founded. We have learned much more about how little we know about such matters.

I recently started to use a personal computer whose memory disk had arrived equipped with millions of words of programs and instructive text. It is not difficult to understand how the basic hardware of this computer works. But it would surely take months, and possibly years, to understand in all detail the huge mass of descriptions recorded in that memory. Every day, while I am typing instructions to this machine, screens full of unfamiliar text appear. The other day, I typed the command “Lisp Explorer”, and on the screen appeared an index to some three hundred pages of lectures about how to use, with this machine, a particular version of LISP, the computer language most used for research in artificial intelligence. The lectures were composed by a former student of mine, Patrick Winston, and I had no idea that they were in there. Suddenly there emerged, from what one might have expected to be nothing more than a reasonably simple machine, an entire heritage of records not only of a quarter century of technical work on the part of many friends and students, but also the unmistakable traces of their personalities.

In the old days, to say that a person is like a machine was like suggesting that a person is like a paper clip. Naturally it was insulting to be called any such simple thing. Today, the concept of machine no longer implies triviality. The genetic machines inside our cells contain billions of units of DNA that embody the accumulated experience of a billion years of evolutionary search. Those are systems we can respect; they are more complex than anything that anyone has ever understood. We need not lose our self-respect when someone describes us as machines; we should consider it wonderful that what we are and what we do depends upon a billion parts. As for more traditional views, I find it demeaning to be told that all the things that I can do depend on some structureless spirit or soul. It seems wrong to attribute very much to anything without enough parts. I feel the same discomfort when being told that virtues depend on the grace of some god, instead of on structures that grew from the honest work of searching, learning, and remembering. I think those tables should be turned; one ought to feel insulted when accused of not being a machine. Rather than depending upon some single, sourceless source, I much prefer the adventurous view of being made of a trillion parts – not working for some single cause, but interminably engaged in resolving old conflicts while engaging new ones. I see such conflicts in Professor Eccles’ view: in his mind are one set of ideas about the mind, and a different set of ideas that have led him to discover wonderful things about how synapses work. But he himself is still in conflict. He cannot believe that billions of cells and trillions of synapses could do enough. He wants to have yet one more part, the mind. What good is that extra part? Why be so greedy that a trillion parts will not suffice? Why must there be a trillion and one?


[1] Marvin Minsky, The Society of Mind, Simon and Schuster, 1987; Heinemann & Co., 1987.
[2] The idea of supervisory agencies is discussed in section 6.4 of [1].
[3] In section 22.7 of [1] I postulate that our brains are genetically predisposed to compel us to try to assign some cause or purpose to every change – including ones that occur inside our brains. This is because the mechanisms (called trans-frames) that are used for representing change are built automatically to assign a cause by default if no explicit one is provided.
[4] This text is not the same as my informal talk at the conference. I revised it to be more consistent with the terminology in [1].


MINDS ARE SIMPLY WHAT BRAINS DO, Marvin Minsky, Massachusetts Institute of Technology

Recommended Reading: Pinker’s ‘How the Mind Works’

Steven Pinker’s How the Mind Works is an ambitious attempt to bring recent developments in cognitive science to a non-specialist audience. Philosophers’ quibbles be damned, Pinker reaches right for the brass ring: his title refers to the mind, not just to gray matters like the brain. Pinker means to do for mentality what Stephen Jay Gould does for life or Carl Sagan did for the universe.

He’s got a lot of company. There’s been a stampede lately of would-be “re-discoverers” and “rethinkers” and “explainers” of the mind and consciousness, including John Searle, the Churchlands, John Eccles, Alwyn Scott, David Chalmers, Daniel Dennett, et al. Indeed, this field is becoming so crowded it may well take Pinkeresque cheek to, as the advertisers dream, “cut through the clutter.”

How the Mind Works reads like a more broadly-focused sequel to Pinker’s fast-selling The Language Instinct (1994). That book attempted a synthesis of Chomskian generative linguistics and Darwinian natural selection – a shotgun marriage if there ever was one, since Noam Chomsky is renowned for his evolutionary agnosticism. The new book seems to have been written in a spirit of “mopping up” remaining pockets of resistance to Pinker’s view of the mind as a bunch of specialized processing “organs,” like Chomsky’s language module (add a vision module, a physics module, a sex-getting module, etc.). Two other leading proponents of this model, evolutionary psychologists Leda Cosmides and John Tooby, have famously advanced the simile of the mind being like a Swiss Army knife: an all-in-one collection of purpose-built, content-rich devices, albeit in the mind’s case designed not by the Swiss, but by natural selection. Writes Pinker:

The mind is a system of organs of computation, designed by natural selection to solve the kinds of problems our ancestors faced in their foraging way of life, in particular, understanding and outmaneuvering objects, animals, plants, and other people…The mind is organized into modules or mental organs, each with a specialized design that makes it an expert in one arena of interaction with the world. The modules’ basic logic is specified by our genetic program.

He develops these ideas into a smooth but selective confection of experimental results, reasonable-sounding argument, and trenchant criticism, leavened frequently with humor and counter-intuitive but demonstrable observations of how the mind “really” works:

When Hamlet says, “What a piece of work is man! how noble in reason! how infinite in faculty! in form and moving how express and admirable!” we should direct our awe not at Shakespeare or Mozart or Einstein or Kareem Abdul-Jabbar but at a four-year-old carrying out a request to put a toy on a shelf…I want to convince you that our minds are not animated by some godly vapor or single wonder principle. The mind, like the Apollo spacecraft, is designed to solve many engineering problems, and thus is packed with high-tech systems each contrived to overcome its own obstacles.

The dish goes down surprisingly easily – Pinker makes hundreds of pages of technospeak go by about as quickly as Tom Clancy. Perhaps this is because, like any airport thriller, Pinker’s book has clear villains. These are perpetrators of what Cosmides and Tooby call the “Standard Social Science Model”: those folks (once based in philosophical behaviorism and psychology, with certain elements inherited by current cultural anthropology and literary studies) who insist on believing that the mind is primarily a social construction, entering the world as a blank slate that gets written upon by the environment and by culture.

As Pinker everywhere argues, “learning” as commonly conceived is simply too underpowered a process to explain the complex abilities the mind acquires and performs, all largely beneath our conscious awareness. Hence those “high tech” features our minds possess as “standard equipment”; hence the special scorn Pinker pours on those who continue to press the “folklore” that language, perception, etc. are the fruits of general learning mechanisms. “…the contents of the world are not just there for the knowing,” he asserts, “but have to be grasped with suitable mental machinery.”

High-tech and mechanistic imagery aside, a clear intellectual pedigree can be traced from Pinker to Chomsky to Descartes and the decidedly unmechanistic Plato. Chomsky himself has been more forthcoming in his debt to these earlier thinkers, on occasion allowing himself to be called “neo-Cartesian.”

Indeed, so enamored is Chomsky of his “law and order” view of mental life, he has denied the legitimacy of studying real-life utterances in a “properly” scientific linguistics. Instead, the Chomskian view of language study verges on medieval scholasticism, with colleges of closeted linguists hunched over their manuscripts, musing over rules like how many prepositions can dance on the head of a noun phrase. Actual language, meanwhile, rages on unstudied outside the monastery walls.

Pinker can never quite bring himself to go this far. Sometimes, he comes close:

Systems of [mental] rules are idealizations that abstract away from complicating aspects of reality. They are never visible in pure form, but are no less real for all that…[the idealizations] are masked by the complexity and finiteness of the world and by many layers of noise…Just as friction does not refute Newton, exotic disruptions of the idealized alignment of genetics, physiology, and law do not make “mother” any fuzzier within each of these systems.

That is, in Pinker’s mind the rules are just as real as the reality, though they are abstractions. Of course, what counts as a natural “law” and what counts as distracting “noise” is never so easily resolvable: while it is true that friction doesn’t refute Newtonian notions of gravitation, the “laws” of friction are also handy for keeping airplanes in the air and braking your car. What counts as law and what counts as “noise” therefore depends on context. Unfortunately, in the Chomskian case – Pinker included – these are suspiciously often matters of authority and/or selective attention.

The Dawn of the Chuck

As in The Language Instinct, Pinker has an unfortunate habit of making issues seem resolved that aren’t. As Cosmides and Tooby themselves note, there’s a problem with using “learning” as an explanation:

Advocates of the Standard Social Science Model have believed for nearly a century that they have a solid explanation for how the social world inserts organization into the psychology of the developing individual. They maintain that structure enters from the social (and physical) world by a process of “learning” – individuals “learn” their language, they “learn” their culture, they “learn” to walk, and so on…Of course, as most cognitive scientists know (and all should), “learning”…is not an explanation for anything, but is rather a phenomenon that itself requires explanation.

Cosmides and Tooby use their critique of learning to support their nativist views. However, it also suggests that it is not learning itself that is lacking, but our conception of learning. More to the point, there’s some evidence that Skinnerian stimulus-response, rats-pressing-levers-and-running-mazes-type learning is not the only kind of learning there is, especially in infants and children. This has led some fans of general intelligence to tell the evolutionary psychologists to put away their Swiss Army knives.

From outside academe, the differences between camps of cognitive scientists look positively trifling. Most within the field agree that the mind/brain does not come “out of the box” totally unstructured, a blank slate mostly “filled” by culture. Most agree that this innate structure is mediated to some degree by natural selection. There’s even some broad agreement that cultural factors (language, for instance), if they operate long enough and consistently enough (a few thousand generations, give or take), can also act as selective pressures, helping to reshape both the mind and body of what Jared Diamond has called “the third chimpanzee.”

From within the field, the remaining arguments look bitter. People like Pinker, Cosmides and Tooby, Dan Sperber, Nicholas Humphrey, and Elizabeth Spelke see modules, modules everywhere, each as innate and as superbly adapted for its function as the pancreas or an elephant’s trunk. Meanwhile, “domain-generalists” like Jeffrey Elman, Elizabeth Bates and Anna Karmiloff-Smith turn this reasoning on its head. That is, they don’t deny modularity per se (modules are, after all, a good way to package complex neural structures in the limited volume inside the skull) but maintain that our specialized abilities emerge from our predisposition to attend to certain regularities in the world. What’s innate is not the knowledge, but the capacity to observe the regularities and learn them quickly.

For example, Pinker invites us to marvel at the “software driver” that controls the human hand:

A still more remarkable feat is controlling the hand. . . It is a single tool that manipulates objects of an astonishing range of sizes, shapes, and weights, from a log to a millet seed. . . “A common man marvels at uncommon things; a wise man marvels at the commonplace.” Keeping Confucius’ dictum in mind, let’s continue to look at commonplace human acts with the fresh eye of a robot designer seeking to duplicate them. . .

Typically, Pinker notes a complex adult capacity and wonders how to “reverse engineer” what natural selection hath wrought. He hardly considers the alternative, however: that the mind/brain is disposed to learn quickly and efficiently how to operate whatever appendage it happens to find at the end of its arm, whether it be a hand, a paw, or a flipper.

Neuroscientists have long known that when neurons fire, they not only can make muscles move and glands secrete, they also reinforce their own tendency to fire the same way in the future. (The principle is called Hebbian learning; its dictum is “Fire together, wire together.”) Conceivably, the act of using the hand reinforces the pattern of synaptic connections that control the thumb, the fingers, etc. The end result in the adult looks so well-designed and appropriate it might seem like an innate “program” for moving the hand was genetically “wired in”-but it wasn’t. We began only with a proper neural connection between hand and motor cortex, and a need to manipulate.
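The Hebbian rule described above can be sketched in a few lines of code. This is an illustrative toy under the article's own "fire together, wire together" description, not a model from any neuroscience library; the function and variable names are invented for the example.

```python
# Toy sketch of Hebbian learning ("fire together, wire together").
# Illustrative only; hebbian_step and all names here are invented
# for this example, not drawn from any neuroscience library.

def hebbian_step(weights, pre, post, eta=0.1):
    """Strengthen each synapse in proportion to pre * post co-activity."""
    return [w + eta * p * post for w, p in zip(weights, pre)]

# Two presynaptic inputs to one motor neuron. Only the first input
# repeatedly fires together with the output, so only its connection
# grows -- no "program" for the movement is wired in beforehand.
weights = [0.0, 0.0]
for _ in range(10):
    pre = [1.0, 0.0]   # the first input fires; the second is silent
    post = 1.0         # the motor neuron fires too
    weights = hebbian_step(weights, pre, post)

print(weights)  # the active synapse has strengthened; the silent one has not
```

The point of the sketch is the one the review makes: the well-tuned adult connection pattern emerges from use, not from an innate wiring diagram.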

The point is not whether a Hebbian model fully explains motor learning. The point is that “gee whiz” explanations of complex adult abilities don’t necessarily prove full-blown innateness. (Most obviously because nobody starts off as an adult!) Furthermore, in evolutionary terms, the weaker model of predisposition, the “disposed to learn” model, is a more parsimonious explanation than inborn mental modules. This is because, in the case of the hand, it doesn’t require genes to somehow code the “hook grip,” the “five-jaw chuck,” the “two-jaw pad-to-side chuck,” the “scissors grip,” and so on. It only obliges our genes to motivate us to learn.

Indeed, exactly how genes might code things like language or “social intelligence” or “natural history intelligence” has never been too clear. DNA, after all, actually regulates nothing more than protein production and other DNA. In this sense, Cosmides and Tooby’s critique of learning might also be leveled at the catch-all notion of innateness. Like learning, innateness “…is not an explanation for anything, but is rather a phenomenon that itself requires explanation.”

Toy Neurons

Perhaps the most fascinating aspect of How the Mind Works is watching Pinker wrestle with the problem of connectionism. On the one hand, the fact that experimenters have succeeded in teaching artificial neural nets to do some pretty human-like things, like recognize written letters and put English verbs into the past tense, is a vindication of one of the pillars of his model: the computational theory of mind. On the other hand, the uncanny way neural nets have of learning the regularities of input data without set rules being programmed in is a challenge to Chomsky’s “poverty of the stimulus” arguments for innate knowledge. Some connectionist nets have shown modularity of function, and even human-like cognitive deficits when experimenters simulate “injuries” by removing parts of the system. Importantly, these mind-ish qualities have emerged with learning, and were not introduced pre-formed.
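The phenomenon described here, a net extracting a regularity from its training data without the rule ever being written down, can be illustrated with a minimal perceptron. This is a generic textbook sketch, not any of the published models mentioned above; the names and parameters are chosen for the example.

```python
# A single perceptron learns the logical-AND regularity purely from
# labeled examples. No rule for AND appears anywhere in the code;
# it emerges in the connection weights during training.

def train_perceptron(samples, epochs=20, eta=0.5):
    """Classic perceptron rule: nudge weights by the prediction error."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            w = [wi + eta * err * xi for wi, xi in zip(w, x)]
            b += eta * err
    return w, b

# Input/output examples of the regularity to be learned (AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

for x, target in data:
    pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
    assert pred == target  # the net has extracted the regularity
```

AND is linearly separable, so the perceptron convergence theorem guarantees this toy settles on correct weights; the past-tense and letter-recognition nets the review mentions are much larger but learn in the same example-driven spirit.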

Pinker navigates this quandary by pure élan. After devoting 14 pages to the nature and advantages of “toy neurons” for understanding the mind, and thereby establishing his connectionist credentials, he suddenly takes to calling nets “connectoplasm” (a term clearly meant to evoke that other discredited substance, protoplasm), and asserts “neural networks alone cannot do the job” of accounting for human intelligence:

I do think that connectionism is oversold. Because networks are advertised as soft, parallel, analogical, biological and continuous, they have acquired a cuddly connotation and a diverse fan club. But neural networks don’t perform miracles, only some logical and statistical operations.

Of course, nobody thinks networks or neurons “perform miracles.” As a supporter of the computational theory of mind, Pinker must also believe that the mind/brain itself, at a certain basic level, performs “only some logical and statistical operations.” So what is he talking about?

The real sin of the “strong” version of connectionism- the argument that language, creativity, and consciousness itself are all ultimately explicable along connectionist lines- is that it resurrects the associative model of learning. Connectionist nets, after all, learn by doing, and by crudely “associating” certain patterns of inputs and outputs. Following Chomsky, Pinker prefers to imagine that a structure of cognitive rules and regulations is doing the real work of mindedness. These rules are not epiphenomenal artifacts of the learning process, or post hoc abstractions from regularities of behavior. Rather, they are ontologically “real.”

To debunk associationism, Pinker makes a list of human-like things that nets can’t do (yet). At least one of these is downright silly: Pinker claims that nets can’t distinguish individual examples of a class of things from each other. Within the connectionist paradigm, “there is no longer a way to tell apart individuals with identical properties. They are represented in one and the same way, and the system is blind to the fact that they are not the same hunk of matter.” People make such distinctions all the time; for instance, identical twins are different people, regardless of how much they look and seem alike: “The spouse of one identical twin feels no romantic attraction toward the other twin. Love locks our feelings in to another person as that person, not as a kind of person, no matter how narrow the kind.”

The fallacy here resides in the assumption that any two examples of any real-world class are really identical with each other. Distinguishing individuals has to do with noticing subtler and subtler kinds of variation. It often takes some time, for instance, for field ethologists to begin to see their subject animals as individuals- at first, they all look the same. To take a more commonplace example, my wife and I own two Himalayan cats that happen to be siblings. Despite the fact that the female is a tortoise-shell point, has a smaller head, and a completely different carriage and personality, houseguests invariably can’t distinguish her from her blue-point, big-headed, lay-about brother. My wife and I have simply had more time to make the appropriate, fine associations.

Despite their genetic identity, not even monozygotic twins are phenotypically or behaviorally identical. Indeed, the one place where such identities do exist is the abstract mathematical world that inspired Chomskian linguistics- it’s a rule, for instance, that a line segment of length X is identical to any other of length X. This is an area where Pinker’s intellectual roots are exposed, and they mislead him. (In the non-mathematical world, incidentally, people do have an uncanny knack for associating certain romantic feelings with the same “types”- the same hair, same build, same foibles. It’s no secret. Maybe Pinker just doesn’t get out much.)

Some of his other objections are more persuasive. It is indeed hard to visualize how nets can handle complex combinatorics, or alter the quantification of elements in a problem when they’re the same-but-different, or process recursively unless specifically constructed to do so. (In such cases the connection weights have a tendency to interfere with each other.) On the other hand, all of these problems have the definite air of claims once made by reputable Victorian physicists who asserted the physical impossibility of heavier-than-air flight. Above all, we should know by now that it’s not too smart to bet the farm on something(s) being technically impossible.

Pinker’s treatment of the other key concept in the book-evolution- is equally provocative. He’s clearly very much aware of the principles of, and objections to, the reigning synthesis of Darwinian natural selection and Mendelian genetics. Steering clear of the pan-adaptationism decried by Gould and Richard Lewontin, he rightly observes that not everything about an organism is necessarily adaptive: “A sane person can believe that a complex organ is an adaptation, that is, a product of natural selection, while also believing that features of an organism that are not complex organs are a product of drift or a by-product of some other adaptation.”

Gould and Lewontin once wrote of the “Panglossian paradigm”: the tendency among some evolutionary scientists to mistake how things actually happened for the optimal way things could have happened. Though Pinker disavows it, his work fits the paradigm anyway. For instance, in a discussion of whether the development of intelligent life is inevitable on any life-supporting planet, he compiles a list of the unlikely factors that “made it especially easy and worth their while [for organisms] to evolve better powers of causal reasoning.” First on the list is the primates’ fortunate dependence on the visual sense. Why? Because “Depth perception defines a three-dimensional space filled with movable solid objects…Our capacity for abstract thought has coopted the coordinate system and inventory of objects made available by a well-developed visual system.”
Compare this to other mammals, such as dogs, who rely more on olfactory information:

Rather than living in a three-dimensional coordinate space hung with movable objects, standard mammals [sic] live in a two-dimensional flatland [the ground] which they explore through a zero-dimensional peephole [the nose]…If most mammals think in a cognitive flatland, they would lack the mental models of movable solid objects in 3-D spatial and mechanical relationships that became so essential to our mental life.

Anyone who has seen an earthworm bury itself or a dog sniff his way up the trunk of a tree knows olfactory dependence is not synonymous with living in a “two-dimensional flatland.” Nor does Pinker take note of other 3-D modalities, such as echolocation in bats and cetaceans, which likewise represent a world of “solid objects in 3-D spatial and mechanical relationships.” Faced with such a poverty of imagination with respect to terrestrial creatures, it’s hard to take Pinker’s musings over the unlikelihood of extraterrestrial intelligence very seriously. What about an alien creature living in liquid methane that uses short-wave radar? Or one that lives underground and finds petrocarbon “food” by using seismic “thumps”? What’s wrong with 2-D intelligence anyway? Would a creature able to reason in 4, 8, or 1,000 dimensions be justified in denying the significance of our 3-D intelligence? Perhaps we shouldn’t give up on SETI just yet.

Pinker’s discussion falls prey to the Panglossian paradigm because he thinks a sufficient condition for human intelligence is a necessary one for all intelligence-the particular way we evolved, in other words, is established as “optimal” for invading the cognitive niche. He plays Pangloss again elsewhere, in his criticism of the idea of “meme evolution.” This is the notion, notably suggested by Richard Dawkins, that ideas, like organisms, might reproduce and evolve in the “habitat” of human brains. Sensing an opening for the cultural constructivists, Pinker tries to slam the door by asserting “When ideas are passed around, they aren’t merely copied with occasional typographical errors; they are evaluated, discussed, improved on, or rejected. Indeed, a mind that passively accepted ambient memes would be a sitting duck for exploitation by others and would have quickly been selected against.”

Try telling that to a Scientologist. Unlike in Pinker’s cognitive symposium, real people are actually very good at “passively accepting ambient memes.” It might even be adaptive to do so: Pinker himself suggests the survival benefit of not standing out, of hanging with the herd. In fact, Pinker is telling a variation on a “just so” story here, using an argument for adaptation to justify a point he asserts to be true. This is precisely what Gould and Lewontin warned against when they observed how, wrongly applied, tales of adaptation could be concocted to justify virtually any position. They note “…Since the range of adaptive stories is as wide as our minds are fertile, new stories can always be postulated.” Though Pinker professes an understanding of non-adaptationist factors in evolution, his work clearly falls into that category where, as Gould and Lewontin lament, “Constraints upon the pervasive power of natural selection are recognized…But…are usually dismissed as unimportant or else, more frustratingly, simply acknowledged and then not taken to heart and invoked.”

All of these problems might be traced to the consequences of Pinker’s primary methodology. This is the idea that we can figure out the mind/brain by “reverse engineering” it:

…psychology is engineering in reverse. Reverse engineering is what the boffins at Sony do when a new product is announced by Panasonic, or vice versa. They buy one, bring it back to the lab, take a screwdriver to it, and try to figure out what all the parts are for and how they combine to make the device work.

Up to a point, this seems like a reasonable analogy. Bodies and brains are, after all, kinds of organo-chemical mechanisms, and as Dawkins has notably observed, natural selection is “the blind watchmaker.” Why not pry the back off the timepiece of the mind and take a look?

Trouble is, human engineers and natural selection work in quite different ways. Following C.G. Langton, Daniel Dennett explains in Consciousness Explained:

…human engineers, being farsighted but blinkered, tend to find their designs thwarted by unforeseen side effects and interactions, so they try to guard against them by giving each element in the system a single function, and insulating it from all the other elements. In contrast, Mother Nature…is famously myopic and lacking in goals. Since she doesn’t foresee at all, she has no way of worrying about unforeseen side effects. Not “trying” to avoid them, she tries out designs in which many side effects occur…[and] every now and then there is a serendipitous side effect: two or more unrelated functional systems interact to produce a bonus: multiple functions for single elements.

The difference in how human engineers and the natural one build mechanisms entails more than the obvious fact that organisms self-organize (they grow) and machines get built. It affects every stage of the “design” process. When some capacity evolves in nature (say, flight), Darwinian selection doesn’t start out with a dream and a blank piece of paper- it starts out with an existing, functional organism. If the Wright Brothers had worked this way, they wouldn’t have designed a new machine from scratch. Instead, they would have gradually “retrofitted” some existing vehicle, like a horseless carriage. The resulting “flying flivver” might have taken much longer to realize than a purpose-built flyer; it might have suffered many more failed test flights until it achieved a sustained glide, then powered flight; it might have taken longer to get the heavy weight of the car down and the wingspan just right. In any case, aeronautical history would have been quite different.

All of which goes to show the problem with “reverse engineering” natural mechanisms: you can never be sure a widget was designed for some function, only that it presently serves that function. In the case of the “flying flivver,” it would be useless to wonder how the fenders and the bumper help the car fly better. Those features have to do with the history of the structure, not its present function.

Of course, Pinker and every informed adaptationist knows all this. Furthermore, they would argue that certain essential features (like the wings) are so directly necessary to the evolved function that we must invoke adaptation. All true enough. But this is not the same as saying the human mind is “like the Apollo spacecraft…packed with high tech systems, each contrived to overcome its own obstacles.” As Langton argues, each system may well overcome several obstacles, and it pays not to be too categorical in assigning roles to each widget. If I were asked whether the brain is more like the Apollo spacecraft or more like a petunia, I’d have to confess I’m not sure.

The Nature of Nature

Pinker is a master rhetorician. When he is on firm ground, he’s a superbly articulate popularizer. When he isn’t, he spins beautifully, exploits what he can, and knows when to beat a tactical retreat. His wit can disarm criticism.

All of which makes it surprising when his sense of humor deserts him and he reverts to dull partisanship. The ceaseless drumbeat of distortion and belittlement of social scientists is one such puzzling element of his book. These people, we learn, are too dense to understand the problem with Lamarckianism; they’re wrong, wrong, wrong about associationism; they insist on believing in “folklore” about the mind because they’re either bent on “feel good” politics or distracted by moral straw-men like genetic determinism.

If cultural anthropologists agree on any human universal, it is the tendency of all cultures to justify their own cultural constructions by “naturalizing” them. As Cosmides and Tooby argue and Pinker agrees, this has led many anthropologists either to deny any “human nature” exists, or to declare the search for universals as unavoidably an exercise in Western ethnocentrism.

Yet human beings did have an origin, and do have some sort of nature. Dread or misunderstanding of these facts has too often resulted in an incurious particularism that prefers to celebrate, not to explain, difference. If anthropology is traditionally a boat powered by two oars-the study of difference and the study of commonality amongst peoples-then the modern discipline has an empty oarlock and is rowing in circles.

But none of this is to say that “naturalization” doesn’t happen, especially among thinkers who profess totalizing theories. When Pinker is spinning his synthesis with respect to stereoscopic vision and incest avoidance, he talks a good game. But when we are expected to believe that, for instance, most peoples’ taste in landscapes is a feature of Cosmides and Tooby’s Swiss Army Knife, he strays into the full-blown ridiculous. He argues, for instance, that we exhibit a “default habitat preference” for savannas-according to certain cross-cultural surveys, everybody likes “semi-open space…even ground cover, views to the horizon, large trees, changes in elevation, and multiple paths leading out…” Though the very idea that we evolved in savannas is fiercely debated, Pinker conclusively declares “No one likes the deserts and the rainforests.” (Color me weird, then.) Nor does Pinker shy from drawing the logical aesthetic conclusions from this bit of human standard equipment-“…we are designed to be dissatisfied by bleak, featureless scenes and attracted to colorful, patterned ones.” There, I knew there was a reason I prefer Henri Rousseau to Georgia O’Keeffe.

This is naturalizing. Based on such arguments, and observations of the range of human variation, anthropologists et al. may still have quite defensible reservations about importing whole disciplinary paradigms like that of cognitive science into anthropology, history, linguistics, etc. As Pinker himself suggests, it is quite reasonable for people- and that does include social scientists- not to “passively accept ambient memes.”


How the Mind Works, Steven Pinker, W.W. Norton, 565 pages
This review by Nick Nicastro

Morality & Neuroscience

An Ravelingien reports on the conference ‘Double standards. Towards an integration of evolutionary and neurological perspectives on human morality.’ (Ghent University, 21-22 Oct. 2006)

In Love in the Ruins, Walker Percy tells the story of Tom More, the inventor of the extraordinary ‘ontological lapsometer’ [1]. The lapsometer is a diagnostic tool, a ‘stethoscope of the human soul’. Just as a stethoscope or an EEG can trace certain physical dysfunctions, the lapsometer can measure the frailties of the human mind. The device can measure ‘how deep the soul has fallen’ and allows for early diagnoses of potential suicides, paranoia, depression, or other mood disorders. Bioethicist Carl Elliott refers to this novel to illustrate a well-known debate within psychiatry [2]. According to Elliott, the image of the physician who uses the lapsometer to unravel the mysteries of the soul is a comically desperate attempt to objectify experiences that cannot accommodate such scientific analysis. His objection harks back to the conflict between a sociological perspective – which would stress the subjective experiences related to the cultural and social context of human psychology – and a biological perspective – which would rather determine the physiological causes of mental and mood dysfunction. It is very likely that debate about the subjective and indefinite nature of some experiences will climax when empirical science is applied to trace and explain the biology of our moral sentiments and convictions. For most of us, I presume, nothing would appear to be more inextricably a part of our personal experience and merit than our moral competence. The conference ‘Double Standards’ questioned this intuition and demonstrated that the concept of ‘morality’ is becoming more and more tangible.

Jan Verplaetse and Johan Braeckman, the organizers of the conference, gathered 13 reputable experts and more than 150 participants to ponder one of the oldest and most fundamental philosophical questions: how did morality come into existence? For this, they drew upon two different scientific approaches: evolutionary psychology and neuroscience. In theory, these disciplines are complementary. Neuroscientists assume that morality is generated by specific neural mechanisms and structures, which they hope to find by way of sophisticated brain imaging techniques. Evolutionary scientists, by contrast, want to figure out what the adaptive value of morality is for it to have evolved. According to them, morality is – like all aspects of our human nature – a product of evolution through selection. Moral and social behavior must have had a selective advantage, from which the relevant cognitive and emotional functions developed. Through an interdisciplinary approach, the alleged functions can direct the neuroscientist in searching for the neurological structures that underlie them. Or, the other way around, the imaging of certain neural circuits should help to discover whether and to what extent our moral intuitions are indeed embedded in our ‘nature.’ During the conference, this double perspective gave rise to several interesting hypotheses.

It appears that neuroscientists have already achieved remarkably uniform results regarding the crucial brain areas that are involved in fulfilling moral tasks. Jorge Moll was the first to use functional MRI studies to show that three major areas are engaged in moral decision making: the frontal lobes, the temporal lobe, and limbic-paralimbic areas. Other speakers at the conference confirmed this overlapping pattern of neural activity, regardless of differences in the ways in which moral stimuli were presented, and regardless of the specific content of the moral tasks (whether the tasks consisted of complex dilemmas, simple scenarios with an emotional undertone, or references to violence and bodily harm). Since these findings, several researchers have started looking for the biological basis of more specific moral intuitions. Jean Decety, for instance, has found the neural correlates that play a role in the cognitive modulation of empathy. fMRI studies are also being used to compare ‘normal’ individuals with people who show deviant (and in particular criminal/immoral) behavior and to thereby derive new explanations of such atypical behavior. As such, James Blair suggested that individuals with psychopathy have problems with learned emotional responses to negative stimuli. According to him, the common neural circuit activated in moral decision making is in a more general sense involved in a rudimentary form of stimulus reinforcement learning. At least one form of morality is developed by such reinforcement learning: what Blair calls care-based morality. Contrary to psychopathic individuals, even very young children realize that there is an important difference between, for instance, the care-based norm ‘do not hit another child’ and the convention-based norm ‘do not talk during class’. In the absence of a clear rule, ‘normal’ individuals will be more easily inclined to transgress social conventions than care-based norms.
The reason for this, he proposed, is that transgression of care-based norms confronts us with the suffering of our victim(s). The observation of others in pain, sadness, or anger immediately evokes a negative response, an aversion, in the self, from which we learn to avoid situations with similar stimuli. Blair offered brain images of psychopathic individuals that showed evidence of reduced brain activity in those parts of the brain that are involved in stimulus reinforcement (the ventromedial prefrontal cortex and the amygdala). Adrian Raine gave an entirely different perspective on ‘immoral behavior,’ suggesting that certain deviances in the prefrontal cortex point to a predisposition towards antisocial behavior. According to Raine, immoral behavior need not be a dysfunction of normal neural circuits; evolution may just as well have shaped the brain to have a predisposition for immoral rather than moral behavior. Antisocial behavior may have a selective advantage: it can be a very effective means of taking others’ resources. As such, the expression of sham emotions (such as faked shame or remorse) can be interpreted as a strategy to mislead others into thinking that the antisocial individual has corrected his or her behavior. Raine finds support for his hypothesis in indications of a strong genetic basis for antisocial behavior. He also offered brain imaging results that show an 11% reduction in prefrontal grey matter in antisocial individuals and reduced activity in the prefrontal cortex of affective murderers.

Will we one day be able to evaluate ‘how deep someone’s morality has fallen’? Will there be a ‘stethoscope of morality’ that can measure the weaknesses of our moral judgments and behaviors? If so, will we be able to cure immoral behavior? Or, conversely, will we be able to augment the brain processes that are involved in our moral competence? Perhaps most importantly, what do we do with the notion of moral responsibility when there is evidence of predispositions towards antisocial behavior? Although there is still a long way to go in understanding the neurobiology of human morality, this conference was an important step in introducing some moral dilemmas that may confront us as the field of research progresses. More information can be found at www.themoralbrain.be.

1. Percy W (1971), Love in the Ruins, Farrar, Straus & Giroux, New York.

2. Elliott C (1999), Bioethics, Culture and Identity. A Philosophical Disease, Routledge, London.


An Ravelingien Ph.D. is a fellow of the IEET, and an assistant researcher in bioethics at the Department of Philosophy, Ghent University.

Manipulating your mind

What will science discover about our brains, and how are we going to deal with it?

The Decade of the Brain, proclaimed by US President George Bush in 1990, passed without making much of an obvious impact. But it did in fact produce considerable scientific advances in neurobiology, giving scientists an exponentially increasing knowledge of how the brain works and the means to manipulate biochemical processes within and between nerve cells. This knowledge is slowly trickling down to society as well, be it in the pharmaceutical industry, to parents concerned about their child’s performance in school, to students looking for chemical helpers to pass their exams, or to military researchers who have an obvious interest in keeping soldiers awake and alert.

“Unlike the many claimed applications of genetics… diagnostic and therapeutic products from neurobiological research are already available”

The ability to fiddle with the brain with ever-increasing effectiveness has also created critical questions about how to use this knowledge. Francis Fukuyama, in Our Posthuman Future, Leon Kass, Chairman of the US President’s Council on Bioethics, and Steven Rose, a neurobiologist at the Open University, UK, are the most prominent and outspoken critics of the use of psychopharmaceuticals and other neurological techniques to analyse and interfere with human mental capabilities. Their concerns have also grasped the attention of neurobiologists, ethicists, philosophers and the lay public, who are all slowly realising the enormous potential of modern neuroscience. “People closely identify themselves with their brains, they don’t with their genes,” said Arthur L. Caplan, Professor of Bioethics at the University of Pennsylvania, Philadelphia, PA, USA.

Although these debates started in the late 1990s, it took the general public a bit longer to take notice—The New York Times and The Economist did not pick up on the issue until 2002. “There is a great amount of information about the brain but no one’s paying attention to the ethics,” Caplan said. “The attention of ethicists went to genetics because of the Human Genome Project…so we had to jump-start the ethics [in neurobiology].” But that is rapidly changing. Unlike the many claimed applications of genetics, such as gene therapy or molecular medicine, diagnostic and therapeutic products from neurobiological research are already available. Caplan sees four major controversial areas: the definition and diagnosis of certain types of behaviour, such as aggression, terrorism or poor performance in school; the use of drugs to alter such behaviour; questions about moral responsibility—with people going to court and saying ‘this man isn’t responsible because his brain is abnormal’; and eventually new debates about racial and gender differences.

These controversies are not just anticipated: most are already occurring. Society’s pursuit of perfection entails ‘treating’ whatever is not desirable—be it bad mood, aggression or forgetfulness. Many people take herbal memory enhancers, such as ginkgo biloba, even though they are probably no more effective than sugar or coffee. But neurobiology adds a new twist. By understanding the brain’s workings at the chemical level, it paves the way for much more efficient ways to tweak brain function. And many psychopharmaceuticals already enjoy a much broader popularity beyond treating neurological and psychiatric diseases. “When you think of the millions of pills that people take as anti-anxiety drugs, how many of these people are really anxious? Probably just a small percentage,” said James L. McGaugh, Director of the Center for the Neurobiology of Learning and Memory at the University of California, Irvine, CA, USA. Millions of school children in the USA are prescribed antipsychotic drugs or are treated for depression and attention deficit and hyperactivity disorder (ADHD), and the numbers in Western Europe are also increasing (Brower, 2003). There is an epidemic of new behavioural disorders: ADHD, seasonal affective disorder (SAD), post-traumatic stress disorder (PTSD), panic disorder (PD), narcissistic personality disorder (NPD), borderline personality disorder (BPD), antisocial personality disorder (APD), histrionic personality disorder (HPD)—soon we will run out of letter combinations to abbreviate them all. The explosive increase in prescriptions for Ritalin® for school children has already prompted questions about the apparent epidemic of ADHD. “Now it’s not that Ritalin is not effective in sedating an over-active kid, it certainly is, but it’s turning a complex social relationship into a problem inside the brain of a child and therefore inside the genes of a child,” said Rose (see interview, in this issue).

In a way, Ritalin is neuroethics “in a nutshell”, commented Wrye Sententia, co-director of the Center for Cognitive Liberty and Ethics (CCLE), a non-profit education, law and policy center in Davis, CA, USA, and head of its programme on neuroethics. The debate over the drug covers social, ethical and legal issues: who defines behaviour and behavioural disorder, who should control treatment, how should society react to drug misuse, and is it ethical to use drugs to gain an advantage over others? These are valid questions that apply equally to neuroethics in general.

Neuropharmaceuticals have already found applications outside a medical setting. Like amphetamines before it, Ritalin is increasingly used by healthy people to help them focus their attention. Similarly, the development of new drugs to influence the biochemistry of brain function also has broad economic potential outside the medical setting. Most memory-enhancing drugs available to treat Alzheimer’s, such as donepezil, galantamine or rivastigmine, inhibit cholinesterase to slow down the turnover of the neurotransmitter acetylcholine in the synapse. New drugs in the development pipeline will act on other compounds in the biochemical pathway that encodes memory: Cortex Pharmaceuticals (Irvine, CA, USA) are studying compounds called Ampakines®, which act on the AMPA receptor. This receptor responds to glutamate, which is itself involved in memory acquisition. Another class of drugs under development acts on the cAMP responsive element-binding protein (CREB), the last step in establishing long-term memory. “What we would expect is that drugs that enhance CREB signalling would be specific to inducing long-term memory and not affect upstream events of memory, such as memory acquisition and short-term memory,” explained Tim Tully, Professor at Cold Spring Harbor Laboratory (NY, USA) and founder of Helicon Therapeutics (Farmingdale, NY, USA), one of two companies now working on drugs to increase CREB function.

None of these drugs, however, tackles brain degeneration itself, the cause of Alzheimer’s and other neurodegenerative diseases; instead, they delay the disease by squeezing a little more out of the remaining brain material. Consequently, they will also work on healthy people. Not surprisingly, the pharmaceutical industry has a great interest in this non-medical use of memory-enhancing drugs, according to McGaugh: “The Alzheimer market is a very important one, but small. The real market is everyone else out there who would like to learn a little easier. So they take a pill in place of studying harder.” Tully warned about the dangers of this off-label use of memory enhancers. The side effects of the first generation of memory drugs are a risk that should not be taken when there is no medical reason to do so, he said. And this may never become an application, due to other intrinsic side effects. “Maybe it is not a good thing to have memory enhanced chronically every day for the rest of your life. Maybe that will produce psychological side effects, like cramp your head with too many things you can’t forget,” Tully said.


Although memory is important, so too is the ability to forget negative experiences. As long-term memory is largely enhanced by stress hormones and emotional arousal, a horrendous event can overload the system and lead to PTSD: patients persistently re-experience the trauma. Researchers at Harvard University are now studying propranolol, a beta-blocker commonly used as a cardiac drug, as a means to decrease PTSD. Similarly, Helicon Therapeutics is working on CREB suppressors to achieve the same goal: forgetting unwanted memories. These drugs could be valuable for rape victims, survivors of terrorist attacks or young soldiers suffering from PTSD as a result of battlefield experiences. Nevertheless, an ethical debate over memory suppressors has emerged. Kass has described them as the “morning-after pill for just about anything that produces regret, remorse, pain or guilt” (Baard, 2003). But “if the soldier should be shot in the leg, he is treated. They mend the wounds. Now why wouldn’t they mend the mental wounds? On what moral grounds?” countered McGaugh. “We need the right regulations and we need the right education of society so that the social acceptance of how to use such drugs is appropriate,” said Tully. “Just to give the drug to every soldier that has been out in the field, that would be an abuse… A commander-in-chief, one would hope, would decide against such a use based on his education and on his advisors telling him scientists and experts have discussed this issue and it’s immoral to do something like that.”


Cognitive enhancement is of just as much military interest as the treatment of PTSD. German fighter pilots in World War II took amphetamines to stay alert during British bombing raids at night. During the war against Iraq, US fighter and bomber pilots used drugs to keep awake during the long flights to and from their targets, which with briefing and debriefing could easily exceed 24 hours. Not surprisingly, the US Air Force is carrying out research on how donepezil could improve pilots’ performance. The strong military interest in psychopharmaceuticals also presents another conundrum: if the military allows their off-label use, it would be hard to call for a ban on their civil use, as Kass has suggested.

Neurological advances are not limited to new drugs. Brain imaging techniques, such as functional magnetic resonance imaging (fMRI) or positron emission tomography (PET), offer enormous potential for analysing higher behaviour. While neurologists originally used them to analyse basic sensory, motor and cognitive processes, they are now increasingly being used by psychologists and philosophers to investigate the mechanics of social and moral attitudes, reasoning and moral perceptions (Illes et al, 2003). Joshua Greene, a graduate student at Princeton University’s Center for the Study of Brain, Mind and Behavior, put his human subjects into an fMRI scanner and presented them with hypothetical scenarios in which they had to choose between two more or less bad outcomes (Greene et al, 2001). The results of the studies show how the brain weighs emotional and rational reasoning against each other in its decision-making. Potentially, this could be used as a sophisticated lie detector to see if someone answers a question spontaneously or after considerable reasoning. Other studies showed that the brain reacts differently at first sight when seeing a person of the same or a different skin colour (Hart et al, 2000; Phelps et al, 2000). That does not necessarily mean that everyone is a racist, but refinement of such methods could unveil personal prejudices or preferences. The use of brain scans to evaluate people’s talents or dispositions will therefore draw as much interest as the drugs used to manipulate them. “Parents will be falling over themselves to take these tests,” Caplan said. In contrast to Kass and other conservative critics, he therefore argues that regulation will not make sense but that it should be left to the individual to make decisions about whether to undergo diagnostic tests for behaviour or take behaviour-modifying drugs.
“Medicine, business and the public will have to negotiate these boundaries,” Caplan said, but he remains worried that “peer pressure and advertising and marketing will make us take those pills.” Rose also does not call for a ban, but wants society to take control of these new advances and their applications, based on democratic decisions.

The use of these new tests and drugs may cause another problem. Going back to Ritalin, Sententia explained that an important reason for the apparent increase in ADHD may be overcrowded classrooms and overworked teachers, who are quick to label a child with ADHD rather than call for improvements in the school. “From the top down there is a clear message to put these kids on drugs,” Sententia said. Society should instead “put the parents’ rights back into focus” and better educate parents about behavioural disorders. This would give them more freedom to make their own decisions for their child “so they are not at the mercy of doctors or teachers,” she continued. Such “cognitive liberty”, as Sententia described it, would have to rest on better public education and understanding about the risks and benefits, the potentials and myths of neurobiology. “What I think we need to do in the next five or ten years is discuss exactly what is appropriate and inappropriate in applying these things,” said Tully. “Now is the time for education.”

This does not, however, solve the question of who controls diagnostic tools and treatment in the case of people who are not free or able to make their own decisions—such as children, prison inmates or psychiatric patients. CCLE, for instance, filed an amicus curiae (‘friend of the court’) brief to the US Supreme Court on behalf of Charles T. Sell, to argue against a court order requiring Sell to be injected with psychotropic drugs to make him mentally competent to stand trial for insurance fraud. Sententia sees some limitations, however, to cognitive freedom. Children do not enjoy the same civil rights as adults, but it should be the parents—not teachers or schools—who make the decisions about the diagnosis and treatment of their children, she said. Prison inmates also lose some of their individual rights when they are convicted, Sententia continued, and this may include their right to refuse medication. “The legal system will have to decide how to use this knowledge about the brain,” Caplan commented, in light of the “tremendous tension between brain privacy and social interest in controlling dangerous behaviour.” Sententia therefore stressed that all decisions about diagnosis and treatment must at least be in accordance with the US Constitution and the United Nations Declaration of Human Rights.

Some of the most important applications of this right to privacy concern using brain scans as a sophisticated lie detector for prisoners seeking parole, foreigners applying for a visa or employers testing their employees’ honesty. “What and how you think should be private,” Sententia said, because “freedom of thought is situated at the core of what it means to be a free person.” Caplan also expects more pressure from society in future to make sure that no such tests are performed without informed consent.


Equally, Caplan, Sententia and others believe that individuals should be free to use neurological technology to enhance their mental abilities outside a medical setting. This is in contrast to the prohibitive stance taken by Kass and other conservatives who argue that it would be neither ‘natural’ nor fair to those who choose not to use such enhancement. “It’s not clear to me that all forms of enhancement are bad,” commented Adina Roskies, a neuroscientist and philosopher at the Massachusetts Institute of Technology’s Department of Linguistics and Philosophy (Cambridge, MA, USA). “There are all sorts of things that we do today that enhance our life prospects and that are not considered to be bad. … We’re far away from the ‘natural’ order already.” Thus, in some cases, instead of controlling or even restricting these new possibilities, it would be better for society to focus on trying to ensure that everyone has access to them, she continued. Given the increasing interest that the public is showing in the new possibilities offered by neuroscience, it may be too late for restrictions anyway. “There is no way of stopping this tide, the genie is out of the bottle,” Sententia said, “so the question is: how can we navigate this sea of change?”


  1. Baard E (2003) The guilt-free soldier. The Village Voice, Jan 22
  2. Brower V (2003) Analyse this. EMBO Rep 4: 1022–1024
  3. Greene JD, Sommerville RB, Nystrom LE, Darley JM, Cohen JD (2001) An fMRI investigation of emotional engagement in moral judgement. Science 293: 2105–2108
  4. Hart A, Whalen P, McInerney S, Fischer H, Rauch S (2000) Differential response in the human amygdala to racial outgroup versus ingroup stimuli. Neuroreport 11: 2351–2355
  5. Illes J, Kirschen MP, Gabrieli JDE (2003) From neuroimaging to neuroethics. Nat Neurosci 6: 205
  6. Phelps EA, O’Connor KJ, Cunningham WA, Funayama ES, Gatenby JC, Gore JC, Banaji MR (2000) Performance on indirect measures of race evaluation predicts amygdala activation. J Cogn Neurosci 12: 729–738

Manipulating your mind – What will science discover about our brains, and how are we going to deal with it? Holger Breithaupt & Katrin Weigmann, EMBO reports 5, 3, 230–232 (2004)


Consciousness and Neuroscience


“When all’s said and done, more is said than done.” — Anon.

The main purposes of this review are to set out for neuroscientists one possible approach to the problem of consciousness and to describe the relevant ongoing experimental work. We have not attempted an exhaustive review of other approaches.

Clearing The Ground

We assume that when people talk about “consciousness,” there is something to be explained. While most neuroscientists acknowledge that consciousness exists, and that at present it is something of a mystery, most of them do not attempt to study it, mainly for one of two reasons:

  1. They consider it to be a philosophical problem, and so best left to philosophers.
  2. They concede that it is a scientific problem, but think it is premature to study it now.

We have taken exactly the opposite point of view. We think that most of the philosophical aspects of the problem should, for the moment, be left on one side, and that the time to start the scientific attack is now.

We can state bluntly the major question that neuroscience must first answer: It is probable that at any moment some active neuronal processes in your head correlate with consciousness, while others do not; what is the difference between them? In particular, are the neurons involved of any particular neuronal type? What is special (if anything) about their connections? And what is special (if anything) about their way of firing? The neuronal correlates of consciousness are often referred to as the NCC. Whenever some information is represented in the NCC it is represented in consciousness.

In approaching the problem, we made the tentative assumption (Crick and Koch, 1990) that all the different aspects of consciousness (for example, pain, visual awareness, self-consciousness, and so on) employ a basic common mechanism or perhaps a few such mechanisms. If one could understand the mechanism for one aspect, then, we hope, we would have gone most of the way towards understanding them all.

We made the personal decision (Crick and Koch, 1990) that several topics should be set aside or merely stated without further discussion, for experience had shown us that otherwise valuable time can be wasted arguing about them without coming to any conclusion.

(1) Everyone has a rough idea of what is meant by being conscious. For now, it is better to avoid a precise definition of consciousness because of the dangers of premature definition. Until the problem is understood much better, any attempt at a formal definition is likely to be either misleading or overly restrictive, or both. If this seems evasive, try defining the word “gene.” So much is now known about genes that any simple definition is likely to be inadequate. How much more difficult, then, to define a biological term when rather little is known about it.

(2) It is plausible that some species of animals — in particular the higher mammals — possess some of the essential features of consciousness, but not necessarily all. For this reason, appropriate experiments on such animals may be relevant to finding the mechanisms underlying consciousness. It follows that a language system (of the type found in humans) is not essential for consciousness — that is, one can have the key features of consciousness without language. This is not to say that language does not enrich consciousness considerably.

(3) It is not profitable at this stage to argue about whether simpler animals (such as octopus, fruit flies, nematodes) or even plants are conscious (Nagel, 1997). It is probable, however, that consciousness correlates to some extent with the degree of complexity of any nervous system. When one clearly understands, both in detail and in principle, what consciousness involves in humans, then will be the time to consider the problem of consciousness in much simpler animals. For the same reason, we won’t ask whether some parts of our nervous system have a special, isolated, consciousness of their own. If you say, “Of course my spinal cord is conscious but it’s not telling me,” we are not, at this stage, going to spend time arguing with you about it. Nor will we spend time discussing whether a digital computer could be conscious.

(4) There are many forms of consciousness, such as those associated with seeing, thinking, emotion, pain, and so on. Self-consciousness — that is, the self-referential aspect of consciousness — is probably a special case of consciousness. In our view, it is better left to one side for the moment, especially as it would be difficult to study self-consciousness in a monkey. Various rather unusual states, such as the hypnotic state, lucid dreaming, and sleep walking, will not be considered here, since they do not seem to us to have special features that would make them experimentally advantageous.

Visual Consciousness

How can one approach consciousness in a scientific manner? Consciousness takes many forms, but for an initial scientific attack it usually pays to concentrate on the form that appears easiest to study. We chose visual consciousness rather than other forms, because humans are very visual animals and our visual percepts are especially vivid and rich in information. In addition, the visual input is often highly structured yet easy to control.

The visual system has another advantage. There are many experiments that, for ethical reasons, cannot be done on humans but can be done on animals. Fortunately, the visual system of primates appears fairly similar to our own (Tootell et al., 1996), and many experiments on vision have already been done on animals such as the macaque monkey.

This choice of the visual system is a personal one. Other neuroscientists might prefer one of the other sensory systems. It is, of course, important to work on alert animals. Very light anesthesia may not make much difference to the response of neurons in macaque V1, but it certainly does to neurons in cortical areas like V4 or IT (inferotemporal).

Why Are We Conscious?

We have suggested (Crick and Koch, 1995a) that the biological usefulness of visual consciousness in humans is to produce the best current interpretation of the visual scene in the light of past experience, either of ourselves or of our ancestors (embodied in our genes), and to make this interpretation directly available, for a sufficient time, to the parts of the brain that contemplate and plan voluntary motor output, of one sort or another, including speech.

Philosophers, in their carefree way, have invented a creature they call a “zombie,” who is supposed to act just as normal people do but to be completely unconscious (Chalmers, 1995). This seems to us to be an untenable scientific idea, but there is now suggestive evidence that part of the brain does behave like a zombie. That is, in some cases, a person uses the current visual input to produce a relevant motor output, without being able to say what was seen. Milner and Goodale (1995) point out that a frog has at least two independent systems for action, as shown by Ingle (1973). These may well be unconscious. One is used by the frog to snap at small, prey-like objects, and the other for jumping away from large, looming discs. Why does not our brain consist simply of a series of such specialized zombie systems?

We suggest that such an arrangement is inefficient when very many such systems are required. Better to produce a single but complex representation and make it available for a sufficient time to the parts of the brain that make a choice among many different but possible plans for action. This, in our view, is what seeing is about. As pointed out to us by Ramachandran and Hirstein (1997), it is sensible to have a single conscious interpretation of the visual scene, in order to eliminate hesitation.

Milner and Goodale (1995) suggest that in primates there are two systems, which we shall call the on-line system and the seeing system. The latter is conscious, while the former, acting more rapidly, is not. The general characteristics of these two systems and some of the experimental evidence for them are outlined below in the section on the on-line system. There is anecdotal evidence from sports. It is often stated that a trained tennis player reacting to a fast serve has no time to see the ball; the seeing comes afterwards. In a similar way, a sprinter is believed to start to run before he consciously hears the starting pistol.

The Nature of the Visual Representation

We have argued elsewhere (Crick and Koch, 1995a) that to be aware of an object or event, the brain has to construct a multilevel, explicit, symbolic interpretation of part of the visual scene. By multilevel, we mean, in psychological terms, different levels such as those that correspond, for example, to lines or eyes or faces. In neurological terms, we mean, loosely, the different levels in the visual hierarchy (Felleman and Van Essen, 1991).

The important idea is that the representation should be explicit. We have had some difficulty getting this idea across (Crick and Koch, 1995a). By an explicit representation, we mean a smallish group of neurons which employ coarse coding, as it is called (Ballard et al., 1983), to represent some aspect of the visual scene. In the case of a particular face, all of these neurons can fire to somewhat face-like objects (Young and Yamane, 1992). We postulate that one set of such neurons will be all of one type (say, one type of pyramidal cell in one particular layer or sublayer of cortex), will probably be fairly close together, and will all project to roughly the same place. If all such groups of neurons (there may be several of them, stacked one above the other) were destroyed, then the person would not see a face, though he or she might be able to see the parts of a face, such as the eyes, the nose, the mouth, etc. There may be other places in the brain that explicitly represent other aspects of a face, such as the emotion the face is expressing (Adolphs et al., 1994).
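The flavour of coarse coding can be conveyed with a toy numerical sketch (my own illustration, under assumed tuning curves and a made-up feature; it is not the authors' model). A small population of broadly tuned units, each firing somewhat to stimuli near its preferred value, jointly pins down a stimulus feature far more precisely than any single unit does:

```python
import numpy as np

# Hypothetical population: 9 units with Gaussian tuning over some visual
# feature (say, face orientation in degrees); the broad width makes the
# coding "coarse" — each unit also fires to somewhat similar stimuli.
preferred = np.linspace(-60, 60, 9)   # preferred orientations, step 15
width = 30.0                          # broad tuning, wider than the spacing

def responses(stimulus):
    """Graded response of every unit to one stimulus value."""
    return np.exp(-0.5 * ((stimulus - preferred) / width) ** 2)

def decode(r):
    """Read the stimulus back out as the response-weighted mean preference."""
    return float(np.sum(r * preferred) / np.sum(r))

r = responses(10.0)
print(round(decode(r), 1))  # near the true 10 (the grid edge biases it to ~8.5)
```

Note that 10.0 lies between the units' preferred values of 0 and 15, yet the population as a whole localizes it; destroying the whole group, as in the text, would abolish the representation of that feature.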

Notice that while the information needed to represent a face is contained in the firing of the ganglion cells in the retina, there is, in our terms, no explicit representation of the face there.

How many neurons are there likely to be in such a group? This is not yet known, but we would guess that the number to represent one aspect is likely to be closer to 100-1,000 than to 10,000-1,000,000.

A representation of an object or an event will usually consist of representations of many of the relevant aspects of it, and these are likely to be distributed, to some degree, over different parts of the visual system. How these different representations are bound together is known as the binding problem (von der Malsburg, 1995).

Much neural activity is usually needed for the brain to construct a representation. Most of this is probably unconscious. It may prove useful to consider this unconscious activity as the computations needed to find the best interpretation, while the interpretation itself may be considered to be the results of these computations, only some of which we are then conscious of. To judge from our perception, the results probably have something of a winner-take-all character.

As a working hypothesis we have assumed that only some types of specific neurons will express the NCC. It is already known (see the discussion under “Bistable Percepts”) that the firing of many cortical cells does not correspond to what the animal is currently seeing. An alternative possibility is that the NCC is necessarily global (Greenfield, 1995). In one extreme form this would mean that, at one time or another, any neuron in cortex and associated structures could express the NCC. At this point, we feel it more fruitful to explore the simpler hypothesis — that only particular types of neurons express the NCC — before pursuing the more global hypothesis. It would be a pity to miss the simpler one if it were true. As a rough analogy, consider a typical mammalian cell. The way its complex behavior is controlled and influenced by its genes could be considered to be largely global, but its genetic instructions are localized, and coded in a relatively straightforward manner.

Where is the Visual Representation?

The conscious visual representation is likely to be distributed over more than one area of the cerebral cortex and possibly over certain subcortical structures as well. We have argued (Crick and Koch, 1995a) that in primates, contrary to most received opinion, it is not located in cortical area V1 (also called the striate cortex or area 17). Some of the experimental evidence in support of this hypothesis is outlined below. This is not to say that what goes on in V1 is not important, and indeed may be crucial, for most forms of vivid visual awareness. What we suggest is that the neural activity there is not directly correlated with what is seen.

We have also wondered (Crick, 1994) whether the visual representation is largely confined to certain neurons in the lower cortical layers (layers 5 and 6). This hypothesis is still very speculative.

What is Essential for Visual Consciousness?

The term “visual consciousness” almost certainly covers a variety of processes. When one is actually looking at a visual scene, the experience is very vivid. This should be contrasted with the much less vivid and less detailed visual images produced by trying to remember the same scene. (A vivid recollection is usually called a hallucination.) We are concerned here mainly with the normal vivid experience. (It is possible that our dimmer visual recollections are mainly due to the back pathways in the visual hierarchy acting on the random activity in the earlier stages of the system.)

Some form of very short-term memory seems almost essential for consciousness, but this memory may be very transient, lasting for only a fraction of a second. Edelman (1989) has used the striking phrase, “the remembered present,” to make this point. The existence of iconic memory, as it is called, is well-established experimentally (Coltheart, 1983; Gegenfurtner and Sperling, 1993).

Psychophysical evidence for short-term memory (Potter, 1976; Subramaniam et al., 1997) suggests that if we do not pay attention to some part or aspect of the visual scene, our memory of it is very transient and can be overwritten (masked) by the following visual stimulus. This probably explains many of our fleeting memories when we drive a car over a familiar route. If we do pay attention (e.g., a child running in front of the car) our recollection of this can be longer lasting.

Our impression that at any moment we see all of a visual scene very clearly and in great detail is illusory, partly due to ever-present eye movements and partly due to our ability to use the scene itself as a readily available form of memory, since in most circumstances the scene usually changes rather little over a short span of time (O’Regan, 1992).

Although working memory (Baddeley, 1992; Goldman-Rakic, 1995) expands the time frame of consciousness, it is not obvious that it is essential for consciousness. It seems to us that working memory is a mechanism for bringing an item, or a small sequence of items, into vivid consciousness, by speech, or silent speech, for example. In a similar way, the episodic memory enabled by the hippocampal system (Zola-Morgan and Squire, 1993) is not essential for consciousness, though a person without it is severely handicapped.

Consciousness, then, is enriched by visual attention, though attention is not essential for visual consciousness to occur (Rock et al., 1992; Braun and Julesz, 1997). Attention is broadly of two types: bottom-up, caused by the sensory input; and top-down, produced by the planning parts of the brain. This is a complicated subject, and we will not try to summarize here all the experimental and theoretical work that has been done on it.

Visual attention can be directed to either a location in the visual field or to one or more (moving) objects (Kanwisher and Driver, 1992). The exact neural mechanisms that achieve this are still being debated. In order to interpret the visual input, the brain must arrive at a coalition of neurons whose firing represents the best interpretation of the visual scene, often in competition with other possible but less likely interpretations; and there is evidence that attentional mechanisms appear to bias this competition (Luck et al., 1997).


Consciousness and Neuroscience, Francis Crick (The Salk Institute) & Christof Koch (Computation and Neural Systems Program, California Institute of Technology)

Has appeared in: Cerebral Cortex, 8:97-107, 1998

Corresponding author:

Francis Crick
The Salk Institute
10010 North Torrey Pines Road
La Jolla, California 92037
(619) 453-4100 x1242
Fax: (619) 550-9959


Seeing and Knowing

The present paper has two major goals, one of which is to argue that seeing is not always perceiving and the other of which is to argue that visual perception alone leads to knowledge of the world. Let me immediately try to make these two cryptic claims more transparent. Not all human vision has been designed to allow visual perception. Seeing can and often does make us visually aware of objects, properties and facts in the world. But it need not. Often enough, seeing allows us to act efficiently on objects of which we are dimly aware, if at all. While moving at high speed, for example, experienced drivers are sometimes capable of avoiding an interfering obstacle of whose visual attributes they become fully aware afterwards. One may efficiently either catch or avoid being hit by a flying tennis ball without being aware of either its color or texture. This is the sense in which seeing is not always perceiving. If so, then the question arises as to the nature, function and cognitive role of non-perceptual vision. Here, I will make two joint claims. First of all, I will try to argue that the main job of human visual perception is to provide visual information for what functionalist philosophers have called the “belief box”. In other words, visual percepts are inputs to further conceptual processing whose output can be stored in the belief box. Secondly, I will try to argue that the function of that part of the visual system that produces what I shall call “non-perceptual” or more often “visuomotor” representations is to provide visual guidance to the “intention box”. More specifically, I will argue that, unlike visual percepts, visuomotor representations — which, I shall claim, are genuine representations — present visual information to motor intentions and serve as inputs to “causally indexical” concepts.
On the joint assumptions (that I accept) that in the relevant propositional sense, only facts can be known, and that one cannot know a fact unless one believes that this very fact (or state of affairs) holds, it follows from my distinction between perceptual and visuomotor processing that only visual perception can give rise to “detached” knowledge of the mind-independent world.

I. Not all seeing is perceiving
I.1. The dualistic model of the human visual system
            In their (1982) paper “Two Cortical Visual Systems”, the cognitive neuroscientists Leslie Ungerleider and Mortimer Mishkin posited an anatomical distinction between the ventral pathway and the dorsal pathway in the primate visual system (see Figure 1). The former projects the primary visual cortex onto inferotemporal areas. The latter projects the primary visual cortex onto parietal areas, which serve as a relay between the primary visual cortex, the premotor and the motor cortex. Ungerleider and Mishkin based their anatomical distinction on neurophysiological and behavioral evidence gathered from the study of macaque monkeys. They performed invasive lesions respectively in the ventral and in the dorsal pathway of the visual system of macaque monkeys and they found the following double dissociation. Animals with a lesion in the ventral pathway were impaired in the identification and recognition of the colors, textures and shapes of objects. But they were relatively unimpaired in tasks of spatial orientation. In tasks of spatial orientation, they were presented with two wells, one of which contained food and the other of which was empty: the former was closer to a landmark than the latter (see Figure 2). Animals with a ventral lesion could accurately use the presence of the landmark in order to discriminate the well with food from the well without. By contrast, animals with a dorsal lesion were severely disoriented, but their capacity to identify and recognize the shapes, colors and textures of objects was well-preserved. On this basis, Ungerleider and Mishkin (1982) concluded that the ventral pathway of the primate visual system is the What system and the dorsal pathway is the Where system.

            In their (1995) book, The Visual Brain in Action, the cognitive neuroscientists David Milner and Mel Goodale presented a number of arguments in favor of a new interpretation of the dualistic model of the human visual system. On their view, the ventral stream of the human visual system serves what they call “vision-for-perception” and the dorsal stream serves what they call “vision-for-action”. The important idea underlying Milner and Goodale’s dualistic model of human vision is that one and the same visual stimulus can be processed in two fundamentally different ways. Now, two caveats are important here. First of all, it is quite clear, I think, that, as Austin (1962) emphasized, humans can see a great variety of things: they can see e.g., tables, trees, rivers, substances, gases, vapors, mountains, flames, clouds, smoke, shadows, holes, pictures, movies, events and actions. Here, I will not examine the ontological status of all the various things that human beings can see and I shall restrict myself to seeing ordinary middle-sized objects that can also happen to be targets of human actions. Secondly, it is no objection to the dualistic model of the human visual system to acknowledge that, in the real life of normal human subjects, the two distinct modes of visual processing are constantly collaborating. Indeed, the very idea that they collaborate — if and when they do — presupposes that they are distinct. The trick of course is to find experimental conditions in which the two modes of visual processing can be dissociated. In the following, I will provide some examples drawn first from the psychophysical study of normal human subjects and then from the neuropsychological study of brain-lesioned human patients.

I.2. Psychophysical evidence
            Bridgeman et al. (1975) and Goodale et al. (1986) found that normal subjects can point accurately to a target on a computer screen whose displacement they could not consciously notice because it coincided with one of their saccadic eye movements (see Jeannerod, 1997: 82). Castiello et al. (1991) found that subjects were able to correct the trajectory of their hand movement directed towards a moving target some 300 milliseconds before they became conscious of the target’s change of location. Pisella et al. (2000) and Rossetti & Pisella (2000) performed experiments involving a pointing task in which subjects were presented with a green target towards which they were requested to point their index finger. Some of them were instructed to stop their pointing movement towards the target when and only when it changed location by jumping either to the left or to the right. Pisella et al. (2000) and Rossetti & Pisella (2000) found a significant percentage of very fast unwilled corrective movements generated by what they called the “automatic pilot” for hand movement. In a second experiment, Pisella et al. (2000) presented subjects simultaneously with pairs of a green and a red target. They were instructed to point to the green target, but the colors of the two targets could be interchanged unexpectedly at movement onset. Unlike a change of target location, a change of color did not elicit fast unwilled corrective movements by the “automatic pilot”. On this basis, Pisella et al. (2000) drew a contrast between the fast visuomotor processing of the location of a target in egocentric coordinates and the slower visual processing of the color of an object.

            One psychophysical area of particular interest is the study of visual size-contrast illusions. One particularly well-known such illusion is the Titchener or Ebbinghaus illusion. The standard version of the illusion consists of the display of two circles of equal diameter, one surrounded by an annulus of circles greater than it, and the other surrounded by an annulus of circles smaller than it. Although they are equal, the former looks smaller than the latter (see Figure 3). One plausible account of the Titchener illusion is that the array of smaller circles is judged to be more distant than the array of larger circles. Visually based perceptual judgments of distance and size are typically relative judgments: in a perceptual task, one cannot but see some things as smaller (or larger) and closer (or further away) than other neighboring things that are parts of a single visual array. In perceptual tasks, the output of obligatory comparisons of the sizes, distances and positions of constituents of a visual array serves as input to perceptual constancy mechanisms. As a result, of two physically equal objects, if one is perceived as more distant from the observer than the other, the former will be perceived as larger than the latter. A non-standard version of the illusion consists in the display of two circles of unequal diameter: the larger of the two is surrounded by an annulus of circles larger than it, while the smaller of the two is surrounded by an annulus of circles smaller than it, so that the two unequal circles look equal.
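The constancy mechanism invoked in this account can be put numerically. On the classical size-distance invariance idea (my gloss on the account, not part of the experimental reports), perceived linear size varies with retinal size scaled by perceived distance; the distance values below are invented purely for illustration.

```python
def perceived_size(retinal_size, perceived_distance):
    # Size-distance invariance: perceived linear size scales with
    # retinal (angular) size times perceived distance.
    return retinal_size * perceived_distance

# Two physically equal disks project equal retinal sizes, but the
# annulus of smaller circles makes its disk seem more distant,
# so constancy scaling inflates its perceived size.
retinal = 1.0                                # equal retinal sizes (arbitrary units)
near_disk = perceived_size(retinal, 10.0)    # judged nearer (annulus of large circles)
far_disk = perceived_size(retinal, 12.0)     # judged farther (annulus of small circles)
assert far_disk > near_disk                  # the "farther" disk looks larger
```

Of two disks with equal retinal size, the one assigned the greater perceived distance comes out perceptually larger, which is the pattern the surrounding annuli induce on this account.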

            Aglioti et al. (1995) designed an experiment in which they replaced the two central circles by two graspable three-dimensional plastic disks, which they displayed within a horizontal plane. In a first series of experiments with pairs of unequal disks whose diameters ranged from 27 mm to 33 mm, they found that on average the disk in the annulus of larger circles had to be 2.5 mm wider than the disk in the annulus of smaller circles in order for both to look equal. These numbers provide a measure of the sensitivity of the human visual system. Finally, Aglioti et al. (1995) alternated presentations of physically unequal disks, which looked equal, and presentations of physically equal disks, which looked unequal. Both kinds of trials were presented randomly and so were the left vs. right positions of either kind of stimuli. Subjects were instructed to pick up the disk on the left between the thumb and index finger of their right hand if they thought the two disks to be equal or to pick up the disk on the right if they judged them to be unequal.

            The sequence of subjects’ choices of the disk on the right or the disk on the left provided a measure of the magnitude of the illusion prompted by the perceptual comparison between two disks surrounded by two distinct annuli. In the visuomotor task, the measure of grip size was based on the unfolding of the natural grasping movement performed by subjects while their hand approached the object. During a prehension movement, the fingers progressively stretch to a maximal aperture before they close down until contact with the object. It has been found that the maximum grip aperture (MGA) occurs at a relatively fixed point, i.e., at about 60% of the duration of the movement (cf. Jeannerod, 1984). In non-illusory contexts, MGA has been found to be reliably correlated with the object’s physical size. Although much larger than the object itself, MGA is directly proportional to the object’s actual physical size. MGA cannot depend on a conscious visual comparison between the size of the object and the subject’s hand during the prehension movement, since the correlation between MGA and the object’s size is reliable even when subjects have no visual access to their own hand. Rather, MGA is assumed to result from an early anticipatory automatic visual process of calibration. Thus, Aglioti et al. (1995) measured MGA in flight using optoelectronic recording.
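The two regularities just reported, that MGA is reached at about 60% of movement duration and grows in direct proportion to object size while exceeding it, can be captured in a toy model. The slope, offset and timing profile below are invented for illustration; they are not Jeannerod’s (1984) measured coefficients.

```python
def max_grip_aperture(object_size_mm, slope=0.5, offset=20.0):
    # Toy linear calibration: MGA exceeds the object's size yet grows
    # in direct proportion to it (coefficients invented for illustration).
    return offset + slope * object_size_mm

def grip_aperture(object_size_mm, duration_ms, t_ms):
    # Aperture over time: fingers stretch to MGA at ~60% of movement
    # duration, then close down to the object's size at contact.
    mga = max_grip_aperture(object_size_mm)
    peak = 0.6 * duration_ms
    if t_ms <= peak:
        return mga * t_ms / peak
    return mga - (t_ms - peak) / (duration_ms - peak) * (mga - object_size_mm)

mga_small = max_grip_aperture(27.0)   # smallest disk in Aglioti et al.'s range
mga_large = max_grip_aperture(33.0)   # largest disk in that range
assert mga_small > 27.0 and mga_large > 33.0          # MGA larger than the object
assert mga_large - mga_small == 0.5 * (33.0 - 27.0)   # proportional scaling
```

A model of this shape is all the illusion experiments need: if grip calibration tracks the object’s absolute size, the measured MGA should be insensitive to the surrounding annulus.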

            What Aglioti et al. (1995) found was that, unlike comparative perceptual judgment expressed by the sequence of choices of either the disk on the left or the disk on the right, the grip was not significantly affected by the illusion. The influence of the illusion was significantly stronger on perceptual judgment than on the grasping task. This experiment, however, raises a number of methodological problems. The main issue, raised by Pavani et al. (1999) and Franz et al. (2000), is the asymmetry between the two tasks. In the perceptual task, subjects are asked to compare two distinct disks surrounded by two different annuli. But in the grasping task, subjects focus on a single disk surrounded by an annulus. So the question arises whether, from the observation that the comparative perceptual judgment is more affected by the illusion than the grasping task, one may conclude that perception and action are based on two distinct representational systems.

            Aware of this problem, Haffenden & Goodale (1998) performed the same experiment, but they designed one more task: in addition to instructing subjects to pick up the disk on the left if they judged the two disks to be equal in size or to pick up the disk on the right if they judged them to be unequal, they required subjects to manually estimate between the thumb and index finger of their right hand the size of the disk on the left if they judged the disks to be equal in size and to manually estimate the size of the disk on the right if they judged them to be unequal (see Figure 4). Haffenden & Goodale (1998) found that the effect of the illusion on the manual estimation of the size of a disk (after comparison) was intermediate between comparative judgment and grasping.

            Furthermore, Haffenden & Goodale (1998) found that the presence of an annulus had a selective effect on grasping. They contrasted the presentation of pairs of disks either against a blank background or surrounded by an annulus of circles of intermediate size, i.e., of a size intermediate between the size of the smaller circles and the size of the larger circles involved in the contrasting pair of illusory annuli. The circles of intermediate size in the annulus were slightly larger than the disks of equal size. When a pair of physically different disks was presented against either a blank background or a pair of annuli made of intermediate-size circles, both grip scaling and manual estimates reflected the physical difference in size between the disks. When physically equal disks were displayed against either a blank background or a pair of annuli made of circles of intermediate size, no significant difference was found between grasping and manual estimate. The following dissociation, however, turned up: when physically equal disks were presented with a middle-sized annulus, overall MGA was smaller than when physically equal disks were presented against a blank background. Thus, the presence of an annulus of middle-sized circles prompted a smaller MGA than a blank background. Conversely, overall manual estimate was larger when physically equal disks were presented against a background with a middle-sized annulus than when they were presented against a blank background. The illusory effect of the middle-sized annulus presumably arises from the fact that the circles in the annulus were slightly larger than the equal disks. Thus, whereas the presence of a middle-sized annulus contributes to increasing manual estimation, it contributes to decreasing grip scaling. This dissociation shows that the presence of an annulus may have conflicting effects on perceptual estimate and on grip aperture.

            Finally, Haffenden, Schiff & Goodale (2001) went one step further. They presented subjects with three distinct Titchener circle displays one at a time, two of which were the traditional Titchener central disk surrounded by an annulus of circles either smaller than it or larger than it. In the former case, the gap between the edge of the disk and the annulus is 3 mm. In the latter case, the gap between the edge of the disk and the annulus is 11 mm. In the third display, the annulus is made of small circles (of the same size as in the first display), but the gap between the edge of the disk and the annulus is 11 mm (like the gap in the second display with an annulus of larger circles) (see Figure 5). What Haffenden, Schiff and Goodale (2001) found was the following dissociation: in the perceptual task, subjects estimated the third display very much like the first display and unlike the second display. In the visuomotor task, subjects’ grasping in the third condition was much more similar to grasping in the second condition than in the first (see Figure 6). Thus, perceptual estimate was far more sensitive to the size of the circles in the annulus than to the distance between target and annulus. Conversely, grasping was far more sensitive to the distance between target and annulus than to the size of the circles in the annulus. The idea here is that visuomotor processing treats the annulus as a potential obstacle for the position of the fingers on the target disk.

            From this selective review of evidence on size-contrast illusions, I would like to draw two temporary conclusions. First of all, visual perception and visually guided hand actions directed towards objects impose different computational requirements on the human visual system. As I said above, visually based perceptual judgments of distance and size are typically relative comparative judgments. By contrast, visually guided actions directed towards objects are typically based on the computation of the absolute size and the egocentric representation of the location of objects on which to act. In order to successfully grab a branch or a rung, one must presumably compute the distance and the metrical properties of the object to be grabbed quite independently of pictorial contextual features in the visual array.

            Secondly, what the above experiments suggest is not that, unlike perceptual judgments, the visuomotor control of grasping is immune to illusions. Rather, both perceptual judgment and the visuomotor control of action can be fooled by the environment, but they can be fooled by different features of the visual display. The effect of the Titchener size-contrast illusion on perceptual judgment arises mostly from the comparison between the diameter of the disk and the diameter of the circles in the surrounding annulus. The visuomotor processing, which delivers a visual representation of the absolute size of a target of prehension, is so sensitive to the distance between the edge of the target and its immediate environment that it can be led to process two-dimensional cues as if they were three-dimensional obstacles. I take this last point quite seriously because I claim that it is evidence that the output of the visuomotor processing of the target of an action can misrepresent features of the distal stimulus and is thus a genuine mental representation.

I.3. Neuropsychological evidence
            In the 1970’s, Weiskrantz and others discovered a neuropsychological condition called “blindsight” (see Weiskrantz, 1986, 1997). Since then, the phenomenon has been extensively studied and discussed by philosophers. Blindsight results from a lesion in the primary visual cortex, anatomically located prior to the bifurcation between the ventral and the dorsal streams. The significance of the discovery of this phenomenon lies in the fact that although blindsight patients have no phenomenal subjective visual experience of the world in their blind field, it was nonetheless found that they retain striking residual visuomotor capacities. In situations of forced choice, they can do such remarkable things as grasp quadrangular blocks and insert a hand-held card into an oriented slot. According to most neuropsychologists who have studied such cases, in blindsight patients, the visual information is processed by subcortical pathways that bypass the visual cortex and relay visual information to the motor cortex.

            In the early 1990’s, DF, a British woman, suffered an extensive lesion in the ventral stream of her visual system as a result of poisoning by carbon monoxide. She thus became an apperceptive agnosic, i.e., a visual form agnosic patient (see Farah, 1990 for the distinction between apperceptive and associative agnosia). Following the discovery of blindsight, the main novelty of the neuropsychological description of patient DF’s condition — first examined by Goodale and his colleagues (Goodale et al., 1991) — lies in the fact that DF’s examination did not focus exclusively on what she could not do as a result of her lesion. Rather, she was investigated in depth for what she was still able to do.

            Careful sensory testing of DF revealed subnormal performance for color perception and for visual acuity with high spatial frequencies, though detection of low spatial frequencies was impaired. Her motion perception was poor. DF’s perception of shapes and patterns was very poor. She was unable to report the size of an object by matching it with the appropriate distance between the index finger and the thumb of her right hand. Her line orientation detection (revealed either by verbal report or by turning a hand-held card until it matched the orientation presented) was highly variable: although she was above chance for large angular orientation differences between two objects, she fell to chance level for smaller angles. DF was unable to recognize the shape of objects. Interestingly, however, her visual imagery was preserved. For example, although she could hardly draw copies of seen objects, she could draw copies of objects from memory — which she could hardly recognize later.

            By contrast with her impairment in object recognition, DF was normally accurate when object orientation or size had to be processed, not in view of a perceptual judgment, but in the context of a goal-directed hand movement. When reaching for and grasping between her index finger and thumb the very same objects that she could not recognize, she performed accurate prehension movements. Similarly, while transporting a hand-held card towards a slit as part of the process of inserting the former into the latter, she could normally orient her hand to pass the card through the slit at different orientations (Goodale et al., 1991, Carey et al., 1996). When presented with a pair of rectangular blocks of either the same or different dimensions and asked whether they were the same or different, she failed. When she was asked to reach out and pick up a block, the measure of her (maximal) grip aperture between thumb and index finger revealed that her grip was calibrated to the physical size of the objects, like that of normal subjects. When shown a pair of objects selected from twelve objects of different shapes for same/different judgment, she failed. When asked to grasp them using a “precision grip” between thumb and index finger, she succeeded.

            Conversely, optic ataxia is a syndrome produced by lesions in the dorsal stream. An optic ataxic patient, AT, examined by Jeannerod et al. (1994), shows the reverse dissociation. While she can recognize and identify the shape of visually presented objects, she has serious visuomotor deficits: her reach is misdirected and her finger grip is improperly adjusted to the size and shape of the target of her movements.

            At bottom, DF turns out to be able to visually process size, orientation and shape required for grasping objects, i.e., in the context of a reaching and grasping action, but not in the context of a perceptual judgment. Other experimental results with DF, however, indicate that her visuomotor abilities are restricted in at least two respects. First, in the context of an action, she turns out to be able to visually process simple sizes, shapes and orientations. But she fails to visually process more complex shapes. For example, she can insert a hand-held card into a slot at different orientations. But when asked to insert a T-shaped object (as opposed to a rectangular card) into a T-shaped aperture (as opposed to a simple oriented slit), her performance deteriorated sharply. Inserting a T-shaped object into a T-shaped aperture requires the ability to combine the computations of the orientation of the stem with the orientation of the top of the object together with the computation of the corresponding parts of the aperture. There are good reasons to think that, unlike the quick visuomotor processing of simple shapes, sizes and orientations, the computations of complex contours, sizes and orientations require the contribution of visual perceptual processes performed by the ventral stream — which, we know, has been severely damaged in DF.

            Secondly, the contours of an object can be, and often are, computed by a process of extraction from differences in color and luminance cues. But normal humans can also extract the contours or boundaries of an object from other cues — such as differences in brightness, texture and shading, and complex Gestalt principles of grouping and organization by similarity and good form. Now, when asked to insert a hand-held card into a slot defined by Gestalt principles of good form or by textural information, DF failed (see e.g., Goodale, 1995).

            Apperceptive agnosic patients like DF raise the question: What is it like to see with an intact dorsal system alone? I presently want to emphasize what I take to be a crucial characteristic of the content of visuomotor representations, which emerges jointly from the examination of DF’s condition and from the visuomotor representations of normal subjects engaged in tasks of grasping illusory displays such as Titchener circles. As I said above, a visual percept yields a representation of the relative sizes and distances of various neighboring elements within a visual array. I take it that it is of the essence of a percept that the processing of such visual attributes of an object as its size, shape and position or distance must be available for comparative judgment. By contrast, a visuomotor representation of a target in a task of reaching and grasping provides information about the absolute size of the object to be grasped. Crucially, the spatial position of any object can be coded in at least two major coordinate systems or frames of reference: it may be coded in an egocentric frame of reference centered on the agent’s body or it may be coded in an allocentric frame of reference centered on some object present in the visual array. The former is required for allowing an agent to reach and grasp an object. The latter is required in order to locate an object relative to some other object in the visual display.

            Consider e.g., a visual percept of a glass to the left of a telephone. In the visual percept, the location of the glass relative to the location of the telephone is coded in allocentric coordinates. The visual percept has a pictorial content that, I shall argue momentarily, is both informationally richer and more fine-grained than the verbally expressible conceptual content of a different representation of the same fact or state of affairs. For example, unlike the sentence ‘The glass is to the left of the telephone’, the visual percept cannot depict the location of the glass relative to the telephone without depicting ipso facto the orientation, shape, texture, size and color of both the glass and the telephone. Conceptual processing of the pictorial content of the visual percept may yield a representation whose conceptual content can be expressed by the English sentence ‘The glass is to the left of the telephone’. Now the visuomotor representation of the glass as a target of a prehension action requires that information about the size and shape of the glass be contained within a representation of the position of the glass in egocentric coordinates. Unless the telephone interferes with the trajectory of the reaching part of the action of grasping the glass, when one intends to grasp the glass, one does not need to represent the spatial position of the glass relative to the telephone.

            We know that patient DF cannot match the orientation of her wrist to the orientation of a slot in the context of a perceptual task, i.e., when she is not involved in the action of inserting a hand-held card into the slot. She can, however, successfully insert a card into an oriented slot. She cannot perceptually represent the size, shape and orientation of an object. However, she can successfully grasp an object between her thumb and index finger. So the main relevant contrast revealed by the examination of DF is that while she can use an effector (e.g., the distance between her thumb and index finger or the rotation of her wrist) in order to grasp an object or to insert a card into a slot, i.e., in the context of an action, she cannot use the same effector to express a perceptual judgment. What is the main difference between the perceptual and the visuomotor tasks? Both tasks require that visual information about the size and shape of objects be provided. But in the visuomotor task, this information is contained in a representation of the spatial position of the target coded in an egocentric frame of reference. In the perceptual task, information about the size and shape of objects is contained in a representation of the spatial position of the object coded in an allocentric frame of reference. Normal subjects can easily switch from one spatial frame of reference to the other. Such fast transformations may be required when e.g., one switches from counting items lying on a table, or drawing a copy of them, to grasping one of them. However, DF’s visual system cannot make the very same visual information about the size, shape and orientation of an object available for perceptual comparisons. In DF, information about the size and the shape of an object is trapped within a visuomotor representation of its location coded in egocentric coordinates. It is not available for recoding in an allocentric frame of reference.
Coding spatial relationships among different constituents of a visual scene is crucial to forming a visual percept. By contrast, locating a target in egocentric coordinates is crucial to forming a visuomotor representation on the basis of which to act on the target.
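The contrast between the two coordinate systems can be made concrete with a toy transform. In the sketch below, a scene is laid out in a world frame; the allocentric code locates the glass relative to the telephone, while the egocentric code expresses its position along the agent’s rightward and forward axes. The layout and the agent’s pose are invented purely for illustration.

```python
import math

# World-frame layout (meters) and agent pose; all values invented.
glass = (1.0, 2.0)
telephone = (1.5, 2.0)
agent_pos = (0.0, 0.0)
agent_heading = math.pi / 2            # agent faces the +y direction

def allocentric(target, landmark):
    # Target's position relative to another object in the scene.
    return (target[0] - landmark[0], target[1] - landmark[1])

def egocentric(target, pos, heading):
    # Target's position in a body-centered frame: translate to the
    # agent, then project onto its rightward and forward axes.
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    forward = dx * math.cos(heading) + dy * math.sin(heading)
    right = dx * math.sin(heading) - dy * math.cos(heading)
    return (right, forward)

# Perceptual code: the glass is 0.5 m to the left of the telephone.
assert allocentric(glass, telephone) == (-0.5, 0.0)

# Visuomotor code: the glass is ~1 m to the agent's right and ~2 m
# ahead, with no reference to the telephone at all.
right, forward = egocentric(glass, agent_pos, agent_heading)
assert abs(right - 1.0) < 1e-9 and abs(forward - 2.0) < 1e-9
```

Grasping the glass needs only the egocentric vector; judging that the glass is to the left of the telephone needs only the landmark-relative one. On the account defended here, DF can still compute something like the former but can no longer recode it into the latter.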

II. Visual knowledge of the world

            Although, if the above is on the right track, not all human vision has been designed to allow visual perception, nonetheless one crucial function of human vision is visual perception. Like many psychological words, ‘perception’ can be used to refer both to a process and to its product. There are two complementary sides to visual perception: an objective side and a subjective side. On the objective side, visual perception is a fundamental source of knowledge about the world. Visual perception is indeed a — if not “the” — paradigmatic process by means of which human beings gather knowledge about objects, events and facts in their environment. On the subjective side, visual perception yields a peculiar kind of awareness of the world, namely: sight. Sight has a special kind of phenomenal character (which is lacking in blindsight patients). The phenomenology of human visual experience is unlike the phenomenology of human experience in sensory modalities other than vision, e.g., touch, olfaction or audition.

            On my representationalist view (close to Dretske, 1995 and Tye, 1995), much of the distinctive phenomenology of visual experience derives from the fact that the human visual system has been selected in the course of evolution to respond to a specific set of properties. Visual perception makes us aware of such fundamental properties of objects as their size, orientation, shape, color, texture, spatial position, distance and motion, all at once. One of the puzzles that arises from neuroscientific research into the visual system (and which I will not discuss here) is the question of how these various visual attributes are perceived as bound together, given the fact that neuroscience has discovered that they are processed in different areas of the human visual system (see Zeki, 1993). Unlike vision, audition makes us aware of sounds. Olfaction makes us aware of smells and odors. Touch makes us aware of pressure and temperature. Although shape can be both seen and felt, what it is like to see a shape is clearly different from what it is like to touch it. Part of the reason for the difference lies in the fact that a normally sighted person cannot see e.g., the shape of a cube without seeing its color. But by feeling the shape of a cube, one does not thereby feel its color.

            I will presently argue that visual perception is a fundamental source of knowledge about the world: visual knowledge. I assume that propositional knowledge is knowledge of facts and that one cannot know a fact unless one believes that this fact obtains. I accept something like Dretske’s (1969) distinction between two levels of visual perception: nonepistemic perception (of objects) and epistemic perception (of facts). Importantly, on my view, the nonepistemic perception of objects gives rise to visual percepts and visual percepts are different from what I earlier called visuomotor representations of the targets of one’s action. What Dretske (1969) calls nonepistemic seeing is part of the perceptual processing of visual information. In the previous section, I gave empirical reasons why visual percepts differ from visuomotor representations. Unlike the visuomotor representation of a target, a visual percept makes visual information about colors, shapes, sizes, orientations of constituents of a visual display available for contrastive identification and recognition. This is why visual percepts can serve as input to a conceptual process that can lead to a peculiar kind of knowledge of the world — visual knowledge. Visual percepts serve as inputs to conceptual processes, but percepts are not concepts: perceptual contrasts are not conceptual contrasts. My present task then will be to show that the claim that visual perception can give rise to visual knowledge of the world is consistent with the claim that visual percepts are different from thoughts and beliefs. Visual percepts lead to thoughts and beliefs, but it would be a mistake to confuse the nonconceptual contents of visual percepts with the conceptual contents of beliefs and thoughts.

II. 1. Percepts and thoughts

         As many philosophers of mind and language have argued, what is characteristic of conceptual representations is that they are both productive and systematic. Like sentences of natural languages, thoughts are productive in the sense that they form an open-ended, infinite set. Although the lexicon of a natural language is made up of finitely many words, thanks to its syntactic rules, a language contains indefinitely many well-formed sentences. Similarly, an individual may entertain indefinitely many conceptual thoughts. In particular, both sentences of public languages and conceptual thoughts contain such devices as negation, conjunction and disjunction. So one can form indefinitely many new thoughts by prefixing a thought with a negation operator, by forming a disjunctive or a conjunctive thought out of two simpler thoughts, or by generalizing a singular thought by means of quantifiers. Sentences of natural languages are systematic in the sense that if a language contains a sentence S with a syntactic structure, e.g., Rab, then it must also contain a syntactically related sentence, e.g., Rba. An individual’s conceptual thoughts are supposed to be systematic too: if a person has the ability to entertain the thought that e.g., John loves Mary, then she must have the ability to entertain the thought that Mary loves John. If a person can form the thought that Fa, then she can form both the thought that Fb and the thought that Ga (where “a” and “b” stand for individuals and “F” and “G” stand for properties). Both Fodor’s (1975, 1987) Language of Thought hypothesis and Evans’ (1982) Generality constraint are designed to account for the productivity and the systematicity of thoughts, i.e., conceptual representations. It is constitutive of thoughts that they are structured and that they involve conceptual constituents that can be combined and recombined to generate indefinitely many new structured thoughts. Thus, concepts are building blocks with inferential roles.

            Because they are productive and systematic, conceptual thoughts can rise above the limitations imposed on perceptual representations by the constraints inherent to perception. Unlike thought, visual perception requires some causal interaction between a source of information and some sensory organs. For example, by combining the concepts horse and horn, one may form the complex concept unicorn, even though no unicorn has ever been or ever will be visually perceived (except in visual works of art). Although no unicorn has ever been perceived, within a fictional context, on the basis of the inferential role of its constituents, one can draw the inference that if something is a unicorn, then it has four legs, it eats grass and it is a mammal.

            Hence, to possess concepts is to master inferential relations: only a creature with conceptual abilities can draw consequences from her perceptual processing of a visual stimulus. Thought and visual perception are clearly different cognitive processes. One can think about numbers and one can form negative, disjunctive, conjunctive and general thoughts involving multiple quantifiers. Although one can visually perceive numerals, one cannot visually perceive numbers. Nor can one visually perceive negative, disjunctive, conjunctive or general facts (corresponding to e.g., universally quantified thoughts).

            As Crane (1992: 152) puts it, “there is no such thing as deductive inference between perceptions”. Upon seeing a brown dog, one can see at once that the animal one faces is a dog and that it is brown. If one perceives a brown animal and one is told that it is a dog, then one can certainly come to believe that the brown animal is a dog or that the dog is brown. But on this hybrid epistemic basis, one can think or believe, but one cannot see, that the brown animal is a dog. One came to know that the dog is brown by seeing it. But one did not come to know that what is brown is a dog by seeing it. Unlike the content of concepts, the content of visual percepts is not a matter of inferential role. As emphasized by Crane (ibid.), this is not to say that the content of visual percepts is amorphous or unstructured. One proposal for capturing the nonconceptual structure of visual percepts is Peacocke’s (1992) notion of a scenario content, i.e., a visual way of filling in space. As we shall see momentarily, one can think or believe of an animal that it is a dog without thinking or believing that it has a particular color. But one cannot see a dog in good daylight conditions without seeing its particular color (or colors). I shall momentarily discuss this feature of the content of visual percepts, which is part of their distinctive informational richness, as an analog encoding of information.

            In section I.3, I considered the contrast between the pictorial content of a visual percept of a glass to the left of a telephone and the conceptual content expressible by means of the English sentence: ‘The glass is to the left of the telephone’. I noticed that, unlike the English sentence, the visual percept cannot represent the glass to the left of the telephone unless it depicts the shape, size, texture, color and orientation of both the glass and the telephone. I concluded that an utterance of this sentence conveys only part of the pictorial content of the visual percept since the utterance is mute about any visual attribute of the pair of objects other than their relative locations. But, further conceptual processing of the conceptual content conveyed by the utterance of the sentence may yield a more complex representation involving, not just a two-place relation, but a three-place relation also expressible by the English predicate ‘left of’. Thus, one may think that the glass is to the left of the telephone for someone standing in front of the window, not for someone sitting at the opposite side of the table. In other words, one can think that the glass is to the left of the telephone from one’s own egocentric perspective and that the same glass is to the right of the telephone from a different perspective. Although one can form the thought involving the ternary relation ‘left of’, one cannot see the glass as being to the left of the telephone from one’s own egocentric perspective because one cannot see one’s own egocentric perspective. Perspectives are not things that one can see. This is an example of a conceptual contrast that could not be drawn by visual perception. Thus, unlike a thought, a visual percept is, in one sense of the word, “informationally encapsulated”. Thought, not perception, can, as Perry (1993) puts it, increase the arity of a predicate. Notice that percepts can cause thoughts. This is one way thoughts arise.
Thoughts can also cause other thoughts. But presumably, thoughts do not cause percepts.

II. 2. The finegrainedness and informational richness of visual percepts

            Unlike conceptual thought, visual perception has a spatial, perspectival, iconic and/or pictorial structure: the content of visual perception has a spatial perspectival structure that pure thoughts lack. In order to apply the concept of a dog, one does not have to occupy a particular spatial perspective relative to any dog. But one cannot see a dog unless one occupies some spatial standpoint or other relative to it: one cannot e.g., see a dog simultaneously from the top and from below, from the front and from the back. The concept of a dog applies indiscriminately to poodles, alsatians, dalmatians or bulldogs. One can think that all dogs bark. But one cannot see all dogs bark. Nor can one see a generic dog bark. One must see some particular dog: a poodle, an alsatian, a dalmatian or a bulldog, as it might be. Although one and the same concept — the concept of a dog — may apply to a poodle, an alsatian, a dalmatian or a bulldog, seeing one of them is a very different visual experience from seeing another. One can think that a dog barks without thinking of any other properties of the dog. One cannot, however, see a dog unless one sees its shape and the colors and texture of its hairs.

            Thus, the content of visual perceptual representations turns out to be both more finegrained and informationally richer than the conceptual contents of thoughts. There are three paradigmatic cases in which the need to distinguish between conceptual content and the nonconceptual content of visual perceptions may arise. First, a creature may be perceptually sensitive to objective differences for which she has no concepts. Secondly, two creatures may enjoy one and the same visual experience, which they may be inclined to conceptualize differently. Finally, two different persons may enjoy two distinct visual experiences in the presence of one and the same distal stimulus to which they may be inclined to apply one and the same concept.

            Peacocke (1992: 67-8) considers, for example, a person’s visual experience of a range of mountains. As he notices, one might want to conceptualize one’s visual experience with the help of concepts of shapes expressible in English with such predicates as ‘round’ and ‘jagged’. But these concepts of shapes could apply to the nonconceptual contents of several different visual experiences prompted by the distinct shapes of several distinct mountains. Arguably, although a human being might not possess any concept of shape whose finegrainedness could match that of her visual experience of the shape of the mountain, her visual experience of the shape is nonetheless distinctive and it may differ from the visual experience of the distinct shape of a different mountain to which she would apply the very same concept. Similarly, human beings are perceptually sensitive to far more colors than they have color concepts and color names to apply. Although a human being might lack two distinct concepts for two distinct shades of color, she might well enjoy a visual experience of one shade that is distinct from her visual experience of the other shade. As Raffman (1995: 295) puts it, “discriminations along perceptual dimensions surpasses identification […] our ability to judge whether two or more stimuli are the same or different surpasses our ability to type-identify them”.

            Against this kind of argument in favor of the nonconceptual content of visual experiences, McDowell (1994, 1998) has argued that demonstrative concepts expressible by e.g., ‘that shade of color’ are perfectly suited to capture the finegrainedness of the visual percept of color. I am willing to concede to McDowell that such demonstrative concepts do exist. But I agree with Bermudez (1998: 55-7) and Dokic & Pacherie (2000) that such demonstrative concepts would seem to be too weak to perform one of the fundamental jobs that color concepts and shape concepts must be able to perform — namely recognition. Color concepts and shape concepts stored in a creature’s memory must allow recognition and reidentification of colors and shapes over long periods of time. Although pure demonstrative color concepts may allow comparison of simultaneously presented samples of color, it is unlikely that they can be used to reliably reidentify one and the same sample over time. Nor presumably could pairs of demonstrative color concepts be used to reliably discriminate pairs of color samples over time. Just as one can track the spatio-temporal evolution of a perceived object, one can store in a temporary object file information about its visual properties in a purely indexical or demonstrative format. If, however, information about an object’s visual properties is to be stored in episodic memory, for future reidentification, then it cannot be stored in a purely demonstrative or indexical format, which is linked to a particular perceptual context. Presumably, the demonstrative must be fleshed out with some descriptive content. One can refer to a perceptible object as ‘that sofa’ or even as ‘that’ (followed by no sortal). But presumably when one does not stand in a perceptual relation to the object, information about it cannot be stored in episodic memory in such a pure demonstrative format. Rather, it must be stored using a more descriptive symbol such as ‘the (or that) red sofa that used to face the fire-place’. This is presumably part of what Raffman (1995: 297) calls “the memory constraint”. As Raffman (1995: 296) puts it:

the coarse grained character of perceptual memory explains why we can recognize ‘determinable’ colors like red and blue and even scarlet and indigo as such, but not ‘determinate’ shades of those determinables […] Because we cannot recognize determinate shades as such, ostension is our only means of communicating our knowledge of them. If I want to convey to you the precise shade of an object I see, I must point to it, or perhaps paint you a picture of it […] I must present you with an instance of that shade. You must have the experience yourself.

            Two persons might enjoy one and the same kind of visual experience prompted by one and the same shape or one and the same color, to which they would be inclined to apply pairs of distinct concepts, such as ‘red’ vs ‘crimson’ or ‘polygon’ vs ‘square’. If so, it would be justified to distinguish the nonconceptual content of their common visual experience from the different concepts that each would be willing to apply. Conversely, as argued by Peacocke (1998), presented with one and the same geometrical object, two persons might be inclined to apply one and the same generic shape concept e.g., ‘that polygon’ and still enjoy different perceptual experiences or see the same object as having different shapes. For example, as Peacocke (1998: 381) points out, “one and the same shape may be perceived as square, or as diamond-shaped […] the difference between these ways is a matter of which symmetries of the shape are perceived; though of course the subject himself does not need to know that this is the nature of the difference”. If one mentally partitions a square by bisecting its right angles, one sees it as a diamond. If one mentally partitions it by bisecting its sides, one sees it as a square. Presumably, one does not need to master the concept of an axis of symmetry to perform mentally these two bisections and enjoy two distinct visual experiences.

            The distinctive informational richness of the content of visual percepts has been discussed by Dretske (1981) in terms of what he calls the analogical coding of information.[1]  One and the same piece of information — one and the same fact — may be coded analogically or digitally. In Dretske’s sense, a signal carries the information that e.g., a is F in a digital form iff the signal carries no additional information about a that is not already nested in the fact that a is F. If the signal does carry additional information about a that is not nested in the fact that a is F, then the information that a is F is carried by the signal in an analogical (or analog) form. For example, the information that a designated cup contains coffee may be carried in a digital form by the utterance of the English sentence ‘There is some coffee in the cup’. The same information can also be carried in an analog form by a picture or by a photograph. Unlike the utterance of the sentence, the picture cannot carry the information that the cup contains coffee without carrying additional information about the shape, size, orientation of the cup and the color and the amount of coffee in it. As I pointed out above, unlike the concept of a dog, the visual percept of a dog carries information about which dog one sees, its spatial position, the color and texture of its hairs, etc. The contents of visual percepts are informationally rich in the sense of being analog. A thought involving several concepts in a hierarchically structured order might carry the same informational richness as a visual percept. But it does not have to. As the slogan goes, a picture is worth a thousand words. Unlike a thought, a visual percept of a cup cannot convey the information that the cup contains coffee without conveying additional information about several visual attributes of the cup.

            The arguments by philosophers of mind and by perceptual psychologists in favor of the distinction between the conceptual content of thought and the nonconceptual content of visual percepts are based on the finegrainedness and the informational richness of visual percepts. Thus, they turn on the phenomenology of visual experience. In section I, I provided some evidence from psychophysical experiments performed on normal human subjects and from the neuropsychological examination of brain-lesioned human patients that points to a different kind of nonconceptual content, which I labelled “visuomotor” content. Unlike the arguments in favor of the nonconceptual content of visual percepts, the arguments for the distinction between the nonconceptual content of visual percepts and the nonconceptual content of visuomotor representations do not rely on phenomenology at all. Rather, they rely on the need to postulate mental representations with visuomotor content in order to provide a causal explanation of visually guided actions towards objects. Thus, on the assumption that such behaviors as grasping objects can be actions (based on mental representations), I submit that the nonconceptual content of visual representation ought to be bifurcated into perceptual and visuomotor content as in Figure 7:

conceptual content                  nonconceptual content
                                    /                   \
                        perceptual content       visuomotor content

Figure 7

II. 3. The interaction between visual and non-visual knowledge

            Traditional epistemology has focused on the problem of sorting out genuine instances of propositional knowledge from cases of mere opinion or guessing. Propositional factual knowledge is to be distinguished from both nonpropositional knowledge of individual objects (or what Russell called “knowledge by acquaintance”) and from tacit knowledge of the kind illustrated by a native speaker’s implicit knowledge of the grammatical rules of her language. According to epistemologists, in the relevant propositional sense, what one knows are facts. In the propositional sense, one cannot know a fact unless one believes that the corresponding proposition is true, one’s belief is indeed true, and the belief was not formed by mere fantasy. On the one hand, one cannot know that the cup contains coffee unless one believes it. One cannot have this belief unless one knows what a cup is and what coffee is. On the other hand, one cannot know what is not the case: one can falsely believe that e.g., the cup contains coffee. But one cannot know it, unless a designated cup does indeed contain some coffee. True belief, however, is not sufficient for knowledge. If a true belief happens to be a mere guess or whim, then it will not qualify as knowledge. What else must be added to true belief to turn it into knowledge?

            Broadly speaking, epistemologists divide into two groups. According to externalists, a true belief counts as knowledge if it results from a reliable process, i.e., a process that generates counterfactual-supporting connections between states of a believer and facts in her environment. According to internalists, for a true belief to count as knowledge, it must be justified and the believer must in addition justifiably believe that her first-order belief is justified. Since I am willing to claim that, in appropriate conditions, the way a red triangle visually looks to a person having the relevant concepts and located at a suitable distance from it provides grounds for the person to know that the object in front of her is a red triangle, I am attracted to an externalist reliabilist view of perceptual knowledge.

            Although the issue is controversial and is by no means settled in the philosophical literature, externalist intuitions suit my purposes better than internalist intuitions. Arguably, it is one thing to be justified, or to have a reason for believing something; it is another thing to use a reason in order to offer a justification for one’s beliefs. Arguably, if a perceptual (e.g., visual) process is reliable, then the visual appearances of things may constitute a reason for forming a belief. However, one cannot use a reason unless one can explicitly engage in a reasoning process of justification, i.e., unless one can distinguish one’s premisses from one’s conclusion. Presumably, a creature with perceptual abilities and relevant conceptual resources can have reasons and form justified beliefs even if she lacks the concept of reason or justification. However, she could not use her reasons and provide justifications unless she had language and metarepresentational resources. Internalism derives most of its appeal from reflection on instances of mathematical and scientific knowledge that result from the conscious application of explicit principles of inquiry by teams of individuals in the context of special institutions. In such special settings, it can be safely assumed that the justification of a believer’s higher-order beliefs does indeed contribute to the formation and reliability of his or her first-order beliefs. Externalism fits perceptual knowledge better than internalism and, unlike internalism, it does not rule out the possibility of crediting non-human animals and human infants with knowledge of the world — a possibility made more and more vivid by the development of cognitive science.

            On my view, human visual perceptual abilities are at the service of thought and conceptualisation. At the most elementary level, by seeing an object (or a sequence of objects) one can see a fact involving that object (or sequence of objects). By seeing my neighbor’s car in her driveway, I can see the fact that my neighbor’s car is parked in her driveway. I thereby come to believe that my neighbor’s car is parked in her driveway and this belief, which is a conceptually loaded mental state, is arrived at by visual perception. Hence, my term “visual knowledge”. If my visual system is — as I have claimed — reliable, then by seeing my neighbor’s car — an object — in her driveway, I thereby come to know that my neighbor’s car is parked in her driveway — a fact. Hence, I come to know a fact involving an object that I actually see. This is a fundamental epistemic situation, which Dretske (1969) labels “primary epistemic seeing”: one’s visual ability allows one to know a fact about an object one perceives.

            However, if my neighbor’s car happens to be parked in her driveway if and only if she is at home (and I know this), then I can come to know a different fact: I can come to know that my neighbor is at home. “Seeing” that my neighbor is at home by seeing that her car is parked in her driveway is something different from seeing my neighbor at home (e.g., seeing her in her living-room). Certainly, I can come to know that my neighbor is at home by seeing her car parked in her driveway, i.e., without seeing her. “Seeing” that my neighbor is at home by seeing that her car is parked in her driveway is precisely what Dretske (1969) calls “secondary epistemic seeing”. Secondary epistemic seeing lies at the interface between pure visual knowledge of facts involving a perceived object and non-visual knowledge that can be derived from it.

            This transition from seeing one fact to seeing another displays the hierarchical structure of visual knowledge. In primary epistemic seeing, one sees a fact involving a perceived object. But in moving from primary epistemic seeing to secondary epistemic seeing, one moves from a fact involving a perceived car to a fact involving one’s unperceived neighbor (who happens to own the perceived car). This epistemological hierarchical structure is expressed by the “by” relation: one sees that y is G by seeing that x is F where x ≠ y. Although it may be more or less natural to say that one “sees” a fact involving an unperceived object by seeing a different fact involving a perceived object, the hierarchical structure that gives rise to this possibility is ubiquitous in human knowledge.

            One can see that a horse has walked on the snow by seeing hoof prints in the snow. One sees the hoof prints, not the horse. But if hoof prints would not be visible in the snow at time t unless a horse had walked on that very snow at time t – 1, then one can see that a horse has walked on the snow just by seeing hoof prints in the snow. One can see that a tennis player has just hit an ace at Flushing Meadows by seeing images on a television screen located in Paris. Now, does one really see the tennis player hit an ace at Flushing Meadows while sitting in Paris and watching television? Does one see a person on a television screen? Or does one see an electronic image of a person relayed by a television? Whether one sees a tennis player or her image on a television screen, it is quite natural to say that one “sees that” a tennis player hit an ace by seeing her (or her image) do it on a television screen. Even though, strictly speaking, one perhaps did not see her do it — one merely saw pictures of her doing it —, nonetheless seeing the pictures comes quite close to seeing the real thing. By contrast, one can “see” that the gas-tank in one’s car is half-full by seeing, not the tank itself, but the dial of the gas-gauge on the dashboard of the car. If one is sitting by the steering wheel inside one’s car so that one can comfortably see the gas-gauge, then one cannot see the gas-tank. Nonetheless, if the gauge is reliable and properly connected to the gas-tank, then one can (perhaps in some loose sense) “see” what the condition of the gas-tank is by seeing the dial of the gauge.

            One could wonder whether secondary epistemic seeing is really seeing at all. Suppose that one learns that the New York Twin Towers collapsed by reading about it in a French newspaper in Paris. One could not see the New York Twin Towers — let alone their collapse — from Paris. What one sees when one reads a newspaper are letters printed in black ink on a white sheet of paper. But if the French newspaper would not report the collapse of the New York Twin Towers unless the New York Twin Towers had indeed collapsed, then one can come to know that the New York Twin Towers have collapsed by reading about it in a French newspaper. There is a significant difference between seeing that the New York Twin Towers have collapsed by seeing it happen on a television screen and by reading about it in a newspaper. Even if seeing an electronic picture of the New York Twin Towers is not seeing the Twin Towers themselves, still the visual experience of seeing an electronic picture of them and the visual experience of seeing them have a lot in common. The pictorial content of the experience of seeing an electronically produced color-picture of the Towers is very similar to the pictorial content of the experience of seeing them. Unlike a picture, however, a verbal description of an event has conceptual content, not pictorial content. The visual experience of reading an article reporting the collapse of the New York Twin Towers in a French newspaper is very different from the experience of seeing them collapse. This is the reason why it may be a little awkward to say that one “saw” that the New York Twin Towers collapsed if one read about it in a French newspaper in Paris as opposed to seeing it happen on a television screen.

            Certainly, ordinary usage of the English word ‘see’ is not sacrosanct. We say that we “see” a number of things in circumstances in which what we do owes little — if anything — to our visual abilities. “I see what you mean”, “I see what the problem is” or “I finally saw the solution” report achievements quite independent of visual perception. Such uses of the verb ‘to see’ are loose uses: they do not report epistemic accomplishments that depend significantly on one’s visual endowments. By contrast, cases of what Dretske (1969) calls secondary epistemic seeing are epistemic achievements that do depend on one’s visual endowments. True, in cases of secondary epistemic seeing, one comes to know a fact without seeing some of its constituent elements. True, one could not come to learn that one’s neighbor is at home by seeing her car parked in her driveway unless one knew that her car is indeed parked in her driveway when and only when she is at home. Nor could one see that the gas-tank in one’s car is half-full by seeing the dial of the gas-gauge unless one knew that the latter is reliably correlated with the former. So secondary epistemic seeing could not possibly arise in a creature that lacked knowledge of reliable correlations or that lacked the cognitive resources required to come to know them altogether.

            Nonetheless, secondary epistemic seeing does have a crucial visual component in the sense that visual perception plays a critical role in the context of justifying such an epistemic claim. When one claims to be able to see that one’s neighbor is at home by seeing her car parked in her driveway or when one claims to be able to see that the gas-tank in one’s car is almost empty by seeing the gas-gauge, one relies on one’s visual powers in order to ground one’s state of knowledge. The fact that one claims to know is not seen. But the grounds upon which the knowledge is claimed to rest are visual grounds: the justification for knowing an unseen fact is seeing another fact correlated with it. Of course, in explaining how one can come to know a fact about one thing by knowing a different fact about a different thing, one cannot hope to meet the philosophical challenge of scepticism. From the standpoint of scepticism, as Stroud (1989) points out, the explanation may seem to beg the question since it takes for granted one’s knowledge of one fact in order to explain one’s knowledge of another fact. But the important thing for present purposes is that — scepticism notwithstanding — one offers a perfectly good explanation of how one comes to know a fact about an object one does not perceive by knowing a different fact about an object one does perceive. The point is that much — if not all — of the burden of the explanation lies in visual perception: seeing one’s neighbor’s car is the crucial step in justifying one’s belief that one’s neighbor is at home. Seeing the gas-gauge is the crucial step in justifying one’s belief that one’s tank is almost empty. The reliability of visual perception is thus critically involved in the justification of one’s knowledge claim.
In cases of primary epistemic seeing, the reliability of one’s visual system provides justifications for one’s visual knowledge in the sense that it provides one with reasons for believing that the fact involving an object one perceives obtains. In secondary epistemic seeing, one claims to know a fact that does not involve a perceived object. Still, the reliability of one’s visual system plays an indirect role in cases of secondary epistemic seeing in the sense that it provides grounds for one’s visual knowledge about a fact involving a perceived object, upon which one’s knowledge of a fact not involving a perceived object rests.

            Thus, secondary epistemic seeing lies at the interface between an individual’s visual knowledge (i.e., knowledge formed by visual means) and the rest of her knowledge. In moving from primary epistemic seeing to secondary epistemic seeing, an individual exploits her knowledge of regular connections. Although it is true that unless one knows the relevant correlation, one could not come to know that one’s neighbor is at home by seeing her car parked in her driveway, nonetheless one does not consciously or explicitly reason from the perceptually accessible premiss that one’s neighbor’s car is parked in her driveway together with the premiss that one’s neighbor’s car is parked in her driveway when and only when one’s neighbor is at home to the conclusion that one’s neighbor is at home. Arguably, the process from primary to secondary epistemic seeing is inferential. But if it is, then the inference is unconscious and it takes place at the “sub-personal” level.

            What the above discussion of secondary epistemic seeing reveals is that the very description and understanding of the hierarchical structure of visual knowledge and its integration with non-visual knowledge requires an epistemological and/or psychological distinction between seeing objects and seeing facts — a point much emphasized in Dretske’s writings on the subject — or between nonepistemic and epistemic seeing. The neurophysiology of human vision is such that some objects are simply not accessible to human vision. They may be too small or too remote in space and time for a normally sighted person to see them. For more mundane reasons, a human being may be temporarily so positioned as not to be able to see an object — be it her neighbor or the gas-tank in her car. Given the correlations between facts, by seeing a perceptible object, one can get crucial information about a different, unseen object. Given the epistemic importance of visual perception in the hierarchical structure of human knowledge, it is important to understand how, by seeing one object, one can acquire decisive reasons for knowing facts about objects one does not see.


II. 4. The scope and limits of visual knowledge

            I now turn from what Dretske calls secondary epistemic seeing (i.e., visually based knowledge of facts about objects one does not perceive) back to what he calls primary epistemic seeing, i.e., visual knowledge of facts about objects one does perceive. When one purports to ground one’s claim to know that one’s neighbor is at home by mentioning the fact that one can see that her car is parked in her driveway, clearly one is claiming to be able to see a car, not one’s neighbor herself. Now, let us concentrate on the scope of knowledge claims in primary epistemic seeing, i.e., knowledge about facts involving a perceived object. Suppose that someone claims to be able to see that the apple on the table is green. Suppose further that the person’s visual system is working properly, that the table and what is lying on it are visible from where the person stands, and that the lighting is suitable for the person to see them from where she stands. In other words, there is a distinctive way the green apple on the table looks to the person who sees it. Under those circumstances, when the person claims that she can see that the apple on the table is green, what are the scope and limits of her epistemic claims?

            Presumably, in so doing, she is claiming that she knows that there is an apple on the table in front of her and that she knows that this apple is green. If she knows both of these things, then presumably she also knows that there is a table under the apple in front of her and that there is a fruit on the table. Hence, she knows what the fruit on the table is (or what is on the table), she knows where the apple is, she knows the color of the apple, and so on. Arguably, the person would then be in a position to answer the following queries: Is there anything on the table? What is on the table? What kind of fruit is on the table? Where is the green apple? What color is the apple on the table? If the person can see that the apple on the table is green, then presumably she is in a position to know all these facts.

            However, when she claims that she can see that the apple on the table is green, she is not thereby claiming that she can see that all of these facts obtain. What she is claiming is more restricted and specific than that: she is indeed claiming that she knows that there is an apple on the table and that the apple in question is green. Furthermore, she is claiming that she learnt the latter fact — the fact about the apple’s color — through visual perception: if someone claims that she can see that the apple on the table is green, then she is claiming that she has achieved her knowledge of the apple’s color by visual means, and not otherwise. But she is not thereby claiming that her knowledge of the location of the apple or her knowledge of what is on the table has been acquired by the very perceptual act (or the very perceptual process) that gave rise to her knowledge of the apple’s color. Of course, the person’s alleged epistemic achievement does not rule out the possibility that she came to know that what is on the table is an apple by seeing it earlier. But if she did, this is not part of the claim that she can see that the apple on the table is green. It is consistent with this claim that the person came to know that what is on the table is an apple by being told, by tasting it or by smelling it. All she is claiming, and all we are entitled to conclude from her claim, is that the way she learnt about the apple’s color is by visual perception.

            The investigation into the scope and limits of primary visual knowledge is important because it is relevant to the challenge of scepticism. As I already said, my discussion of visual knowledge does not purport to meet the full challenge of scepticism. In discussing secondary epistemic seeing, I noted that in explaining how one comes to know a fact about an unperceived object by seeing a different fact involving a perceived object, one takes for granted the possibility of knowing the latter fact by perceiving one of its constituent objects. Presumably, in so doing, one cannot hope to meet the full challenge of scepticism, which would question the very possibility of coming to know anything by perception. I now briefly turn to the sceptical challenge to which claims of primary epistemic seeing are exposed. By scrutinizing the scope and limits of claims of primary visual knowledge, I want to examine briefly the extent to which such claims are indeed vulnerable to the sceptical challenge. Claims of primary visual knowledge are vulnerable to sceptical queries that can be directed backwards and forwards. They are directed backwards when they apply to background knowledge, i.e., knowledge presupposed by a claim of primary visual knowledge. They are directed forwards when they apply to consequences of a claim of primary visual knowledge. I turn to the former first.

            Suppose a sceptic were to challenge a person’s commonsensical claim that she can see (and hence know by perception) that the apple on the table in front of her is green by questioning her grounds for knowing that what is on the table is an apple. The sceptic might point out that, given the limits of human visual acuity and given the distance of the apple, the person could not distinguish by visual means alone a genuine green apple — a green fruit — from a fake green apple (e.g., a wax copy of a green apple or a green toy). Perhaps the person is hallucinating an apple when there is in fact nothing at all on the table. If one cannot visually discriminate a genuine apple from a fake apple, then, it seems, one is not entitled to claim that one can see that the apple on the table is green. Nor is one entitled to claim that one can see that the apple on the table is green if one cannot make sure by visual perception that one is not undergoing a hallucination. Thus, the sceptical challenge is the following: if visual perception itself cannot rule out a number of alternatives to one’s epistemic claim, then the epistemic claim cannot be sustained.

            The proper response to the sceptical challenge here is precisely to appeal to the distinction between claims of visual knowledge and other knowledge claims. When the person claims that she can see that the apple on the table is green, she is claiming that she learnt something new by visual perception: she is claiming that she just gained new knowledge by visual means. This new perceptually based knowledge is about the apple’s color. The perceiver’s new knowledge — her epistemic “increment”, as Dretske (1969) calls it — must be set against what he calls her “proto-knowledge”, i.e., what the person knew about the perceived object prior to her perceptual experience. The reason it is important to distinguish between a person’s prior knowledge and her knowledge gained by visual perception is that primary epistemic seeing (or primary visual knowledge) is a dynamic process. In order to determine the scope and limits of what has been achieved in a perceptual process, we ought to determine a person’s initial epistemic state (the person’s prior knowledge about an object) and her final epistemic state (what the person learnt by perception about the object). Thus, the question raised by the sceptical challenge (directed backwards) is a question in cognitive dynamics: how much new knowledge could a person’s visual resources yield, given her prior knowledge? How much has been learnt by visual perception, i.e., in an act of visual perception? What new information has been gained by visual perception?
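Dretske’s contrast between proto-knowledge and the epistemic increment can be put schematically (the set-theoretic notation is mine, introduced only for illustration):

```latex
% K_{t_0}: the perceiver's proto-knowledge before the perceptual episode
% K_{t_1}: her total knowledge after the perceptual episode
% The increment is what visual perception alone contributed:
\Delta K \;=\; K_{t_1} \setminus K_{t_0}
% In the apple case:
%   "there is an apple on the table" \in K_{t_0}   (proto-knowledge)
%   \Delta K = \{\,\text{"the apple on the table is green"}\,\}
```

The scope of the claim “I can see that the apple on the table is green” is confined to the increment; it is silent about how the proto-knowledge was acquired.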

            So when the person claims that she can see that the apple on the table is green, she no doubt reports that she knows both that there is an apple on the table and that it is green. She commits herself to a number of epistemic claims: she knows what is on the table, she knows that there is a fruit on the table, she knows where the apple is, and so on. But she merely reports one increment of knowledge: she merely claims that she just learnt by visual perception that the apple is green. She is not thereby reporting how she acquired the rest of her knowledge about the object, e.g., that it is an apple and that it is on the table. She claims that she can see of the apple that it is green, not that what is green is an apple, nor that what is on the table is an apple. The claim of primary visual knowledge bears on the object’s color, not on some of its other properties (its being, e.g., an apple or a fruit, or its location). All her epistemic claim entails is that, prior to her perceptual experience, she assumed (as part of her “proto-knowledge” in Dretske’s sense) that there was an apple on the table and then she discovered by visual perception that the apple was green.

            I now turn my attention to the sceptical challenge directed forward — towards the consequences of one’s claims of visual knowledge. The sceptic is right to point out that the person who claims to be able to see the color of an apple is not thereby in a position to see that the object whose color she is seeing is a genuine apple — a fruit — and not a wax apple. Nor is the person able to see that she is not hallucinating. However, since she is neither claiming that she is able to see of the green object that it is a genuine apple nor that she is not hallucinating an apple, it follows that the sceptical challenge cannot hope to defeat the person’s perceptual claim that she can see what she claims that she can see, namely that the apple is green. On the externalist picture of perceptual knowledge which I accept, a person knows a fact when and only when she is appropriately connected to the fact. Visual perception provides a paradigmatic case of such a connexion. Hence, visual knowledge arises from regular correlations between states of the visual system and environmental facts. Given the intricate relationship between a person’s visual knowledge and her higher cognitive functions, she will be able to draw many inferences from her visual knowledge. If a person knows that the apple in front of her is green, then she may infer that there is a colored fruit on the table in front of her. Given that fruits are plants and that plants are physical objects, she may further infer that there are at least some physical objects. Again, the sceptic may direct his challenge forward: the person claims to know by visual means that the apple in front of her is green. But what she claims she knows entails that there are physical objects. Now, the sceptic argues, a person cannot know that there are physical objects — at least, she cannot see that there are. 
According to the sceptic, failure to see that there are physical objects entails failure to see that the apple on the table is green.

            A person claims that she can know proposition p by visual perception. Logically, proposition p entails proposition q. There could not be a green apple on the table unless there exists at least one physical object. Hence, the proposition that the apple on the table is green could not be true unless there were physical objects. According to the sceptic, a person could not know the former without knowing the latter. Now the sceptic offers grounds for questioning the claim that the person knows proposition q at all — let alone by visual perception. Since it is dubious that she does know the latter, then, according to scepticism, she fails to know the former. Along with Dretske (1969) and Nozick (1981), I think that the sceptic relies on the questionable assumption that visual knowledge is deductively closed. From the fact that a person has perceptual grounds for knowing that p, it does not follow that she has the same grounds for knowing that q, even if q logically follows from p. If visual perception allows one to get connected in the right way to the fact corresponding to proposition p, it does not follow that visual perception ipso facto allows one to get connected in the same way to the fact corresponding to proposition q even if q follows logically from p.
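The closure principle that the sceptic’s argument needs, and that Dretske (1969) and Nozick (1981) reject for perceptual knowledge, can be stated schematically (writing $K_v$ for “knows by visual perception”; the symbolism is mine, not theirs):

```latex
% Closure of visual knowledge under known entailment (the sceptic's premise):
\big( K_v\,p \;\wedge\; (p \rightarrow q) \big) \;\rightarrow\; K_v\,q
% Rejected instance:
%   p: the apple on the table is green
%   q: there is at least one physical object
% Denying closure allows K_v(p) to hold even though K_v(q) fails.
```

Rejecting this principle blocks the sceptic’s modus tollens: from the person’s failure to know $q$ by visual means, it no longer follows that she fails to know $p$ by visual means.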

            A person comes to know a fact by visual perception. What she learns by visual perception implies a number of propositions (such as there are physical objects). Although such propositions are logically implied by what the person learnt by visual perception, she does not come to know by visual perception all the consequences of what she learnt by visual perception. She does not know by visual perception that there are physical objects — if she knows it at all. Seeing a green apple in front of one has a distinctive visual phenomenology. Seeing that the apple in front of one is green too has a distinctive visual phenomenology. There is something distinctively visual about what it is like for one to see that the apple in front of one is green. If an apple is green, then it is colored. However, it is dubious whether there is a visual phenomenology to thinking of the apple in front of one that it is colored. A fortiori, it is dubious whether there is a visual phenomenology to thinking that there are physical objects. Hence, contrary to what the sceptic assumes, I want to claim, as Dretske (1969) and Nozick (1981) have, that visual knowledge is not deductively closed.

III. The role of visuomotor representations in the human cognitive architecture

            In the present section, I shall sketch my reasons for thinking that visuomotor representations do not lead to detached knowledge of the world. Rather, they serve as input to intentions in at least two respects: on the one hand, they provide visual guidance to what I shall call “motor intentions”. On the other hand, they provide visual information for “causally indexical” concepts. I will start by laying out the basic distinction between two different kinds of “direction of fit” that can be exemplified by mental representations.

III.1. Direction of fit

            Whereas visual percepts serve as inputs to the “belief box”, visuomotor representations, I now want to argue, serve as inputs to a different kind of mental representation, i.e., intentions. As emphasized by Anscombe (1957) and Searle (1983, 2001), perceptions, beliefs, desires and intentions each have a distinctive kind of intentionality. Beliefs and desires have what Searle calls “opposite directions of fit”. Beliefs have a mind-to-world direction of fit: they can be true or false. A belief is true if and only if the world is as the belief represents it to be. It is the function of beliefs to match facts or actual states of affairs. In forming a belief, it is up to the mind to meet the demands of the world. Unlike beliefs, desires have a world-to-mind direction of fit. Desires are neither true nor false: they are fulfilled or frustrated. The job of a desire is not to represent the world as it is, but rather as the agent would like it to be. Desires are representations of goals, i.e., possible nonactual states of affairs. In entertaining a desire, it is, so to speak, up to the world to meet the demands of the mind. The agent’s action is supposed to bridge the gap between the mind’s goal and the world.

            As Searle (1983, 2001) has noticed, perceptual experiences and intentions have opposite directions of fit. Perceptual experiences have the same mind-to-world direction of fit as beliefs. Intentions have the same world-to-mind direction of fit as desires. In addition, perceptual experiences and intentions have opposite directions of causation: whereas a perceptual experience represents the state of affairs that causes it, an intention causes the state of affairs that it represents.

            Although intentions and desires share the same world-to-mind direction of fit, intentions differ from desires in a number of important respects, which all flow from intentions’ peculiar commitment to action. Broadly speaking, desires are relevant to the process of deliberation that precedes one’s engagement in a course of action. Once an intention is formed, however, the process of deliberation comes to an end: to intend is to have made up one’s mind about whether to act. I shall mention four main differences between desires and intentions.

             First, although desires may be about anything or anybody, intentions are always about the self. One can only intend oneself to do something. Second, unlike desires, intentions are tied to the present or the future: one cannot intend to do something in the past. Third, unlike the contents of desires, the contents of intentions must be about possible nonactual states of affairs. An agent cannot intend to achieve a state of affairs that she knows to be impossible at the time when she forms her intention. Finally, although one may entertain desires whose contents are inconsistent, one cannot have two intentions whose contents are inconsistent.

         Reaching and grasping objects are visually guided actions directed towards objects. I assume that all actions are caused by intentions. Intentions are psychological states with a distinctive intentionality. As I said earlier, intentions derive their peculiar commitment to action from the combination of their distinctive world-to-mind direction of fit and their distinctive mind-to-world direction of causation. I shall now argue that visuomotor representations have a dual function in the human cognitive architecture: they serve as inputs to “motor intentions” and as inputs to a special class of indexical concepts, the “causally indexical” concepts.

III.2. Visuomotor representations serve as inputs to motor intentions

            Not all actions, I assume, are caused by what Searle (1983, 2001) calls prior intentions, but all actions are caused by what he calls intentions in action, which, following Jeannerod (1994), I will call motor intentions. Unlike prior intentions, motor intentions are directed towards immediately accessible goals. Hence, they play a crucial role, not so much in the planning of action as in the execution, the monitoring and the control of the ongoing action. Arguably, prior intentions may have conceptual content. Motor intentions do not. For example, one intends to climb a visually perceptible mountain. The content of this prior intention involves, e.g., the action concept of climbing and a visual percept of the distance, shape and color of the mountain. In order to climb the mountain, however, one must intentionally perform an enormous variety of postural and limb movements in response to the slant, orientation and shape of the surface of the slope. Human beings automatically assume the right postures and perform the required flexions and extensions of their feet and legs. Since they do not possess concepts matching each and every such movement, their non-deliberate intentional behavioral responses to the slant, orientation and shape of the surface of the slope are monitored by the nonconceptual nonperceptual content of motor intentions.

            Not just any sensory representation can match the peculiar commitment to action of motor intentions. Visuomotor representations can. Percepts are informationally richer and more fine-grained than either concepts or visuomotor representations. As I claimed above, visual percepts have the same mind-to-world direction of fit as beliefs. This is why visual percepts are suitable inputs to a process of selective elimination of information, whose ultimate conceptual output can be stored in the belief box.

            I shall presently argue that visuomotor representations have a different function: they provide the relevant visual information about the properties of a target to an agent’s motor intentions. Indeed, I want to think of the role of the visuomotor representation of a target for action as Gibson (1979) thought of an affordance. However, unlike Gibson (1979), who did not make a distinction between perceptual and visuomotor processing, I do not think of the visuomotor processing of a target as a “direct pick-up of information”. I think that visuomotor representations are genuine representations. My main reason for thinking of the output of the visuomotor processing of a target as a genuine mental representation — and for thinking of grasping as a genuine action, not a behavioral reflex — is that Haffenden, Schiff & Goodale’s (2001) experiment suggests that the visuomotor processing of a target can be fooled by features of the visual display: it can be led to process two-dimensional cues as if they were three-dimensional obstacles. If the output of the visuomotor processing of a display can misrepresent it, then it represents it.

            Unlike visual percepts, whose single role is to present visual information for further processing whose output will be stored in the belief box, visuomotor representations are hybrid: as Millikan (1996), who calls them “pushmi-pullyu representations”, has perceptively recognized, they have a dual role. I slightly depart from Millikan (1996), however, in that, unlike her, I assume that it is visuomotor representations, not motor intentions, that have a double direction of fit. Visuomotor representations present states of affairs as both facts and goals for immediate action. On the one hand, they provide visual information for the benefit of motor intentions. On the other hand, their content can be conceptualized with the help of a special class of indexical concepts: causal indexicals. Whereas visual percepts must be stripped of much of their informational richness to be conceptualized, visuomotor representations can directly provide relevant visual information about the target of an action to motor intentions. To put it crudely, it follows from the work summarized in Jeannerod (1994, 1997) that the content of a motor intention has two sides: a subjective side and an objective side. On the subjective side, a motor intention represents the agent’s body in action. On the objective side, it represents the target of the action. Visuomotor representations contribute to the latter. Their ‘motoric’ informational encapsulation makes them suitable for this role. The nonconceptual nonperceptual content of a visuomotor representation matches that of a motor intention.

            Borrowing from the study of language processing, Jeannerod (1994, 1997) has drawn a distinction between the semantic and the pragmatic processing of visual stimuli. The view I want to put forward has been well expressed by Jeannerod (1997: 77): “at variance with the […] semantic processing, the representation involved in sensorimotor transformation has a predominantly ‘pragmatic’ function, in that it relates to the object as a goal for action, not as a member of a perceptual category. The object attributes are represented therein to the extent that they trigger specific motor patterns for the hand to achieve the proper grasp”. Thus, the crucial feature of the pragmatic processing of visual information is that its output is a suitable input to the nonconceptual content of motor intentions.

III.3. Visuomotor representations serve as inputs to causal indexicals

            I have just argued that what underlies the contrast between the pragmatic and the semantic processing of visual information is that, whereas the output of the latter is designed to serve as input to further conceptual processing with a mind-to-world direction of fit, the output of the former is designed to match the nonconceptual content of motor intentions with a world-to-mind direction of fit and a mind-to-world direction of causation. The special features of the nonconceptual contents of visuomotor representations can be inferred from the behavioral responses which they underlie, as in patient DF. They can also be deduced from the structure and content of elementary action concepts with the help of which they can be categorized.

            I shall presently consider a subset of elementary action concepts, which, following Campbell (1994), I shall call “causally indexical” concepts. Indexical concepts are shallow but indispensable concepts, whose references change as the perceptual context changes and whose function is to encode temporary information. Indexical concepts respectively expressed by ‘I’, ‘today’ and ‘here’ are personal, temporal and spatial indexicals. Arguably, their highly contextual content cannot be replaced by pure definite descriptions without loss. Campbell (1994: 41-51) recognizes the existence of causally indexical concepts whose references may vary according to the causal powers of the agent who uses them. Such concepts are involved in judgments having, as Campbell (1994: 43) puts it, “immediate implications for [the agent’s] action”. Concepts such as “too heavy”, “out of reach”, “within my reach”, “too large”, “fit for grasping between index and thumb” are causally indexical concepts in Campbell’s sense.

            Campbell’s idea of causal indexicality does capture a kind of judgment that is characteristically based upon the output of the pragmatic (or motor) processing of visual stimuli in Jeannerod’s (1994, 1997) sense. Unlike the content of the direct output of the pragmatic processing of visual stimuli or that of motor intentions, the contents of judgments involving causal indexicals are conceptual. Judgments involving causally indexical concepts have low conceptual content, but they have conceptual content nonetheless. For example, if something is categorized as “too heavy”, then it follows that it is not light enough. The nonconceptual contents of visuomotor representations and motor intentions are better compared with that of an affordance in Gibson’s sense.

            Causally indexical concepts differ in one crucial respect from other indexical concepts, i.e., personal, temporal and spatial indexical concepts. Thoughts involving personal, temporal and spatial indexical concepts are “egocentric” thoughts in the sense that they are perception-based thoughts. This is obvious enough for thoughts expressible with the first- and second-person pronouns ‘I’ and ‘you’. To refer to a location as ‘here’ or ‘there’ and to refer to a day as ‘today’, ‘yesterday’ or ‘tomorrow’ is to refer respectively to a spatial and a temporal region from within some egocentric perspective: a location can only be referred to as ‘here’ or ‘there’ from some particular spatial egocentric perspective. A temporal region can only be referred to by ‘today’, ‘yesterday’ or ‘tomorrow’ from some particular temporal egocentric perspective. In this sense, personal, temporal and spatial indexical concepts are egocentric concepts.[2] Arguably, egocentric indexicals lie at the interface between visual percepts and an individual’s conceptual repertoire about objects, times and locations.

            Many philosophers (see e.g., Kaplan, 1989 and Perry, 1993) have argued that personal, temporal and spatial indexical and/or demonstrative concepts play a special “essential” and ineliminable role in the explanation of action. And so they do. As Perry (1993: 33) insightfully writes: “I once followed a trail of sugar on a supermarket floor, pushing my cart down the aisle on one side of a tall counter and back the aisle on the other, seeking the shopper with the torn sack to tell him he was making a mess. With each trip around the counter, the trail became thicker. But I seemed unable to catch up. Finally it dawned on me. I was the shopper I was trying to catch”. To believe that the shopper with a torn sack is making a mess is one thing. To believe that oneself is making a mess is something else. Only upon forming the thought expressible by ‘I am making a mess’ is it at all likely that one may take appropriate measures to change one’s course of action. It is one thing to believe that the meeting starts at 10:00 AM. It is another thing to believe that the meeting starts now, even if now is 10:00 AM. Not until one thinks that the meeting starts now will one get up and run. Consider someone standing still at an intersection, lost in a foreign city. It is one thing for that person to intend to go to her hotel; it is another for her to intend to go this way, not that way. Only after she has formed the latter intention, with its demonstrative locational content, will she start walking.

            Thus, such egocentric concepts as personal, temporal and spatial indexicals and/or demonstratives derive their ineliminable role in the explanation of action from the fact that their recognitional role cannot be played by any purely descriptive concept. Recognition involves a contrast but it can be achieved without recourse to a uniquely specifying definite description. Indexicals and demonstratives are mental pointers that can be used to refer to objects, places and times. Personal indexicals are involved in the recognition of persons. Temporal indexicals are involved in the recognition of temporal regions or instants. Spatial indexicals are involved in the recognition of locations. To recognize oneself as the reference of ‘I’ is to make a contrast with the recognition of the person one addresses in verbal communication as ‘you’. To identify a day as ‘today’ is to contrast it with other days that might be identified as ‘yesterday’, ‘the day before yesterday’, ‘tomorrow’, etc. To identify a place as ‘here’ is to contrast it with other places referred to as ‘there’.

            Although indexicals and demonstratives are concepts, they have non-descriptive conceptual content. The conceptual system needs such indexical concepts because it lacks the resources to supply a purely descriptive symbol, i.e., a symbol that could uniquely identify a person, a time or a place. A purely descriptive concept would be a concept that a unique person, a unique time or a unique place would satisfy by uniquely exemplifying each and every one of its constituent features. We cannot specify the references of our concepts all the way down by using uniquely identifying descriptions, on pain of circularity. If, as Pylyshyn (2000: 129) points out, concepts need to be “grounded”, then on pain of circularity, “the grounding [must] begin at the point where something is picked out directly by a mechanism that works like a demonstrative” (or an indexical). If concepts are to be hooked to or locked onto objects, times and places, then on pain of circularity, definite descriptions will not supply the locking mechanism.

            Personal, temporal and spatial indexicals owe their special explanatory role to the fact that they cannot be replaced by purely descriptive concepts. Although they allow recognition by nondescriptive means, their direction of application is mind-to-world. Causally indexical concepts, however, play a different role altogether. Unlike personal, temporal and spatial indexical concepts, causally indexical concepts have a distinctive quasi-deontic or quasi-evaluative content. I want to say that, unlike that of other indexicals, the direction of fit of causal indexicals is hybrid: it is partly mind-to-world, partly world-to-mind. To categorize a target as “too heavy”, “within reach” or “fit for grasping between index and thumb” is to judge or evaluate the parameters of the target as conducive (or not) to a successful action upon the target. Unlike the contents of other indexicals, the content of a causally indexical concept results from the combination of an action predicate and an evaluative operator. What makes it indexical is that the result of applying the latter to the former is relative to the agent who makes the application. Thus, the job of causally indexical concepts is not just to match the world but to play an action-guiding role. If so, then presumably causal indexicals have at best a hybrid direction of fit, not a pure mind-to-world direction of fit.

            In the previous section, I argued that, unlike visual percepts, visuomotor representations provide visual information to motor intentions, which have nonconceptual content, a world-to-mind direction of fit and a mind-to-world direction of causation. I am presently arguing that the visual information of visuomotor representations can also serve as input to causally indexical concepts, which are elementary contextually dependent action concepts. Judgments involving causally indexical concepts have at best a hybrid direction of fit. When an agent makes such a judgment, he is not merely stating a fact: he is not thereby coming to know a fact that holds independently of his causal powers. Rather, he is settling on, accepting, or making up his mind about an action plan. The function of causally indexical concepts is precisely to allow an agent to make action plans. Whereas personal, temporal and spatial indexicals lie at the interface between visual percepts and an individual’s conceptual repertoire about objects, times and places, causally indexical concepts lie at the interface between visuomotor representations, motor intentions and what Searle calls prior intentions. Prior intentions have conceptual content: they involve action concepts. Thus, after conceptual processing via the channel of causally indexical concepts, the visual information contained in visuomotor representations can be stored in a conceptual format adapted to the content and the direction of fit of one’s intentions — if not one’s motor intentions, then perhaps one’s prior intentions. Hence, the output of the motor processing of visual inputs can serve as input to further conceptual processing whose output will be stored in the ‘intention box’.


  1. Aglioti, S., De Souza, J.F.X. and Goodale, M.A. (1995) “Size-contrast illusions deceive the eye but not the hand”, Current Biology, 5, 6, 679-85.
  2. Anscombe, G.E.M. (1957) Intention, Ithaca: Cornell University Press.
  3. Austin, J. L. (1962) Sense and Sensibilia, Oxford: Clarendon Press.
  4. Bermudez, J. (1998) The Paradox of Self-Consciousness, Cambridge, Mass.: MIT Press.
  5. Bridgeman, B., Hendry, D. & Stark, L. (1975) “Failure to detect displacement of the visual world during saccadic eye movement”, Vision Research, 15, 719-22.
  6. Campbell, J. (1994) Past, Space and Self, Cambridge, Mass.: MIT Press.
  7. Carey, D.P., Harvey, M. & Milner, A.D. (1996) “Visuomotor sensitivity for shape and orientation in a patient with visual form agnosia”, Neuropsychologia, 34, 329-37.
  8. Castiello, U., Paulignan, Y. & Jeannerod, M. (1991) “Temporal dissociation of motor responses and subjective awareness. A study in normal subjects”, Brain, 114, 2639-2655.
  9. Crane, T. (1992) “The nonconceptual content of experience” in Crane, T. (ed.)(1992) The Contents of Experience, Cambridge: Cambridge University Press.
  10. Dokic, J. & Pacherie, E. (2001) “Shades and concepts”, Analysis, 61, 3, 193-202.
  11. Dretske, F. (1981) Knowledge and the Flow of Information, Cambridge, Mass.: MIT Press.
  12. Dretske, F. (1995) Naturalizing the Mind, Cambridge, Mass.: MIT Press.
  13. Evans, G. (1982) The Varieties of Reference, Oxford: Oxford University Press.
  14. Farah, M. (1990) Visual Agnosia: Disorders of Object Recognition and What They Tell Us About Normal Vision, Cambridge, Mass.: MIT Press.
  15. Fodor, J.A. (1987) Psychosemantics, Cambridge, Mass.: MIT Press.
  16. Franz, V.H., Gegenfurtner, K.R., Bülthoff, H.H. and Fahle, M. (2000) “Grasping visual illusions: no evidence for a dissociation between perception and action”, Psychological Science, 11, 1, 20-25.
  17. Gibson, J.J. (1979) The Ecological Approach to Visual Perception, Boston: Houghton Mifflin.
  18. Goodale, M. A. (1995) “The cortical organization of visual perception and visuomotor control”, in Osherson, D. (1995)(ed.) An Invitation to Cognitive Science, Visual Cognition, vol. 2, Cambridge, Mass.: MIT Press.
  19. Goodale, M.A., Pélisson, D., Prablanc, C. (1986) “Large adjustments in visually guided reaching do not depend on vision of the hand or perception of target displacement”, Nature, 320, 748-50.
  20. Goodale, M. A., Milner, A.D., Jakobson, L.S. and Carey, D.P. (1991) “A neurological dissociation between perceiving objects and grasping them”, Nature, 349, 154-56.
  21. Haffenden, A. M. & Goodale, M. (1998) “The effect of pictorial illusion on prehension and perception”, Journal of Cognitive Neuroscience, 10, 1, 122-36.
  22. Haffenden, A.M., Schiff, K.C. & Goodale, M.A. (2001) “The dissociation between perception and action in the Ebbinghaus illusion: non-illusory effects of pictorial cues on grasp”, Current Biology, 11, 177-181.
  23. Jacob, P. (1997) What minds can do, Cambridge: Cambridge University Press.
  24. Jeannerod, M. (1984) “The timing of natural prehension movements”, Journal of Motor Behavior, 16, 235-54.
  25. Jeannerod, M. (1994) “The representing brain: neural correlates of motor intentions”, Behavioral and Brain Sciences,
  26. Jeannerod, M. (1997) The Cognitive Neuroscience of Action, Oxford: Blackwell.
  27. Jeannerod, M., Decety, J. and Michel, F. (1994) “Impairment of grasping movements following bilateral posterior parietal lesions”, Neuropsychologia, 32, 369-80.
  28. Kaplan, D. (1989) “Demonstratives”, in Almog, J., Perry, J. & Wettstein, H. (eds.)(1989) Themes from Kaplan, New York: Oxford University Press.
  29. McDowell, J. (1994) Mind and World, Cambridge, Mass.: Harvard University Press.
  30. McDowell, J. (1998) Précis of Mind and World, and Reply to Commentators, Philosophy and Phenomenological Research, LVIII, 2, 365-68, 403-31.
  31. Millikan, R.G. (1995) “Pushmi-pullyu Representations”, in Tomberlin, J. (ed.) Philosophical Perspectives, vol. IX, Atascadero, CA: Ridgeview.
  32. Milner, D. & Goodale, M.A. (1995) The Visual Brain in Action, Oxford: Oxford University Press.
  33. Milner, D., Paulignan, Y., Dijkerman, H.C., Michel, F. and Jeannerod, M. (1999) “A paradoxical improvement of misreaching in optic ataxia: new evidence for two separate neural systems for visual localization”, Proc. of the Royal Society, 266, 2225-9.
  34. Nozick, R. (1981) “Knowledge and scepticism”, in Bernecker, S. & Dretske, F. (eds.)(2000) Knowledge, Readings in Contemporary Epistemology, Oxford: Oxford University Press.
  35. Pavani, F., Boscagli, I., Benvenuti, F., Rabuffetti, M. & Farnè, A. (1999) “Are perception and action affected differently by the Titchener circles illusion?”, Experimental Brain Research, 127, 95-101.
  36. Peacocke, C. (1992) A Study of Concepts, Cambridge, Mass.: MIT Press.
  37. Peacocke, C. (1998) “Nonconceptual content defended”, Philosophy and Phenomenological Research, LVIII, 2, 381-88.
  38. Perry, J. (1979) “The essential indexical”, in Perry, J. (1993).
  39. Perry, J. (1986a) “Perception, action and the structure of believing”, in Perry, J. (1993).
  40. Perry, J. (1986b) “Thought without representation”, in Perry, J. (1993).
  41. Perry, J. (1993) The Problem of the Essential Indexical and Other Essays, Oxford: Oxford University Press.
  42. Pisella, L. et al. (2000) “An ‘automatic pilot’ for the hand in human posterior parietal cortex: toward reinterpreting optic ataxia”, Nature Neuroscience, 3, 7, 729-36.
  43. Pylyshyn, Z. (2000) “Visual indexes, preconceptual objects and situated vision”, Cognition, 80, 127-58.
  44. Rossetti, Y. & Pisella, L. (2000) “Common mechanisms in perception and action”, in Prinz, W. & Hommel, B. (eds.)(2000) Attention and Performance, XIX, Oxford: Oxford University Press.
  45. Searle, J. (1983) Intentionality, Cambridge: Cambridge University Press.
  46. Searle, J. (2001) Rationality in Action, Cambridge, Mass.: MIT Press.
  47. Stroud, B. (1989) “Understanding human knowledge in general”, in Bernecker, S. & Dretske, F. (eds.)(2000) Knowledge, Readings in Contemporary Epistemology, Oxford: Oxford University Press.
  48. Tye, M. (1995) Ten Problems about Consciousness, Cambridge, Mass.: MIT Press.
  49. Ungerleider, L.G. & Mishkin, M. (1982) “Two cortical visual systems”, in Ingle, D.J., Goodale, M.A. & Mansfield, R.J.W. (eds.) Analysis of Visual Behavior, Cambridge, Mass.: MIT Press.
  50. Weiskrantz, L. (1986) Blindsight: A Case Study and Implications, Oxford: Oxford University Press.
  51. Weiskrantz, L. (1997) Consciousness Lost and Found, Oxford: Oxford University Press.
  52. Zeki, S. (1993) A Vision of the Brain, Oxford: Blackwell.


[1]  For discussion, see Jacob (1997, ch. 2).

[2] The egocentricity of indexical concepts should not be confused with the egocentricity of an egocentric frame of reference in which the visual system codes, e.g., the location of a target. The former is a property of concepts. The latter is a property of visual representations. One crucial difference between the egocentricity of indexical concepts and the egocentricity of an egocentric frame of reference for coding the spatial location of a target is that, unlike the latter, the former involves a contrast: if, e.g., something is here, it is not there.


Paper for the Summer School in Analytic Philosophy, on Knowledge and Cognition, July 1-7, 2002. Seeing, Perceiving and Knowing, Pierre Jacob, jacob@ehess.fr


Neurobiology & Faith

The Humanizing Brain: Where Religion and Neuroscience Meet By James B. Ashbrook and Carol Rausch Albright. Pilgrim, 233 pp., $20.95.

In late 1997, an unusual story about the discovery of a “God-spot” in the brain began to appear in newspapers and newsmagazines. In a series of tests, epileptic patients with heightened brain activity in the temporal lobe showed hypersensitivity to religious words and phrases. Some news services announced that scientists had discovered the source of religious experiences. On Internet discussion groups, atheists crowed that religion had been proven to be nothing more than a dysfunction of the brain. Some theists countered, equally glibly, that God had designed our brains to be receptive to the divine; consequently, atheists seemed to be missing a vital piece of equipment.

Researchers had indeed found a region of the brain that could be linked to religious experience, but they neither claimed that this region was the cause of all such experiences nor sought to disparage or “reduce” religion or religious experience. What they had discovered, rather, was that what goes on in the brain is profoundly connected to what goes on in the mind, even in the most sublime of all experiences. They also demonstrated that neuroscience is becoming increasingly important for thinking about some of the basic claims of religion.

James Ashbrook and Carol Rausch Albright seek to break new ground in the dialogue between religion and science. They also hope to demonstrate that neuroscience is not only the appropriate but the preferred partner in that dialogue.

There has never been a better time to make this argument. President George Bush and the U.S. Congress declared the 1990s the Decade of the Brain, and the decade has lived up to that declaration. Spurred by the development of advanced scanning techniques such as PET (positron emission tomography) and MRI (magnetic resonance imaging), neuroscientists are getting glimpses of the brain in action. These scans allow them to observe the brain as it has never been seen before.

This culmination of more than 100 years of serious brain research is finally allowing us to ask some truly interesting questions: Where do emotions come from and why do we have them? How do we think and learn? How does the three-pound, gelatinous mass that we call the brain produce our identities? Though final answers are still a long way off, it is significant that we can now begin to frame such questions in a scientific way. In some cases, the answers seem startling. Far from endorsing a simple reduction of mind to mere neurons, many neuroscientists are embracing paradigms that emphasize the holistic character of brain function and the ways that reason and emotion interact to make up a self.

This book is neither a neuroscience textbook nor a systematic theology. Rather, it is a working-out of theology through the lens of the neurosciences. Ashbrook, who before his recent death was a pastoral theologian and professor emeritus of religion and personality at Garrett Evangelical Theological Seminary, and Albright, executive editor of Zygon: Journal of Religion and Science, seek to develop a “neurobiology of faith.” To do so is possible because the brain holds a peculiar place in the universe–and, more specifically, in our universe. We ourselves, in a sense, are brains. To study the brain is to study ourselves, but in a way that makes us both subject and object. It is as if we were trying to look both in and out of the window at the same time.

Furthermore, to study ourselves, the authors claim, is to study God. Ashbrook and Albright’s introduction states that “God-talk is really human-talk, since it is we who are conversing.” That is, because we can experience God only as human beings, in the process of learning about human life we will necessarily learn something about God as well. Even more than this, understanding the human brain can be the key to understanding God.

It is worth taking this startling claim seriously. Asked to name the most exotic thing in the universe, most of us would mention either the very large (black holes and supernovas) or the very small (all those spooky little particles). But the most incredible structure in the entire universe may be what is sitting behind our eyeballs. Inside our heads is the most complex and sophisticated device in creation.

Every brain contains approximately 100 billion cells called neurons. Neurons connect with one another to form complex communication networks that, among other things, enable us to walk, talk and breathe without thinking about it. There are a staggering 100 trillion neuron connections in the brain. As anyone who uses a comparatively simple desktop computer can testify, it seems a miracle that such a complex system could work without crashing. Yet the brain smoothly, day in and day out, enables us to perceive objects in color, distinguish the year and place of a wine by taste, and (sometimes) understand calculus. Black holes seem boring by comparison.

The Humanizing Brain: Where Religion and Neuroscience Meet – Review, Christian Century, Jan 27, 1999, by Greg Peterson



What Good is Consciousness?

If consciousness is good for something, conscious things must differ in some causally relevant way from unconscious things. If they do not, then, as Davies and Humphrey (1993: 4-5) conclude, too bad for consciousness: “psychological theory need not be concerned with this topic.”

Davies and Humphrey are applying a respectable metaphysical idea–the idea, namely, that if X’s having C does not make a difference to what X does, if X’s causal powers are in no way altered by its possession of C, then nothing X does can be explained by its being C. A science dedicated to explaining the behavior of X need not, therefore, concern itself with C. That is why being an uncle is of no concern to the psychology (let alone the physics) of uncles. I am an uncle, yes, but my being so does not (causally speaking[1]) enable me to do anything I would not otherwise be able to do. The fact that I am an uncle (to be distinguished, of course, from my believing I am an uncle) does not explain anything I do. From the point of view of understanding human behavior, then, the fact that some humans are uncles is epiphenomenal. If consciousness is like that–if it is like being an uncle–then, for the same reason, psychological theory need not be concerned with it. It has no purpose, no function. No good comes from being conscious.

Is this really a worry? Should it be a worry? The journals and books, I know, are full of concern these days about the role of consciousness.[2] Much of this concern is generated by startling results in neuropsychology (more of this later). But is there a real problem here? Can there be a serious question about the advantages, the benefits, the good, of being conscious? I don’t think so. It seems to me that the flurry of interest in the biological function of consciousness betrays a confusion about several quite elementary distinctions. Once the distinctions are in place–and there is nothing especially arcane or tricky about them–the advantages (and, therefore, the good) of consciousness are obvious.

1. The First Distinction: Conscious Beings vs. Conscious States.

Stones are not conscious, but we are.[3] And so are many animals. We are not only conscious (full stop), we are conscious of things–of objects (the bug in my soup), events (the commotion in the hall), properties (the color of his tie), and facts (that he is following me). Following Rosenthal (1990), I call all these creature consciousness. In this sense the word is applied to beings who can lose and regain consciousness and be conscious of things and that things are so.
Creature consciousness is to be distinguished from what Rosenthal calls state consciousness–the sense in which certain mental states, processes, events and activities (in or of conscious beings) are said to be either conscious or unconscious. When we describe desires, fears, and experiences as being conscious or unconscious we attribute or deny consciousness, not to a being, but to some state, condition or process in that being. States (processes, etc.), unlike the creatures in whom they occur, are not conscious of anything or that anything is so, although we can be conscious of them and their occurrence in a creature may make that creature conscious of something.

That is the distinction. How does it help with our question? I’ll say how in a moment, but before I do, I need to make a few things explicit about my use of relevant terms. Not everyone (I’ve discovered) talks the way I do when they talk about consciousness. So let me say how I talk. My language is, I think, entirely standard (I use no technical terms), but just in case my readers talk funny, I want them to know how ordinary folk talk about these matters.

For purposes of this discussion and in accordance with most dictionaries I regard “conscious” and “aware” as synonyms. Being conscious of a thing (or fact) is being aware of it. Alan White (1964) describes interesting differences between the ordinary use of “aware” and “conscious”. He also describes the different liaisons they have to noticing, attending, and realizing. Though my use of these expressions as synonymous for present purposes blurs some of these ordinary distinctions, I think nothing essential to this topic is lost by ignoring the nuances.

I assume, furthermore, that seeing, hearing, smelling, tasting and feeling are specific forms–sensory forms–of consciousness. Consciousness is the genus; seeing, hearing, and smelling are species (the traditional five sense modalities are not, of course, the only species of consciousness). Seeing is visual awareness. Hearing is auditory awareness. Smelling burning toast is becoming aware–in an olfactory way–of burning toast. One might also see the burning toast. And feel it. These are other modalities of awareness, other ways of being conscious of the toast.[4] You may not pay much attention to what you see, smell, or hear, but if you see, smell or hear it, you are conscious of it.

This is important. I say that if you see (hear, etc.) it, you are conscious of it. The “it” refers to what you are aware of (the burning toast), not that you are aware of it. There are two ways one might, while being aware of burning toast, fail to be aware that one is aware of it. First, one might know one is aware of something, but not know what it is. “What is that I smell?” is the remark of a person who might well be aware of (i.e., smell) burning toast without being aware that he is aware of burning toast. Second, even if one knows what it is one is aware of–knows that it is burning toast–one might not understand what it means to be aware of it, might not, therefore, be aware that one is aware of it. A small child or an animal–creatures who lack the concept of awareness–can be conscious of (i.e., smell) burning toast without ever being aware that they are aware of something. Even if they happen to know that what they are aware of is burning toast, they do not know–are not, therefore, aware–that they are aware of it.

The language here is a bit tricky, so let me give another example. One can be aware of (hear) a french horn without being aware that that is what it is. One might think it is a trombone or (deeply absorbed in one’s work) not be paying much attention at all (but later remember hearing it). If asked whether you hear a french horn, you might well think and say (falsely) that you are not. Not being aware that you are aware of a french horn does not mean you are not aware of a french horn. Hearing a french horn is being conscious of a french horn. It is not–not necessarily anyway–to be aware that it is a french horn or aware that you are aware of it (or, indeed, anything). Mice who hear–and thereby become auditorily aware of–french horns never become aware that they are aware of anything–much less of french horns.[5]

So, once again, when I say that if you see, hear, or smell something you must be conscious of it, the “it” refers to what you are aware of (burning toast, a french horn), not what it is you are aware of or that you are aware of it. To be conscious of an F is not the same as being conscious that it is an F and certainly not the same as being conscious that one is conscious of an F. Animals (not to mention human infants) are presumably aware of a great many things (they see, smell, and feel the things around them). Nonetheless, without the concept of awareness, and without concepts for most of the things they are aware of, they are not aware of what they are aware of nor that they are aware of it. What they are conscious of is burning toast. They are not aware that it is burning toast nor that they are aware of it.

So much for terminological preliminaries. I have not yet said anything that is controversial. Still, with only these meagre resources, we are in a position to usefully divide our original question into two more manageable parts. Questions about the good of consciousness, about its purpose or function, can either be questions about creature consciousness or about state consciousness. I will, for the rest of this section, take them to be questions about creature consciousness. I return to state consciousness in the next section.

If, then, we take our question about the purpose of consciousness as a question about creature consciousness, about the benefits that consciousness affords the animals who are conscious, the answer would appear to be obvious. If animals could not see, hear, smell and taste the objects in their environment–if they were not (in these ways) conscious–how could they find food and mates, avoid predators, build nests, spin webs, get around obstacles, and, in general, do the thousand things that have to be done in order to survive and reproduce?

Let an animal–a gazelle, say–who is aware of prowling lions–where they are and what they are doing–compete with one who is not and the outcome is predictable. The one who is conscious will win hands down. Reproductive prospects, needless to say, are greatly enhanced by being able to see and smell predators. That, surely, is an evolutionary answer to questions about the benefits of creature consciousness.[6] Take away perception–as you do, when you remove consciousness–and you are left with a vegetable. You are left with an eatee, not an eater. That is why the eaters of the world (most of them anyway) are conscious.

This answer is so easy I expect to be told that I’m not really answering the question everyone is asking. I will surely be told that questions about the function of consciousness are not questions about why we–conscious beings–are conscious. It is not a question about the biological advantage of being able to see, hear, smell, and feel (thus, being conscious of) the things around us. It is, rather, a question about state consciousness, a question about why there are conscious states, processes, and activities in conscious creatures. Why, for instance, do conscious beings have conscious experiences and thoughts?

2. The Second Distinction: Objects vs. Acts of Awareness.

If our question is a question about the benefits of state consciousness, then, of course, we have preliminary work to do before we start answering it. We have to get clear about what a conscious state (process, activity) is. What, for instance, makes an experience, a thought, a desire, conscious? We all have a pretty good grip on what a conscious animal is. It is one that is–via some perceptual modality–aware of things going on around (or in) it. There are, no doubt, modes of awareness, ways of being conscious, which we do not know about and will never ourselves experience. We do not, perhaps, understand bat phenomenology or what it is like for dogfish to electrically sense their prey. But we do understand the familiar modalities–seeing, hearing, tasting and so on– and these, surely, qualify as ways of being conscious. So I understand, at a rough and ready level, what someone is talking about when they talk about a creature’s being conscious in one of these ways. But what does it mean to speak, not of an animal being conscious in one of these ways, but of some state, process, or activity in the animal as being conscious? States, remember, aren’t conscious of anything. They are just conscious (or unconscious) full stop. So what kind of property is this? And what makes a state conscious? Until we understand this, we won’t be in a position to even speculate about what the function of a conscious state is.
There are, as far as I can see, only two options for making sense out of state consciousness. Either a state is made conscious by its being an object or by its being an act of creature consciousness. A state of creature S is an object of creature consciousness by S being conscious of it. A state of creature S is an act of creature consciousness, on the other hand, not by S being aware of it, but by S being made aware (so to speak) with it–by its occurrence in S making (i.e., constituting) S’s awareness and, therefore, if there is an object that stands in the appropriate relation to this awareness, S’s awareness of some object. When state-consciousness is identified with a creature’s acts of awareness, the creature need not be aware of these states for them to be conscious. What makes them conscious is not S’s awareness of them, but their role in making S conscious–typically (in the case of sense perception), of some (external) object.

Consider the second possibility first. On this option, a conscious state (e.g., an experience) is one that makes an animal conscious. When a gazelle sees a lion, its visual experience of the lion qualifies as a conscious experience, a conscious state, because it makes the gazelle visually conscious of the lion. Without this experience, the gazelle would not be visually aware of anything–much less a lion.

There are, to be sure, states of (processes and activities in) the gazelle which are not themselves conscious but which are necessary to make the animal (visually) aware of the lion. Without eyes and the assorted events occurring therein, the animal would not see anything–would not, therefore, be visually conscious of lions or any other external object. This is true enough, but it is irrelevant to the act conception of state-consciousness. According to the act conception of state-consciousness, a conscious visual state is one without which the creature would not be visually conscious of anything–not just external objects. The eyes may be necessary for the gazelle to be conscious of (i.e., to see) the lion, but they are not necessary for the animal to be conscious, to have the sort of visual experiences that, when things are working right, are normally caused by lions and are, therefore, experiences of lions. A conscious visual state is one that is essential not just to a creature’s visual awareness of this or that kind of thing (e.g., external objects), but to its visual awareness of anything–including the sorts of “things” (properties) one is aware of in hallucinations and dreams. That is why, on an act account of state consciousness, the processes in early vision, those occurring in the retina and optic nerve, are not conscious. They may be necessary to a creature’s visual awareness of external objects, but they are not essential to visual awareness. Even without them, the creature can still dream about or hallucinate the things it can no longer see. The same acts of awareness can still occur. They just don’t have the same (according to some, they don’t have any) objects.

If we agree about this–agree, that is, that conscious states are states that constitute creature consciousness (typically, of things), then the function, the good, of state consciousness is evident. It is to make creatures conscious, and if (see above) there is no problem about why animals are conscious, then, on the act conception of what a conscious state is, there is no problem about why states are conscious. Their function is to make creatures conscious. Without state consciousness, there is no creature consciousness. If there is a biological advantage in gazelles being aware of prowling lions, then there is a purpose in gazelles having conscious experiences. The experiences are necessary to make the gazelle conscious of the lions.

I do not expect many people to be impressed with this result. I expect to be told that the states, activities, and processes occurring in an animal are conscious not (as I have suggested) if the animal is conscious with them, but, rather, if the animal (in whom they occur) is conscious of them. A conscious state is conscious in virtue of being an object, not an act, of creature awareness. A state becomes conscious, according to this orthodox line of thinking, when it becomes the object of some higher-order thought or experience. Conscious states are not states that make the creatures in whom they occur conscious; it is the other way around: creatures make the states that occur in them conscious by becoming conscious of them.

Since the only way states can become an object of consciousness is if there are higher order acts which have them as their objects, this account of state consciousness has come to be called a HO (for Higher Order) theory of consciousness. It has several distinct forms, but all versions agree that an animal’s experience (of lions, say) remains unconscious (or, perhaps, non-conscious) until the animal becomes aware of it. A higher order awareness of one’s lion-experience can take the form of a thought (a HOT theory)–in which case one is aware that (i.e., one thinks that) one is experiencing a lion–or the form of an experience (a HOE theory)–in which case one is aware of the lion-experience in something like the way one is aware of the lion: one experiences one’s lion-experience (thus becoming aware of one’s lion-experience) in the way one is aware of (experiences) the lion.

I have elsewhere (Dretske 1993, 1995) criticized HO theories of consciousness, and I will not repeat myself here. I am more concerned with what HO theories have to say–if, indeed, they have anything to say–about the good of consciousness. If conscious states are states we are, in some way, conscious of, why have conscious states? What do conscious states do that unconscious states don’t do? According to HO theory, we (i.e., creatures) could be conscious of (i.e., see, hear, and smell) most of the objects and events we are now conscious of (and this includes whatever bodily conditions we are proprioceptively aware of) without ever occupying a conscious state. To be in a conscious state is to be conscious of the state, and since the gazelle, for example, can be conscious of a lion without being conscious of the internal states that make it conscious of the lion, it can be conscious of the lion–i.e., see, smell, feel and hear the lion–while occupying no conscious states at all. This being so, what is the purpose, the biological point, of conscious states? It is awareness of the lion that is useful, not awareness of one’s lion experiences. It is the lions, not the lion-experiences, that are dangerous.

On an object conception of state-consciousness, it is difficult to imagine how conscious states could have a function. To suppose that conscious states have a function would be like supposing that conscious ball bearings–i.e., ball bearings we are conscious of–have a function. If a conscious ball bearing is a ball bearing we are conscious of, then conscious ball bearings have exactly the same causal powers as do the unconscious ones. The causal powers of a ball bearing (as opposed to the causal powers of the observer of the ball bearing) are in no way altered by being observed or thought about. The same is true of mental states like thoughts and experiences. If what makes an experience or a thought conscious is the fact that S (the person in whom it occurs) is, somehow, aware of it, then it is clear that the causal powers of the thought or experience (as opposed to the causal powers of the thinker or experiencer) are unaffected by its being conscious. Mental states and processes would be no less effective in doing their job–whatever, exactly, we take that job to be–if they were all unconscious. According to HO theories of consciousness, then, asking about the function of conscious states in mental affairs would be like asking about the function of conscious ball bearings in mechanical affairs.

David Rosenthal (a practising HOT theorist) has pointed out to me in correspondence that though experiences do not acquire causal powers by being conscious, there may nonetheless be a purpose served by their being conscious. The purpose might be served, not by the beneficial effects of a conscious experience (conscious and unconscious experiences have exactly the same effects according to HO theories), but by the effects of the higher-order thoughts that make the experience conscious. Although the conscious experiences don’t do anything the unconscious experiences don’t do, the creatures in which conscious experiences occur are different as a result of having the higher order thoughts that make their (lower order) experiences conscious. Animals having conscious experiences are therefore in a position to do things that animals having unconscious experiences are not. They can, for instance, run from the lion they (consciously) experience–something they might not do by having an unconscious experience of the lion. They can do this because they are (let us say) aware that they are aware of a lion–aware that they are having a lion experience.[7] Animals in which the experience of the lion is unconscious, animals in which there is no higher-order awareness that they are aware of a lion, will not do this (at least not deliberately). This, then, is an advantage of conscious experience; perhaps–who knows?–it is the function of conscious experiences.

I concede the point. But I concede it about ball bearings too. I cannot imagine conscious ball bearings having a function–simply because conscious ball bearings don’t do anything non-conscious ball bearings don’t do–but I can imagine there being some purpose served by our being aware of ball bearings. If we are aware of them, we can, for instance, point at them, refer to them, talk about them. Perhaps, then, we can replace defective ones, something we wouldn’t do if we were not aware of them, and this sounds like a useful thing to do. But this is something we can do by being aware of them, not something they can do by our being aware of them. If a conscious experience were an experience we were aware of, then there would be no difference between conscious and unconscious experiences–any more than there would be a difference between conscious and unconscious ball bearings. There would simply be a difference in the creatures in whom such experiences occurred, a difference in what they were aware of.

The fact that some people who have cancer are aware of having it while others are not does not mean there are two types of cancer–conscious and unconscious cancers. For exactly the same reason, the fact that some people (you and me, for instance) are conscious of having visual and auditory experiences of lions while others (parrots and gazelles, for example) are not, does not mean that there are two sorts of visual and auditory experiences–conscious and unconscious. It just means that we are different from parrots and gazelles. We know things about ourselves that they don’t, and it is sometimes useful to know these things. It does not show that what we know about–our conscious experiences–is any different from theirs. We both have experiences–conscious experiences–only we are aware of having them and they are not. Both experiences–those of the gazelle and those of a human–are conscious because, I submit, they make the creature in which they occur aware of things–whatever objects and conditions are perceived (lions, for instance). Being aware that you are having such experiences is as relevant–which is to say, totally irrelevant–to the nature of the experiences you have as it is to the nature of observed ball bearings.[8]

3. The Third Distinction: Object vs. Fact Awareness.

Once again, I expect to hear that this is all too quick. Even if one should grant that conscious states are to be identified with acts, not objects, of creature awareness, the question is not what the evolutionary advantage of perceptual belief is, but what the advantage of perceptual (i.e., phenomenal) experience is. What is the point of having conscious experiences of lions (lion-qualia) as well as conscious beliefs about lions? Why are we aware of objects (lions) as well as various facts about them (that they are lions, that they are headed this way)? After all, in the business of avoiding predators and finding mates, what is important is not experiencing (e.g., seeing, hearing) objects, but knowing certain facts about these objects. What is important is not seeing a hungry lion but knowing (seeing) that it is a lion, hungry, or whatever (with all that this entails about the appropriate response on the part of lion-edible objects). Being aware of (i.e., seeing) hungry lions and being aware of them, simply, as tawny objects or as large shaggy cats (something a two-year-old child might do) isn’t much use to someone on the lion’s dinner menu. It isn’t the objects you are aware of, the objects you see–and, therefore, the qualia you experience–that are important in the struggle for survival; it is the facts you are aware of, what you know about what you see. Being aware of (seeing) poisonous mushrooms (these objects) is no help to an animal who is not aware of the fact that they are poisonous. It is the representation of the fact that another animal is a receptive mate, not simply the perception of a receptive mate, that is important in the game of reproduction. As we all know from long experience, it is no trick at all to see sexually willing (or, as the case may be, unwilling) members of the opposite sex. The trick is to see which is which–to know that the willing are willing and the others are not.
That is the skill–and it is a cognitive skill, a skill involving knowledge of facts–that gives one a competitive edge in sexual affairs. Good eyesight, a discriminating ear, and a sensitive nose (and the qualia associated with these sense modalities) are of no help in the struggle for survival if such experiences always (or often) yield false beliefs about the objects perceived. It is the conclusions, the beliefs, the knowledge, that are important, not the qualia-laden experiences that normally give rise to such knowledge. So why do we have phenomenal experience of objects as well as beliefs about them? Or, to put the same question differently: why are we conscious of the objects we have knowledge about?
Still another way of putting this question is to ask why we aren’t all, in each sense modality, the equivalent of blindsighters, who appear able to get information about nearby objects without experiencing (seeing) the objects.[9] On one way of describing this baffling phenomenon, blindsighters seem able to “see” the facts (at least they receive information about what the facts are: that there is, say, an X, not an O, on the right) without being able to see the objects (the X’s) on the right. No qualia. No phenomenal experience. If, therefore, a person can receive the information needed to determine appropriate action without experience, why don’t we?[10] Of what use is phenomenal experience in the game of cognition if the job can be done without it?

These are respectable questions. They deserve answers–scientific, not philosophical, answers. But the answers–at least in a preliminary way–would appear to be available. There are a great many important facts that we cannot be made aware of unless we are, via phenomenal experience, made aware of the objects these facts are facts about. There are also striking behavioral deficits–e.g., an inability to initiate intentional action with respect to those parts of the world one does not experience (Marcel 1988a). Humphrey (1970, 1972, 1974) worked for many years with a single monkey, Helen, whose capacity for normal vision was destroyed by surgical removal of her entire visual cortex. Although Helen originally gave up even looking at things, she regained certain visual capacities.

She improved so greatly over the next few years that eventually she could move deftly through a room full of obstacles and pick up tiny currants from the floor. She could even reach out and catch a passing fly. Her 3-D spatial vision and her ability to discriminate between objects that differed in size or brightness became almost perfect. (Humphrey 1992: 88).
Nonetheless, after six years she remained unable to identify even those things most familiar to her (e.g., a carrot). She did not recover the ability to recognize shapes or colors. As Humphrey described Helen in 1977 (Humphrey 1992: 89),

She never regained what we–you and I–would call the sensations of sight. I am not suggesting that Helen did not eventually discover that she could after all use her eyes to obtain information about the environment. She was a clever monkey and I have little doubt that, as her training progressed, it began to dawn on her that she was indeed picking up ‘visual’ information from somewhere–and that her eyes had something to do with it. But I do want to suggest that, even if she did come to realize that she could use her eyes to obtain visual information, she no longer knew how that information came to her: if there was a currant before her eyes she would find that she knew its position but, lacking visual sensation, she no longer saw it as being there. . . . The information she obtained through her eyes was ‘pure perceptual knowledge’ for which she was aware of no substantiating evidence in the form of visual sensation . . .
If we follow Humphrey and suppose that Helen, though still able to see where objects were (conceptually represent them as there), was unable to see them there, had no (visual) experience of them, we have a suggestion (at least) of what the function of phenomenal experience is: we experience (i.e., see, hear, and smell) objects to help in our identification and recognition of them. Remove visual sensations of X and S might still be able to tell where X is, but S will not be able to tell what X is. Helen couldn’t. That is–or may be–a reasonable empirical conjecture about the purpose of experience–about why animals (including humans) are, via perceptual experience, made aware of objects. It seems to be the only way–or at least a way–of being made aware of pertinent facts about them.
Despite the attention generated by dissociation phenomena, it remains clear that people afflicted with these syndromes are always “deeply disabled” (Weiskrantz 1991: 8). Human patients never recover their vision to anything like the degree that Helen, the monkey, did. Though they do much better than they “should” be able to do, they are still not very good (Humphrey 1992: 89). Blindsight subjects cannot avoid bumping into lamp-posts, even if they can guess their presence or absence in a forced-choice situation. Furthermore,

All these subjects lack the ability to think about or to image the objects that they can respond to in another mode, or to inter-relate them in space and in time; and this deficiency can be crippling (Weiskrantz 1991: 8).
This being so, there seems to be no real empirical problem about the function (or at least a function) of phenomenal experience. The function of experience, the reason animals are conscious of objects and their properties, is to enable them to do all those things that those who do not have it cannot do. This is a great deal indeed. If we assume (as it seems clear from these studies we have a right to assume) that there are many things people with experience can do that people without experience cannot do, then that is a perfectly good answer to questions about what the function of experience is. That is why we, and a great many other animals, are conscious of things and, thus, why, on an act conception of state consciousness, we have conscious experiences. Maybe something else besides experience would enable us to do the same things, but this would not show that experience didn’t have a function. All it would show is that there was more than one way to skin a cat–more than one way to get the job done. It would not show that the mechanism that did the job wasn’t good for something.


Davies, M. and G. W. Humphreys (1993). Introduction. In M. Davies and G. W. Humphreys, eds., Consciousness. Oxford: Blackwell, 1-39.

Dretske, F. (1993). Conscious experience. Mind 102 (406): 1-21.

Dretske, F. (1995). Naturalizing the Mind. Cambridge, MA: MIT Press, A Bradford Book.

Humphrey, N. (1970). What the frog’s eye tells the monkey’s brain. Brain, Behavior and Evolution 3: 324-37.

Humphrey, N. (1972). Seeing and nothingness. New Scientist 53: 682-4.

Humphrey, N. (1974). Vision in a monkey without striate cortex: a case study. Perception 3: 241-55.

Humphrey, N. (1992). A History of the Mind: Evolution and the Birth of Consciousness. New York: Simon and Schuster.

Milner, A. D. (1992). Disorders of perceptual awareness: commentary. In Milner and Rugg (1992), 139-58.

Milner, A. D. and M. D. Rugg, eds. (1992). The Neuropsychology of Consciousness. London: Academic Press.

Rey, G. (1988). A question about consciousness. In H. Otto and J. Tuedio, eds., Perspectives on Mind. Dordrecht: Reidel.

Rosenthal, D. (1990). A theory of consciousness. Report No. 40, Research Group on Mind and Brain, ZiF, University of Bielefeld.

Rosenthal, D. (1991). The independence of consciousness and sensory quality. In Villanueva (1991), 15-36.

van Gulick, R. (1985). Conscious wants and self awareness. Behavioral and Brain Sciences 8 (4): 555-56.

van Gulick, R. (1989). What difference does consciousness make? Philosophical Topics 17: 211-30.

Velmans, M. (1991). Is human information processing conscious? Behavioral and Brain Sciences 14 (4): 651-68.

Villanueva, E., ed. (1991). Consciousness. Atascadero, CA: Ridgeview Publishing Co.

Walker, S. (1983). Animal Thought. London: Routledge and Kegan Paul.

Weiskrantz, L. (1986). Blindsight: A Case Study and Implications. Oxford: Oxford University Press.

Weiskrantz, L. (1991). Introduction: dissociated issues. In Milner and Rugg (1992), 1-10.

White, A. R. (1964). Attention. Oxford: Basil Blackwell.

1. There is a sense in which it enables me to do things I would not otherwise be able to do–e.g., bequeath my books to my nephews and nieces–but this, clearly, is a constitutive, not a causal, sense of “enable.” Spelling out this difference in a precise way is difficult. I will not try to do it. I’m not sure I can. I hope the intuitive distinction will be enough for my purposes.
2. For recent expressions of interest, see Velmans 1991, Rey 1988, and van Gulick 1989.

3. I here ignore dispositional senses of the relevant terms–the sense in which we say of someone or something that it is a conscious being even if, at the time we describe it this way, it is not (in any occurrent sense) conscious. So, for example, in the dispositional sense, I am a conscious being even during dreamless sleep.

4. I here ignore disputes about whether, in some strict sense, we are really aware of objects or only (in smell) odors emanating from them or (in hearing) voices or noises they make. I shall always take the perceptual object–what it is we see, hear, or smell (if there is such an object)–to be some external physical object or condition. I will not be concerned with just what object or condition this is.

5. In saying this I assume two things, both of which strike me as reasonably obvious: (1) to be aware that you are aware of a french horn requires some understanding of what awareness is (not to mention an understanding of what a french horn is); and (2) mice (even if we give them some understanding of french horns) do not understand what awareness is (they do not have this concept).

6. This is not to say that consciousness is always advantageous. As Georges Rey reminds me, some tasks–playing the piano, pronouncing language, and playing sports–are best performed when the agent is largely unaware of the performatory details. Nonetheless, even when one is unconscious of the means, consciousness of the end (e.g., the basket into which one is trying to put the ball, the net into which one is trying to hit the puck, the teammate to whom one is trying to throw the ball) is essential. You don’t have to be aware of just how you manage to backhand the shot to do it skillfully, but, if you are going to be successful in backhanding the puck into the net, you have to be aware of where the net is.

7. I assume here that, according to HOT theories, the higher order thought one has about a lion experience that makes that experience conscious is that it is a lion experience (an experience of a lion). This needn’t be so (Rosenthal (1991) denies that it is so), but if it isn’t so, it is even harder to see what the good of conscious experiences might be. What good would be served by a thought about a lion experience to the effect that it was . . . what? . . . a (generic) experience?

8. I’m skipping over a difficulty that I should at least acknowledge here. There are a variety of mental states–urges, desires, intentions, purposes, etc.–which we speak of as conscious (and unconscious) whose consciousness cannot be analyzed in terms of their being acts (instead of objects ) of awareness since, unlike the sensory states associated with perceptual awareness (seeing, hearing, and smelling), they are not, or do not seem to be, states of awareness. If these states are conscious, they seem to be made so by being objects, not acts of consciousness (see, e.g., Van Gulick 1985). I don’t here have the space to discuss this alleged difference with the care it deserves. I nonetheless acknowledge its relevance to my present thesis by restricting my claims about state-consciousness to experiences–more particularly, perceptual experiences. Whatever it is that makes a desire for an apple, or an intention to eat one, conscious, experiences of apples are made conscious not by the creature in whom they occur being conscious of them, but by making the creature in whom they occur conscious (of apples).

9. For more on blindsight see Weiskrantz 1986 and Milner & Rugg 1992. I here assume that a subject’s (professed) absence of visual experience is tantamount to a claim that they cannot see objects, that they have no visual experience. The question that blindsight raises is why one has to see objects (or anything else, for that matter) in order to see facts pertaining to those objects–what (who, where, etc.) they are. If blindsighters can see where an object is, the fact that it is there (where they point), without seeing it (the object at which they point), what purpose is served by seeing it?

10. There are a good many reflexive “sensings” (Walker 1983: 240) that involve no awareness of the stimulus that is controlling behavior–e.g., accommodation of the lens of the eye to objects at different distances, reactions of the digestive system to internal forms of stimulation, direction of gaze toward peripherally seen objects. Milner (1992: 143) suggests that these “perceptions” are probably accomplished by the same midbrain visuomotor systems as mediate prey catching in frogs and orienting reactions in rats and monkeys. What is puzzling about blindsight is not that we get information we are not aware of (these reflexive sensings are all instances of that), but that in the case of blindsight one appears able to use this information in the control and guidance of deliberate, intentional action (when put in certain forced choice situations)–the sort of action which normally requires awareness.


Ontology and Perception

The ontological question of what there is, from the perspective of common sense, is intricately bound to what can be perceived. This observation, when combined with the fact that nouns within language can be divided between those that admit counting, such as ‘pen’ or ‘human’, and those that do not, such as ‘water’ or ‘gold’, provides the starting point for the following investigation into the foundations of our linguistic and conceptual phenomena. The purpose of this paper is to claim that such phenomena are facilitated by, on the one hand, an intricate cognitive capacity, and on the other by the complex environment within which we live. We are, in a sense, cognitively equipped to perceive discrete instances of matter such as bodies of water. This equipment is related to, but also differs from, that devoted to the perception of objects such as this computer. Behind this difference in cognitive equipment lies a rich ontology, the beginnings of which lie in the distinction between matter and objects. The following paper is an attempt to make explicit the relationship between matter and objects and also to provide a window onto our cognition of such entities.

General Introduction

Lying at the center of this article is the claim that the study of ontology ought to begin with what is perceived rather than what is said. Researchers who are interested in ontology should take as their starting point what is given in the perceptual field rather than which nouns are present in a given language. Some ontological research begins and ends with an analysis of the relationship between a language and its speakers (see, for example, Lutz, Riedemann, and Probst (2003), Kayed and Colomb (2002), and Wielinga, Schreiber, Wielemaker, and Sandberg (2001)). There are, however, general problems associated with language that should warn us against investing too much in the implications of which nouns appear in a given language. One such general problem is found in the seemingly simple distinction between mass nouns, such as ‘water’ or ‘gold’, and count nouns, such as ‘human’ or ‘pen’. Some nouns are difficult to place on one side or the other of this distinction. One such noun is ‘glass’, which can be used to refer to an object capable of containing liquids or to a material that composes several such objects. The mass-count distinction is the subject of the section that immediately follows. Fortunately, for those who are interested in the ultimate source of the distinctions drawn within any ontology, we have recourse to some provocative research into infant object perception. Such research indicates that there is a primitive distinction to be made between objects, entities which are coherent wholes, and materials, entities which lack coherence. This distinction is primitive precisely because the infants who seem to make it do so without any significant understanding of the linguistic distinction between mass and count nouns. Next, attention will be given to what is needed on the side of human perceivers in order not only to draw such a distinction, but to use it during daily interaction with the world.
It will be argued that we need two types of concepts in order to negotiate our way through the world. On the one hand, we need concepts in order to track real world instances, such as particular buses, walls, and drops of water. On the other hand, we need concepts that are general in the sense that they can be used to recognize that a completely new instance, as occurs when I am introduced to a new person, belongs to the same class as previously encountered instances. In addition to concepts, we also need to investigate what sorts of rules must be present in order for human subjects to so readily discriminate between objects and materials at such an early age. Such rules must be receptive to surface properties such as color, shape, and texture in order to begin to explain the discriminatory behavior of the infants in the psychological tests cited below. Subsequent to this discussion, the relations between rules and concepts will be explored.

The Mass-Count Distinction

It is important to keep in mind that the mass-count distinction is first and foremost a linguistic one. Quite simply, there are mass nouns, such as ‘water’, which refer to matter, or more colloquially, stuff, while count nouns, such as ‘car’, refer to objects. We may be asked to count the number of cars in the parking lot and understand just what this task means. But can we count the number of waters on the table? Are we to count the water in the glass as a unified whole and the small area of water that collected beside it as another? In order to make linguistic sense of the task of counting waters, we would have to add some sort of count term in front of the mass noun ‘water’. So we may be asked to count the areas or puddles of water on the table, since areas and puddles do admit counting.
However, there is a series of issues that can result in the dissolution of the mass-count distinction. First of all, how are we to distinguish mass nouns from count nouns? It is clear that whether a noun can be made plural and still make grammatical sense is not an adequate criterion of differentiation. Consider words such as ‘news’ or ‘woods’ and immediately one obtains a grasp of the difficulty of maintaining the distinction. Despite the ‘s’ at the end of these words, in English they function as singular nouns. For example, to relay bad news we say, “The news is bad.” Further, words that at first glance appear to be of the mass variety also seem to be able to be readily counted. For instance, I may pass two separate ‘woods’ on my way to grandma’s house. In addition, in a restaurant I may easily order two ‘waters’ and have my order understood by the server who has heard it. One may claim, with respect to the latter example, that ‘waters’ is an abbreviated form of ‘glasses of water’. Ware (1979) suggests that we define the distinction according to the types of quantifiers and determiners that are used in front of the two types of noun. This seems plausible but undesirable in that we want the distinction to be applicable to nouns, not to noun phrases. The issue is whether mass nouns divide their reference in a different way than count nouns do. In order to attempt to answer this question we need to set aside quantifiers and determiners and deal with them separately.

Another possible criterion that could be used to maintain the mass-count distinction is to hold that the two differ according to what they refer to. So, as Cartwright (1979) explains, count nouns refer to individuals while mass nouns such as ‘water’ refer to stuff. This suggests that there is a genuine ontological distinction that corresponds to the linguistic one. But we may certainly ask whether mass nouns use a different referring mechanism than count nouns. In other words, when we say ‘milk is a good source of calcium’, it appears as though we are not intending to refer to a discrete mass of milk but rather to a type of matter. The question seems to be whether ‘milk’ in this sentence refers to a different kind of entity than ‘man’ does in the sentence ‘man is an animal’. And if so, where does this difference rest? Is it an ontological difference having to do with the nature of being of that which is referred to? Or is the difference found in the way we think about what they name?

In addition, what can we say of cases where it is not clear whether we are referring to a type of matter or a particular collection of matter? When a person who is gasping for breath says quite simply, “water”, is he referring to a type of matter or an instance?

Quine’s Body-Mindedness

There are several issues that could be raised based on the observation that mass nouns can be classified according to whether they refer to an instance of matter or to a type of matter. First and foremost, we may ask how we come to form types of anything. A concomitant question is whether types exist in the world or only in the minds of human observers.
Perhaps all we have is a world of individuals. If so, it is unclear whether such a world includes discrete instances of matter and what individuating criteria can be applied to matter in such a way as to result in discrete instances. Moreover, matter has the additional problem of not being a body in the sense of Quine (1974). At stake is the relationship between the cognitive representation of matter and Quine’s observation that human beings are instinctively body-minded. If we accept the claim that there are representational advantages bestowed upon bodies, what does this mean for the representation of matter?

Let us begin to address this latter question by first attempting to describe the presence of the matter concept and its role within human cognition. What I suggest is that we look upon the matter concept in much the same way as the object concept. However, there is a fundamental difference between the two. This difference is first noticed in psychological experiments conducted on infants, and it is appropriate for us to note the results from such experiments as they shed light upon how basic the distinction between matter and object actually is.

There are many experiments (for example, Baillargeon et al. 1985; Chiang and Wynn 1997; Huntley-Fenner et al. 2001) which show that while infants readily track objects, such as toy cars and rubber ducks, they fail to track discrete instances of matter, such as sand or gel. The literature regarding infant object recognition (Spelke 1994) suggests that the reason for this is that instances of matter lack certain principles or properties that objects possess. According to Carey and Xu (2001, p. 207), infant experiments on object recognition point to the following conclusion:

These infant studies suggest that the object tracking system is just that: an object tracking system, where object means 3D, bounded, coherent physical object. It fails to track perceptually specified figures that have a history of non-cohesion. (Emphasis in original.)
An example of this is found in two experiments (Baillargeon et al. 1985; Chiang and Wynn 1997). In each experiment, infants were presented with one of two trials. In one, a coherent, bounded object was dropped behind a screen that was placed in front of the infant (object trial). In the other, sand was poured behind a screen (material trial). In both cases the screen was removed after the initial presentation to test the subject’s response to the disappearance of the item in question. The results were that the infant subjects showed surprise (as measured by the amount of time the infant spent gazing at the area where the object or material was supposed to lie) at the outcome of the object trial, but did not show surprise in the material trial. These experiments lend support to there being an object tracking system within our cognitive repertoire, but not a material tracking system.
What are we to make of the fact that infants routinely fail to track instances of matter? Furthermore we can ask a more fundamental question: when presented with a non-solid instance of matter, does the infant perceive a non-solid instance of matter? I want to claim that this is indeed what the infant perceives. Notice how this is a more detailed claim than that found in the literature pertaining to infant object perception. The claim made by Carey and Xu above seems to suggest that what the infant perceives is primarily a non-object in the standard sense of objects being three-dimensional, coherent entities. This is an important claim, but it does not tell us much with respect to the infant’s perception of matter instances. What I would like to do below is to attempt to fill in the details with respect to the perception of discrete instances of matter.

The broader point is that perception alone cannot account for the fundamental difference between objects and non-objects, the latter of which include materials such as clay or sand. Rather, perception must be linked to more advanced cognitive systems that are flexible and specific enough to be sensitive to incoming perceptual information, yet rigid and general enough that the information is properly classified and tracked (or not tracked, in the case of a discrete instance of matter). Seen in this way, perception is not a lower-level cognitive activity divorced from higher-level activities such as categorization. It is, instead, embedded within cognition. Conversely, without perception there would be little need for categories or concepts at all.
The Matter Concept

Here, let us take stock of what kinds of entities we need in order to recognize, re-identify, and track collections of matter. The view taken up here is that there is a hierarchy of concepts which we must describe in order to begin to speak about recognizing matter. We will begin by describing our more general concept of matter.
First, there is matter, a broad superordinate category that stands as a contrast class to object. When enumerating the principles that determine whether an entity falls under the matter category, we want those principles to be sufficiently flexible to handle a wide variety of types of matter. In addition, we must keep in mind that, at least in infancy, the matter concept seems to be underdeveloped in contrast to the object concept.

What distinguishes matter from objects are certain irregularities of shape present in instances of the former. Matter, the concept, is attuned to discontinuities in shape, which objects, as a general rule, do not present, much as our perceptual system is attuned to perceiving objects and matter instances rather than molecules. (There are exceptions to this general rule, which we will observe in a moment.) The fact that matter corresponds to shape irregularities means that different criteria are used to recognize, re-identify, and track matter instances.

This is an important point which Keil, Kim, and Greif develop in their chapter in Forde and Humphreys (Eds., 2002). There, Keil et al. speak of the perceptual shunt as key to the cognitive processing of low-level perceptual information. The shunt is claimed to be a mechanism that channels perceptual information to different parts of the brain for subsequent higher-level cognitive processing. The idea is that in order for this process to work, our cognitive structure must be sensitive to salient perceptual information. In the words of Keil et al. (2001, p. 13), “data can only enter the system if it sets off primary perceptual triggers.” Our task here is to apply these ideas to the perception of instances of matter.

The experiments conducted on infants point to the conclusion that objects, in the standard sense, are assimilated according to shape, whereas matter instances are assimilated according to the material which composes them. Soja, Carey, and Spelke’s (1991) experiment involved presenting infants (2-year-olds) with a named object with a T-shape and a named non-solid matter instance of a novel shape. After the infants were habituated to the two items, they were shown two more sets of items. Having been shown the T-shaped object, infants tended to apply the stimulus name to a T-shaped object made of a different material rather than to a collection of separate objects made of the same material but arranged in a non-T shape. However, when presented with a matter instance of a novel shape, the infants applied the stimulus name to a differently shaped entity made of the same material rather than to a similarly shaped entity made of a different material. This experiment shows that there is an interesting dynamic between shape and material which is applied to differentiate matter instances from objects.

There are several questions involved in interpreting the results of this experiment. First and foremost is the question of how the subjects are receptive to the fact that material composition is salient in one trial and not the other. In other words, just what are they using to apply the stimulus word and how are they using it?

I would suggest that what the subjects perceive are discrete matter instances. However, in order to assimilate the stimulus with the target, the child must perform two different tasks at two different levels of cognitive processing. First, she must somehow mentally extract the material composition of the named stimulus. Second, she must have some notion that material composition is salient in the material trial but not in the object trial. At issue is how this is performed. The answer must be found at the top level of conceptual formation rather than in bottom-level perceptual experience. There are constraints which guide the mind in perceiving instances of matter. One such constraint was mentioned above: shape irregularity seems to be a good candidate for matter instance recognition. Allow me to elaborate on why I single out shape irregularity as salient to the classification of matter.

There is, on my interpretation of the above experiment, an important distinction to be made between perceiving shape irregularity and using it to assimilate two matter instances to a single name. Recognizing that the named stimulus is of a novel shape signals to the infant that the substance of which it is composed is of primary import. (Of course, there are other such signals found in surface properties, for instance texture and color distribution, which we will set aside here.) This leads the infant to overlook differences in shape when asked to assimilate names to different instances of matter. There are obvious objections to this interpretation. For example, just how do we define a novel or irregular shape? A man is irregularly shaped in a sense. Are we to classify a man as an instance of matter?

My first response would be to say that shape irregularity refers to asymmetry in shape. But this will not do, for we can certainly think of counterexamples. For instance, a symmetrical portion of gold is still called gold, irrespective of its symmetrical shape. It is interesting to consider the following. If a semi-solid matter instance such as clay were molded into a T-shape and presented as a named stimulus, would infants subsequently assimilate the name to targets of the same shape, as in the object trial? The current literature on infant perception (Huntley-Fenner 2001) seems to predict that the result of such an experiment would depend upon how the named stimulus was presented. If we formed the semi-solid material into a T-shape prior to presentation, then infants would assimilate the name according to shape. However, if we fashioned the material into a T-shape before the infant’s eyes, then assimilation would take place based upon material composition. In any event, it seems to me entirely possible, or even very likely, that an irregular shape marks a matter instance, however difficult irregularity is to define theoretically.

There are two more constraints placed on the cognitive processing of matter instances which I want to touch upon. The first is what I refer to as the uniformity constraint, which tells us that there is something peculiar about perceiving matter instances. Uniformity says that instances of matter are in general composed of a uniform material throughout. This applies especially to solid, opaque masses such as a nugget of gold. Of course, this assumption could be dead wrong: there could be a mass of some other mineral or metal concentrated in the center or scattered throughout. Nevertheless, we tend to infer uniformity throughout a matter instance from surface uniformity. This applies equally to translucent non-solids such as water, even when it has been mixed with, say, salt. The tendency is to view the mixture as uniform throughout the instance.

The last constraint delineating our perception of matter instances is that, in general, they do not present us with any significant surface divisions. Objects, by contrast, present us with parts at the mesoscopic level, which is the level at which we perceive: cups have handles and humans have arms. There is, of course, no sharp boundary between the handle and the remainder of the cup, or between the arm and the remainder of the human. Nevertheless, instances of matter lack this phenomenon altogether. As a result, we are much better able to mentally parse the cup into parts than the contents the cup contains.

What we are left with, then, are three constraining principles – irregular shape, uniformity, and lack of perceptible surface divisions – which interact to give us the matter concept. There are significant outstanding questions that one could ask of these constraints. For instance, are we to regard them as necessary or sufficient conditions for the matter concept? And further, how do they interact?

What I would like to do is briefly articulate some of the relationships among these principles. First, an observation regarding the uniformity principle: whether an instance of matter is uniform is not discoverable upon immediate visual perception. This sets uniformity apart from shape irregularity and the lack of perceptible surface divisions, which are perceived upon immediate visual inspection. What this means is that uniformity is something the subject derives on the basis of the other two principles. This derivation is important to the perception of an instance of matter, because the perception of matter instances includes depth information, that is, information about the physical properties of the instance at points hidden from visual inspection. This is what distinguishes it from the perception of animals or artifacts, whose inner physical properties are much more intricate and are available only to those with specialized knowledge, such as biologists. So the perception of portions of matter proceeds from shape irregularities and a lack of perceptible surface divisions to material uniformity throughout the portion in question. Of course, a lack of perceptible surface divisions is more strongly connected to material uniformity than shape irregularity is: when perceiving an instance of matter, a lack of surface divisions implies uniformity of material throughout the instance.
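The interaction among the three constraints might be sketched as a toy rule-based classifier. This is only an illustrative formalization: the predicates, feature names, and classification labels are hypothetical, not claims about actual cognitive mechanisms.

```python
# A toy formalization of the three constraints on the matter concept.
# All predicates and feature names are hypothetical illustrations,
# not claims about actual cognitive mechanisms.
from dataclasses import dataclass

@dataclass
class PerceptFeatures:
    irregular_shape: bool     # shape discontinuities detected on inspection
    surface_divisions: bool   # perceptible mesoscopic parts (handles, arms)
    uniform_surface: bool     # surface appears uniform in material

def classify(p: PerceptFeatures) -> str:
    """Shape irregularity and lack of surface divisions are read off
    directly; uniformity throughout the instance is derived from them."""
    if p.irregular_shape and not p.surface_divisions:
        # Uniformity throughout is inferred from surface uniformity alone.
        if p.uniform_surface:
            return "matter instance (assumed uniform throughout)"
        return "matter instance (uniformity uncertain)"
    return "standard object"

print(classify(PerceptFeatures(True, False, True)))   # e.g. a nugget of gold
print(classify(PerceptFeatures(False, True, False)))  # e.g. a cup with a handle
```

Note how the sketch mirrors the asymmetry in the text: uniformity is never an input read directly off the percept but a conclusion drawn once the other two constraints are satisfied.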

What I would like to do now is consider whether we have left something out of our analysis of perceiving portions of matter. Perhaps the three principles weighed above are aspects of a more fundamental principle that explains the perception of matter instances. I offer the following for consideration. There is a difference of degree in what I call three-dimensional definiteness between instances of matter and standard objects. When perceiving a particular portion of matter from a particular angle, it is much more difficult for the subject to mentally construct the perceptual properties of the part obstructed from visual inspection than is the case with standard objects. This is due in large part to the three principles described above. Irregular shape and the lack of perceptible surface divisions make it difficult to determine what is on the other side, whereas with, for example, a T-shaped object, it is not very challenging to surmise what the object would look like if rotated. However, the uniformity principle does at least tell us what type of matter the occluded side is made of. So, in a sense, uniformity reduces the amount of indefiniteness we have concerning the three-dimensional view of the matter instance in question; it counteracts to a certain extent the uncertainty with which irregular shape and lack of perceptible surface divisions leave us. For instance, when presented with a large portion of gold, we cannot properly imagine what the occluded section looks like on the basis of visual perception alone. However, we do assume that gold composes the unseen section, whatever particular shape it may have.

Of course, there are problems with these remarks as well. The lesser degree of three-dimensional definiteness seems to apply well to certain types of matter instances. But can the same be said of a portion of water, which, given its transparency, is perhaps more three-dimensionally definite than a T-shaped object? I leave this objection for the reader to consider. But it contains a point on which I would like to focus next: the notion of types of matter.

Types of Matter

There are different types of matter, many of which we lack specific names for. First, there is the heap, a collection of objects at close spatial quarters, such as an archipelago. This type can be further divided into heaps whose parts are of uniform shape and size, such as piles of sand, and those composed of parts of all shapes and sizes, for instance a heap of garbage. Next we have semi-solid types of matter, which include peanut butter and clay. There are also kinds of fluids, such as gases and, in physical-geographic parlance, parcels of air. Furthermore, we have liquids such as water, and finally solid kinds, for instance gold.
What I want to suggest is that the reason we need a type classification of matter is that the different types of matter are inductively rich. That is, their formation facilitates important inferences about how instances of matter behave. This point is worth emphasizing. Knowing that an instance of matter belongs to a particular type tells us something about its physical composition: a semi-solid mass, or, to stick with our terminology, a semi-solid instance of matter, can be divided into smaller portions composed of the same semi-solid material. In addition, knowing that a matter instance belongs to a particular type tells us something about its behavior. For example, a semi-solid instance would provide a certain amount of resistance upon surface contact, and if we placed a semi-solid instance at the top of an inclined plane we would not expect it to move toward the bottom, whereas we would expect a liquid such as water to do so. There are two implications to be drawn from this observation.
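The inductive richness of matter types can be pictured as a small lookup from type to expected behavior. The type names and behavior flags below are illustrative only, not an empirical taxonomy:

```python
# A sketch of the inductive richness of matter types: knowing an
# instance's type licenses predictions about its behavior. The type
# names and behavior flags are illustrative, not an empirical taxonomy.
MATTER_TYPES = {
    "solid":      {"resists_contact": True,  "flows_down_incline": False},
    "semi-solid": {"resists_contact": True,  "flows_down_incline": False},
    "liquid":     {"resists_contact": False, "flows_down_incline": True},
    "gas":        {"resists_contact": False, "flows_down_incline": False},
}

def predict(matter_type: str, behavior: str) -> bool:
    """Infer a behavior of an instance from its type alone."""
    return MATTER_TYPES[matter_type][behavior]

# Clay stays put at the top of an inclined plane; water does not.
print(predict("semi-solid", "flows_down_incline"))  # False
print(predict("liquid", "flows_down_incline"))      # True
```

The point of the table is that the inference runs from type membership to behavior without inspecting the particular instance, which is just what inductive richness amounts to here.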

I am intrigued by Pascal Boyer’s argument for category-specific tracking processes (in the commentary on Millikan 1998). Tracking instances of different matter types would, I theorize, involve different processes due to their different motions. Tracking a cloud of gas moving through the air is very different from tracking a slab of bronze as it is being fashioned into a statue. I want to say it is different because there seems to be a difference in the degree, but not the kind, of cohesion among the respective types of matter to which these instances belong. In fact, we can establish a continuum of cohesion among kinds of matter, ranging from the least cohesive fluids, such as smoke, to the most cohesive solid masses, such as gold, with semi-solids in the middle.

I also think that since different kinds of matter behave differently, it can be claimed that discrete matter instances are more than mere collections of parts. For instance, an area of water is more than a collection of water molecules, because although significant chunks of material can be added to or taken away from an instance of matter, we still view that matter instance as the same instance it was before the change. This seems to imply that instances of matter bear a significant resemblance to Aristotelian substances, at least as they are described in Book VIII of the Metaphysics.

What bothers me about this claim is that there is a problem with respect to Aristotelian substantial change. Aristotle recognized that although an entity may change, we still acknowledge the changed entity as identical to the entity which existed before the change. There is something in the world, and in the entity in question, which underwrites this phenomenon; for Aristotle, that something is matter. In Book VIII, Chapter 1 of the Metaphysics, line 1042a32, we read the following: “But clearly matter also is substance; for in all the opposite changes that occur there is something which underlies the changes.”

The problem of applying this principle to an instance of matter rests in the following. Imagine a scenario in which we have a quantity of water in a glass. We take the glass and pour some of the water out, leaving a smaller quantity of water (which we will designate Q1) in the glass. Next, some new water is poured into the same glass. Finally, we pour another amount of water out, leaving another quantity of water (designated Q2) which is exactly the same amount as (Q1). The question arises whether the water that remains in the glass at (Q2) is the same as that at (Q1). We really cannot be certain that the (Q1) water stayed at the bottom during the second out-pouring, or that some of its molecules did not end up in the amount which was poured out. In sum, we cannot determine whether the whole or any part of (Q2) is identical to (Q1), in which case there seems to be no causal foundation for calling the (Q2) water the same as the (Q1) water.

But we would be correct in calling the (Q2) water identical to the (Q1) water, and this is not because they have identical quantities. Rather, their identity is based on two considerations. First, they are uniform wholes, and this is true regardless of how many molecules of (Q2) are different from (Q1). Secondly, the two quantities display identical behaviors; for example, each reacts in the same way upon coming into contact with something else, and the glass holds both in the same way. This is in addition to the other perceptible physical properties, such as color, which are used in identification. They are the same type of matter, but are the instances the same? I want to claim that they are, because they are of the same type in addition to being the same quantity. Whether or not they have exactly the same molecules we leave to the scientist to determine.

But one obvious objection to these observations is that we could be completely wrong in calling the two quantities the same. Suppose it was not water which was poured into the glass but some chemical that bears a striking resemblance to it. In that case, at (Q2) we have some sort of water-chemical mixture that differs from (Q1), which was entirely composed of water. Here we have two different types without even knowing it.

This is certainly a serious objection, but the key is to notice that it misses the point I wanted to make. The point is to discover how we re-identify matter instances across change; the objection cites the fallibility of our knowledge. The fact that our knowledge can be wrong is a separate issue from how, in fact, we come to identify and assimilate. It is the case that we do identify entities as being the same, and we do track entities across time; we need to do so in order to survive. How we perform these tasks is a different question from whether we could be wrong. Besides, I could be equally wrong in believing that I am a philosopher; perhaps I am the victim of a deception. To which I reply that perhaps this is true, but it is highly unlikely.

The Material Object Concept and Material Objects

Before we proceed, I call attention to the fact that thus far we have been careful to speak of matter instances. This is because it is important to notice the differences, which are ontological, between discrete instances of matter and what is known in the literature on infant perception as standard objects. What I wish to do now is speak of the conceptualization of matter instances. For this reason, I will use the term material object instead of matter instance. This is meant to reflect that just as we need an object concept to track objects, we also need a material object concept to track material objects. Let me make clear, however, that a material object refers to a discrete instance of matter. From now on I will use the terms ‘material object’ and ‘matter instance’, or some variation of the latter, interchangeably to refer to particular portions of matter.
With this in mind what remains is a description of the material object concept and material objects. Specifically, we should ask just what are material object concepts, why do we need them, and how are they formed?

What is meant by the material object concept is, basically, the cognitive representation of material objects that is utilized during cognitive processing. Material object concepts are collections of special properties that distinguish material objects from standard objects; surface texture and color seem to be two such properties to which material object concepts must be receptive. We need material object concepts in order to re-identify and track an enormous range of possible individual material objects.

What is particularly difficult when devising a theory of matter is the amazing array of possible material objects that could exist in the world. Somehow a comprehensive theory must be able, not to explain them all, but to accommodate a large majority of them. Take for example a heap of similar objects, such as tennis balls. According to what was said above, this would be a material object; after all, it resembles a heap of sand. But even if it is a material object, how can we know this? Texture and color don’t seem to tell us anything different in the case of a heap of tennis balls than in the case of a single tennis ball.

First, we know it is a different object from a single tennis ball, not because of color or texture; the salient feature seems to be the irregular pattern of edges which the heap presents to the observer. As the visual system builds the primal sketch of the entity, in the sense of Marr (1982), the viewer is presented with an irregular collection of edges outlining it. Contrast this with a single tennis ball, which presents a comparatively regular set of edges. Second, we know the heap is a material object because it has a certain behavior; it reacts a certain way upon surface contact. We know, for instance, that taking a ball from the bottom of the heap will probably have consequences for the balls above it, even those not in immediate contact with it.

But we may ask why we need material object concepts at all. There are two reasons. First, our material object concepts must be able to capture and preserve properties that distinguish among the different types of matter. Secondly, a material object concept must also capture specific properties of particular matter instances.


What I would like to do now is connect the entities we have discussed in order to form a coherent explanation of material object perception. The strategy, which I call convergence, attempts to combine top-down and bottom-up approaches to explaining the mystery of material cognition: how people can easily recognize and re-identify material objects given the infinite variety of shapes and sizes they may have. To begin, I will attempt to clarify the explanatory strategy of convergence and to situate it within the literature on cognition and perception.
Traditionally, there are two approaches to human conceptual development, referred to above by the terms ‘top-down’ and ‘bottom-up’. Top-down approaches are committed to the assumption that our concepts are formed independently of human-world interaction. One variant of a top-down approach is conceptual nativism, which holds that we are born with at least some, if not all, of the concepts we have during the course of our respective lifetimes. On the other hand, bottom-up approaches are committed to the assumption that our concepts develop out of human-world interaction; the term ‘bottom-up approach’ is an umbrella term for all varieties of empiricism. The strategy of convergence is meant simultaneously to acknowledge such assumptions and to set them aside. I would argue that setting these assumptions aside is important for two reasons. First, the debate between empiricism and nativism, despite its rich philosophical history, which is too long to report in this article, may in fact be a diversion from what we should be seeking an explanation for: namely, human-world interaction itself, for without such an explanation the debate between empiricists and nativists would be incomprehensible. Secondly, and along the same lines, we should be working toward building an ontology that is independent of any assumptions like those made by the empiricists and nativists. The strategy offered below, that of convergence, marks the very start of such an endeavor.

I will focus on our formation of material object concepts, which are used to track specific instances of matter. We may ask how material object concepts are formed. My view is that they are formed through a union of conceptual constraints and perceptual information.

First of all, the matter concept is a collection of properties to which the perception of matter instances must be attuned. We have tried to identify these properties above. These properties play a major role in determining which entities are to be placed in the material object class. However important these properties are in the recognition of material objects, they do not provide us with the ability to single out specific instances of matter.

In order to have specific material object concepts that track specific matter instances we also need lower level perceptual information. What is meant here by lower level perceptual information is information about shape, size, location, etc. that is specific to this area of water, for example.

Thus the claim is that our material object concepts, which are used to track specific instances of matter, are formed where the properties of the matter concept and lower-level perceptual information converge.
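The convergence just described can be given a minimal sketch: a general bundle of matter-concept properties is bound to the lower-level perceptual details of one instance, yielding a representation that could be used for tracking. All field and class names here are hypothetical illustrations, not a proposed cognitive architecture:

```python
# A minimal sketch of the convergence strategy: a specific material
# object concept forms where general matter-concept properties meet
# instance-specific, lower-level perceptual information. All field
# names here are hypothetical illustrations.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class MatterConceptProperties:      # general, top-down constraints
    irregular_shape: bool
    assumed_uniform: bool
    no_surface_divisions: bool

@dataclass
class PerceptualInfo:               # lower-level, instance-specific
    shape: str
    size_cm: float
    location: Tuple[float, float]

@dataclass
class MaterialObjectConcept:        # the converged, trackable representation
    general: MatterConceptProperties
    specific: PerceptualInfo

def converge(general: MatterConceptProperties,
             specific: PerceptualInfo) -> MaterialObjectConcept:
    """Bind the general constraints to one instance's perceptual details."""
    return MaterialObjectConcept(general, specific)

# E.g. a particular puddle of water, tracked via its converged concept.
puddle = converge(MatterConceptProperties(True, True, True),
                  PerceptualInfo("amorphous", 30.0, (2.0, 5.0)))
print(puddle.specific.shape)  # amorphous
```

The design choice worth noting is that neither component alone suffices: the general properties classify without individuating, while the perceptual details individuate without classifying; only their combination tracks a specific matter instance.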

The Material Object Concept – A Test Case

Now let us put some of the ideas above to work in a test case. The case is designed to be difficult, in order to see how well the ideas discussed so far withstand some significant pressure.
Let us imagine we have in front of us two entities. One is a rock of a particular shape and color. The second has exactly the same shape and size as the first, but it is obvious from its distinct physical properties that it is gold. So the first object we refer to as ‘a rock’, a count noun, while the second we refer to as ‘gold’, a mass noun.

What is the true difference between an object and a material object? In this instance we have two entities with the same surface properties of shape and texture. Further, they can be said to have the same behaviors, in that both react identically to surface contact and both are coherent in the same way. In addition, if one is irregularly shaped, then so is the other. Moreover, let us suppose that their shape does not indicate any distinct mesoscopic parts. In sum, the constraints meant to distinguish matter from object seem to apply to both entities, and the distinction seems to break down at this point.

However, I argue that there is indeed a difference between the material object, gold, and the object, rock. The difference rests in the composition of the entities and in the uniformity assumption discussed above. In the case of gold, we assume that the color gives us information about what the entity is composed of throughout its extension in space. In fact, we would be truly shocked to find out that the gold on the surface was just a patina; notice that if we did make this discovery, the piece of gold would become instead a gold-covered rock. This applies even in cases not involving precious metals, such as coal, where there is still an expectation of material uniformity.

Further, it is uninteresting to apply the uniformity constraint to the rock. Instead, we would be more interested to be told that the rock was not uniform throughout and contained sections of gold scattered within it, in which case it would still be called a rock, but one with scattered portions of gold within it. The point is that speculating about, and coming to know, the innards of the rock does not affect its status as a rock as much as in the case of gold. Also, we have different names for the two types of objects: we do not call gold a ‘gold rock’, but we have special names, such as ‘gold nugget’ or ‘piece of gold’, for a significant portion of gold.

Next, we may inquire to what degree this object/material object distinction is captured by the linguistic mass/count distinction. The problem is that language does not encapsulate the distinction in its entirety. The difficulty is that ‘gold’ can be used to refer to a type of matter, defined by the properties that gold has, but it can also be used to refer to a particular material object composed of gold. Somehow we know that when a miner shouts, “I’ve found gold”, he is referring to some particular portion of gold with boundaries as yet to be discovered. Similarly, we know that when a milk-drinker says that “milk is a good source of calcium”, he is referring to a type of matter and not to some particular material object. It is my contention that a major part of how we decipher these different referents is that we have an order or hierarchy of concepts which helps us to understand our world. And just as these higher-order constraints aid our understanding, they also help us to decode and make sense of our language.

A Final Objection

Before we conclude, let us consider yet one more objection. The objection is provided by Millikan in her 1998 article “A Common Structure for Concepts of Individuals, Stuffs and Real Kinds: More Mama, More Milk, and More Mouse.” Millikan’s argument challenges some of the core assumptions underlying the claims found in this paper.

First, according to Millikan, concepts are not constructed by attending to properties. Millikan’s aim is to provide a nondescriptionist account of concept formation: concepts are not formed through the listing of specific properties, because properties cannot serve as the basis of individuation. Rather, the extensions of concepts, the instances that fall under a concept, are determined much more primitively, through a process along the lines of what philosophers of language call rigid designation. In other words, concepts do not describe entities; they point to or enumerate them. Indeed, Millikan’s analysis of concepts proceeds along the lines of an analysis of how the nouns of our language refer. This relates to her view of how the use of language comes to influence our concepts. She claims, “Having substance concepts need not depend on knowing words, but language interacts with substance concepts, completely transforming the conceptual repertoire” (Millikan 1998, p. 55).

Secondly, it is important to realize what Millikan classifies as substances. Substances include “stuffs” such as milk and gold; individuals such as Bill Clinton, Mama, and the Empire State Building; and real kinds. Examples of real kinds include Rosch’s (1975) basic-level categories, such as mouse and house, which children learn first (Millikan 1998).

There is a reason why Millikan includes such varied items under the substance category. Specifically, she wants to claim that there is no genuine ontological distinction to be made between material objects, or in her terminology stuff, such as milk, and objects, such as mouse. Here is Millikan (1998, p. 56) describing the relationship between concepts and ontology:

My claim will be that these apparently quite different types of concepts have an identical root structure and that this is possible because the various kinds of “substances” I have listed have an identical ontological structure when considered at a suitably abstract level.

The concepts mouse and milk have the same structure, so Millikan claims, as concepts of individuals like Mama and Bill Clinton. The claim is that stuff concepts, such as gold, are rooted in our cognitive structure because they are conceptually and ontologically similar to individual objects. Millikan makes another point that is worth mentioning here. She claims that there is a distinction to be made between a substance concept and the properties that a substance is known to possess. She states:

It is because knowledge of the properties of substances is often used in the process of identifying them that it is easy to confuse having a concept of a substance with having knowledge of properties that would identify it (Millikan 1998, p. 63).

So, in sum, the acquisition of substance concepts involves storing information about substances and associating this information with the correct set of properties.

A Brief Response

Allow me to respond to Millikan by noting some of the consequences of her position. First of all, her position seems to be much more complex than the one offered in the body of this paper. Further, this complexity is located in the way the mind perceives the world, not in the world itself.

Millikan’s view also seems to contradict the empirical findings on infant perception cited above, a point that Paul Bloom emphasizes in his Open Peer Commentary response to Millikan (1998). In the experiment conducted by Soja et al. (1991), infants applied names for objects very differently from names for stuff or material objects.

Secondly, it is not clear to me how we are to link our information about substances to the correct list of properties. To do so, it seems we would have to posit an additional cognitive mechanism, beyond the perceptual shunt discussed above that is needed to pick out the salient properties of objects and material objects alike. Under Millikan’s view we would need some sort of structure to connect the important properties to our information about substances. Further, this structure, it would seem, must translate our perception of properties and our information about substances into a uniform format, or perhaps language. It appears as though Millikan is committed to some form of the position that the mind is a general processor, which holds that the mind employs a general strategy and/or language across tasks.

The consequence of this view is that online processing, the kind of cognitive processing that operates on perceptual information, becomes inordinately difficult and slow, because perceptual information, our knowledge of properties, and our substance concepts must first be joined together and only then processed. Again, this contradicts the fact that infants readily and easily distinguish between objects and material objects. In addition, if perceiving substances occurs the way Millikan describes, it is hard to see how we can readily make distinctions that are relevant to our survival. When I cross a street and notice a bus rapidly moving toward me, I do not link bus properties with bus substance. Instead, I know quite early in my perception of the bus that it is an object, one with a likely trajectory that, if I do not take immediate action, will threaten my survival. Millikan overlooks the fact that perception, to be of any use to us, must not only be accurate and consistent more often than not, but also agile and quick enough to deliver real-time information to more sophisticated cognitive systems.

A simpler explanation is available if we recognize that there are different entities in the world, ontologically speaking. Two such entities include objects and material objects. The world is complex. However, the way ordinary people conceptualize the world is much less so.


In sum, we have attempted to show what place perception has, not only within our cognitive capacity, but within our daily interaction with the world around us. Our concepts must be amenable to perceptual information if we are to make sense of the world in which we live.

Thus, the claim is that we must utilize both top-down and bottom-up processing mechanisms in order to classify matter into types. We also need this account to build material object concepts, which are used to track particular material objects located within the visual field. In addition, we proposed three general constraints: uniformity, shape irregularity, and absence of perceptible surface division on the mesoscopic scale. We then considered whether these three may be aspects of a more general constraint, which we referred to as three-dimensional definiteness, and which limits material objects to having a uniform material composition throughout. These constraints filter down to the material object concept level and facilitate the classification of matter into types. They do not, however, provide us with specific types. Rather, specificity comes from the perception of material objects, whose representation is sensitive to the texture, color, and irregularities in shape that material objects possess.

There are two reasons for giving material object concepts separate treatment. First, they are inductively rich and their processing is sufficiently complex: different types of material object behave differently upon surface contact, and there seems to be a continuum of coherence that explains this. Second, this richness is not entirely captured by language, specifically by the mass-count distinction.

I would like to thank Roberto Casati, Randall Dipert, Gerald Erion, Barry Smith, and an anonymous reviewer for helpful comments. All remaining errors belong to the author. I would also like to acknowledge the National Science Foundation for supporting this research under the IGERT program at the State University of New York at Buffalo under award number DGE-9870668.



  1. Ayers, Michael. (1997). Is Physical Object a Sortal Concept? A Reply to Xu. Mind & Language, 3/4, pp. 393-405.
  2. Baillargeon, Renee, Spelke, Elizabeth S., and Wasserman, Stanley. (1985). Object Permanence in Five-Month-Old Infants. Cognition, 20, pp. 191-208.
  3. Barker, Roger G. (1968). Ecological Psychology: Concepts and Methods for Studying the Environment of Human Behavior. Stanford: Stanford University Press.
  4. Bunt, H.C. (1979). ‘Ensembles and the Formal Semantic Properties of Mass Terms.’ In F.J. Pelletier (Ed.), Mass Terms: Some Philosophical Problems. Dordrecht, Holland: D. Reidel Publishing Company, pp. 249-277.
  5. Cartwright, H. (1979). ‘Some Remarks about Mass Nouns and Plurality.’ In F.J. Pelletier (Ed.), Mass Terms: Some Philosophical Problems. Dordrecht, Holland: D. Reidel Publishing Company, pp. 249-277.
  6. Carey, Susan, and Xu, Fei. (2001). Infants’ Knowledge of Objects: Beyond Object Files and Object Tracking. Cognition, 80, pp. 179-213.
  7. Chiang, W.-C., and Wynn, K. (1997). Eight-Month-Olds’ Reasoning about Collections. Poster presented at the meeting of the Society for Research in Child Development, Washington, D.C., April 4.
  8. Forde, E.M.E., and Humphreys, G.W. (Eds.). (2002). Category Specificity in Brain and Mind. New York: Psychology Press.
  9. Frege, Gottlob. (1892/1966). On Sense and Reference. In P. Geach and M. Black (Eds.), Translations from the Philosophical Writings of Gottlob Frege. Oxford: Blackwell.
  10. Gathercole, Virginia C. (1986). Evaluating Competing Linguistic Theories with Child Language Data: The Case of the Mass-Count Distinction. Linguistics and Philosophy, 9, pp. 151-190.
  11. Gibson, James J. (1979). The Ecological Approach to Visual Perception. Boston: Houghton Mifflin Company.
  12. Harnad, S. (1990). The Symbol Grounding Problem. Physica D, 42, pp. 335-346.
  13. Hirschfeld, Lawrence A. (1996). Race in the Making: Cognition, Culture, and the Child’s Construction of Human Kinds. Cambridge, Massachusetts: MIT Press.
  14. Huntley-Fenner, Gavin. (2001). Children’s Understanding of Number is Similar to Adults’ and Rats’: Numerical Estimation by 5-7 Year Olds. Cognition, 78 (3), pp. B27-B40.
  15. Kayed, Ahmad, and Colomb, Robert M. (2002). Using Ontologies to Index Conceptual Structures for Tendering Automation. Australian Computer Science Communications, 24 (2), pp. 95-101.
  16. Kripke, Saul A. (1980). Naming and Necessity. Cambridge, Massachusetts: Harvard University Press.
  17. Laycock, H. (1979). ‘Theories of Matter.’ In F.J. Pelletier (Ed.), Mass Terms: Some Philosophical Problems. Dordrecht, Holland: D. Reidel Publishing Company, pp. 89-120.
  18. Lutz, Michael, Riedemann, Catharina, and Probst, Florian. (2003). ‘A Classification Framework for Approaches to Achieving Semantic Interoperability between GI Web Services.’ In W. Kuhn, M.F. Worboys, and S. Timpf (Eds.), Conference on Spatial Information Theory, LNCS 2825. Berlin, Heidelberg: Springer-Verlag, pp. 186-203.
  19. Marr, David. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco: W.H. Freeman and Company.
  20. McKeon, Richard. (1941). The Basic Works of Aristotle. New York: Random House.
  21. Millikan, Ruth Garrett. (1998). A Common Structure for Concepts of Individuals, Stuffs, and Real Kinds: More Mama, More Milk, and More Mouse. Behavioral and Brain Sciences, 21, pp. 55-100.
  22. Pelletier, Francis Jeffry. (1979). ‘Non-Singular Reference: Some Preliminaries.’ In F.J. Pelletier (Ed.), Mass Terms: Some Philosophical Problems. Dordrecht, Holland: D. Reidel Publishing Company, pp. 1-14.
  23. Putnam, Hilary. (1975). The Meaning of “Meaning.” In Keith Gunderson (Ed.), Language, Mind and Knowledge, Vol. 7 of Minnesota Studies in the Philosophy of Science. Minneapolis: University of Minnesota Press.
  24. Pylyshyn, Z.W., and Storm, R.W. (1988). Tracking of Multiple Independent Targets: Evidence for a Parallel Tracking Mechanism. Spatial Vision, 3, pp. 179-197.
  25. Quine, W.V.O. (1974). Methods of Logic. London: Routledge.
  26. Robinson, Denis. (1982). Re-Identifying Matter. The Philosophical Review, 91 (3), pp. 317-341.
  27. Soja, N.N., Carey, S., and Spelke, E.S. (1991). Ontological Categories Guide Young Children’s Inductions of Word Meaning: Object Terms and Substance Terms. Cognition, 38, pp. 179-211.
  28. Talmy, Leonard. (2000). Toward a Cognitive Semantics: Volume I: Concept Structuring Systems. Cambridge, Massachusetts: MIT Press.
  29. Ware, R. (1979). ‘Some Bits and Pieces.’ In F.J. Pelletier (Ed.), Mass Terms: Some Philosophical Problems. Dordrecht, Holland: D. Reidel Publishing Company, pp. 15-29.
  30. Wielinga, B.J., Schreiber, A.T., Wielemaker, J., and Sandberg, J.A.C. (2001). From Thesaurus to Ontology. Proceedings of the International Conference on Knowledge Capture. New York: ACM Press, pp. 194-201.
  31. Zemach, E. (1979). ‘Four Ontologies.’ In F.J. Pelletier (Ed.), Mass Terms: Some Philosophical Problems. Dordrecht, Holland: D. Reidel Publishing Company, pp. 63-80.



Jeffrey S. Galko,
State University of New York at Buffalo, Ontology and Perception, Essays in Philosophy, A Biannual Journal, Vol. 5 No. 1, January 2004

Consciousness & Existence

What is your being perceptually conscious, your being aware of your surroundings?

  1. It cannot be a string of only neural or electrochemical events. This old materialist answer, the eliminative kind, although still with us, surely leaves something out, a lot.
  2. Does it help to say instead that your being perceptually conscious is a string of events such that each of them is one thing with both neural and other properties? This different and lenient sort of doctrine is spoken of as physicalism, the mind-brain identity theory and monism, and, more enlighteningly, as event-monism. It calls out for completion by an adequate account of the other properties, the ones in which your perceptual consciousness evidently consists. Let us suppose, however, that event-monism when completed will still be somehow physicalistic. 
  3. Is there an answer to the question of the nature of your perceptual consciousness in the varieties of neural functionalism — functionalism as limited to our species? Here, perceptual consciousness consists in causal relata that are only neural. Despite intending to preserve rather than eliminate consciousness, by way of the idea of variable realization, neural functionalism seems like eliminative materialism in leaving a lot out. Certainly an event of your perceptual consciousness, that event, does not acquire more than its neural properties by the consideration that causally speaking it might instead have had other than neural properties.
  4. Finally, it now seems desperate to say that perceptual consciousness is physical properties in your head, properties not now known in neuroscience but somehow different properties that may be discovered in some future science. It is all too possible to anticipate that such discoveries would be taken as still leaving a lot out.(1)

These four seemingly failing accounts all have the good recommendation of naturalism. This cannot be philosophically well understood as belief in only whatever things are allowed to exist in science, along with a commitment to scientific method. This characterization of naturalism is uninformative — it gives only a signpost to the things rather than their nature. It is also an uncertain signpost, if only for the reason that psychology is within science and there is uncertainty about what is within psychology. Also, at best, the characterization is vulnerably tied to a current time-slice of science.    

A better philosophical understanding of naturalism is belief in only physical things, and in fitting methods of inquiry, of which scientific method is at least the dominant one. As for the physical things, given what has just been said, they must not be weakly identified as the things allowed in science. Let us understand them in something like a standard and traditional way. There are two categories of them, those taking up space and time and having perceived properties, and those unperceived but taking up space and time and standing in causal or other lawlike connection with the perceived things. Sofas go into the first category, atoms and a good deal else into the second.(2)    

There is another account of perceptual consciousness, a speculation that looks at the thing very differently.(3) Something new certainly seems necessary in the face of our persistent philosophical failure to get agreement, outside of groups and coteries, on the nature of perceptual and other consciousness. We need a change.(4) This particular different account is close to naturalism but not within it — which particular shortcoming, if such it is, we can leave unconsidered for a while. The spur to the account, as much as the need for a change, is the so-called phenomenology of perceptual consciousness, what is called the way it seems. Here we find the first of what may be taken as four principal constraints on an adequate account of perceptual consciousness.

1. Phenomenology, So-Called

Think again. What is it for you now to be aware of your surroundings? It’s for things somehow to exist, isn’t it? To speak a little grandly, what it is for you to be perceptually conscious now is for a world somehow to exist, a certain changing totality of things. Mine now consists in things in this room and outside the window. This seems to be the right answer to the question of the so-called phenomenology or seeming nature or appearance of perceptual consciousness. Something more is worth thinking about — that we here have the most promising conception of actual perceptual consciousness, that we here begin to get hold of its real nature. What it is for you to be aware of your surroundings is for, in a certain sense, there to be certain things with various properties in space and time.

This rough idea is not, anyway in intention, what can be conveyed by saying in a certain manner that what it is for you to be perceptually conscious is for things to exist for you. That is just saying, if somehow evocatively or mysteriously, that standard physical things are in your particular awareness — in fact are perceived by you. No analytic light would be shed on your being perceptually conscious by saying in this way that things exist for you. To do so, despite the puzzling implications, would be to use the ordinary pre-analytic content of talk about perceptual consciousness. The rough idea, rather, although a person is certainly part of it, is that a claim as to a person’s being perceptually conscious needs to be regarded as no more than a kind of claim as to the existence of things, not exactly standard physical things.    

Should this view be put aside without further ado as a mistake about perceptual consciousness as it is, the reality of it? A mistake manufactured out of the truth of phenomenology that your perceptual consciousness seems to you to consist in the existence of things outside you in space and time? On the contrary, it still strikes me as plain that perceptual consciousness, like consciousness generally, is something to which the distinction between appearance and reality, and thus talk of phenomenology, does not apply.    

Consciousness itself, whatever it is, as many have said in several ways, is what we can non-inferentially report. That much, easily distinguished from additional and weakening suppositions about introspection, infallibility and so on, is hardly disputable. What cannot in this sense be reported is not within consciousness. It must then be merely audaciously inconsistent to speak of something as within or a part of consciousness and also hidden. There are of course bases and structures under consciousness, and causes of it, and theories pertaining to these and other related things, but none of this is to the point. Perceptual consciousness, in short, has only the parts it seems to have and no more. That is not to say, of course, that it does not raise philosophical problems. There is a need for interpretation of what we are given.    

Is it the case that all that is reportable of perceptual consciousness is the existence of a world in the way so far suggested? To claim this will seem to be to claim too much, maybe merely audaciously. There has been much talk of a subjective aspect of one’s consciousness. There is the role in one’s consciousness of what seems half-reportable — a subject, oneself.(5) Let us keep this in mind, but go on with the rough idea we have. It may be that it itself, the idea of a world or totality, will clarify talk of subjectivity.

So — for you to be perceptually conscious is for things in a way to exist in space and time. I do not take what has been said of phenomenology to have proved the claim, but rather to have redistributed the burden of proof a little. Other defences or virtues of the idea, additional to the need for a new beginning and the consideration having to do with phenomenology, can be brought into view by clarifying the claim of existence. It can be clarified by comparing it with something else.    

2. Etherealizing Consciousness, and the Reality of It

The claim of existence is both like and unlike one part of what more or less standardly or traditionally is meant by saying that there are physical things or that there is a physical world. This part, as already noted, is that there are things occupying space and time, and thereby having shape, size and so on, and also having at least one perceived property. These things are distinct from the second category of physical things. Those have no perceived properties but are spatio-temporal occupants and are in causal or like connection with physical things in the first category.    
Part of the likeness between a world of perceptual consciousness and the perceived part of the physical world has to do with the fact that those physical things — the sofas and the like — have some dependencies.    

Firstly, they depend on or have a necessary condition in something like the second category of physical things, atoms and the like, sometimes itself called the scientific world. This necessary condition is some kind of constitutive condition, although not the only one. The physical things in the perceived part of the physical world, secondly, have a well-known dependency on perceivers in general or some perceivers or other, as distinct from any particular perceiver. The dependency has to do with our neural and perceptual apparatus and more than that. Since, whether or not reportable by a perceiver, the idea of a subject, or anyway some idea of subjectivity, maybe of a point of view, seems necessary to the idea of a perceiver, the physical things in question have a third dependency, on subjects in general or some few subjects, but none in particular.

There are related dependencies of the particular world or totality of things in which, according to my story, your perceptual consciousness now consists. Firstly, it too has a kind of constitutive dependency on entities in the scientific world. Secondly, it has a kind of necessary condition in your own ongoing neural history. Further, your world of perceptual consciousness somehow depends on what we are inclined to call a particular subject, yourself, whether or not it gets into your reports.    

We have various impulses about consciousness. One, although not the strongest, is somehow to etherealize it. Partly as a result of this impulse, a predictable objection will be made to the present account of perceptual consciousness. In the objection, the dependencies on particular persons will be relied on in order to try to demote the contents of perceptual consciousness, say your world of perceptual consciousness, from being propertied things out in space and time. The attempt will be made to turn them into a mental world, some totality of thoughts and feelings of an ethereal and thus uncertain nature. A mental world in this sense, further, will likely be located within a cranium.    

There is a defence against this demotion of a world of perceptual consciousness in the similarities just noted. Part of the defence is that a dependency on human perceivers in general, including perceivers somehow taken as subjects, does not demote the perceived part of the physical world to anything less than propertied things out in space and time. Also, what somehow stands in the way of any such demotion, a somehow constitutive dependence on the scientific world, also has a counterpart with a world of perceptual consciousness. In the case of each item in a world of perceptual consciousness too, there is a standing necessary condition in the scientific world.    

Does demotion threaten again when a further truth about your world of perceptual consciousness is noted, one bound up with the dependencies or necessary conditions already noted? The further truth is that your world has not only a kind of necessary condition in your neural activity, but also a kind of sufficient condition or guarantee.    

Well, should a rush to demote your world into a mental world on this account not be restrained by a certain fact? The other necessary condition of your world of perceptual consciousness, the somehow constitutive one in the scientific world, is a necessary condition of your world of perceptual consciousness in virtue of being a necessary condition of exactly the sufficient condition or guarantee of your world which is your neural activity. It is a little too much to say, but not much, that the chair in your perceptual consciousness is formed, via your neural activity, by the scientific chair.

There is something else to be said. Return to the perceived part of the physical world. A thing in it, as we know, is dependent on a necessary condition in the scientific world, and also on perceivers in general. But here too, although we have not been inclined to notice it, there is a further truth. The two necessary conditions are related in a certain way. The perceived part of the physical world also has a kind of sufficient condition in the perceivers in general — which sufficient condition is dependent on the scientific world.(6)

All of which leads to at least a question of why your world of perceptual consciousness should be demoted to the standing of being a mental world. The supposed reasons given have counterparts with the perceived part of the physical world, and those counterparts do not demote the physical things in question.    

If the etherealizing and cranializing impulse exists, and can latch onto the personal dependencies, but perhaps can be resisted, there is another stronger and indeed more or less opposite impulse about consciousness that needs attention. It is better called a conviction. It can hardly be resisted, and so the question arises of how it is suited by the account of perceptual consciousness under consideration.    

This is the conviction had by most of us that the four naturalistic accounts of perceptual consciousness mentioned at the start leave out a lot. In fact that is to understate the conviction. We are inclined to say that the four accounts leave out a reality, indeed the reality of perceptual consciousness. If they left out only something ethereal, something gossamer, they would not be so unsatisfactory. They would not be so resisted. What they leave out, we say, is not something diaphanous, elusive, peripheral or inessential. When we lose consciousness we do not lose just some gossamer. It’s not that you lose touch with what has the reality, but that to lose consciousness is for a reality to end, with luck only for a while.    

This talk about reality, loose as it is, seems to me to express a constraint that must be satisfied by any adequate conception of perceptual consciousness. It is the second of four principal constraints. It is satisfied by the conception under consideration. The account suits it down to the ground. Indeed, it is unique in satisfying it, despite some complications that need a moment’s attention.    

The chair in your perceptual world, although a careless mistake about it is more than possible,(7) is not identical with the chair in the perceived part of the physical world. It is not as real as that. Rather, to speak generally, the first chair is in a part-whole relationship with the second chair. As greater and lesser philosophers have said before now, although often after demoting the chair in your perceptual world, it is a constituent, element, side, facet or the like of the chair in the perceived part of the physical world. The latter chair is constructed, so to speak, out of such chairs as yours.(8)

This does not affect the fact that the chair in your perceptual world is itself in space and time, otherwise propertied, and, as may be added, in causal relations with other such things. In short, the conception of perceptual consciousness under consideration, as a certain totality of existing things, fully accords with our conviction, fully explains the reality we are moved to accord to that consciousness. That reality is constituted by the things. Could anything else do the job?    

3. Subjectivity

The conception of your perceptual consciousness as a certain totality of existing things has as great a virtue not in the similarities but in the differences between such a totality and the perceived part of the physical world. Certainly this virtue, the satisfaction of a third principal constraint, will be of the greatest importance in any attempt to build on the conception of perceptual consciousness in order to come to a satisfactory conception of consciousness generally.    
This virtue has to do with what has already been mentioned and has long been called subjectivity. Consciousness and perceptual consciousness surely have a subjective character. What this conviction of ours comes to, in very general terms, and to stick to perceptual consciousness, is that there is a fundamental difference between my perceptual consciousness and yours, and also, more important, between either mine or yours and, in Nagel’s phrase, a view from nowhere.(9) There is, in terms of the account of perceptual consciousness as existence, a fundamental difference between your world of perceptual consciousness and mine, and, more important, between either of them and the perceived part of the physical world, not to mention the unperceived part.

That the four naturalist ideas of perceptual consciousness noticed at the start fail to give an adequate account of subjectivity is as large an objection to them as that they offend against the so-called phenomenology of consciousness and leave out a reality. They are not alone in failing to give an adequate account of subjectivity, by the way. It transpires that an admirable account of consciousness officially opposed to them, Searle’s, although naturalistic in inclination itself, shares the failing. Various particular facts of subjectivity enumerated by him, such as the causal dependency of events of perceptual consciousness on a particular perceiver, are facts consistent with neural functionalism, and insufficient as an account of our conviction of fundamental subjectivity.(10)

The conception of perceptual consciousness under consideration does of course recognize the insufficient facts of subjectivity just mentioned, say the dependency of your perceptual consciousness on your neural states. Also, it gives literal sense not only to one of these ideas, that perceptual consciousness involves oneself as a subject or a point of view, but to the related and more definite idea that a point of view is not merely involved in but is one thing that is constitutive of perceptual consciousness. Your world of perceptual consciousness is a world from a point of view, the latter being where your head is.    

There is more to subjectivity, something more fundamental, a larger distinction of perceptual consciousness. This conviction of ours, as it seems to me, is satisfied by the account under consideration of perceptual consciousness as existence. By this account, simply, what each of us has in his or her consciousness is other than the physical world, the world that is objective in not having a dependency on anyone in particular. No world of perceptual consciousness is identical in its contents with the perceived part of the physical world — or of course the other part. Your world of perceptual consciousness is exactly not the physical world. What it is, to repeat, is a totality of different things in space and time. It is prior to and a constituent or the like of the physical world.    

In short, the fundamental fact of subjectivity is the existence of subjective worlds, no less subjective and no less distinct from the physical world for being spatio-temporal and propertied.    

4. The Mind-Body Problem

A fourth constraint on accounts of perceptual consciousness, as of consciousness generally, has to do with the mind-body problem, the problem of the relationship of consciousness to the brain, and, more particularly, of events of consciousness to neural and other physical events. We think of the problem in two ways, (1) in terms of physical events in our environments and also neural events giving rise to or contributing to or being in lawlike or nomic connection with consciousness, and (2) in terms of conscious events giving rise to or contributing to our behaviour, this being physical.(11)
Accounts of the nature of perceptual consciousness at least have a bearing on the mind-body problem and may actually entail attempts to solve it. To state in one way the constraint on the accounts of the nature of perceptual consciousness, they must at least not worsen the mind-body problem.    

To glance back at the first of the four naturalist accounts at the start, eliminative materialism assigns to conscious events only neural properties. In that sense it can be said to identify conscious and neural events. From anything but the perspective of the doctrine itself, of course, what it does is to eliminate conscious events and thereby eliminate the problem of the relationship of such events to physical events. This way with the mind-body problem seems to be another nail in the coffin of eliminative materialism as an account of consciousness, since surely one thing we know of consciousness is that it is such as to raise the mind-body problem.    

The second account, event-monism, is to the effect that there is one thing that possesses a first property somehow consistent with what is called physicalism and a second property that is neural. In this event-monism, which is also a property-dualism, a conscious property is related to a neural and physical property by being of the same single thing. The worth of such a view is unsettled. Judgement must wait on adequate accounts of the conscious properties themselves.12 In the absence of these accounts, by the way, there is no real barrier to thinking of the third and fourth naturalistic accounts as instances of this second one, event-monism.    

The third account, neural functionalism, regarded as a proposal about the mind-body problem, and of course limited to human minds and bodies, is that a conscious event is a certain causal relatum or effect-cause whose other properties are only neural. It cannot be, as loose talk of realization sometimes suggests, that there are two distinct events in question, a neural one `realizing’ the other one. The relationship between the event as neural and the event as conscious, further described, is that the event as conscious is an effect-cause that it might still have been if it were not neural. The fourth account of perceptual consciousness, finally, was that it consists in properties in the head that are physical but not neural — not part of current neuroscience and not certain to fit into it. They can be regarded as in lawlike connection with known neural properties.    

The four accounts, considered in terms of the mind-body problem, share a certain recommendation. It is that the relations into which they put conscious events as they conceive them, on the one hand, and, on the other hand, physical events, are not baffling. More precisely, to come to the fundamental point, causal and other lawlike relations between these two categories of events, which relations indubitably exist, are not left or made baffling.    

In the four accounts, these relations hold between (i) neural events (mistakenly supposed by non-materialists to have a distinct property of consciousness) and other neural or otherwise physical events, (ii) events somehow consistent with what is called physicalism and neural or otherwise physical events, (iii) replaceable causal relata and either those same neural events or other neural or otherwise physical events, (iv) physical events yet to be discovered and other physical, perhaps neural, events.    

This is of course the strength of naturalism, a matter of the comparison with mental-world or other ethereal accounts of consciousness. These latter accounts, given their vagueness, certainly do not make conscious events into things of which it can be seen that they can be causes or effects of, or in other lawlike connection with, physical events. But the success of the four naturalistic accounts is also their failure. What they do, in the course of making conscious events causally and nomically acceptable, is to divest them of their seeming and actual nature, their reality, and, above all, their subjectivity. Conscious events are, so to speak, left solidly neural or physical, but not solidly conscious.    

What of the proposal about the mind-body problem that comes with the account of perceptual consciousness we are considering? It is a proposal, of course, having to do only with perceptual consciousness and what it is causally or nomically related to. The question is that of whether a world of perceptual consciousness can be unbafflingly in causal or other lawlike connection with physical things. We know it has other recommendations, but can it be unproblematic in this essential way?    

To reflect on this is to come to a crucial question. Does an unproblematic cause have to be spatio-temporal and somehow propertied, or does it have to be physical according to the definition with which we have been working? I take it that it is clear that only the former is required. To revert to an original crux in the philosophy of mind, the great problem of Descartes’ account of the mind was that he put it out of space. The account of perceptual consciousness as existence is certainly different.    

I allow that we have an inclination to require our causes and effects to be physical, but what is this requirement? It seems to me that it is an epistemological requirement having to do with certain contemplated or confirmed or true causal statements — all those having to do with other things than consciousness. What is taken as needed is other and more than a subjective basis. What is needed is a basis having to do with perceivers generally, not just one of them. But that is not a requirement on causal and related connection itself. All that is needed for such connection, really, is something in space and time and somehow propertied.    

The final recommendation of taking perceptual consciousness as existence, then, is that it is unique in allowing both for comprehensible causal relations and also for subjectivity, etc. The near-naturalism13 of the account allows for the causal relations and, along with them, the other things we have to have.    

5. Historical Theories, Brains in Vats

So much for an impression of a possible account of perceptual consciousness. Evidently it is different from each of two historical theories of perception, these being analyses of seeing, hearing and so on which give little attention to the matter of consciousness, or indeed by-pass it entirely. One historical theory, direct or `naive’ realism, is to the effect that in perception we are aware of only physical objects. The other is the representative theory of perception or phenomenalism, to the effect that what we are aware of is objects internal to the perceiver — ideas, sense-data, percepts or the like. Certainly they are elusive, true to the ethereal impulse.    

The account of perceptual consciousness under consideration has to do neither with physical objects exactly nor of course with ethereal and cranial objects. Rather, it has to do with spatio-temporal and propertied constituents of ordinary physical objects, these constituents being subjective in the fundamental sense that they are not physical objects — they have a dependency on or are related to one perceiver in particular, etc. If these are important distinctions between the account in hand and the historical theories, there is a yet more fundamental distinction.    

It is implicit or explicit in the tradition of direct realism that consciousness or awareness of physical objects consists in a perceiver’s baffling relation to them. One thing is clear, however. This relation or fact of perceptual consciousness is not itself taken as being the existence of the physical objects or, more relevantly, the existence of spatio-temporal and propertied constituents of such objects. Rather, the assumption or story is roughly that a chair satisfies conditions of being a physical object, and thereafter it may or may not be perceived by you, within your awareness. Little or nothing is said of this conscious awareness. It is harder to be clear in this connection about the internal objects of awareness in the representative theory of perception. They too, however, seem to be taken as distinct from awareness of them. The fundamental distinction of the account of perceptual consciousness being considered, then, as against the two historical theories, is that for you to be perceptually conscious is for a world or totality of things in a way to exist.    

The account is in part similar to direct realism — in the account, perceptual consciousness is intrinsically made a matter of something not in the head. It will thus be apparent that it is open to something very like the long-running objection to direct realism, this objection also being an argument for a representative theory of perception. The long-running objection and argument is essentially that in perception we cannot be aware of physical objects, since hallucination, where there are no such objects, is indistinguishable from perception. What we must therefore be aware of in both cases is objects internal to ourselves.    

The related objection to the account of perceptual experience as existence will be that such experience cannot consist in the existence of propertied things in space and time because something indistinguishable from such experience could be had in the absence of such things. It is conceivable that you, or a brain in a vat, thanks to the ministrations of neuroscientists, could have an experience indistinguishable from one you are having now, but in the absence of the right propertied things in space and time. There are similar objections to other different tendencies in the philosophy of mind to get consciousness out of the cranium.14    

In my opinion, the best defence against all such objections, which certainly are troublesome, is an attack on the views being argued for, representative theories of perception. The objections, as must not be forgotten, are commitments to, or at least contemplations of, representative theories. It seems that a certain attack on such views can now have more substance, if not more logic, given our greater knowledge of the processes issuing in consciousness. Also, we may now have a better grasp on the question being answered by representative theories and direct realism, and its distinction as a philosophical question.    

At the heart of representative theories of perception is the idea of an inference, from some premise or other to a conclusion about a physical thing. We begin with an internal object of awareness, a sense-datum or the like, and we end up with belief or the like in a chair. But of course there is no sign of any such carry-on in what is called the phenomenology of consciousness. To which truth, of course, it is replied, by defenders of the representative theory, that the inference, and of course also the awareness of the inner thing, are not conscious. That necessary reply must give rise to a fatal rejoinder.    

We are now much better informed of the process which issues in your perceptual consciousness of the chair. To actual retinal images we add much about neural structure and activity in the visual cortex, etc. No direct realist and no advocate of perceptual consciousness as existence is committed in the slightest degree to any scepticism about the science, of course. Rather, we draw on it to make what is surely the fatal rejoinder that the representative theory of perception, having taken its subject-matter out of consciousness in order to defend itself, is no more than a kind of impressionistic version of this scientific story or some last part of it.    

The scientific story, and the philosophical impression of it, are evidently not an answer to the philosophical question asked, historically or now, about perceptual consciousness. That question, as now seems clear, is the question of the so-called phenomenology of consciousness, better expressed as being of the real nature of consciousness. True to a deep impulse of philosophy, it is a question of at least an epistemological cast, about our conscious acquisition of belief and knowledge. The answer to it cannot be anything like the representative theory of perception, which necessarily removes itself from the discussion. The answer can be the account of perceptual consciousness under consideration. But, whether or not it is, and to stick to the point, the objection from hallucination needs to be regarded for what it is, advocacy of an impossible theory. It is thus not a true objection but a difficulty to be dealt with.    

6. Chairs in Minds? Something Left Out?

Does the account, whatever can be said for it, nonetheless founder because it puts your perceptual consciousness into space around you, and locates chairs of a kind, not representations of chairs, within your perceptual consciousness? The account would be more revisionary than it needs to be if it did so, no doubt a disaster for some philosophers. In fact it can be so understood as to respect our resistance to spatializing consciousness in the given way and to putting chairs of a kind within it.    

What the account asserts is that for you to be perceptually conscious is for a certain world to exist. For you to fall under a certain description, that of being perceptually conscious, it is necessary and enough for something to be the case: that a certain world exists. You do not thereby contain the world. For a thing to be a particular vertex of a particular triangle as a consequence of other properties of the triangle is not for the thing to contain the triangle. For you to be generous is not for your person to contain your gifts to others.15    

Still, there certainly is room for a further question. It may be that for a person to be perceptually conscious is for a certain world to exist — in part, for certain relations to certain things to hold, in particular the several dependency-relations. One term of these relations is said to be a person or the like. But a more precise and satisfactory identification of that item can be asked for, and indeed is owed and needed. It would be another disaster, certainly, somehow to identify the item in question as being conscious, say a conscious subject. To do so would be to fall into useless circularity, to make no analytic advance in the endeavour of trying to explain the nature of perceptual consciousness.    

It seems that what needs to be said, the short story, is that for a person to be perceptually conscious is for a certain world to exist which is in part dependent on neural structures and events of the person. Quite a lot is contained in the longer story, as we know, about phenomenology, reality, subjectivity and the mind-body problem, but there is no further and independent fact that needs to be mentioned in order to complete the account of the person’s perceptual consciousness.    

Are you then inclined to object that this account must go the way of the four naturalistic accounts with which we began? That consciousness is left out? Well, it is essential to keep in mind that a person’s perceptual consciousness is indeed being conceived as a subjective world. That is, it is precisely not the physical world, despite its being real in the sense of being spatio-temporal and having propertied things in it. Furthermore, this view of a person’s perceptual consciousness takes on strength in a certain way.    

The naturalistic accounts, as already implied, give a place to an idea of a subject and an idea of privacy, and, as might be added, they also fit in the idea, in their way, that what is in perceptual consciousness does not also exist unperceived. These contributions give most of us little satisfaction with the naturalistic accounts for a certain reason. Essentially it is that the ideas are applied to a subject-matter, say events with only neural properties, that makes the ideas in question thin and unsustaining. That is, they do little to satisfy another rooted philosophical impulse. The case is different with the account under consideration. Here, so to speak, we have the thing for which the ideas of subjectivity, privacy and so on were made. Here there is something of the right sort to have such properties — a totality of things.    

Do you persist in objecting, nonetheless, that our pre-philosophical conception of consciousness simply is such that one of my so-called worlds of perceptual consciousness could exist, and the person in question not be conscious, not aware of his or her surroundings? That the account under consideration leaves out consciousness? It is my inclination to deny this, or at any rate to see to what extent and with what effect a denial can be sustained. Being perceptually conscious, according to me, is for such a world to exist.    

Here are several further reasons. There do exist what are being called worlds of perceptual consciousness. That is, a certain conception is consistent and otherwise conceptually adequate, and things fall under it. If worlds of perceptual consciousness are allowed to exist, but denied to be any part of perceptual consciousness, what is to be said of them? How are we to think of them?    

Some will say that the idea of a world of perceptual consciousness is a part of what it is to be perceptually conscious. Suppose that much is granted. What could conceivably be the remainder of what it is to be perceptually conscious? Would this be some ethereal stuff, some gossamer, made somehow consistent with a world of perceptual consciousness? Could such a remainder, if ever got clear enough for serious consideration, be other than a peripheral part of the present story?    

It is a story which raises still more questions, evidently, but maybe this fertility is no bad thing.    


  1. For a view of versions of the lenient doctrine, including Donald Davidson’s Anomalous Monism and John Searle’s two-level identity theory, see my A Theory of Determinism (Oxford University Press, 1988), Chs. 2, 3, or Mind and Brain (Oxford University Press, 1990), Chs. 2, 3. For my account of neural functionalism see `Functionalism, Identity Theories, the Union Theory,’ in R. Warner & T. Szubka, eds., The Mind-Body Problem: A Guide to the Current Debate (Blackwell, 1994). The desperate idea that perceptual consciousness consists in non-neural physical properties in the head was floated by me in `Consciousness, Neural Functionalism, Real Subjectivity,’ American Philosophical Quarterly, 32, 4, October 1995. 
  2. Cf. Anthony Quinton, The Nature of Things (Routledge & Kegan Paul, 1973), pp. 46-53.    
  3. `Consciousness as Existence’, in Current Issues in the Philosophy of Mind, Royal Institute of Philosophy Lectures 1996-97, ed. Anthony O’Hear (Cambridge University Press, 1998). For comments on a first draft of the present paper, which is a reworking, development and correction of `Consciousness as Existence,’ my thanks to Murali Ramachandran and others at the University of Sussex for an invigorating and good discussion and to Kevin Magill for excellent comments. 
  4. Cf. the resolute hope in Thomas Nagel’s fine `Conceiving the Impossible and the Mind-Body Problem,’ Royal Institute of Philosophy Annual Lecture, forthcoming Philosophy, 1998.    
  5. For an earlier and traditional struggle to catch hold of the subjective aspect of consciousness, see `Seeing Things,’ Synthese 98, 1994.    
  6. Alas I readily conceded otherwise in `Consciousness as Existence,’ p. 150.    
  7. More words to swallow. In `Consciousness as Existence’ I said not only that the two chairs are not identical but also, wonderfully, that somehow they are, and also added an identity claim of some sort with respect to the chairs in two worlds of perceptual consciousness. Pp. 151, 152, 154. Identity is impossible, plainly, if only because of different times of existence. Along with the mistaken identity claim went what also cannot be right, that the contents of a world of perceptual consciousness are strictly-speaking physical. Pp. 137, 140, 155. That they are not is a principal recommendation of taking perceptual consciousness as existence — that subjectivity is really explained.    
  8. See for example A. J. Ayer, The Central Questions of Philosophy (Weidenfeld & Nicolson, 1973).    
  9. Nagel, The View From Nowhere (Oxford University Press, 1986).    
  10. `Consciousness, Neural Functionalism, Real Subjectivity,’ American Philosophical Quarterly 32/4, October 1995.    
  11. What is in a way a separable constraint on an adequate account of perceptual consciousness, as of consciousness generally, is bound up with the fourth one. It is that facts of consciousness itself must be so understood as to be ineliminable in explanations of our behaviour. Epiphenomenalism is false — there is mental causation or mental indispensability, truly so named. Perhaps fortunately, there is little call for a proof of this guiding axiom. As with one or two other bits of the philosophy of mind, there seems no proposition more certain than mental causation, and hence nothing available to be a premise in a proof of it.    
  12. Whatever its conception of consciousness, Davidson’s event-monism is in my view open to objection as epiphenomenalist. For his reply, see Mental Causation, ed. John Heil and Alfred Mele (Oxford University Press, 1993).    
  13. Cf. `Consciousness as Existence,’ p. 153.    
  14. For my different objections to externalism, see `The Union Theory and Anti-Individualism,’ Mental Causation, ed. Heil and Mele.    
  15. Cf. `Consciousness as Existence,’ pp. 141, 145, 148.    

CONSCIOUSNESS AS EXISTENCE AGAIN, by Ted Honderich. This was a paper for the World Congress of Philosophy, Boston, the session with William Lycan and David Rosenthal. It was published in the good journal Theoria (June 2000) and will also appear in the proceedings of the World Congress.