Mind & Brain

We all believe that we have minds – and that minds, whatever they may be, are not like other worldly things. What makes us think that thoughts are made of different stuff? Because, it seems, thoughts can’t be things; they have no weights or sounds or shapes, and cannot be touched or heard or seen. In order to explain all this, most thinkers of the past believed that feelings, concepts, and ideas must exist in a separate mental world. But this raises too many questions. What links our concept about, say, a cat with an actual cat in the physical world? How does a cause in either world affect what takes place in the other world? In the physical world we make new things by rearranging other things; is that how new ideas come to be, or were they somewhere all along? Are minds peculiar entities, possessed alone by brains like ours – or could such qualities be shared, to different degrees, by everything? It seems to me that the dual-world scheme creates a maze of mysteries that leads to problems worse than before.

We’ve heard a good deal of discussion about the idea that the brain is the bridge between those worlds. At first this seems appealing, but it soon leads to yet worse problems in philosophy. I maintain that all the trouble stems from making a single great mistake. Brains and minds are not different at all; they do not exist in separate worlds; they are simply different points of view – ways of describing the very same things. Once we see how this is so, that famous problem of mind and brain will scarcely seem a problem at all, because …

Minds are simply what brains do.

I don’t mean to say that brains or minds are simple; brains are immensely complex machines – and so are what they do. I merely mean to say that the nature of their relationship is simple. Whenever we speak about a mind, we’re referring to the processes that move our brains from state to state. Naturally, we cannot expect to find any compact description to cover every detail of all the processes in a human brain, because that would involve the details of the architectures of perhaps a hundred different sorts of computers, interconnected by thousands of specialized bundles of connections. It is an immensely complex matter of engineering. Nevertheless, when the mind is regarded, in principle, in terms of what the brain may do, many questions that are usually considered to be philosophical can now be recognized as merely psychological – because the long-sought connections between mind and brain do not involve two separate worlds, but merely relate two points of view.

Memory and Change

What do brains do? Doing means changing. Whenever we learn or ‘change our minds’, our brains are engaged in changing their states. To comprehend the relationship between mind and brain, we must understand the relationship between what things do and what things are; what something does is simply an aspect of that thing considered over some span of time. When we see a ball roll down a hill, we appreciate that the rolling is neither the ball itself, nor something apart in some other world – but merely an aspect of the ball’s extension in space-time; it is a description of the ball, over time, seen from the viewpoint of physical laws. Why is it so much harder to appreciate that thinking is an aspect of the brain, that also could be described, in principle, in terms of the self-same physical laws? The answer is that minds do not seem physical to us because we know so little of the processes inside brains.

We can only describe how something changes by contrast with what remains the same. Consider how we use expressions like “I remember X.” Memories must involve a record of changes in our brains, but such changes must be rather small, because to undergo too large a change is to lose any sense of identity. This intrusion of a sense of self makes the subject of memory difficult; we like to think of ourselves as remaining unchanged – no matter how much we change what we think. For example, we tend to talk about remembering events (or learning facts, or acquiring skills) as though there were a clear separation between what we call the Self and what we regard as data that are separate from, but accessible to, that Self. However, it is hard to draw the boundary between a mind and what that mind may think about, and this is another aspect of brains that makes them seem different from machines. We are used to thinking about machines in terms of how they affect other materials. But it makes little sense to think of brains as though they manufacture thoughts the way factories make cars, because brains, like computers, are largely engaged in processes that change themselves. Whenever a brain makes a memory, this alters what that brain may later do.

Our experience with computers over the past few decades has helped us to clarify our understanding of such matters. The early applications of computers usually maintained a rather clear distinction between the program and the data on which it operates. But once we started to develop programs that changed themselves, we also began to understand that there is no fundamental difference between acquiring new data and acquiring new processes. Such distinctions turned out to be not absolute, but relative to issues of perspective and complexity. When we say that minds are what brains do, we must also ask whether every other process has some corresponding sort of mind. One reply might be that this is merely a matter of degree: people have well-developed minds, while bricks or stones have almost none. Another reply might insist that only a person can have a mind – and, maybe, certain animals. But neither side would be wrong or right; the issue is not about a fact, but about when to use a certain word. Those who wish to use the term “mind” only for certain processes should specify which processes. The problem with this is that we don’t yet have adequate ways to classify processes. Human brains are uniquely complex, and do things that no other things do – and we must try to learn how brains do those things.
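To make this concrete, here is a minimal sketch in Python (my illustration; nothing like it appears in the original text) of a program that keeps newly acquired procedures in the very same store it uses for data, so that acquiring a new fact and acquiring a new process are literally the same operation:

```python
# A minimal sketch: one store serves for both "facts" and "skills",
# so learning a datum and learning a procedure are the same act.

memory = {}  # a single store for data and processes alike

def learn(name, item):
    """Record anything, a value or a procedure, under a name."""
    memory[name] = item

learn("pi", 3.14159)                           # acquiring new data
learn("area", lambda r: memory["pi"] * r * r)  # acquiring a new process

print(memory["area"](2.0))  # the "skill" consults the "fact": 12.56636
```

Nothing in the store itself distinguishes the number from the procedure; the distinction lies only in how the rest of the program later uses each entry, which is just the point about perspective made above.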

This brings us back to what it means to talk about what something does. Is that different from the thing itself? Again it is a matter of how we describe it. What complicates that problem for common sense psychology is that we feel compelled to think in terms of Selves, and of what those Selves proceed to think about. To make this into a useful technical distinction, we need some basis for dividing the brain into parts that change quickly and parts that change slowly. The trouble is that we don’t yet know enough about the brain to make such distinctions properly. In any case, if we agree that minds are simply what brains do, it makes no further sense to ask how minds do what they do.

Embodiments of Minds

One reason why the mind-brain problem has always seemed mysterious is that minds seem to us so separate from their physical embodiments. Why do we find it so easy to imagine the same mind being moved to a different body or brain – or even existing by itself? One reason could be that concerns about minds are mainly concerns about changes in states – and these do not often have much to do with the natures of those states themselves. From a functional or procedural viewpoint, we often care only about how each agent changes state in response to the actions upon it of other agents. This is why we so often can discuss the organization of a community without much concern for the physical constitution of its members. It is the same inside a computer; it is only signals representing changes that matter, whereas we have no reason to be concerned with properties that do not change. Consider that it is just those properties of physical objects that change the least – such as their colors, sizes, weights, or shapes – that, naturally, are the easiest to sense. Yet these, precisely because they don’t change, are the ones that matter least of all, in computational processes. So naturally minds seem detached from the physical. In regard to mental processes, it matters not what the parts of brains are; it only matters what they do – and what they are connected to.

A related reason why the mind-brain problem seems hard is that we all believe in having a Self – some sort of compact, pointlike entity that somehow knows what’s happening throughout a vast and complex mind. It seems to us that this entity persists through our lives in spite of change. This feeling manifests itself when we say “I think” rather than “thinking is happening”, or when we agree that “I think, therefore I am,” instead of “I think, therefore I change”. Even when we recognize that memories must change our minds, we feel that something else stays fixed – the thing that has those memories. In chapter 4 of The Society of Mind[1] I argue that this sense of having a Self is an elaborately constructed illusion – albeit one of great practical value. Our brains are endowed with machinery destined to develop persistent self-images and to maintain their coherence in the face of continuous change. But those changes are substantial, too; your adult mind is not very like the mind you had in infancy. To be sure, you may have changed much since childhood – but if one manages, in later life, to avoid much growth, that poses no great mystery.

We tend to think about reasoning as though it were something quite apart from the knowledge and memories that it exploits. If we’re told that Tweety is a bird, and that any bird should be able to fly, then it seems to us quite evident that Tweety should be able to fly. This ability to draw conclusions seems (to adults) so separate from the things we learn that it seems inherent in having a mind. Yet over the past half century, research in child psychology has taught us to distrust such beliefs. Very young children do not find adult logic to be so self-evident. On the contrary, the experiments of Jean Piaget and others have shown that our reasoning abilities evolve through various stages. Perhaps it is because we forget how hard these were to learn that they now appear so obvious. Why do we have such an amnesia about learning to reason and to remember? Perhaps because those very processes are involved in how we remember in later life. Then, naturally, it would be hard to remember what it was like to be without reason – or what it was like to learn such things. Whether we learn them or are born with them, our reasoning processes somehow become embodied in the structures of our brains. We all know how our logic can fail when the brain is deranged by exhaustion, intoxication, or injury; in any case, the more complex situations get, the more we’re prone to making mistakes. If logic were somehow inherent in Mind, it would be hard to explain how things ever go wrong, but this is exactly what one would expect from what happens inside any real machine.

Freedom of Will

We all believe in possessing a self from which we choose what we shall do. But this conflicts with the scientific view that all events in the universe depend on either random chance or on deterministic laws. What makes us yearn for a third alternative? There are powerful social advantages in evolving such beliefs. They support our sense of personal responsibility, and thus help us justify moral codes that maintain order among the tribe. Unless we believed in choice-making entities, nothing would bear any credit or blame. Believing in the freedom of will also brings psychological advantages; it helps us to be satisfied with our limited abilities to make predictions about ourselves – without having to take into account all the unknown details of our complex machinery. Indeed, I maintain that our decisions seem “free” at just the times at which what we do depends upon unconscious lower level processes of which our higher levels are unaware – that is, when we do not sense, inside ourselves, any details of the processes that moved us in one direction or the other. We say that this is freedom of will, yet, really, when we make such a choice, it would be better to call it an act of won’t. This is because, as I’ll argue below, it amounts to terminating thought and letting stand whatever choice the rest of the mind already has made.

To see an example of how this works, imagine choosing between two homes, one of which offers a mountain view, while the other is closer to where you work. There is no particularly natural way to compare such unrelated things. One of the mental processes likely to become engaged might construct a sort of hallucination of living in the house with the view, and then react to that imaginary episode. Another process might imagine the long drive to work, and then react to that. Yet another process might then attempt to compare those two reactions by exploiting some memory traces of those simulations. How, then, might you finally decide? In one type of scenario, the comparison of the two descriptions may seem sufficiently logical or rational that the decision seems to be no mystery. In such a case we might have the sense of having found a “compelling reason” – and feel no need to regard that choice as being peculiarly free.

In another type of scenario, no such compelling reason appears. Then the process can go on to engage more and more mechanisms at increasingly lower levels, until it engages processes involving billions of brain cells. Naturally, your higher level agencies – such as those involved with verbal expressions – will know virtually nothing about such activities, except that they are consuming time. If no compelling basis emerges upon which to base a definite choice, the process might threaten to go on forever. However, that doesn’t happen in a balanced mind, because there will always be other, competing demands from other agencies. Eventually some other agency will intervene – perhaps one of a supervisory character[2] whose job is to be concerned, not with the details of what is being decided, but with some other, economic aspect of the other systems’ activities. When this is what terminates the decision process, and the rest of the mind is left to adopt whichever alternative presently emerges from their interrupted activities, our higher level agencies will have no reasonable explanation of how the decision was made. In such a case, if we are compelled to explain what was done, then, by default, we usually say something like “I decided to.”[3] This, I submit, is the type of situation in which we speak of freedom of choice. But such expressions refer less to the processes that actually make our decisions than to the systems that intervene to halt those processes. Freedom of will has less to do with how we think than with how we stop thinking.
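This account of interrupted deliberation can be caricatured in a few lines of code. The toy model below is my own construction under loose assumptions (unseen lower-level “agents” randomly nudging two evaluations, and a supervisor that watches nothing but elapsed time); it shows only the shape of the idea, not any actual mechanism of the brain:

```python
import random
import time

def deliberate(options, time_budget=0.01):
    """Toy deliberation: halt on a compelling reason, or be interrupted."""
    scores = {opt: 0.0 for opt in options}
    start = time.monotonic()
    while True:
        # Lower-level processes nudge the evaluations in ways
        # the higher levels never see.
        for opt in options:
            scores[opt] += random.gauss(0, 1)
        leader, runner_up = sorted(scores.values(), reverse=True)[:2]
        if leader - runner_up > 10:
            return max(scores, key=scores.get), "compelling reason"
        # A supervisory agency cares only about the time being consumed.
        if time.monotonic() - start > time_budget:
            return max(scores, key=scores.get), "supervisor interrupted"

print(deliberate(["mountain view", "near work"]))
```

When the budget expires, whichever alternative happens to lead at that moment is adopted; the higher level can report the decision but not the reason, which is the situation the text calls an act of won’t.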

Uncertainty and Stability

What connects the mind to the world? This problem has always caused conflicts between physics, psychology, and religion. In the world of Newton’s mechanical laws, every event was entirely caused by what had happened earlier. There was simply no room for anything else. Yet common sense psychology said that events in the world were affected by minds: people could decide what occurred by using their freedom of will. Most religions concurred in this, although some preferred to believe in schemes involving divine predestination. Most theories in psychology were designed to support deterministic schemes, but those theories were usually too weak to explain enough of what happens in brains. In any case, neither physical nor psychological determinism left a place for the freedom of will.

The situation appeared to change when, early in this century, some physicists began to speculate that the uncertainty principle of quantum mechanics left room for the freedom of will. What attracted those physicists to such views? As I see it, they still believed in freedom of will as well as in quantum uncertainty – and these subjects had one thing in common: they both confounded those scientists’ conceptions of causality. But I see no merit in that idea, because probabilistic uncertainty offers no genuine freedom, but merely adds a capricious master to one that is based on lawful rules.

Nonetheless, quantum uncertainty does indeed play a critical role in the function of the brain. However, this role is neither concerned with trans-world connections nor with freedom of will. Instead, and paradoxically, it is just those quantized atomic states that enable us to have certainty! This may surprise those who have heard that Newton’s laws were replaced by ones in which such fundamental quantities as location, speed, and even time are separately indeterminate. But although those statements are basically right, their implications are not what they seem – but almost exactly the opposite. For it was the planetary orbits of classical mechanics that were truly undependable – whereas the atomic orbits of quantum mechanics are much more predictably reliable. To explain this, let us compare a system of planets orbiting a star, in accord with the laws of classical mechanics, with a system of electrons orbiting an atomic nucleus, in accord with quantum mechanical laws. Each consists of a central mass with a number of orbiting satellites. However, there are fundamental differences. In a solar system, each planet could be initially placed at any point, and with any speed; then those orbits would proceed to change. Each planet would continually interact with all the others by exchanging momentum. Eventually, a large planet like Jupiter might even transfer enough energy to hurl the Earth into outer space. The situation is even less stable when two such systems interact; then all the orbits will be so disturbed that even the largest of planets may leave. It is a great irony that so much chaos was inherent in the old, deterministic laws. No stable structures could have evolved from a universe in which everything was constantly perturbed by everything else. If the particles of our universe were constrained only by Newton’s laws, there could exist no well-defined molecules, but only drifting, featureless clouds. Our parents would pass on no precious genes; our bodies would have no separate cells; there would not be any animals at all, with nerves, synapses, and memories.

In contrast, chemical atoms are actually extremely stable, because their electrons are constrained by quantum laws to occupy only certain separate levels of energy and momentum. Consequently, except when the temperature is very high, an atomic system can retain the same state for decillions of years, with no change whatever. Furthermore, atoms can combine to form configurations, called molecules, that are also confined to definite states. Although those systems can change suddenly and unpredictably, those events may not happen for billions of years, during which there is absolutely no change at all. Our stability comes from those quantum fields, by which everything is locked into place, except during moments of clean, sudden change. It is only because of quantum laws that what we call things exist at all, or that we have genes to specify brains in which memories can be maintained – so that we can have our illusions of will.[4]
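A textbook calculation (my addition, though fully in the spirit of the passage) makes the contrast quantitative. In a hydrogen atom the allowed electron energies are discrete:

\[ E_n = -\frac{13.6\ \text{eV}}{n^2}, \qquad n = 1, 2, 3, \ldots \]

The cheapest possible disturbance of the ground state therefore costs \( E_2 - E_1 = 10.2 \) eV, while a thermal collision at room temperature carries only about \( k_B T \approx 0.025 \) eV. Ordinary jostling cannot change the atom’s state at all, whereas a classical orbit is altered by arbitrarily small perturbations.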

QUESTIONS

Question: Can you discuss the possible relevance of artificial intelligence in dealing with this conference?
Artificial intelligence and its predecessor, cybernetics, have given us a new view of the world in general and of machines in particular. In previous times, if someone said that a human brain is just a machine, what would that have meant to the average person? It would have seemed to imply that a person must be something like a locomotive or a typewriter. This is because, in earlier days, the word machine was applied only to things that were simple and completely comprehensible. Until the past half century – starting with the work of Kurt Goedel and Alan Turing in the 1930s and of Warren McCulloch and Walter Pitts a decade later – we had never conceived of the possible ranges of computational processes. The situation is different today, not only because of those new theories, but also because we now can actually build and use machines that have thousands of millions of parts. This experience has changed our view. It is only partly that artificial intelligence has produced machines that do things that resemble thinking. It is also that we can see that our old ideas about the limitations of machines were not well founded. We have learned much more about how little we know about such matters.

I recently started to use a personal computer whose memory disk had arrived equipped with millions of words of programs and instructive text. It is not difficult to understand how the basic hardware of this computer works. But it would surely take months, and possibly years, to understand in all detail the huge mass of descriptions recorded in that memory. Every day, while I am typing instructions to this machine, screens full of unfamiliar text appear. The other day, I typed the command “Lisp Explorer”, and on the screen appeared an index to some three hundred pages of lectures about how to use, with this machine, a particular version of LISP, the computer language most used for research in artificial intelligence. The lectures were composed by a former student of mine, Patrick Winston, and I had no idea that they were in there. Suddenly there emerged, from what one might have expected to be nothing more than a reasonably simple machine, an entire heritage of records not only of a quarter century of technical work on the part of many friends and students, but also the unmistakable traces of their personalities.

In the old days, to say that a person is like a machine was like suggesting that a person is like a paper clip. Naturally it was insulting to be called any such simple thing. Today, the concept of machine no longer implies triviality. The genetic machines inside our cells contain billions of units of DNA that embody the accumulated experience of a billion years of evolutionary search. Those are systems we can respect; they are more complex than anything that anyone has ever understood. We need not lose our self-respect when someone describes us as machines; we should consider it wonderful that what we are and what we do depends upon a billion parts. As for more traditional views, I find it demeaning to be told that all the things that I can do depend on some structureless spirit or soul. It seems wrong to attribute very much to anything without enough parts. I feel the same discomfort when being told that virtues depend on the grace of some god, instead of on structures that grew from the honest work of searching, learning, and remembering. I think those tables should be turned; one ought to feel insulted when accused of being not a machine. Rather than depending upon some single, sourceless source, I much prefer the adventurous view of being made of a trillion parts–not working for some single cause, but interminably engaged in resolving old conflicts while engaging new ones. I see such conflicts in Professor Eccles’ view: in his mind are one set of ideas about the mind, and a different set of ideas that have led him to discover wonderful things about how synapses work. But he himself is still in conflict. He cannot believe that billions of cells and trillions of synapses could do enough. He wants to have yet one more part, the mind. What goodness is that extra part for? Why be so greedy that a trillion parts will not suffice? Why must there be a trillion and one?

Notes

1. Marvin Minsky, The Society of Mind, Simon and Schuster, 1987; Heinemann & Co., 1987.
2. The idea of supervisory agencies is discussed in section 6.4 of [1].
3. In section 22.7 of [1] I postulate that our brains are genetically predisposed to compel us to try to assign some cause or purpose to every change – including ones that occur inside our brains. This is because the mechanisms (called trans-frames) that are used for representing change are built automatically to assign a cause by default if no explicit one is provided.
4. This text is not the same as my informal talk at the conference. I revised it to be more consistent with the terminology in [1].



Values, Science and Religion

It seems to me that the obligation to expose religious beliefs as nonsensical is an ethical one incumbent upon every anthropological scientist, for the simple reason that the essential ethos of science lies in an unwavering dedication to truth. As Frankel and Trend (1991:182) put it, “the basic demand of science is that we seek and tell the honest truth, insofar as we know it, without fear or favor.” In the pursuit of scientific knowledge, the evidence is the only thing that matters. Emotional, aesthetic, or political considerations are never germane to the truth or falsity of any propositional claim. (There are moons around Jupiter, just as Galileo claimed, even though the Catholic Church and most Christians at the time did not like him for saying it.) In science, there is no room for compromise in the commitment to candor. Scientists cannot allow themselves to be propagandists or apologists touting convenient or comforting myths.

It is not simply our desires for intellectual honesty and disciplinary integrity that compel us to face the truth about religious beliefs; as anthropologists, we are specifically enjoined to do so by our code of ethics. According to the Revised Principles of Professional Responsibility adopted by the American Anthropological Association in 1990, anthropologists have an explicit obligation “to contribute to the formation of informational grounds upon which public policy may be founded” (Fluehr-Lobban 1991:276). When anthropologists fail to publicly proclaim the falsity of religious beliefs, they fail to live up to their ethical responsibilities in this regard. In a debate concerning public policy on population control, for example, anthropologists have an ethical obligation to explain that God does not disapprove of the use of contraceptives because there is no such thing as God.

We also have an obligation not to pick and choose which truths we are willing to tell publicly. I think, for example, that the political threat from the oxymoronic “scientific creationists” would be better met if anthropologists were to debunk the entire range of creationist claims (including the belief that God exists as well as the belief that humans and dinosaurs were contemporaneous); otherwise the creationists will continue to criticize us, with considerable justification, for our arbitrariness and inconsistency in choosing which paranormal claims we will accept or tolerate and which we will attack (see Toumey 1994).

I am convinced that our collective failure to stake out a firm anthropological position on paranormal phenomena has compromised our intellectual integrity, weakened our public credibility, and hampered our political effectiveness. Carlos Castaneda was able to use his anthropological credentials to buttress the credibility (and the sales) of his paranormal fantasies, partly because, as far as the general public knew, the discipline of anthropology accepted the reality of hundred-foot gnats and astral projection (de Mille 1990). While it is true that most individual anthropologists rejected Castaneda’s paranormal claims, few did so publicly or effectively (Murray 1990). In fact, our discipline as a whole has a lamentable record when it comes to public responses to paranormal claims. There have been notable exceptions in archeology and biological anthropology, where a number of scholars have responded forcefully and well to the ancient astronaut and creationist myths (e.g., White 1974; Cole 1978; Rathje 1978; Cazeau and Scott 1979; Godfrey 1983; Stiebing 1984; Cole and Godfrey 1985; Harrold and Eve 1987; Feder 1980, 1984, 1990), but cultural anthropologists have been remarkably remiss in responding to the myriad paranormal claims that fall within their domain (see Lett 1991).

Margaret Mead, for example, maintained a lifelong interest in paranormal phenomena and was an ardent champion of irrational beliefs (Gardner 1988). She was apparently persuaded that “some individuals have capacities for certain kinds of communications which we label telepathy and clairvoyance” (Mead 1977:48), even though the most casual scholarship would have revealed that that proposition has been decisively falsified (the evidence comes from more than a century of intensive research that has been thoroughly documented and widely disseminated–see Kurtz 1985; Druckman and Swets 1988; Hansel 1989; Alcock 1990). In 1969, Mead was influential in persuading the American Association for the Advancement of Science to accept the habitually pseudoscientific Parapsychological Association as a constituent member. In all of this, Mead used her considerable talents for popularization to promulgate nonsensical beliefs among the general public. However sincere and well-intentioned, her efforts were irresponsible, unprofessional, and unethical; worse still, they were not atypical of cultural anthropology. (See Note 6)

Even those anthropologists who do not share Mead’s gullibility have been notably reluctant to confront the truth about paranormal beliefs. Anthony Wallace, for example, in all likelihood thought he was being purely objective when he decided to avoid the “extremes of piety and iconoclasm” and to regard religion as “neither a path of truth nor a thicket of superstition” (Wallace 1966:5). In science, however, being objective does not entail being fair to everyone involved; instead, being objective entails being fair to the truth. The simple truth of the matter is that religion is a thicket of superstition, and if we have an ethical obligation to tell the truth, we have an ethical obligation to say so.

I find Wallace’s equivocation on the truth or falsity of religious beliefs to be particularly regrettable, because his Religion: An Anthropological View is one of the justly celebrated classics in the anthropology of religion. Wallace, of course, would not agree that his stance is anything less than fair and appropriate; indeed, he is very forthright in declaring and defending his value position. In the opening pages of his book, for example, he states that “although my own confidence has been given to science rather than to religion, I retain a sympathetic respect and even admiration for religious people and religious behavior” (Wallace 1966:vi).

I suspect that most anthropologists would be inclined to agree with Wallace. Eric Gans (1990:1), who has urged anthropologists to “demonstrate a far greater concern and respect for the form and content of religious experience,” is one who clearly shares Wallace’s sympathy for the religious temperament. Whether Wallace and Gans are justified in according religious people respect and admiration is a debatable question, however. No reasonable person would deny that religious people are entitled to their convictions, but an important distinction must be made between an individual’s right to his or her own opinion (which is always inalienable) and the rightness of that opinion (which is never unchallengeable). With that in mind, it could be argued that individuals who are led by ignorance or timidity to embrace incorrect opinions might deserve empathy and compassion, but they would hardly deserve respect and admiration. Respect and admiration, instead, should be reserved for individuals who exhibit dignity, courage, or nobility in response to the universal challenges of human life.

The philosopher Paul Kurtz (1983) articulates just such a position in a lengthy rebuttal to religious values entitled In Defense of Secular Humanism. From Kurtz’s point of view, religious people live in a world of illusion, unwilling to accept and face reality as it is. In order to maintain their beliefs, they must prostitute their intellectual integrity, denying the abundant contradictory evidence that constantly surrounds them. They exhibit an “immature and unhealthy attitude” that is “out of touch with cognitive reality” and that “has all the hallmarks of pathology” (Kurtz 1983:173). Religious people fail to exhibit the moral courage that is the foundation of a responsible approach to life.

The physicist Victor Stenger (1990) shares Kurtz’s disdain for religious commitment, and he is one of many skeptical rationalists in a variety of fields who do so. Religious people, Stenger argues, fail to accept responsibility for defining the meaning and conduct of their own lives; instead, they lazily and thoughtlessly embrace an inherited set of illogical wish-fulfillment fantasies. By refusing to fully utilize their quintessentially human attributes–the abilities to think, to wonder, to discover, to learn–religious people deny themselves the possibility of human dignity or nobility. It is only those with the courage to reject religious commitment, Stenger (1990:31-32) suggests, who deserve admiration; in his words, “those who have no need to deny the reality they see with their own eyes willingly trade an eternity of slavery to supernatural forces for a lifetime of freedom to think, to create, to be themselves.”

It would be disingenuous of me not to admit that I concur completely with Kurtz and Stenger. Nevertheless, my personal values regarding religion are entirely beside the point; I mention this only to point out the irony of our discipline’s frequent sympathy for religious commitment. In Western culture, the concept of religious “faith” has a generally positive connotation, but there is nothing positive about the reality masked by that obfuscatory term. “Faith” is nothing more than the willingness to reach an unreasonable conclusion–i.e., a conclusion that either lacks confirming evidence or is contradicted by disconfirming evidence. Willful ignorance, deliberate self-deception, and delusionary thinking are not admirable human attributes. Religion prejudicially regards faith as an exceptional virtue, but science properly recognizes it as a dangerous vice.

In the final analysis, however, it is irrelevant whether religious conviction deserves respect and admiration, as Wallace and Gans propose, or contempt and disdain, as I believe. My point instead is a very basic one: as scientists, we all have an ethical obligation to tell the truth, regardless of whether that truth is attractive or unattractive, diplomatic or undiplomatic, polite or impolite. As anthropologists, we have not been telling the truth about religion, and we should. The issue is just that simple.


The Addicted & Hijacked Brain

People have been using addictive substances for centuries, but only very recently, using the powerful tools of brain imaging, genetics, and genomics, have scientists begun to understand in detail how the brain becomes addicted. Neuropharmacologists Wilkie A. Wilson, Ph.D., and Cynthia M. Kuhn, Ph.D., explain, for example, that you cannot conclude you are addicted to something merely because you experience withdrawal symptoms. And calling our love of chocolate or football an “addiction” not only trivializes the devastation wrought by addiction, but misses the point that addiction involves a hijacking of the brain’s circuitry, a reprogramming of the reward system, and lasting, sometimes permanent, brain changes. Any effective treatment must address both addiction’s reorganization of the brain and the power of the addict’s memories.

The history of addiction stretches over thousands of years and reveals a persistent pattern: a chemical, often one with medicinal benefits, is discovered and found to be appealing for recreational use. Repeated use, however, leads to compulsive use and destructive consequences. Society then seeks to control use of the chemical. Many well-known, problematic drugs have followed this pattern because they are derived from readily available and common plant products. Nicotine, cocaine, and many narcotics come from plants, and alcohol is produced by fermentation of many grains and fruits. These are products humans have known and used for millennia.

Things began to change in the 19th century. Until then, methods of delivering the active ingredients to the brain were relatively unsophisticated: swallowing and smoking. Swallowing drugs often produces only a slow rise in brain concentrations because plants must be digested and absorbed and the active ingredients must escape destruction in the liver. When people realized that smoking plant products worked better, that became a favored method of delivery. Then we invented even more effective ways of getting drugs to the brain, especially the hypodermic syringe and needle. Now, modern chemistry has enabled us to synthesize potent, highly addictive chemicals, such as amphetamines, that were never available naturally. 

The ability to find new ways to become addicted has raced ahead of public understanding of the addiction process. For example, people often confuse a strong habit with an addiction, asserting that we can be addicted to chocolate, movies, or sports. Most people who are not addiction scientists or treatment professionals fail to understand what happens in the brain as addiction takes hold and how those brain changes may affect us. Yet one need not be an expert to understand how people become addicted, and the benefits of understanding are considerable—not least because to understand addiction is to understand the biological systems that govern our search for pleasure.

FROM MEDICINE TO DISEASE TO JAIL

First, though, it is worth looking more closely at how addictive substances and their use have made their way into virtually every culture, from the simplest agrarian society to the most advanced technological one, and have provoked rules and sanctions when their power and appeal seemed threatening. 

Fermenting alcohol probably began with agriculture itself, and, by Biblical times, there were prohibitions against misuse of alcohol. During the Middle Ages, the discovery of distillation yielded drinks that were as much as 50 percent alcohol (today’s beer and wine range up to 15 percent alcohol). The enhanced potency, combined with wide availability and decreased social disapproval, caused use of alcohol to spread throughout Europe during the 17th century. The famous painting “Gin Lane” is emblematic of the rise of alcohol use and addiction in England during that time. Today’s worries about binge drinking by college students are but the most recent iteration of an age-old concern. 

Tobacco use followed a similar pattern. The leaves of the tobacco plant contain nicotine, which is both psychoactive and addictive. The plant is native to the Americas and its characteristics were probably known before the arrival of Europeans, although there are no written records by which to verify this. Tobacco first arrived in Europe in the early 16th century with returning Portuguese and Spanish explorers and soon was viewed as a miracle cure for everything from headaches to dysentery—so much so that it helped drive further Portuguese and Spanish colonization in the Americas. As tobacco use spread rapidly, health concerns and public outcry followed. By 1573, the Catholic Church had forbidden smoking in churches. But modern chemical techniques and the Industrial Revolution led to mass production of a perfect nicotine delivery device, the cigarette. The cigarette delivers a single, small dose of inhaled nicotine that enters the brain almost immediately. In the United States, manufactured cigarettes first appeared during the 1860s, and, by 1884, James B. Duke was producing almost a billion cigarettes a year. Protests, such as those by the Women’s Christian Temperance Union, soon followed, with complaints about addiction and other health concerns. The active prosecution of tobacco companies and increased legislation prohibiting smoking during the past decade are but the latest chapter in the history of tobacco use, addiction, and regulation. 

We see the pattern again with cocaine and narcotics. Ancient records indicate that cocaine, from the coca plant, was used by natives in South America to enhance physical endurance. Extracts of the opium poppy were used in Southeast Asia to relieve pain. During the late 19th century, European scientists purified both cocaine and morphine. What followed was an explosion of patent medicine manufacturing and sales; entrepreneurs founded future drug company giants such as Merck, Parke Davis, and Squibb Chemical Company, all of which marketed cocaine and narcotics as medicines. These drugs became widely used, and eventually abused, in Europe and the United States. Sigmund Freud’s personal research on cocaine helped to popularize the drug, and the invention of the hypodermic syringe led to injectable analgesic and anesthetic drugs, increasing the potential for abuse. Public concern led to increased governmental regulation, which in the United States first took the form of the Pure Food and Drug Act of 1906 and the Harrison Narcotic Act of 1914. Today, the cycle of invention, popularity, demonizing, and regulation proceeds apace. Cultural acceptance of the benefits that psychoactive drugs can bring co-exists with condemnation of excess.

If there is a lesson here, it is that addiction exerts a seemingly fundamental and enduring appeal and power for human beings. Why is this so?

WHAT IS AN ADDICTION?

Many people have a rather archaic view of the nature of addiction. Their misconceptions and confusion tend to revolve around three issues: What is the difference between addiction and a bad habit? What happens in the brain of an addict? What is involved in healing the addicted brain and the addicted person? 

People often claim to be addicted to chocolate, coffee, football, or some other substance or behavior that brings pleasure. This is not likely. Addiction is an overwhelming compulsion, based on alterations of the brain circuits that normally regulate our ability to guide our actions to achieve goals. It overrides our ordinary, unimpaired judgment. Addiction leads to the continued use of a substance or continuation of a behavior despite extremely negative consequences. An addict will choose the drug or behavior over family, the normal activities of life, employment, and at times even basic survival. When we call our love of chocolate or football an addiction, we are speaking loosely or misconstruing the intensity of what can be a devastating disorder. It may help to consider, first, what is not an addiction.

No matter how much you like some drug or activity and how much you choose to involve yourself with it, you are not addicted if you can stop it when the consequences become negative for you. Coffee is an ideal example with which to illustrate this because it contains a powerful drug, caffeine, that can have significant effects on our behavior. Most of us like to drink coffee, but if your doctor told you that the heart attack you just had was precipitated by caffeine and that you would likely have another if you did not stop drinking coffee, what would you do? Most people would miss the buzz, but not so much that they would continue to drink coffee, knowing it would likely kill them. They would stop cold, right then and there. 

Yet people say they are addicted to coffee because they feel bad when they do not use it. This reflects common confusion about two important biological processes: tolerance and withdrawal. Most people, when they abruptly stop drinking coffee, begin to suffer some negative effects within about 24 hours: a nagging headache and general feelings of sleepiness and lethargy. Their experience, however, does not signify addiction. They are suffering from the processes of tolerance and withdrawal. Tolerance occurs when the brain reacts to repeated drug exposure by adapting its own chemistry to offset the effect of the drug—it adjusts itself to tolerate the drug. For example, if the drug inhibits or blocks the activity of a particular brain receptor for a neurotransmitter, the brain will attempt to counteract that inhibition by making more of that particular receptor or by increasing the effectiveness of the receptors that remain. On the other hand, if a drug enhances the activity of a receptor, the brain may make less of the receptor, thus adapting to its over-stimulation. Both conditions represent the process of tolerance, and, in either case, withdrawing the drug quickly leaves the brain with an imbalance because the brain is now dependent on the drug. This is true not only for addictive drugs; many neuroactive drugs from caffeine to antidepressants to sedatives (and even non-neuroactive drugs) cause the adaptation we call tolerance. 


In the case of coffee, the caffeine inhibits the receptors for the neurotransmitter adenosine. When we regularly use caffeine, the brain senses that its adenosine receptors are not working up to par, and it responds by increasing their function, which affects brain cells, blood vessels, and other tissues. Two major functions of adenosine in the brain are to regulate blood flow to the brain and to inhibit the neuronal circuits that control alertness. When the coffee drinker stops his intake of caffeine, he goes into withdrawal, as the receptors for adenosine become less inhibited. With more adenosine receptors functioning, his brain experiences abnormal levels of blood flow in the arteries around it, and he gets a headache. At the same time, the brain centers that keep him alert are suppressed by the excess functioning of adenosine, so he feels sleepy and lethargic. 
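The adaptation described here is, at bottom, a negative-feedback loop, and a toy model makes the withdrawal rebound easy to see. The sketch below is purely illustrative (the numbers and the update rule are invented, not physiological): receptor function is nudged upward whenever chronic caffeine blockade pushes net adenosine signaling below its set point, so abruptly removing the drug leaves signaling overshooting that set point:

```python
# A toy negative-feedback model of tolerance and withdrawal.
# Illustrative only: the constants and update rule are invented.

TARGET = 1.0       # set point for net adenosine signaling
RATE = 0.1         # how quickly the brain adapts
receptors = 1.0    # relative receptor function

def day(blockade):
    """One day of exposure: compute net signaling, then adapt toward TARGET."""
    global receptors
    signal = receptors * (1.0 - blockade)
    receptors += RATE * (TARGET - signal)
    return signal

for _ in range(30):                   # a month of heavy coffee drinking
    day(blockade=0.5)                 # caffeine blocks half the signaling
print(f"adapted receptor level: {receptors:.2f}")        # ~1.8, above baseline

print(f"first day off coffee: {day(blockade=0.0):.2f}")  # >1.0: the overshoot

for _ in range(30):                   # abstinence
    day(blockade=0.0)                 # the same loop re-adapts downward
print(f"after recovery: {receptors:.2f}")                # back near 1.0
```

The overshoot is the headache and lethargy, and because the same loop relaxes receptor function back toward baseline once the drug is gone, the symptoms fade on their own, as the article goes on to note.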

Now the former coffee drinker is in caffeine withdrawal, feeling miserable, and wanting a cup of coffee because he is sleepy and has a headache. Is he addicted? No, he is tolerant to the caffeine because his brain chemistry has adapted to it and its proper function is dependent on its presence. This will quickly pass, because caffeine withdrawal symptoms usually disappear after a few days, and, unless he is a very unusual person, he will be able to stop using caffeine and hope to avoid another heart attack. His craving is not overwhelming; for example, it does not override his decision to protect himself from another heart attack. 

The relationship between withdrawal and addiction may confuse people because most genuine addicts do experience withdrawal of some sort when they quit, and most scientists think that avoiding withdrawal is one reason addicts keep using the substance to which they are addicted. Alcohol is a good example of how tolerance and withdrawal contribute to addiction. If a person drinks heavily for a long time, his brain will adapt to the sedative effects of the alcohol. The compensation that happens is like the caffeine example above, only with a different neurotransmitter. Alcohol activates receptors in the brain for the neurotransmitter GABA, which normally inhibits brain activity. After long-term alcohol exposure (weeks to months or years), the brain compensates by diminishing the ability of these receptors to function.  The alcoholic is now tolerant to the alcohol, just as the coffee drinker was tolerant to caffeine. 

If the alcoholic abruptly stops drinking, the neuronal circuits in the brain will suffer from excess excitation, because the opposing inhibitory functions have been diminished. The consequences of acute alcohol withdrawal can be lethal, because the hyperexcitability of the brain can cause epileptic seizures as well as instability of blood pressure and heart functions. Fortunately, however, other sedative drugs can be substituted for the alcohol to keep the brain stable, and withdrawal can proceed over a few days. 

Many addictive drugs like alcohol produce tolerance, and addicts experience withdrawal when they try to stop using them. This withdrawal can range from mild or at most moderate discomfort for a drug like marijuana, to extreme discomfort from opiates, to lethal brain instability from sedative agents like alcohol, barbiturates, and benzodiazepines (such as Valium and Ativan). Still, the key point remains: withdrawal discomfort ends in a matter of days to weeks as the brain chemistry normalizes, and this discomfort alone does not signify addiction. 

Are habits addictions? This is a tough question, because such habits range from mild and innocuous—such as twirling your hair when you are thinking about something—to dangerous ones, such as overeating and gambling. Mild habits can be difficult to stop, but if we can stop when we must, we are not addicted. More dangerous habits or compulsions may be different. In fact, as we discuss later, modern neurobiology suggests that there are some strong similarities between drug addictions and compulsive habits.

THE ADDICTED BRAIN

Scientists now think that the brain changes associated with genuine addiction long outlast the withdrawal phase for any drug. Addiction is characterized by profound craving for a drug (or behavior) that so dominates the life of an addict that virtually nothing can stop the person from engaging in the addictive activity. Addicts will give up anything and everything in their lives for the object of their addiction. They will lose all their money for cocaine, give up loved ones to feed their craving for alcohol, and sometimes give up their lives. The perplexing questions for neuroscientists who study addiction are how the brain learns to crave something so fiercely and how to reverse that craving. 

With new imaging techniques, we can watch the brain function in real time, and we now know that addictive drugs cause the activation of a specific set of neural circuits, called the brain reward system. This system controls much of our motivated behavior, but most people are hardly familiar with it. Our brain’s reward system motivates us to behave in ways such as eating and having sex that tend to help us survive as individuals and as a species. This system organizes the behaviors that are life-sustaining, provides some tools necessary to take the desired actions, and then rewards us with pleasure when we do. Research shows that almost any normal activity we find pleasurable—from hearing great music to seeing a beautiful face—can activate the reward system. When this happens, not only are we stimulated, but these circuits enable our brains to encode and remember the circumstances that led to the pleasure, so that we can repeat the behavior and go back to the reward in the future. 

A critical component of this system is the chemical dopamine, which is released from neurons in the reward system circuits and functions as a neurotransmitter. Through a combination of biochemical, electrophysiological, and imaging experiments, scientists have learned that all addictive drugs increase the release of dopamine in the brain. Some increase dopamine much more than any natural stimulus does.

Let us imagine a simplified scenario that illustrates the power of a functioning reward system and our understanding of the role of dopamine. You are at a cocktail party, talking with friends. From time to time, you glance about the room to see who is coming and going, and then you notice an extremely attractive person has entered the room. That person now has your attention. The person is attractive enough that you begin to focus on him or her, and pay less attention to the ongoing conversations among your friends. 

At this point, you have experienced two effects of activating the reward system: attention and focus on the potential reward. Attention is the first of the reward system tools, giving you the ability to recognize a potentially rewarding possibility, be it your grandmother’s chocolate cake or this beautiful person. Next, you focus on the person, tending to ignore other aspects of your environment. The dopamine system is active at this point because it is part of the brain circuits that mediate attention; it helps us ignore peripheral stimuli and focus more on whatever we perceive as our task. Finally, in this first stage, perhaps you feel a little rush as the person indicates a mutual interest.

Now things get interesting, as your reward system tells you that there is a possibility of a significant rewarding interaction with this person. This is the point where our understanding of what dopamine does has become more sophisticated in the last 10 years. We once held the simplistic view of dopamine as the “pleasure chemical”; when you did something that felt good, the increase in dopamine was the reason. Experimental psychologists now make clear distinctions between “wanting” something and “liking” something, and dopamine seems to be important for the “wanting” but not necessary for the “liking.” This distinction seems to hold in every species in which it has been tested, from rodents to man. “Wanting” turns a set of neutral sensory stimuli (a face, a scent) into a stimulus that is relevant, or has “incentive salience.” In other words, in our scenario above, activation of dopamine neurons helps signal that the person who enters the room is somebody interesting.   


Studies of animals receiving sweet food treats or having sex usually show that dopamine activity increases not as a result of getting the reward, but in anticipation of a reward. Sophisticated mathematical models of this neuronal activity have led some scientists to view the dopamine system as an “error detector” that determines whether things are going as predicted. So if a monkey (or rat or person) is anticipating an expected reward (a kind glance from the person in the above scenario, perhaps), dopamine neurons fire in anticipation of it, and shut down their firing if it is not forthcoming.
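This “error detector” idea has a standard formalization in the reinforcement-learning literature (my gloss; the article itself presents no equations): dopamine firing is modeled as a temporal-difference reward-prediction error,

\[ \delta_t = r_t + \gamma\, V(s_{t+1}) - V(s_t), \]

where \( r_t \) is the reward actually received, \( V(s) \) is the predicted value of the current situation, and \( \gamma \) is a discount factor. A cue that predicts reward drives \( \delta_t \) positive (firing rises in anticipation), a fully predicted reward yields \( \delta_t \approx 0 \), and a predicted reward that fails to arrive drives \( \delta_t \) negative, matching the shutdown in firing described above.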

The reward system also has the ability to encode cues to help you repeat the experience. You will remember the room where you met this person, the clothes, the food being served, the odor of cologne or perfume, a spoken phrase, and much more. Assuming that things go well, the next time you encounter one of these cues, you will not only remember the encounter, but feel a little craving to repeat it. When a person experiences a positive, pleasurable outcome from an action or event, the release of dopamine and other chemicals alters the brain circuitry, providing tools and encouragement to repeat the event. The memory circuitry stores cues to the rewarding stimulus, so previously neutral cues (a perfume, a line of white powder) become salient. Our brains map the environment in which we experience the rewarding activity by recording the physical space, the people involved, the smells—in fact, all of the sensory experience. In addicts, cues that normally would have no particular importance to survival or pleasure—such as a line of white powder, a cigarette, or a bottle of brown liquid—activate this same reward system. 

But cues alone are not enough; action is necessary to get a reward. The brain’s reward system is organized to engage the areas of the brain that control our ability to take action. The executive area of the brain, located in the prefrontal cortex, enables us to plan and execute complex activities, as well as control our impulses. Humans have a much larger prefrontal cortex and so a greater capacity for planning and executing complex activities than lower animals do, even the nearest primates. When we experience a rewarding event, the executive center of the brain is engaged. It remembers the actions used to achieve the reward and creates the capacity to repeat the experience. Thus, not only does a pleasurable experience result in pleasant memories, but also the executive center of the brain provides motivation, rationalization, and the activation of other brain areas necessary to have the experience again. And each time the experience is repeated, all of these brain changes—memories and executive function tasks—become stronger and more ingrained. These planning centers are an important target of dopamine action.

THE HIJACKED BRAIN

Everything we know about addictive drugs suggests that they work through precisely these mechanisms. All addictive drugs activate the reward system by directly raising the levels of dopamine. Although each addictive drug also has its own unique effects, which is why alcohol feels different from cocaine or heroin, stimulation of the dopamine component of the reward system seems to be a common denominator. When addictive drugs enter the brain they artificially simulate a highly rewarding environment. The feelings provided by the drugs activate the “wanting” system just the way a cute person or tasty food would, and the dopamine released influences memory and executive function circuitry to encourage the person to repeat the experience. With every use, the enabling circuits become stronger and more compelling, creating an addiction. Recent imaging studies of the brains of addicts while they were anticipating a fix show that the planning and executive function areas of the prefrontal cortex become highly activated as the addicts plan for the upcoming drug reward. 

As an interesting aside: a new area of study is the mathematical modeling of reward by economists. It should not be a surprise that the mathematical models that predict our consumption of cars, food, and perfume can be applied to more basic reward phenomena, and a group of mathematicians have shown that such models predict a wide range of normal human behaviors. The field of “behavioral economics” has become one of the most exciting frontiers of neuroscience research. Some scientists have proposed that addiction hijacks the normal reward circuitry and so disrupts relationships between reward and behavior that are normally quantifiable.
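
To give one concrete flavor of these models (the equation is a standard one from this literature, but the numbers below are invented for illustration), consider hyperbolic delay discounting, in which a reward of amount A delayed by D days is valued at V = A / (1 + kD). The single parameter k captures how steeply an individual devalues delayed rewards, and steep discounting is one of the behavioral regularities studied in addicted populations:

    def discounted_value(amount, delay_days, k):
        """Hyperbolic discounting: V = A / (1 + k * D)."""
        return amount / (1.0 + k * delay_days)

    # A shallow discounter (small k) and a steep discounter (large k)
    # choose between $10 now and $20 in 30 days; only k differs.
    for k in (0.01, 0.25):
        now = discounted_value(10, 0, k)
        later = discounted_value(20, 30, k)
        choice = "wait for $20" if later > now else "take $10 now"
        print(f"k={k}: later reward is worth {later:.2f} now -> {choice}")

Whether such tidy relationships really are disrupted in addiction, as the proposal above suggests, is precisely what this line of research aims to test.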

The level of addiction can vary immensely, depending on the characteristics of the particular drug. Drugs such as cocaine and amphetamine produce a profound dopamine release, so the user’s reward system experiences surges of activation. With repeated use, the circuitry adapts to (perhaps becomes tolerant of) dopamine, and normal pleasures, such as sex, become less pleasurable compared with the drug.

Using neuroimaging, we can actually see decreases in the brain’s dopamine receptors in alcoholics. Since it is hard to study human addicts before their addiction, this finding presents a “chicken and egg” problem: we do not know which came first, the low receptor levels or the addiction. We do know from a recent rat study that raising the level of dopamine receptors by a sophisticated molecular strategy (transfection with a virus) caused rats to decrease their alcohol intake.

Some addictive drugs, such as nicotine, might seem rather innocuous because they do not produce a profound “buzz” or euphoria. How, then, can nicotine be as addictive as it is? Nicotine is a reliable dopamine-releasing agent, although the amount released with each use is small. People smoke or chew quite frequently, however, giving the brain a large number of exposures to the drug and allowing the reward system to modify the brain to crave the drug and take action to get it. The powerfully addictive effects of nicotine demonstrate that consciously “liking” the drug experience is not the most important effect of addictive drugs. Most smokers describe nicotine as relaxing or anxiety-reducing, but not as particularly pleasurable.

Most addicts describe this dissociation between liking the drug experience and taking the drug. Many will say that their initial experiences with addictive drugs were the best they ever had, and that they have spent the rest of their addiction seeking out a similar high. Addicts do report that when they stop, they go through a period when they are unable to take pleasure in normally pleasurable activities, a state called “anhedonia.” But the result of addiction is more than missing pleasure, as bad as that is. In an established addiction, the brain’s executive centers have become programmed to take all action necessary to acquire the drug. The person craves the drug and feels compelled to take whatever action—spending money, robbing a mini-market, stealing from his parents—is necessary to get it and the high levels of dopamine that come with it. After a while, seeking out the drug can become an automatic behavior that the addict does not even enjoy.

And yet the reasons that addicts keep using drugs are more complicated than activation of the reward system by dopamine. We think that long-lasting changes in the production of certain brain molecules are at work. Until recently, researchers patiently focused on single molecules, one at a time, to evaluate their potential roles in addiction. Using this approach, we learned to identify molecules that changed as an addiction developed and remained altered long after drug use stopped, in concert with the long-lasting cravings that people experience. Some of the molecules identified, such as the dopamine receptors, were expected; others were not. For example, growth factors that produce long-lasting structural changes in the brain may also contribute to the changes in brain function associated with addiction.

Scientists now know that the best way to produce long-lasting changes in the brain is to regulate the production of proteins by activating or silencing their genes. With the new ability to track changes in thousands of brain molecules simultaneously, we have started looking for patterns of change in genes. Some of the single molecules targeted earlier, such as the proteins CREB and ΔFosB (delta FosB), themselves coordinate the production of families of genes. Furthermore, these families change over different time frames. CREB is important during the early phases of cocaine use but becomes much less important once addiction is long established. The Fos family shows the opposite pattern: many more of its members are changed after long-term exposure to cocaine.

These changes do not go away quickly. The biological memories of the drug can be as profound and long-lasting as any other kind of memory, and cues can activate the executive system to initiate drug seeking years after the most recent exposure. So addiction is far more than seeking pleasure by choice. Nor is it just a matter of avoiding withdrawal symptoms. It is a hijacking of the brain circuitry that controls behavior, so that the addict’s behavior is fully directed to drug seeking and use. With repeated drug use, the reward system of the brain becomes subservient to the need for the drug. Brain changes have occurred that will probably influence the addict for life, whether or not he continues to use the drug.

Now back to a question we posed earlier: “How are dangerous habits related to addiction?” Researchers are discovering that behaviors such as promiscuous sex, gambling, and overeating have some commonality with drug addiction, and you can probably imagine why. Nature did not create the brain reward circuit to help us get high on cocaine; this system evolved to help us eat and reproduce, behaviors that are complex but necessary to life. Recent brain imaging studies show that some of the changes that happen in the drug-addicted brain—for example, a decrease in the receptors for the neurochemical dopamine—are also seen in the brains of extreme overeaters. Other researchers are exploring this phenomenon in connection with other types of behaviors. 

TREATING THE ADDICTED BRAIN AND ADDICTED PERSON

How can we help control or reverse addictions? We do not yet have tools to erase the long-lasting brain changes that underlie addiction. The best pharmacological tools that we have now use a simple but effective strategy: an alternative drug stimulates the brain at a low, steady level. This can fend off withdrawal while providing a mild, almost subliminal, stimulation to the reward system, allowing the brain circuitry to readapt over time from the intense stimulation of daily drug use to the very slight stimulation of steady, low levels of the medication. As the brain adapts back toward normality, an addict may gradually decrease the substitute drug until he becomes drug free. The narcotic drugs methadone and buprenorphine are safe and effective examples of this approach. A recently approved drug called acamprosate takes a similar approach to treating alcoholism, providing a very mild sedative action that resembles alcohol’s. Is this just a chemical “crutch” that maintains the same brain changes caused by addiction? Perhaps, but by providing a minimal action it allows considerable normalization of brain function. Furthermore, these drugs allow people to reconnect with their families, hold jobs, and be productive members of society.

Why not use a drug that blocks the effects of all addictive drugs, an abstinence-based approach that appeals to some people? The problem with such a drug is that it would also block all the normal rewards through which people find satisfaction in living. If you invented the perfect reward-blocking drug, nobody would take it at the cost of losing the pleasures of life. Another approach may be found in a new drug called rimonabant, which blocks the cannabinoid receptor, the brain receptor on which the active ingredients in marijuana act. There is a tremendous amount of excitement right now about this new drug, and successful trials in weight reduction and smoking cessation have raised hopes that it might prevent certain addictive behaviors and also block the effects of alcohol and narcotics. Recent experiments with “knockout” mice lacking the cannabinoid receptor show that these animals do not drink alcohol and will not self-administer narcotics. This is consistent with older studies hinting that there is some common thread in the addiction pathway for these three drugs. What is philosophically more appealing about rimonabant is that the effects of drugs are prevented, not mimicked. Time will tell, but its effectiveness against several problems suggests that neuropharmacologists are on the right track.

There will never be a simple pill to regulate such a complicated disease as addiction. The most important contribution that anyone dealing with addicted individuals can make is to recognize that reversing addiction is not just a matter of giving up something pleasurable but of accepting that addicted individuals have undergone a formidable reorganization of their brains. Treating an addict requires dealing with every aspect of this reorganization. 

Acute withdrawal is the first problem an addict faces after he stops using, and this process plays an important role in maintaining drug-taking behavior. Withdrawal can last a day or two, or many days, even weeks, depending on the particular drug, how long the addict has been using, and how much he has been taking. We must also recognize that the executive system in the brain of an addict is programmed to initiate drug seeking in response to cues, so it is critical to help the addict avoid those cues. This usually means removing the addict from the environment in which he became addicted. The addict will also have to relearn impulse control; his executive system will have to be retrained to inhibit the impulses toward drug use as they occur.

Finally, we should recognize that addiction involves some of the most powerful memories we can have. These memories are embedded in the brain; we do not forget an addiction any more easily than we forget our first love. People often receive drug treatment more than once and still relapse. Relapses are unfortunately common in treating addiction, but the same is true in treating cancer, and we keep trying for a cure. We must take the same attitude toward addictive diseases, offering treatment that is both extensive and intensive. But, most of all, we must offer understanding, which comes from knowing that addiction lies at the very core of our brains.

Cynthia M. Kuhn and Wilkie A. Wilson, Dana Foundation

Consciousness and Neuroscience

 

“When all’s said and done, more is said than done.” — Anon.

The main purposes of this review are to set out for neuroscientists one possible approach to the problem of consciousness and to describe the relevant ongoing experimental work. We have not attempted an exhaustive review of other approaches.

Clearing The Ground

We assume that when people talk about “consciousness,” there is something to be explained. While most neuroscientists acknowledge that consciousness exists, and that at present it is something of a mystery, most of them do not attempt to study it, mainly for one of two reasons:

  1. They consider it to be a philosophical problem, and so best left to philosophers.
  2. They concede that it is a scientific problem, but think it is premature to study it now.

We have taken exactly the opposite point of view. We think that most of the philosophical aspects of the problem should, for the moment, be left on one side, and that the time to start the scientific attack is now.

We can state bluntly the major question that neuroscience must first answer: it is probable that at any moment some active neuronal processes in your head correlate with consciousness, while others do not; what is the difference between them? In particular, are the neurons involved of any particular neuronal type? What is special (if anything) about their connections? And what is special (if anything) about their way of firing? The neuronal correlates of consciousness are often referred to as the NCC. Whenever some information is represented in the NCC, it is represented in consciousness.

In approaching the problem, we made the tentative assumption (Crick and Koch, 1990) that all the different aspects of consciousness (for example, pain, visual awareness, self-consciousness, and so on) employ a basic common mechanism or perhaps a few such mechanisms. If one could understand the mechanism for one aspect, then, we hope, we would have gone most of the way toward understanding them all.

We made the personal decision (Crick and Koch, 1990) that several topics should be set aside or merely stated without further discussion, for experience had shown us that otherwise valuable time can be wasted arguing about them without coming to any conclusion.

(1) Everyone has a rough idea of what is meant by being conscious. For now, it is better to avoid a precise definition of consciousness because of the dangers of premature definition. Until the problem is understood much better, any attempt at a formal definition is likely to be either misleading or overly restrictive, or both. If this seems evasive, try defining the word “gene.” So much is now known about genes that any simple definition is likely to be inadequate. How much more difficult, then, to define a biological term when rather little is known about it.

(2) It is plausible that some species of animals — in particular the higher mammals — possess some of the essential features of consciousness, but not necessarily all. For this reason, appropriate experiments on such animals may be relevant to finding the mechanisms underlying consciousness. It follows that a language system (of the type found in humans) is not essential for consciousness — that is, one can have the key features of consciousness without language. This is not to say that language does not enrich consciousness considerably.

(3) It is not profitable at this stage to argue about whether simpler animals (such as octopus, fruit flies, nematodes) or even plants are conscious (Nagel, 1997). It is probable, however, that consciousness correlates to some extent with the degree of complexity of any nervous system. When one clearly understands, both in detail and in principle, what consciousness involves in humans, then will be the time to consider the problem of consciousness in much simpler animals. For the same reason, we won’t ask whether some parts of our nervous system have a special, isolated, consciousness of their own. If you say, “Of course my spinal cord is conscious but it’s not telling me,” we are not, at this stage, going to spend time arguing with you about it. Nor will we spend time discussing whether a digital computer could be conscious.

(4) There are many forms of consciousness, such as those associated with seeing, thinking, emotion, pain, and so on. Self-consciousness — that is, the self-referential aspect of consciousness — is probably a special case of consciousness. In our view, it is better left to one side for the moment, especially as it would be difficult to study self-consciousness in a monkey. Various rather unusual states, such as the hypnotic state, lucid dreaming, and sleepwalking, will not be considered here, since they do not seem to us to have special features that would make them experimentally advantageous.

Visual Consciousness

How can one approach consciousness in a scientific manner? Consciousness takes many forms, but for an initial scientific attack it usually pays to concentrate on the form that appears easiest to study. We chose visual consciousness rather than other forms, because humans are very visual animals and our visual percepts are especially vivid and rich in information. In addition, the visual input is often highly structured yet easy to control.

The visual system has another advantage. There are many experiments that, for ethical reasons, cannot be done on humans but can be done on animals. Fortunately, the visual system of primates appears fairly similar to our own (Tootell et al., 1996), and many experiments on vision have already been done on animals such as the macaque monkey.

This choice of the visual system is a personal one. Other neuroscientists might prefer one of the other sensory systems. It is, of course, important to work on alert animals. Very light anesthesia may not make much difference to the response of neurons in macaque V1, but it certainly does to neurons in cortical areas like V4 or IT (inferotemporal).

Why Are We Conscious?

We have suggested (Crick and Koch, 1995a) that the biological usefulness of visual consciousness in humans is to produce the best current interpretation of the visual scene in the light of past experience, either of ourselves or of our ancestors (embodied in our genes), and to make this interpretation directly available, for a sufficient time, to the parts of the brain that contemplate and plan voluntary motor output, of one sort or another, including speech.

Philosophers, in their carefree way, have invented a creature they call a “zombie,” who is supposed to act just as normal people do but to be completely unconscious (Chalmers, 1995). This seems to us an untenable scientific idea, but there is now suggestive evidence that part of the brain does behave like a zombie. That is, in some cases, a person uses the current visual input to produce a relevant motor output without being able to say what was seen. Milner and Goodale (1995) point out that a frog has at least two independent systems for action, as shown by Ingle (1973). These may well be unconscious. One is used by the frog to snap at small, prey-like objects, and the other for jumping away from large, looming discs. Why does our brain not consist simply of a series of such specialized zombie systems?

We suggest that such an arrangement is inefficient when very many such systems are required. Better to produce a single but complex representation and make it available for a sufficient time to the parts of the brain that make a choice among many different but possible plans for action. This, in our view, is what seeing is about. As pointed out to us by Ramachandran and Hirstein (1997), it is sensible to have a single conscious interpretation of the visual scene, in order to eliminate hesitation.

Milner and Goodale (1995) suggest that in primates there are two systems, which we shall call the on-line system and the seeing system. The latter is conscious, while the former, acting more rapidly, is not. The general characteristics of these two systems and some of the experimental evidence for them are outlined below in the section on the on-line system. There is anecdotal evidence from sports. It is often stated that a trained tennis player reacting to a fast serve has no time to see the ball; the seeing comes afterwards. In a similar way, a sprinter is believed to start to run before he consciously hears the starting pistol.

The Nature of the Visual Representation

We have argued elsewhere (Crick and Koch, 1995a) that to be aware of an object or event, the brain has to construct a multilevel, explicit, symbolic interpretation of part of the visual scene. By multilevel, we mean, in psychological terms, different levels such as those that correspond, for example, to lines or eyes or faces. In neurological terms, we mean, loosely, the different levels in the visual hierarchy (Felleman and Van Essen, 1991).

The important idea is that the representation should be explicit. We have had some difficulty getting this idea across (Crick and Koch, 1995a). By an explicit representation, we mean a smallish group of neurons which employ coarse coding, as it is called (Ballard et al., 1983), to represent some aspect of the visual scene. In the case of a particular face, all of these neurons can fire to somewhat face-like objects (Young and Yamane, 1992). We postulate that one set of such neurons will be all of one type (say, one type of pyramidal cell in one particular layer or sublayer of cortex), will probably be fairly close together, and will all project to roughly the same place. If all such groups of neurons (there may be several of them, stacked one above the other) were destroyed, then the person would not see a face, though he or she might be able to see the parts of a face, such as the eyes, the nose, the mouth, etc. There may be other places in the brain that explicitly represent other aspects of a face, such as the emotion the face is expressing (Adolphs et al., 1994).
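
To make the notion of a coarse-coded, explicit representation concrete, here is a toy sketch of my own (not the authors’): a handful of broadly tuned units, each of which fires to a wide range of stimuli, can nonetheless jointly specify a stimulus quite precisely. The feature dimension, preferred values, and tuning width are all invented for illustration:

    import math

    # Six units, each broadly tuned to a preferred value of some scalar
    # feature (say, a face's orientation in degrees). Coarse coding: every
    # unit responds somewhat to many stimuli, yet the population is precise.
    PREFERRED = [0, 30, 60, 90, 120, 150]
    WIDTH = 40.0                              # broad tuning

    def responses(stimulus):
        """Gaussian tuning: each unit fires to stimuli near its preference."""
        return [math.exp(-((stimulus - p) ** 2) / (2 * WIDTH ** 2))
                for p in PREFERRED]

    def decode(rates):
        """Rate-weighted average of preferences recovers the stimulus."""
        return sum(r * p for r, p in zip(rates, PREFERRED)) / sum(rates)

    rates = responses(72.0)
    print([round(r, 2) for r in rates])   # many units fire somewhat
    print(round(decode(rates), 1))        # yet the population reads out ~72

No single unit “knows” the stimulus, which is the force of the coarse-coding idea: the explicit representation lives in the joint firing of the smallish group.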

Notice that while the information needed to represent a face is contained in the firing of the ganglion cells in the retina, there is, in our terms, no explicit representation of the face there.

How many neurons are there likely to be in such a group? This is not yet known, but we would guess that the number to represent one aspect is likely to be closer to 100-1,000 than to 10,000-1,000,000.

A representation of an object or an event will usually consist of representations of many of the relevant aspects of it, and these are likely to be distributed, to some degree, over different parts of the visual system. How these different representations are bound together is known as the binding problem (von der Malsburg, 1995).

Much neural activity is usually needed for the brain to construct a representation. Most of this is probably unconscious. It may prove useful to consider this unconscious activity as the computations needed to find the best interpretation, while the interpretation itself may be considered to be the results of these computations, only some of which we are then conscious of. To judge from our perception, the results probably have something of a winner-take-all character.

As a working hypothesis we have assumed that only some types of specific neurons will express the NCC. It is already known (see the discussion under “Bistable Percepts”) that the firing of many cortical cells does not correspond to what the animal is currently seeing. An alternative possibility is that the NCC is necessarily global (Greenfield, 1995). In one extreme form this would mean that, at one time or another, any neuron in cortex and associated structures could express the NCC. At this point, we feel it more fruitful to explore the simpler hypothesis — that only particular types of neurons express the NCC — before pursuing the more global hypothesis. It would be a pity to miss the simpler one if it were true. As a rough analogy, consider a typical mammalian cell. The way its complex behavior is controlled and influenced by its genes could be considered to be largely global, but its genetic instructions are localized, and coded in a relatively straightforward manner.

Where is the Visual Representation?

The conscious visual representation is likely to be distributed over more than one area of the cerebral cortex and possibly over certain subcortical structures as well. We have argued (Crick and Koch, 1995a) that in primates, contrary to most received opinion, it is not located in cortical area V1 (also called the striate cortex or area 17). Some of the experimental evidence in support of this hypothesis is outlined below. This is not to say that what goes on in V1 is not important, and indeed may be crucial, for most forms of vivid visual awareness. What we suggest is that the neural activity there is not directly correlated with what is seen.

We have also wondered (Crick, 1994) whether the visual representation is largely confined to certain neurons in the lower cortical layers (layers 5 and 6). This hypothesis is still very speculative.

What is Essential for Visual Consciousness?

The term “visual consciousness” almost certainly covers a variety of processes. When one is actually looking at a visual scene, the experience is very vivid. This should be contrasted with the much less vivid and less detailed visual images produced by trying to remember the same scene. (An exceptionally vivid recollection is usually called a hallucination.) We are concerned here mainly with the normal vivid experience. (It is possible that our dimmer visual recollections are mainly due to the back pathways in the visual hierarchy acting on the random activity in the earlier stages of the system.)

Some form of very short-term memory seems almost essential for consciousness, but this memory may be very transient, lasting for only a fraction of a second. Edelman (1989) has used the striking phrase, “the remembered present,” to make this point. The existence of iconic memory, as it is called, is well-established experimentally (Coltheart, 1983; Gegenfurtner and Sperling, 1993).

Psychophysical evidence for short-term memory (Potter, 1976; Subramaniam et al., 1997) suggests that if we do not pay attention to some part or aspect of the visual scene, our memory of it is very transient and can be overwritten (masked) by the following visual stimulus. This probably explains many of our fleeting memories when we drive a car over a familiar route. If we do pay attention (e.g., to a child running in front of the car), our recollection of it can be longer lasting.

Our impression that at any moment we see all of a visual scene very clearly and in great detail is illusory, partly due to ever-present eye movements and partly due to our ability to use the scene itself as a readily available form of memory, since in most circumstances the scene usually changes rather little over a short span of time (O’Regan, 1992).

Although working memory (Baddeley, 1992; Goldman-Rakic, 1995) expands the time frame of consciousness, it is not obvious that it is essential for consciousness. It seems to us that working memory is a mechanism for bringing an item, or a small sequence of items, into vivid consciousness, by speech, or silent speech, for example. In a similar way, the episodic memory enabled by the hippocampal system (Zola-Morgan and Squire, 1993) is not essential for consciousness, though a person without it is severely handicapped.

Consciousness, then, is enriched by visual attention, though attention is not essential for visual consciousness to occur (Rock et al., 1992; Braun and Julesz, 1997). Attention is broadly of two types: bottom-up, caused by the sensory input; and top-down, produced by the planning parts of the brain. This is a complicated subject, and we will not try to summarize here all the experimental and theoretical work that has been done on it.

Visual attention can be directed to either a location in the visual field or to one or more (moving) objects (Kanwisher and Driver, 1992). The exact neural mechanisms that achieve this are still being debated. In order to interpret the visual input, the brain must arrive at a coalition of neurons whose firing represents the best interpretation of the visual scene, often in competition with other possible but less likely interpretations; and there is evidence that attentional mechanisms appear to bias this competition (Luck et al., 1997).
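
As a schematic illustration of this biased competition (my own toy sketch, not a model from the cited work), one can think of each candidate interpretation as accumulating bottom-up evidence plus a top-down attentional bias, with the best-supported coalition winning:

    # Candidate interpretations compete; attention biases the competition.
    # The evidence and bias numbers are invented for illustration.
    def winner(evidence, bias):
        scores = {k: evidence[k] + bias.get(k, 0.0) for k in evidence}
        return max(scores, key=scores.get)

    evidence = {"face": 0.55, "vase": 0.45}     # an ambiguous, face-leaning input
    print(winner(evidence, {}))                 # -> face (bottom-up alone)
    print(winner(evidence, {"vase": 0.2}))      # -> vase (top-down bias tips it)

The winner-take-all character mentioned earlier falls out naturally: only one coalition’s interpretation is read out at a time.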

Consciousness and Neuroscience, Francis Crick (The Salk Institute) & Christof Koch (Computation and Neural Systems Program, California Institute of Technology)

Has appeared in: Cerebral Cortex, 8:97-107, 1998


 

Role of Consciousness

In this paper, a theoretical account of the functional role of consciousness in the cognitive system of normal subjects is developed. The account is based upon an approach to consciousness that is drawn from the phenomenological tradition. On this approach, consciousness is essentially peripheral self-awareness, in a sense to be duly explained. It will be argued that the functional role of consciousness, so construed, is to provide the subject with just enough information about her ongoing experience to make it possible for her to easily obtain as much more information as she may need. The argument for this account of consciousness’ functional role will proceed in three main stages. First, the phenomenological approach to consciousness as peripheral self-awareness will be expounded and endorsed. Second, an account of the functional role of peripheral perceptual awareness will be offered. Finally, the account of the functional role of peripheral self-awareness will be obtained by straightforward extension from the functional role of peripheral perceptual awareness.

For many, the ultimate goal of scientific research into consciousness is to identify the neural correlate of consciousness – to uncover the neurological “seat” of consciousness in the brain. There are many ways scientific investigation can proceed in pursuit of such a goal. Perhaps the most straightforward way is as follows: first find out what it is that consciousness does, then find out what structure or process in the brain does just that; one would then be justified in identifying the structure or process in question as the seat of consciousness.[1]

            This approach requires, as a first order of business, a comprehensive account of what consciousness does, that is, of the functional role of consciousness in the cognitive system of a normal subject. In order to understand what consciousness does, however, we must have an agreement on what consciousness is. In what follows, I adopt a specific view of this matter, a view drawn from the phenomenological tradition. On this view, consciousness is a form of peripheral self-awareness. What is meant by the concept of peripheral self-awareness, and what the emerging conception of consciousness is, will be elucidated in due course. In any event, on the phenomenological approach to consciousness adopted here, the functional role of consciousness is given by that of peripheral self-awareness. The latter is what I propose to discuss in the present paper.

Various accounts of the functional significance of consciousness already exist, both in the scientific literature and in the philosophical one. Most of these accounts, however, rest content with pointing out a number of cognitive functions consciousness is somehow involved in. But this falls short of precisely distilling the singular functional contribution of consciousness to any process or state in which it is present. An account attempting to do that will be offered in §5 below. On this account, the precise functional role of consciousness is to provide the subject with just enough information about her ongoing experience to make it possible for her to quickly and effortlessly obtain as much more information as she may happen to need.

            The argument will proceed as follows. In §§1-2, three constraints on the adequacy of an account of the functional role of consciousness will be set out. In §2, the phenomenological approach to consciousness in terms of peripheral self-awareness will be expounded and endorsed. In §3, I will expand on the notion of peripheral awareness, and in particular peripheral self-awareness. In §4, the functional role of peripheral awareness in general will be discussed. This will naturally lead to a discussion, in §5, of the functional role of peripheral self-awareness in particular. The account developed in §5 will be compared and contrasted with several other accounts of the functional role of consciousness in §6.

 

1.      The Functional Role of Consciousness and Functionalism About Consciousness

Mental states and events are rarely (if ever) idle. They normally bring about other mental states and events, as well as certain actions, and they are themselves brought about by other mental states and events, as well as certain physiological conditions. The set of causes and effects that surround a mental state is commonly referred to as the state’s functional role.

            The functional role of a mental state depends on how the state is. The picture is this: the state has various properties, F1, …, Fn, and each property Fi contributes something to (or modifies somehow) the state’s fund of causal powers. One of the properties that some mental states have and some do not is consciousness. We should expect consciousness to contribute something to the fund of causal powers of the mental states that exemplify it. It is not incoherent, of course, to maintain that the property of being conscious does not contribute anything to a mental state’s fund of causal powers – that consciousness is causally inert, or epiphenomenal.[2] But that is an extremely unlikely possibility, a non-starter to say the least. In all likelihood, consciousness has some functional significance, and there is a contribution it makes to mental states that have it.

            In this paper, I will assume that consciousness does have a functional role.[3] As such, consciousness adds something to the mental states that exemplify it. On the other hand, it is implausible to suppose that consciousness is nothing but that “addition.” In other words, it is implausible that a functionalist approach to consciousness could be made to work. In general, functionalism is the view that mental states and properties can be identified with their functional role in the subject’s cognitive economy (Putnam 1967, Lewis 1972).[4] With regard to consciousness, the thesis is that consciousness can be identified with its functional role, that is, that a mental state’s property of being conscious is just the property of having the kind of functional profile we find in conscious states but not in unconscious states (Dennett 1981).

A principled problem for functionalism is that functional role is a dispositional notion, whereas many mental states are categorical. Functional role is dispositional in that the causal powers of a mental state are what they are independently of whether the state actually manifests them. A mental state’s functional role is a matter of its subject’s disposition to do (or undergo) certain things, not a matter of the subject’s actually doing (or undergoing) those things. But where there is a disposition there must be a categorical basis for it. When an object or state is disposed a certain way, there is a reason why it is so disposed. There must be something about it that grounds the disposition. Now, many mental states appear to be precisely the categorical bases for certain dispositions, rather than the dispositions themselves. It is because the subject is in the mental state she is in that she is disposed the way she is, not the other way round. Such mental states are not just functional roles, then; they are what plays, or grounds, those roles.

There may be some mental states that are plausibly construed as nothing but the relevant bundles of dispositions. A subject’s tacit belief that there are birds in China is plausibly identified with a set of dispositions; there appears to be no need to posit a concrete item that underlies those dispositions. This is because nothing needs to actually happen with a subject who tacitly believes that there are birds in China. But many mental states are not like that. A subject’s conscious experience of the blue sky is more than a set of dispositions. Here there is a concrete item that underlies the relevant dispositions. Something does actually happen with a subject when she has the experience. In virtue of having a conscious experience of the blue sky, the subject is disposed to do (or undergo) certain things. But there is more to the subject’s having the conscious experience than her being so disposed. Indeed, it is precisely because the subject has her experience that she is disposed the way she is. The experience is the reason for the disposition, it is its categorical basis.

There are two points to retain from the foregoing discussion. First, to engage in a search for the functional role of consciousness is not to subscribe to a functionalist approach to consciousness. Second, understanding the functional role of consciousness requires two things. It requires, first of all, understanding how a subject’s having a conscious mental state disposes her (in ways that having an unconscious mental state does not). That is, it requires that the functional role of consciousness be correctly identified. And it requires, on top of that, understanding what it is about a mental state’s being conscious that endows it with this particular functional role. That is, it requires understanding why consciousness has just the functional role it does. This latter requirement is of the first importance. Our conception of consciousness must make it possible for us to see what it is about consciousness that yields the kinds of dispositions associated with conscious states and not with unconscious states. It must allow us not only to identify the functional role of consciousness, but also to explain it.

If consciousness were nothing more than a bundle of dispositions, there would be no question as to why consciousness is associated with just those dispositions. Consciousness would just be those dispositions. But because consciousness is more than a bundle of dispositions – because it is the categorical basis of those dispositions – there are two separate questions that arise in relation to its functional role: What does consciousness do?, and Why is that what consciousness does? The latter arises because, when we claim that consciousness underlies certain dispositions, we assume that there is a reason why these are the dispositions it underlies. The matter can hardly be completely arbitrary, a fluke of nature. Therefore, unless functionalism about consciousness is embraced, both questions must be answered. Conversely, functionalism about consciousness necessarily fails to explain why consciousness has the functional role it does, and is to that extent unsatisfactory. A more satisfactory account of consciousness would meet both our theoretical requirements: it would both identify and explain the functional role of consciousness.[5] Let us call the former the identification requirement and the latter the explanation requirement.[6]

 

2.      A Phenomenological Approach to Consciousness

When discussing the functional role of consciousness, it is important to distinguish the role of conscious states from the role of consciousness proper. As noted in the previous section, the causal powers of mental states are determined by these states’ properties. Each property a mental state exemplifies contributes something to the state’s fund of causal powers. Clearly, then, some of the causal powers of a conscious state are not contributed to it by its property of being conscious, but by its other properties. They are powers the state has, but not in virtue of being conscious. It would have them even if it were not conscious. Therefore, it is important that we distinguish between the causal powers that a conscious state has and the causal powers it has precisely in virtue of being conscious. Let us refer to the latter as the causal powers of consciousness proper. These are the powers contributed to a conscious state specifically by its property of being conscious.

            Consider a subject’s conscious perception of the words “terror alert” in the newspaper. Such a conscious experience is likely to raise the subject’s level of anxiety. But it is unclear that the rise is due to the fact that the subject’s perception is conscious. Indeed, data on the effects of subliminal perception on emotion suggest that an unconscious perception of the same stimulus would also raise the subject’s level of anxiety.[7] This suggests that while the subject’s perception of the words “terror alert” has the causal power to raise the level of anxiety, it is not in virtue of being conscious that it has that power. The conscious perception’s power to raise the level of anxiety is not a function of consciousness proper.

            An account of the functional role of consciousness must target the causal powers of consciousness proper. It must distill the singular contribution of consciousness itself to the fund of causal powers of conscious states. Our concern is not with the causal powers of mental states that happen to be conscious, but with the causal powers conscious states have because they are conscious. This constitutes a third requirement on an adequate account of the functional role of consciousness; let us call it the singularity requirement.

            To meet the singularity requirement, we must get clear on what consciousness proper is. What is the property mental states have when and only when they are conscious, and in virtue of which they are conscious? Oceans of ink have been spilled in recent years in search of an answer. A thorough discussion of the matter would require that we focus exclusively on it. For this reason, in this paper I adopt, somewhat dogmatically, a view of what consciousness is. Although I will do the minimum to justify that adoption, my main goal is to explore the implications of the view for the question of functional role.

            The view I will adopt is drawn from the phenomenological tradition. It is well known that Brentano (1874) proposed intentionality as the mark of the mental. It is less well known that he proposed self-directed intentionality as the mark of the conscious. For Brentano, a mental state is conscious when, and only when, it is intentionally directed at itself. Moreover, it is in virtue of being thus directed at itself that the state is conscious.[8] When a person is consciously aware of, say, a tree, she has a mental state that is intentionally directed both at the tree and at itself. Thus every conscious state includes within it an awareness of itself.

Normally, when a person is consciously aware of a tree, the focus of her awareness is the tree, not her awareness of the tree. In this respect, the self-directed intentionality enjoys a lower status, in a sense, than the outward-directed intentionality. To accommodate this fact, Brentano distinguished between primary intentionality and secondary intentionality.[9] Primary intentionality is a conscious state’s directedness at the main object of awareness, whereas secondary intentionality is its directedness toward objects that are outside the focal center of awareness.

The upshot is that for Brentano, a mental state is conscious when it exhibits secondary self-directed intentionality, that is, when it is secondarily directed at itself. This conception of consciousness has subsequently become commonplace in the phenomenological tradition, through Brentano’s influence on Husserl (1928), who defended a similar view.[10] The view was then embraced by Sartre (1943), Henry (1963), Gurwitsch (1985), and the members of the Heidelberg School in Germany.[11]

As I said above, I will not present a detailed defense of the phenomenological conception of consciousness. But let me indicate the main source for its plausibility. At first approximation, a conscious state is a state the subject is aware of having.[12] When I have a conscious experience of the blue sky, I am aware of having my experience. The experience does not just take place in me, it is also for me – in the sense that I am aware of its taking place. If I were completely unaware of perceiving the sky, the perception would have been unconscious. Conscious mental states are not sub-personal states, which we “host” in an impersonal sort of way, without being aware of them.

To be sure, we can readily have conscious experiences without becoming wholly consumed with them. Thus, I can have my conscious experience of the sky when glancing at it inadvertently. In that case, I am not aware of my experience in a very focused way. However, I am necessarily aware of my experience in some way; otherwise it would not be conscious. Therefore, in this case I am aware of my experience in some sort of unfocused way. Upon reflection, most of our conscious experiences are of this sort: they are not experiences we dwell on in a very focused and deliberate way. Normally, when we have a conscious experience of the sky, we do not concentrate on our experience, but on the sky itself. Normal conscious states are thus states of which we are aware in an unfocused way.

By way of clarifying the matter, let us distinguish three ways in which a subject may be related to one of her mental states, M. A subject may be either (i) completely unaware of M, or (ii) focally aware of M, or (iii) peripherally aware of M. Mental states the subject is completely unaware of are unconscious states. Only mental states the subject is aware of are conscious. Normally, the subject is only peripherally aware of her conscious mental states, though it may also happen that she is focally aware of a conscious state.[13]

Observe, however, that when a subject becomes focally aware of one of her mental states, it is not only the state in question that is conscious, but also that very state of focal awareness.[14] Since every conscious state is a state one is aware of having, this focal awareness – being a conscious state – must itself be a state the subject is aware of having. So the subject must be either focally aware of this focal awareness or peripherally aware of it; she cannot be completely unaware of it. However, if the subject is focally aware of this focal awareness, her focal awareness of the focal awareness would also be conscious, and therefore the subject would have to be aware of it too. To avoid an infinite regress of focal awarenesses, at some point one of the subject’s states of focal awareness must be such that the subject is not focally aware of having it. Yet, being a conscious state, it would have to be a state the subject is aware of. Therefore, the subject would have to be peripherally aware of that state. This peripheral awareness caps the regress of focal awarenesses. It appears, then, that in every episode of our mental life in which we harbor a conscious state, we must be peripherally aware of at least one of our mental states. The same is not true of focal awareness: when I have my inadvertent experience of the sky, I am not focally aware of any of my mental states. Therefore, it is peripheral awareness of one of the subject’s mental states that is present when and only when the subject harbors a conscious state. So an account of the functional role of consciousness proper would have to identify and explain the functional role of this sort of peripheral awareness.

            In the next section, we will have occasion to clarify further the notion of peripheral awareness. As we will see, a subject can be peripherally aware not only of her own mental states, but of external stimuli as well. To distinguish peripheral awareness of external stimuli from peripheral awareness of one of one’s own mental states, let us call the latter peripheral self-awareness. On the phenomenological conception of consciousness, such peripheral self-awareness is constituted by secondary self-directed intentionality.[15]

In conclusion, an adequate account of the functional role of consciousness must not only meet the identification requirement and the explanation requirement, but also the singularity requirement. If peripheral self-awareness is indeed what is present when and only when a subject is undergoing a conscious episode, then meeting the singularity requirement would involve accounting for the functional role of peripheral self-awareness. That is, the identification and explanation of the singular contribution of consciousness to the fund of causal powers of conscious states would require the identification and explanation of the functional role of peripheral self-awareness.

 

3.      Focal Awareness and Peripheral Awareness

The distinction between focal and peripheral awareness does not apply only to awareness of one’s own mental states. It applies to awareness of external stimuli as well.

Consider the phenomenon of peripheral vision. When I look at the laptop in front of me, I am focally aware of the laptop. But in the periphery of my visual field appear other objects: books on the right side of my desk, printouts on the left side of my desk, etc. My awareness of these objects is not nearly as clear or as accurate as my awareness of the laptop I am focusing on, but it would be a mistake to say that I am completely unaware of these objects. The status of the books and printouts on my desk vis-à-vis my perceptual experience is unlike the status of the table in the living room, which I cannot perceive and am completely unaware of. To distinguish among the status of the laptop, the status of the books and printouts, and the status of the living-room table, we must again introduce a distinction between focal and peripheral awareness, and say that I have focal awareness of the laptop, peripheral awareness of the books and the printouts, and no awareness of the living-room table.[16]

The same tripartite distinction applies to perceptual experiences in non-visual modalities. Suppose you are listening to Brahms’ Piano Concerto No. 1. Your auditory perception of the piano is bound to be more focused than your perception of the cellos, or for that matter, of the cars driving by your window. That is, you are focally aware of the piano and only peripherally aware of the cellos and the cars.

Competition for the focus of awareness is not restricted to stimuli from the same modality. My current conscious experience is focused (visually) on the laptop before me, but it has many peripheral elements, only some of which are visual. I have visual peripheral awareness of the books and printouts on my desk, but also auditory peripheral awareness of the cars outside my window, olfactory peripheral awareness of burned toast, tactual peripheral awareness of the chair I am sitting on, etc. All these bits of awareness form part of a single overall experience. The focus of my overall awareness is the laptop, which is presented visually, but I am peripherally aware of a myriad of external stimuli presented in other modalities.

It was to capture the richness of peripheral awareness and its place in normal conscious experience that James (1890) introduced the notion of the fringe of consciousness. Similar notions have been developed by other psychologists, including within the phenomenological tradition. Brentano’s notion of secondary awareness, Husserl’s notion of non-thematic consciousness, Sartre’s notion of non-positional consciousness, and Gurwitsch’s notion of marginal consciousness are all supposed to capture the same phenomenon.[17]

Interestingly, some of the elements in the fringe of consciousness are altogether non-perceptual. Particularly conspicuous are emotional and mood-related elements. If I am in a good mood as I am having my conscious experience of the laptop, the experience will include, in its periphery, a certain feeling of cheerfulness. There are also intellectual elements in the fringe of consciousness, such as the so-called “feeling-of-knowing” and “rightness” phenomena (Mangan 2001).

On the phenomenological conception of consciousness proper laid out in the previous section, another important element in the fringe of consciousness is awareness of the subject’s current experience. When I have my conscious experience of my laptop, I am peripherally aware of the books and printouts on my desk, the cars outside my window, the chair I am sitting on, etc., but I am also peripherally aware of having that very experience. This sort of self-awareness is a peripheral element in my conscious experience; it is peripheral self-awareness.[18]

Some readers may object that they cannot find anything like peripheral self-awareness in their phenomenology. It is admittedly difficult to see how to erect an argument for the very existence of peripheral self-awareness, but let me note two things. First, in §5 I will argue that the functional role of peripheral self-awareness is such that there are good reasons to expect something like it to emerge over the course of evolution. Second, rejecting the notion of peripheral self-awareness would force us into an unhappy dilemma: either we allow that there can be conscious states the subject is unaware of having, or we claim that all conscious states are states the subject is focally aware of having. To my mind, both horns of this dilemma are worse than admitting the existence of peripheral self-awareness.

 

4.      The Functional Role of Peripheral Awareness

Even those disinclined to countenance peripheral self-awareness admit the existence of peripheral visual awareness. Yet the latter should not be taken for granted. The fact that our visual system employs peripheral awareness is not a brute, arbitrary fact. There are reasons for it.[19]

            Our cognitive system handles an inordinate amount of information. The flow of stimulation facing it is too torrential to take in indiscriminately. The system must therefore develop strategies for managing the flux of incoming information. The mechanism that mediates this management task is, in effect, what we know as attention.[20] There are many possible strategies the cognitive system could adopt – many ways the attention mechanism could be designed – and only some of them make room for peripheral visual awareness.

Suppose a subject faces a scene with five distinct visual stimuli: A, B, C, D, and E. The subject’s attention must somehow be distributed among these stimuli. At the two extremes are the following two strategies. One would have the subject distribute her attention evenly among the five stimuli, so that each stimulus is granted 20% of the subject’s overall attention resources; let us call this the “20/20 strategy.” The other would have the subject devote the entirety of her attention resources to a single salient stimulus to the exclusion of all others, in which case the relevant stimulus, say C, would be granted 100% of the subject’s resources, while A, B, D, and E would be granted 0%; let us call this the “100/0 strategy.” In-between these two extremes are any number of more flexible strategies. Consider only the following three: (i) the “60/10 strategy,” in which C is granted 60% of the resources and A, B, D, and E are granted 10% each; (ii) the “28/18 strategy,” in which C is granted 28% of the resources and A, B, D, and E are granted 18% each; and (iii) the “35/10 strategy,” in which two different stimuli, say C and D, are treated as salient and granted 35% of the resources, while A, B, and E are granted 10% each.

The strategy our visual system actually employs is something along the lines of the 60/10 strategy. This strategy has three key features: it allows for only one center of attention; the attention it grants to the elements outside that focal center is more or less equal; and it grants considerably more attention to the center than to the various elements in the periphery. When I look at the laptop before me, my visual experience has only one center of attention, namely, the laptop; it grants more or less equal attention to the two elements in the periphery, namely, the books on the right side of the desk and the printouts on the left side; and the attention it grants to the laptop is considerably greater than that it grants to the books and the printouts. Each of the other models misrepresents one feature or another of such an ordinary experience. The 20/20 strategy implies that my awareness of the books and printouts is just as focused as my awareness of the laptop before me, which is patently false. The 100/0 strategy implies that I am completely unaware of the books and printouts, which is again false. The 28/18 strategy misrepresents the contrast between my awareness of the laptop and my awareness of the books or printouts: the real contrast in awareness is much sharper than it suggests. And the 35/10 strategy wrongly implies that my visual experience has two separate focal centers.[21] (There may – or may not – be highly abnormal experiences in which there are two independent centers of attention – say, one at 36 degrees on the right side of the subject’s visual field and one at 15 degrees on the left side – but a normal experience is clearly unlike that. Normal experience has a single focal center.)[22]
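
Since the five strategies are just different ways of allocating a fixed attentional budget, they can be restated compactly. In the sketch below (my own restatement; the “sharp contrast” threshold is chosen arbitrarily), each strategy is a percentage allocation over stimuli A through E, and the test encodes the three key features just listed; only the 60/10 strategy passes:

    # Each strategy allocates 100% of attention over stimuli A..E
    # (C, or C and D, being the salient ones).
    STRATEGIES = {
        "20/20": [20, 20, 20, 20, 20],
        "100/0": [0, 0, 100, 0, 0],
        "60/10": [10, 10, 60, 10, 10],
        "28/18": [18, 18, 28, 18, 18],
        "35/10": [10, 10, 35, 35, 10],
    }

    def fits_normal_experience(alloc, contrast=2.0):
        top = max(alloc)
        centers = [a for a in alloc if a == top]
        periphery = [a for a in alloc if a != top]
        return (
            len(centers) == 1                          # a single focal center
            and len(set(periphery)) == 1               # an equal periphery
            and all(a > 0 for a in periphery)          # no total unawareness
            and top >= contrast * max(periphery)       # sharp center/periphery gap
        )

    for name, alloc in STRATEGIES.items():
        print(name, fits_normal_experience(alloc))     # only 60/10 prints True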

The above treatment of the possible strategies for managing the information overload facing the visual system (and perforce the cognitive system) is of course an oversimplification. But it serves to highlight two important things. First, the existence of peripheral visual awareness is a contingent fact. In the 100/0 strategy, for instance, there is no such thing as peripheral awareness: the subject is either focally aware of a stimulus or completely unaware of it.[23] In a way, the 20/20 strategy likewise dispenses with peripheral awareness, as it admits no distinction between focal center and periphery.[24] Only the three other strategies make room for the notion of peripheral awareness.

Second, if the 60/10 strategy (or something like it) has won the day over the other possible candidates, there must be a reason for that. The 60/10 strategy has apparently been selected for, through evolution (and perhaps also learning), and this suggests that there must be some functional advantages to it.[25]

What are these functional advantages? It is impossible to answer this question without engaging in all-out speculation. In the remainder of this section, I offer my own hypothesis, but doing full justice to the issue at hand would be impossible here. I will only pursue the hypothesis to the extent that it may help illuminate, in the next section, the question of the functional role of peripheral self-awareness.

The distribution of attention resources in the 60/10 strategy accomplishes two things. First, with regard to the stimuli at the attentional periphery, it provides the subject with just enough information to know where to get more information. And second, by keeping the amount of information about the periphery to the minimum needed for knowing where to get more information, it leaves enough resources for the center of attention to provide the subject with rich and detailed information about the salient stimulus. On this hypothesis, the functional role of peripheral awareness is to give the subject “leads” as to how to obtain more detailed information about any of the peripheral stimuli, without encumbering the system overmuch. By doing so, peripheral awareness enhances the availability of rich and detailed information about those stimuli. Peripheral visual awareness thus serves as a gateway, as it were, to focal visual awareness: it smooths out – facilitates – the process of assuming focal awareness of a stimulus (Mangan 1993, 2001).

Consider the subject’s position with regard to stimulus E, of which she is peripherally aware, and an object F, of which she is completely unaware. If the subject suddenly requires fuller information about E, she can readily obtain it simply by turning her gaze onto it. That is, the subject has enough information about E to be able to quickly and effortlessly obtain more information about it. By contrast, if she is in need of information about F, she has to engage in a “search” of some sort after the needed information. Her current visual experience offers her no leads as to where she might find the information she needs about F. (Such leads may be present in memory, or could be extracted by reasoning, but they are not to be found in the subject’s visual experience itself.) Peripheral awareness of a stimulus thus allows the subject to expend much less time and energy in becoming focally aware of the stimulus and obtaining detailed information about it. It makes that information much more available and usable to the subject.
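
The asymmetry between E and F may be pictured as a difference in what the subject’s experience stores about each stimulus. In the following sketch (Python; the names and stored contents are illustrative stipulations of mine, not part of the account), peripheral awareness stores only a “lead” – enough to know where to look – while focal awareness carries detailed content:

```python
# Toy model of the "gateway" hypothesis: peripheral entries hold only a
# lead (where to look for more), while the focal entry holds rich detail.
experience = {
    "C": {"status": "focal", "detail": "rich, detailed information about C"},
    "A": {"status": "peripheral", "lead": "upper left of visual field"},
    "B": {"status": "peripheral", "lead": "left of visual field"},
    "D": {"status": "peripheral", "lead": "right of visual field"},
    "E": {"status": "peripheral", "lead": "lower right of visual field"},
    # F does not figure in the experience at all.
}

def obtain_detail(stimulus):
    entry = experience.get(stimulus)
    if entry is None:
        # No lead in the current experience: a costly search is required.
        return "search memory, reason, or explore the environment"
    if entry["status"] == "peripheral":
        # Cheap: follow the lead, i.e., turn one's gaze to that location.
        return f"foveate {entry['lead']} and obtain rich detail about {stimulus}"
    return entry["detail"]

print(obtain_detail("E"))  # quick and effortless: a lead is available
print(obtain_detail("F"))  # slow and effortful: no lead is available
```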

 

5.      The Functional Role of Peripheral Self-Awareness

The hypothesis delineated above, concerning the functional significance of peripheral visual awareness, suggests a simple extension to the case of peripheral self-awareness. The subject’s peripheral awareness of her ongoing experience makes detailed information about the experience much more available to the subject than it would otherwise have been. More specifically, it gives the subject just enough information about her current experience to know how to get more information quickly and effortlessly, should the need arise.

More accurately stated, the suggestion is that when, and only when, a mental state M is conscious (and so the subject is peripherally aware of M), the subject possesses just enough information about M to make it possible for her to easily (i.e., quickly and effortlessly) obtain fuller information about M. Compare the subject’s position with regard to some unconscious state of hers, a state of which she is completely unaware. If the subject should happen to need detailed information about that unconscious state, she would have to engage in certain energy- and time-consuming activities to retrieve that information.

            It is important to stress that the information provided by peripheral self-awareness concerns the experience itself, not the objects of the experience. Consider again my laptop experience. In having my experience, I am focally aware of the laptop and peripherally aware of at least three things: the books on the right side of my desk, the printouts on the left side, and my very experience of all this. My peripheral awareness of the books provides me with just enough information about the books to know how to get more information about them. My peripheral awareness of having the experience provides me with just enough information to know how to get more information – not about the laptop or books, but about the very experiencing of the laptop and books.[26]

            Peripheral self-awareness is a constant element in the fringe of consciousness: we are at least minimally aware of our ongoing experience throughout our waking life. This continuous awareness we have of our experience multiplies the functional significance of the awareness. The fact that at every moment of our waking life we have just enough information about our current experience to get as much further information as we should need means that our ongoing experience is an “open source” of information for all other modules and local mechanisms in the cognitive system. This is the basis of the idea that consciousness makes information globally available throughout the system. Baars (1988) puts it in what I think is a misleading way by saying that consciousness “broadcasts” information through the whole system; I would put it the other way around, saying that consciousness “invites” the whole system to grab that information.
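
The contrast just drawn is, in effect, the contrast between pushing information to every consumer and publishing a lead that any consumer may pull on demand. The following schematic sketch (Python; the module names and stored contents are my own illustrative stipulations) is meant only to fix that contrast:

```python
# Push model ("broadcast"): the information itself is delivered outright
# to every module in the system.
consumers = ["memory", "planning", "language", "action control"]

def broadcast(info):
    return {module: info for module in consumers}

# Pull model ("invite"): what is made globally available is only a lead;
# each module fetches the full information if and when it needs it.
experience_store = {"current_experience": "rich detail about the experience"}

def invite():
    return "lead: fetch experience_store['current_experience'] on demand"

print(broadcast("rich detail about the experience"))
print(invite())
```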

It is not hard to see, on this picture, why peripheral self-awareness is a good thing to have. Consciousness is often described as a monitoring device, a device that allows us to gather and process detailed information about our very mechanisms of gathering and processing information (Lycan 1996). On the picture here defended, this is inaccurate: consciousness is not the monitoring device itself, but a gateway to the monitoring device. Consciousness does not give us detailed information about our inner goings-on, but rather makes it easy for us to get such detailed information whenever we want, by giving us just enough information about our concurrent inner goings-on to know how to get fuller information.[27] However, even though consciousness is not itself the monitoring device, the functional benefits of having a monitoring device – detecting malfunction in the processes of information gathering and processing, integrating disparate bits of information into a coherent whole, etc.[28] – also explain the benefit of having a gateway to the monitoring device. Whatever the function of the monitoring device itself, the function of consciousness is to give the subject “leads” that would prompt and facilitate the deployment of monitoring as need arises.

            The fact that peripheral self-awareness is a good thing to have may help us counter the objection, brought up at the end of §3, that there is no such thing as peripheral self-awareness. If peripheral self-awareness is a good thing to have, it is unsurprising that it should appear in the course of evolution. To be sure, the fact that a feature is good to have does not necessitate its evolution. But given that neither the existence of peripheral awareness as such nor that of self-awareness as such is in contention, it is hard to motivate the idea that something like peripheral self-awareness would fail to come into existence.[29]

            The account I have defended offers the following answer to the question of identification: the functional role of consciousness proper is to give the subject just enough information to know how to easily obtain fuller information about her concurrent experience. Against the background of §§3-4, the answer to the question of explanation should be clear: the reason consciousness has just this sort of functional role is that consciousness is essentially peripheral self-awareness, and peripheral self-awareness involves just this sort of functional role; the reason peripheral self-awareness involves just this sort of functional role is that it is a form of peripheral awareness, and this is the kind of functional role peripheral awareness in general has; and the reason peripheral awareness in general has just this kind of functional role has to do with the cognitive system’s strategy for dealing with the information overload it faces.

(This model explains both why there is such a thing as peripheral self-awareness and why peripheral self-awareness plays the functional role of giving the subject just enough information about her ongoing experience to be able to easily obtain fuller information. The key point is that providing the subject with just this sort of information is not what consciousness is, but what consciousness does. What consciousness is is peripheral self-awareness, that is, peripheral awareness of one’s concurrent experience. So in this account consciousness is not identified with the providing of the information, but is rather the categorical basis for it.)

            In conclusion, the account of the functional role of consciousness here proposed may be summarized in terms of the following three tenets:

 

  1. A mental state M is conscious when and only when the subject is peripherally aware of M.[30]
  2. The functional role of consciousness is to give the subject just enough information to know how to quickly and effortlessly obtain rich and detailed information about her concurrent experience.
  3. The reason this is the functional role of consciousness is that the cognitive system’s strategy for dealing with information overload employs peripheral awareness, a variety of which is peripheral self-awareness (hence consciousness), and the functional role of peripheral awareness in general is to give the subject just enough information to know how to get fuller information about whatever the subject is thereby aware of.

 

The three tenets satisfy our three requirements on an account of the functional role of consciousness. (1) is intended to meet the singularity requirement: it says what consciousness proper is. (2) is intended to meet the identification requirement: it says what the functional role of consciousness is. (3) is intended to meet the explanation requirement: it makes a claim as to why it is that consciousness has just the functional role attributed to it in (2).[31]

 

6.      Other Approaches to the Functional Role of Consciousness

Before closing, I would like to situate the account I have defended in relation to other central accounts of the functional role of consciousness. The purpose is not so much to argue against these other accounts as to illustrate the force of the present account.

            According to Baars (1997), consciousness does a good number of things: it prioritizes the cognitive system’s concerns, facilitates problem-solving, decision-making, and executive control, serves to optimize the trade-off between organization and flexibility, helps recruit and control actions, detects errors and edits action plans, creates access to the self, facilitates learning and adaptation, and in general “increase[s] access between otherwise separate sources of information.”[32] (1997: 162-3)

            There are two problems with Baars’ account. First, the functions he cites are not peculiar to consciousness. There is no question that conscious mental states are involved in all those things. But it is far from clear that conscious states perform any of these functions precisely in virtue of being conscious. By putting together this list, Baars is not distilling the singular functional significance of consciousness proper, but simply enumerating the functions performed by mental states which happen to be conscious. That is to say, Baars’ account fails to meet the singularity requirement. Second, all the specific functions Baars cites are monitoring functions. If the account offered in the previous section is correct, monitoring functions do not characterize consciousness proper, although consciousness does enhance the performance of those functions (by serving as a gateway to monitoring).

            Another common error is to misconstrue the relation between consciousness and its functional role. Consider Block’s (1995) distinction between what he calls phenomenal consciousness and access consciousness. Phenomenal consciousness is consciousness proper, the truly mysterious phenomenon we all want to understand. Access consciousness is, by contrast, a functional notion: a mental state “is access-conscious if it is poised for free use in reasoning and for direct ‘rational’ control of action and speech.” (1995: 382)

One problem with Block’s distinction is that any function we may wish to attribute to phenomenal consciousness would be more appropriately attributed to access consciousness, leaving phenomenal consciousness devoid of functional significance (Chalmers 1997). The source of this unhappy consequence is the notion that phenomenal and access consciousness are two separate phenomena sitting side by side at the same theoretical level. In reality, access consciousness appears to be the functional role of phenomenal consciousness. The relation between phenomenal and access consciousness is therefore the relation of player to role: phenomenal consciousness plays access consciousness, if you will. Once we construe access consciousness as the functional role of phenomenal consciousness, we can attribute again any function we may wish to phenomenal consciousness: the function is construed as part of access consciousness and is therefore performed by phenomenal consciousness. The conceptual confusion caused by Block’s distinction is overcome.

Another problematic aspect of Block’s views here is his particular characterization of access consciousness, the functional role of consciousness proper. On the account offered in the previous section, it is quite true that conscious states are poised for free use in reasoning and control. But this is a secondary function of theirs. The primary function of consciousness is to give the subject just enough information to know how to easily obtain detailed information about her concurrent experience. The secondary function identified by Block is a result of two factors: the primary function and the fact that peripheral self-awareness is constant throughout our waking life. That is to say, Block’s account offers an incorrect identification of the functional role of consciousness and therefore fails to meet the identification requirement.

Tye (2000) also identifies the functional role of consciousness in terms of poise for use in rational control and deliberation. More specifically, he claims that “experiences and feelings, qua bearers of phenomenal character…stand ready and available to make a direct impact on beliefs and/or desires.”[33] (2000: 62)

If the account defended in §5 is on the right track, then Tye’s identification of the functional role of consciousness is at least incomplete, as it leaves out the function consciousness has in giving the subject basic information about her concurrent experience. Furthermore, unless a lot rides on the phrase “stand ready and available,” the role identified by Tye is routinely played by unconscious perceptions (which do of course make an impact on beliefs and desires). So Tye’s account appears to fail the identification requirement as well.

According to Tye’s representational theory of consciousness, conscious states are essentially representational, in that what makes them the conscious states they are is their representational content. One major difficulty facing the representational theory is that, on the face of it, every stimulus can be represented either consciously or unconsciously, so the difference between conscious and unconscious states is not found in their representational properties (Kriegel 2002). Tye’s response is to claim that conscious representations, unlike unconscious representations, are functionally poised in the way described above.[34] The problem with this response is that it leaves Tye with no way to explain the functional role of conscious states. By claiming that what distinguishes conscious from unconscious states is functional role, Tye is effectively embracing a functionalist account of consciousness proper. But as we saw in §1, a functionalist account of consciousness proper is incapable of explaining why consciousness has just the functional role it has, since it identifies consciousness with the role in question, rather than construing consciousness as the categorical basis for it. Therefore, Tye’s account also fails to meet the explanation requirement.

One of the most interesting empirical findings about the function of consciousness is Libet’s (1985). Libet instructed his subjects to flex their right hand muscle and to note when their intention to flex the muscle was formed, with the goal of finding out the temporal relationship between (i) muscle activation, (ii) onset of the neurological cause of muscle activation, and (iii) the conscious intention to flex one’s muscle. Libet found that the neurological cause of muscle activation precedes the conscious intention to flex the muscle by about 350 milliseconds, and the muscle activation itself by 550 milliseconds. That is, the conscious intention to flex one’s muscle is formed when the causal process leading to the muscle activation is already well underway. This suggests that consciousness proper does not have the function of initiating the causal process leading to the muscle activation, and is therefore not the cause of the intended act. According to Libet, the only thing consciousness can do is undercut the causal process at its final stages. That is, the only role consciousness has is that of “vetoing” the production of the act or allowing it to go through.
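
The arithmetic of these latencies is worth making explicit. The following sketch (Python) merely restates the figures just reported, placing the onset of the neurological cause at t = 0:

```python
# Timing relations reported by Libet (in milliseconds), with the onset
# of the neurological cause of muscle activation placed at t = 0.
t_neural_cause = 0
t_conscious_intention = 350  # conscious intention forms ~350 ms later
t_muscle_activation = 550    # the muscle activates ~550 ms after onset

# The conscious intention thus arrives when the causal process is well
# underway, yet still ~200 ms before the act itself: the window in
# which, on Libet's reading, consciousness can only "veto" the act.
veto_window = t_muscle_activation - t_conscious_intention
print(veto_window)  # 200
```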

The phenomenological approach to consciousness proper we have taken in §2 starts from the assumption that conscious states are states we are aware of having. This means that a mental state must exist for some time before it becomes conscious, since the awareness of the state in question necessarily takes some time to form. Now, it is only to be expected that the state in question should be able to perform at least some of its functions before it becomes conscious. In many processes, the state can readily play a causal role independently of the subject’s awareness of it. So it is unsurprising that consciousness proper should have a small role to play in such processes (Rosenthal 2002b). What would be surprising is for consciousness to play that limited role in all or most cognitive processes. But this cannot be established by Libet’s experiment. One overlooked factor in Libet’s experiment is the functional role of the subjects’ conscious intention to follow the experimenter’s instructions (Flanagan 1992). This introduces two limitations on Libet’s findings. First, we do not know what the causal role of the conscious intention to follow the experimenter’s instructions is in the production of muscle activation. Second, we do not know what causal role a conscious intention to flex one’s muscle plays when it is not preceded by a conscious intention to follow certain instructions related to flexing one’s muscle. Given that the majority of instances of muscle flexing involve a single conscious intention (rather than a succession of two separate but related conscious intentions), we do not as yet know what the functional role of conscious intention to flex one’s muscle is in the majority of instances.

In any case, observe that Libet’s findings bear only on the role of consciousness vis-à-vis motor output. But internal states of the cognitive system can bring about not only motor output, but also further internal states.[35] On the account defended here, the latter is more central to the functional role of consciousness. The fact that a subject is peripherally aware of her mental states plays a role in bringing about states of focal awareness of those mental states, and more generally a role in the operation of internal monitoring processes. 

The account of the functional role of consciousness I defended in §5 is thus different in clear and significant ways from other accounts to be found in the literature on consciousness, including some leading accounts in the psychological, philosophical, and neuroscientific literature.

 

7.      Conclusion

In this article, I have developed a novel account of the functional role of consciousness. This account identifies a very specific function which it claims characterizes the singular contribution of consciousness to the fund of causal powers of conscious states, and embeds this identification in a larger explanatory account of the purpose and operation of attention. According to the account I have offered, when a mental state M is conscious, its subject has just enough information about M to be able to easily obtain fuller information about it.

The account is grounded in empirical considerations but is quite speculative, in that it depends on a number of unargued-for assumptions. As such, it is a “risky” account, an account whose plausibility may be undermined at several junctures. At the same time, none of the assumptions made above is flagrantly implausible. So at the very least, the account of the functional role of consciousness here defended offers a viable alternative to the accounts currently on offer in the literature on consciousness.

In any event, if one does accept the phenomenological conception of consciousness, the account proposed here of its functional role is hard to deny. Conversely, the fact that a clear and precise account of the functional significance of consciousness follows rather straightforwardly from the phenomenological conception of consciousness in terms of peripheral self-awareness is testimony to the theoretical force of the phenomenological conception.

References

  • Baars, B. 1988. A Cognitive Theory of Consciousness. Cambridge: Cambridge UP.
  • Baars, B. 1997. In the Theater of Consciousness: The Workspace of the Mind. Oxford and New York: Oxford UP.
  • Baron-Cohen, S. 1995. Mindblindness. Cambridge MA: MIT Press.
  • Block, N. J. 1995. “On a Confusion About the Function of Consciousness.” Behavioral and Brain Sciences 18: 227-247. Reprinted in N. J. Block, O. Flanagan, and G. Guzeldere (eds.), The Nature of Consciousness: Philosophical Debates, Cambridge MA: MIT Press, 1997.
  • Brentano, F. 1874. Psychology from an Empirical Standpoint. Ed. O. Kraus. Ed. of English edition L. L. McAlister, 1973. Translation A. C. Rancurello, D. B. Terrell, and L. L. McAlister. London: Routledge and Kegan Paul.
  • Broadbent, D. E. 1958. Perception and Communication. London: Pergamon Press.
  • Brough, J. B. 1972. “The Emergence of an Absolute Consciousness in Husserl’s Early Writings on Time-Consciousness.” Man and World 5: 298-326.
  • Carruthers, P. 2000. Phenomenal Consciousness. Cambridge: Cambridge UP.
  • Carruthers, P. 2002. “The Evolution of Consciousness.” In P. Carruthers and A. Chamberlain (eds.), Evolution and the Human Mind, Cambridge: Cambridge UP.
  • Chalmers, D. J. 1997. “Availability: The Cognitive Basis of Consciousness?” Behavioral and Brain Sciences 20: 148-149.
  • Dennett, D. C. 1981. “Towards a Cognitive Theory of Consciousness.” In his Brainstorms, Brighton: Harvester.
  • Dixon, N. F. 1971. Subliminal Perception: The Nature of a Controversy. London: McGraw-Hill.
  • Flanagan, O. 1992. “Conscious Inessentialism and the Epiphenomenalist Suspicion.” In his Consciousness Reconsidered, Cambridge MA: MIT Press.
  • Frank, M. 1995. “Mental Familiarity and Epistemic Self-Ascription.” Common Knowledge 4: 30-50.
  • Gennaro, R. 2002. “Jean-Paul Sartre and the HOT Theory of Consciousness.” Canadian Journal of Philosophy 32: 293-330.
  • Gurwitsch, A. 1985. Marginal Consciousness. Athens, OH: Ohio UP.
  • Henrich, D. 1966. “Fichte’s Original Insight.” Translation D. R. Lachterman. Contemporary German Philosophy 1 (1982): 15-53.
  • Henry, M. 1963. The Essence of Manifestation. Translation G. Etzkorn. The Hague: Nijhoff, 1973.
  • Husserl, E. 1928. Phenomenology of Internal Time-Consciousness. Ed. M. Heidegger, trans. J. S. Churchill, Bloomington IN: Indiana UP, 1964.
  • James, W. 1890. The Principles of Psychology (2 vols.). London: Macmillan (second edition, 1918).
  • Kim, J. 1998. Mind in a Physical World. Cambridge MA: MIT Press.
  • Kriegel, U. 2002. “PANIC Theory and the Prospects for a Representationalist Theory of Phenomenal Consciousness.” Philosophical Psychology 15: 55-64.
  • Kriegel, U. 2003a. “Consciousness as Sensory Quality and as Implicit Self-Awareness.” Phenomenology and the Cognitive Sciences 2: 1-26.
  • Kriegel, U. 2003b. “Consciousness, Higher-Order Content, and the Individuation of Vehicles.” Synthese 134: 477-504.
  • Levine, J. 2001. Purple Haze: The Puzzle of Consciousness. Oxford and New York: Oxford UP.
  • Lewis, D. 1972. “Psychophysical and Theoretical Identifications.” Australasian Journal of Philosophy 50: 249-258.
  • Libet, B. 1985. “Unconscious Cerebral Initiative and the Role of Conscious Will in Voluntary Action.” Behavioral and Brain Sciences 8: 529-566.
  • Lycan, W. G. 1996. Consciousness and Experience. Cambridge, MA: MIT Press.
  • Mangan, B. 1993. “Taking Phenomenology Seriously: The ‘Fringe’ and its Implications for Cognitive Research.” Consciousness and Cognition 2: 89-108.
  • Mangan, B. 2001. “Sensation’s Ghost: The Non-Sensory ‘Fringe’ of Consciousness.” Psyche 7(18). http://psyche.cs.monash.edu.au/v7/psyche-7-18-mangan.html
  • Moray, N. 1969. Listening and Attention. Harmondsworth: Penguin Books.
  • Natsoulas, T. 1996b. “The Case for Intrinsic Theory: II. An Examination of a Conception of Consciousness4 as Intrinsic, Necessary, and Concomitant.” Journal of Mind and Behavior 17: 369-390.
  • Natsoulas, T. 1999. “The Case for Intrinsic Theory: IV. An Argument from How Conscious4 Mental-Occurrence Instances Seem.” Journal of Mind and Behavior 20: 257-276.
  • Nichols, S. and S. Stich 2003. “How to Read Your Own Mind: A Cognitive Theory of Self-Consciousness.” In Q. Smith and A. Jokic (eds.), Consciousness: New Philosophical Perspectives. Oxford and New York: Oxford UP.
  • Putnam, H. 1967. “The Nature of Mental States.” Originally published as “Psychological Predicates,” in W. H. Capitan and D. D. Merrill (eds.), Art, Mind, and Religion. Reprinted in D. M. Rosenthal (ed.), The Nature of Mind. Oxford: Oxford UP.
  • Rosenthal, D. M. 1986. “Two Concepts of Consciousness.” Philosophical Studies 49: 329-359.
  • Rosenthal, D. M. 1990. “A Theory of Consciousness.” ZiF Technical Report 40, Bielefeld, Germany. Reprinted in N. J. Block, O. Flanagan, and G. Guzeldere (eds.), The Nature of Consciousness: Philosophical Debates. Cambridge MA: MIT Press, 1997.
  • Rosenthal, D. M. 2002a. “Explaining Consciousness.” In D. J. Chalmers (ed.), Philosophy of Mind: Classical and Contemporary Readings. Oxford and New York: Oxford UP.
  • Rosenthal, D. M. 2002b. “The Timing of Consciousness.” Consciousness and Cognition 11: 215-220.
  • Sartre, J.-P. 1943. L’Être et le néant. Paris: Gallimard.
  • Silverman, L. H., A. Martin, R. Ungaro, and E. Mendelsohn 1978. “Effect of Subliminal Stimulation of Symbiotic Fantasies on Behavior Modification Treatment of Obesity.” Journal of Consulting and Clinical Psychology 46: 432-441.
  • Smith, D. W. 1986. “The Structure of (Self-)Consciousness.” Topoi 5: 149-156.
  • Smith, D. W. 1989. The Circle of Acquaintance. Dordrecht: Kluwer Academic Publishers.
  • Sokolowski, R. 1974. Husserlian Meditations. Evanston, IL: Northwestern UP.
  • Sturma, D. 1995. “Self-Consciousness and the Philosophy of Mind: A Kantian Reconsideration.” Proceedings of the Eighth International Kant Congress, Vol. 1, Milwaukee WI: Marquette UP.
  • Thomasson, A. L. 2000. “After Brentano: A One-Level Theory of Consciousness.” European Journal of Philosophy 8: 190-209.
  • Tye, M. 2000. Consciousness, Color, and Content. Cambridge MA: MIT Press.
  • Van Gulick, R. 1992. “Consciousness May Still Have a Processing Role to Play.” Behavioral and Brain Sciences 14: 699-700.
  • Velmans, M. 1992. “Is Human Information Processing Conscious?” Behavioral and Brain Sciences 14: 651-669.
  • Weiskrantz, L. 1986. Blindsight. Oxford: Oxford UP.
  • Wider, K. 1997. The Bodily Nature of Consciousness: Sartre and Contemporary Philosophy of Mind. Ithaca, NY: Cornell UP.
  • Zahavi, D. 1998a. “Brentano and Husserl on Self-Awareness.” Études Phénoménologiques 27-8: 127-169.
  • Zahavi, D. 1998b. “The Fracture in Self-Awareness.” In D. Zahavi (ed.), Self-Awareness, Temporality, and Alterity. Dordrecht: Kluwer Academic Publishers.
  • Zahavi, D. 1999. Self-awareness and Alterity. Evanston, IL: Northwestern UP.

 

[1] According to Kim (1998), this is how all scientific reduction proceeds. Thus, the reduction of water to H2O proceeded according to the same “plan”: in a first stage, water was “functionalized,” meaning that its causes and effects were studied; in a second stage, H2O was studied till it was known to have just those causes and effects singled out in the first stage; finally, water was identified with H2O on this basis.
[2] This seems to be Velmans’ (1992) view, for instance.
[3] For concrete argumentation in favor of the causal efficacy of consciousness, see Flanagan 1992 and Van Gulick 1992. According to Kim (1998), all phenomena must be causally efficacious, hence not epiphenomenal, because of what he calls “Alexander’s dictum”: to be is to be causally efficacious. If Alexander’s dictum is correct, nothing can be completely causally inert. If so, either consciousness is not epiphenomenal, or there is no such thing as consciousness.
[4] Functionalism is not the view that mental states and events have a functional role – that is almost beyond dispute. What functionalism claims is that there is nothing more to a mental state or event beyond its functional role.
[5] In other words, the discussion of this section paves the way for a certain argument against functionalism about consciousness, namely, the argument that functionalism necessarily fails to explain the functional role of consciousness.
[6] In this paper, however, I am less interested in the causes of consciousness and more in its effects. The notion of functional role relates equally to the causes and effects of whatever plays the role, but the ‘causes’ part is of lesser interest to me here.
[7] For very concrete effects of subliminal perception on anxiety, see Silverman et al. 1978. For more general discussion of subliminal perception and its functional significance, see Dixon 1971. Another well known form of unconscious perception which retains some of the causal powers of conscious perception is blindsight (see Weiskrantz 1986). Unless the function of consciousness is implausibly duplicated, such that another mechanism has exactly the function consciousness has, any function a blindsighted subject can execute in response to her blindsighted perceptions must thereby not be part of the function of consciousness proper.
[8] For close interpretations of Brentano along these lines, see Smith (1986, 1989), Zahavi (1998a, 1999), Thomasson (2000), and Kriegel (2003a, 2003b).
[9] He writes (Brentano 1874: 153-4): “[Every conscious act] includes within it a consciousness of itself. Therefore, every [conscious] act, no matter how simple, has a double object, a primary and a secondary object. The simplest act, for example the act of hearing, has as its primary object the sound, and for its secondary object, itself, the mental phenomenon in which the sound is heard.”
[10] This is not to say that there are no important differences between Husserl’s and Brentano’s views. For a comparison of their respective views, see Zahavi (1998a). For other discussions of Husserl’s view, see Brough (1972), Sokolowski (1974), Smith (1989), and Zahavi (1999).
[11] Again, each of these views is importantly dissimilar to Brentano’s original view and to each other. But they all share the same general outlook. For discussion of Sartre’s view, see Wider (1997), Zahavi (1999), and Gennaro (2002). For discussion of Henry’s view, see Zahavi (1998b, 1999). For discussion of Gurwitsch’s view, see Natsoulas (1999). For work by members of the so-called Heidelberg School, see Henrich (1966), Frank (1995), and Sturma (1995).
[12] See Smith 1986, Rosenthal 1986, 2002a, Lycan 1996, Carruthers 2000, and Levine 2001.
[13] Focal awareness of our conscious states characterizes the more reflective, or introspective, moments of our mental life. When a person introspects, she focuses on her conscious state. When she starts focusing on something else, her state either becomes unconscious, or she retains a peripheral awareness of it.
[14] I am assuming that focal awareness is always conscious (i.e., that states of focal awareness are conscious states). This is admittedly not an indubitable assumption, but a full defense of it would take us too far afield.
[15] In the sense in which I am using the term, peripheral self-awareness is not necessarily peripheral awareness of oneself. Rather, it is peripheral awareness of a mental state, event, or process going on within oneself. This does not mean that peripheral self-awareness cannot be awareness of the self. Self-awareness in the sense in which I am using the term may be either awareness of oneself or merely awareness of one of one’s mental states – or both. We need not commit to any particular view here, although there are good independent reasons to think that peripheral self-awareness does involve awareness of the self (see Rosenthal 1990 and Kriegel 2003b). In any event, it is clear that peripheral self-awareness, as construed in the phenomenological tradition, does include reference to the self.
[16] In the case of visual perception, the distinction between focal and peripheral awareness is what cognitive scientists refer to as the distinction between foveal vision and peripheral vision. Foveal vision is vision of stimuli presented to the fovea, a tiny central part of the retina subtending about two degrees of the visual field; peripheral vision is vision of stimuli outside that central part of the visual field.
[17] The same phenomenon was referred to by Husserl (1928) as non-thematic consciousness and by Sartre (1943) as non-positional consciousness.
[18] Indeed, peripheral self-awareness seems to be a constant element in the fringe of consciousness. This must be the case if peripheral self-awareness is indeed what consciousness proper is. Peripheral self-awareness is then necessarily an element in every conscious state, since it is what makes the state conscious.
[19] The functional analysis of peripheral awareness that I will develop in this section owes much to the work of Bruce Mangan (1993, 2001).
[20] At least this conception of attention has been widely accepted since Broadbent’s (1958) seminal work on attention. See also Moray 1969.
[21] It may happen that two adjacent stimuli form part of a single center of focus for the subject, but this situation is not a case in which the experience has two independent focal centers. To make sure that the example in the text brings the point across, we may stipulate that A, B, C, D, and E are so distant from each other that no two of them could form part of a larger, compound stimulus which would be the focal center of attention.
[22] There are other possible strategies that would misrepresent other features of normal experience. Consider the strategy that grants 60% of attention to C, 2% to A, 8% to B, 8% to D, and 22% to E. It violates the principle that all elements in the periphery are granted more or less equal attention, which is a feature of the 60/10 strategy. We need not – should not – require that the amount of attention granted to all peripheral elements be exactly identical, of course, but the variations seem to be rather small.
[23] Note, furthermore, that there are conditions under which peripheral awareness is actually extinguished. When a subject comes close to passing out, for instance, more and more of her peripheral visual field goes dark, starting at the very edge and drawing nearer the center. The moment before passing out, the subject remains aware only of foveated stimuli (i.e., stimuli presented in foveal vision), while her entire peripheral visual field lies in darkness. It appears that the system, being under duress, cannot afford to expend any resources whatsoever on peripheral awareness. The presence of peripheral awareness is the norm, then, but hardly a necessity.
[24] Although we might understand the notion of peripheral awareness in such a way that the 20/20 strategy entails that all (or at any rate most) awareness is peripheral. I think this would be a mistake, but let us not dwell on this issue. The possibility of the 100/0 strategy is sufficient to establish that there is no deep necessity in the existence of peripheral awareness.
[25] It does not matter for our purposes whether the 60/10 strategy is based in a mechanism that is cognitive in nature or biologically hardwired. It is probably a little bit of both, but in any event the mechanism – whether cognitive, biological, or mixed – has been selected for due to its adaptational value.
[26] There is a question as to what precisely one is aware of in peripheral self-awareness. Am I peripherally aware of my entire experience, including the peripheral elements in it, or only of the focal center of the experience? For instance, am I peripherally aware of my peripheral awareness of the books, or only of my focal awareness of the desktop? I will not broach this issue here, as it does not seem to bear on the issue of the functional role of peripheral self-awareness (at least not at the level at which I am interested in it).
[27] I am construing here the notion of a monitoring device in a relatively restrictive way, i.e., as describing a mechanism that gives the subject focused, rich information on its own processes and states. There is also a more relaxed usage, in which any mechanism that gives the subject some sort of information on its own states and processes is a monitoring mechanism. In this more relaxed sense, consciousness as portrayed in this paper does qualify as a monitoring mechanism.
[28] For a fuller list, see the discussion of Baars’ (1997) account of the functional role of consciousness at the beginning of §6. For more on the functional significance of a monitoring module, see Baron-Cohen 1995, Carruthers 2000, 2002, Nichols and Stich 2003.
[29] If we accept the common conception of evolution as a process of variation-and-retention, we may say that the fact that a feature is good to have does suggest that it will be retained, although it does not guarantee that it will appear through variation in the first place. The fact that peripheral awareness and self-awareness surely exist, however, suggests that the basic building blocks for peripheral self-awareness have been in place, so that the appearance of peripheral self-awareness through variation should be expected.
[30] At least this is normally or typically so. In some cases, M may be conscious when the subject is peripherally aware of a chain of focal awarenesses leading up to M.
[31] It might be objected that the sort of functional role attributed to consciousness in the present paper could in principle be performed by an unconscious mechanism, and this would defy the singularity requirement. This objection would be misguided, however. The singularity requirement is intended to rule out functions that conscious states have, but not in virtue of being conscious. It is not intended to rule out functions that unconscious states could in principle have but do not in fact have.
[32] This list is obtained by bringing together the titles of different sections in Chapter 8 of Baars 1997.
[33] Note that Tye stresses that this is the functional role of conscious experience precisely qua conscious experiences – suggesting that he has the singularity requirement in mind.
[34] About blindsighted perception, Tye writes: “It is worth noting that, given an appropriate elucidation of the ‘poised’ condition, blindsight poses no threat to the representationalist view… What is missing, on [my] theory, is the presence of appropriately poised, nonconceptual, representational states. There are nonconceptual states, no doubt representationally impoverished, that make a cognitive difference… But there is no complete, unified representation of the visual field, the content of which is poised to make direct difference in beliefs.” (Tye 2000: 62-3)
[35] Thus, a thought that it is raining can play a causal role in taking an umbrella, which is a motor output, but it can also play a causal role in producing the thought that it has been raining for the past week, which is not a motor output but a further internal state.

 

The Functional Role of Consciousness (A Phenomenological Approach), Uriah Kriegel, University of Arizona, Phenomenology and the Cognitive Sciences 4 (2004): 171-193.

Evolution of Consciousness

Even the simplest organisms, such as those consisting of but a single cell, interact with their environments. As metabolic systems in a balanced steady-state, all organisms must obtain nutrition from their surroundings. As they do not live in a vacuum, organisms are also in constant contact with the water or air around them, and they are also exposed to solar radiation and other electromagnetic and chemical influences. The long-term interaction between organisms and environmental stimuli resulted in the development of various sensory systems for detecting the diverse external stimuli on which the organisms rely for food or which they must avoid as dangerous. In both cases, a sensory apparatus had to be developed which, via the interneurons, automatically provided signals to the motoric cells for innate responses of flight or approach.

The Phylogenesis of Symbolic Information

It is necessary to recall these ancient interactions between organisms and their surroundings because they gave rise to the development of sensory systems appropriate for the physical stimuli. However, whereas environmental stimuli in the form of energy and food were ingested, the sensory apparatus evolved into organs which did not take in the stimulus itself, but rather received information about it. Only in plants do photoreceptors still serve as a source of energy. As the environment of multicellular organisms expanded, and the stimuli to which organisms had to react in order to survive became more varied, the processes of trial-and-error and natural selection led to the development of stimulus filters in the form of receptor systems which reacted only to combinations and sets of stimuli that were of importance to the organism. These combinations of and relations among stimuli were embodied in a sensory apparatus capable of selecting stimuli according to certain categories, determined by biological factors. During the development of sense qualities in the course of evolution, the formation of invariants played a key role, for recognition of food or predators under varying conditions of light and surroundings was essential for survival. Therefore, it was advantageous to have a sensory apparatus capable of identifying stimuli by means of a filter consisting of signals generated by the apparatus itself. This mechanism, in turn, was capable of evolution.

Very early in the course of evolution, we encounter the colorful world of flowers, colors, sounds, shapes, and scents which grew out of the interactions between insects and their environments. The question as to whether bees respond only to certain electromagnetic wavelengths, that is, whether they react to physical stimuli or actually to certain colors, was resolved by von Frisch, whose experiments showed that they really do respond to the same colors, even under changing conditions of light and wavelength.

To be sure, neither color nor light nor other sense qualities really exist in the environment: They are products of the sensory apparatus, which selects them by means of its filter. The sense qualities perceived by insects and other invertebrates are projected by the sensory filter onto the physical stimulus. Thus, the latter serves as a vehicle carrying symbolic information to the sensory system. The sensory filter serves both as the projector and the receiver of sense qualities. The sensory apparatus uses its own analyzers to process the stimulus signals in such a way that it responds only to certain colors or sound sequences.
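
The idea of a filter that generates qualities, rather than passing stimuli through, can be put schematically. In the toy sketch below (Python; the category names and wavelength boundaries are invented for illustration and carry no empirical claim), a physical magnitude is mapped onto a discrete quality that exists only as an output of the filter:

```python
# Toy "sensory filter": it maps a physical magnitude (wavelength in nm)
# onto a discrete sense quality. The quality is generated by the filter;
# it is not a property found in the stimulus itself.
def sensory_filter(wavelength_nm):
    if 300 <= wavelength_nm < 400:
        return "ultraviolet-quality"
    if 400 <= wavelength_nm < 500:
        return "blue-quality"
    if 500 <= wavelength_nm < 600:
        return "green-quality"
    return None  # outside the receptors' range: no quality is generated

# Two physically different stimuli may carry one and the same quality...
print(sensory_filter(430), sensory_filter(470))  # blue-quality blue-quality
# ...while a stimulus the filter cannot categorize carries none at all.
print(sensory_filter(700))                       # None
```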

With these filters and analyzers, the sensory systems “invented” an entirely new form of information: Instead of physical properties that cannot be transferred to sensory channels, a representation of them was selected and produced, namely, the filtered sense qualities. Such a representation is also referred to as a “symbol”; therefore, one may refer to sense qualities as elements or signs of symbolic information.

As implied by the aforementioned insect’s world of colors, sounds, and scents, the sensory filters not only filter sense qualities, but also project them onto the physical stimuli in the environment, which animals take up only through the “eyeglasses” of these sensory qualities. In other words, insects take up their surroundings in a form they develop themselves. The symbolic information requires a material carrier. When a sense quality is projected onto a physical stimulus, the stimulus also becomes a carrier of sense qualities, so that in this guise they may be picked up and processed by the senses. Otherwise, it is difficult to conceive of how the colors, flowers, and scents in an insect’s world might have originated.

The entire visual world is based on this type of projection: The eyes, instead of picking up electromagnetic waves which a physical object has absorbed and assimilated, receive only waves which are reflected or deflected without having penetrated the physical object. Therefore, it is not the object itself which meets the eye, but only a projection of the waves the object failed to absorb.

The sensory filter, too, functions in a way similar to that in which vision is affected by eyeglasses, through which the surroundings may be perceived as distorted or sharp, red or dark. The filter evolved by interaction with the environment and natural selection. Even though stimuli passing the sensory filter take on properties of the latter, the sense qualities still are not states of the organism whose sensory systems interact with the stimulus to produce them. At this level, the symbolic information contained in sense qualities is the product of two material systems or mechanisms, namely, the environmental stimulus and the sensory apparatus. The information achieves an existence separate from that of the filter only in that the filter projects it onto the physical stimulus, which then becomes a carrier of information to the sensory apparatus. The symbolic information exists solely in a material carrier, which thus becomes an indispensable component. If the series of material carriers in the recoding chain, to be described below, is interrupted, the information is lost.

This preconscious origin of symbolic information in the interaction of the sensory system with environmental stimuli – the symbolic elements or signs being the sense qualities – is also a critical factor in the development of consciousness and its “language”. The highly developed mammalian brain with its cognitive apparatus or organs can obtain the information about the external surroundings needed for central control of behavior only in the preexisting terms of the symbols of sense qualities. In other words, an organism does not have to reinvent symbolic information about the physical properties of environmental stimuli from scratch. “Consciousness” becomes an unsolvable conundrum if its origin is attributed only to the neural network without regard to these antecedent developments. The symbols of information, that is, the sense qualities, are not derived from the neural network, which communicates with nervous impulses and neuronal potentials and stores and encodes the information contained in patterns of neuronal excitation.

Neurons and neuronal patterns are not the information itself; rather, they merely convey information. Thus, symbolic information originates outside its carriers. The sources of information for the neuronal network are the sensory systems with their receptors. A neuronal network that is cut off from the sensory system is incapable of creating symbolic information in and of itself; even to obtain information about its own state of excitation, the nervous system requires a sensory apparatus. Without a sensory apparatus, the nervous system receives no symbolic information, either about events within itself or about outside stimuli. Indeed, an organism is unaware of processes which transpire subconsciously and automatically. Many neuroscientists ignore this fact and attribute their own expert knowledge to the nervous system itself. Notwithstanding, the nervous system is unsurpassed as a storage unit and processor of signals it obtains from the sensory apparatus and as a carrier of information.

In invertebrates, the sensory apparatus is directly connected to effectors by way of interneurons. The sense qualities of signals elicited by stimuli are analyzed, then signals are transmitted directly to the motoric cells, which react to the signals with genetically determined patterns of motility.

Even invertebrates are capable of reinforcing the connections among heavily used pathways of excitation, and thus, within narrow limits, of learning, despite their lack of cognition. However, aside from genetically programmed sensory filter and analysis cells, invertebrates lack the ability to store newly acquired information to be recalled for later use. The memory of invertebrates consists solely of the variable strength of interneuronal synaptic connections.
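
This use-dependent reinforcement admits of a minimal computational gloss. In the sketch below (Python; the pathway names, learning rate, and ceiling are invented for illustration), a connection is strengthened each time it carries excitation, so that the network’s “memory” is nothing over and above its pattern of connection strengths:

```python
# Toy model of use-dependent synaptic reinforcement: each time a pathway
# carries excitation, its connection strength grows (up to a ceiling).
weights = {("sensor_A", "motor_1"): 0.1,
           ("sensor_B", "motor_1"): 0.1}

def reinforce(pathway, rate=0.05, ceiling=1.0):
    """Strengthen a pathway each time it is used."""
    weights[pathway] = min(ceiling, weights[pathway] + rate)

for _ in range(10):                 # pathway A is heavily used...
    reinforce(("sensor_A", "motor_1"))
reinforce(("sensor_B", "motor_1"))  # ...pathway B only once

# The "memory" just is the resulting pattern of connection strengths.
print(weights)
```
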
The Development of Cortical Information Storage and the Neural Code

Organisms had to develop a cognitive apparatus in order to utilize information about the outer environment to adjust their activities, thus using learning processes to expand the less adaptable behavioral program established by the genes. A long period of development was necessary before organisms were able to store and analyze information in the cortical network and centralize their controls in the reticulo-thalamo-cortical system. Only the organisms equipped with such a system became capable of taking up symbolic information and storing it.

In the course of time and evolution, organisms developed a neural apparatus that enabled them not just to react to symbolic information, but to utilize the sense qualities as elements of an internal language. This internal language opened up unlimited possibilities for new symbols designating objects and events, as in human language.

This purpose was served by the neocortical network, among others, whose primary and secondary sensory areas represent the peripheral sensory receptor system in the cortex and continue its functions of analysis and filtering in a more refined way. For example, the visual system in the occipital and temporal brain lobes comprises six different fields, V1 to V6, in which light differences, colors, orientation and movement, as well as shape and contours of objects, are analyzed separately in specialized fields and neuronal assemblies. This analysis of incoming signals from the receptor fields of the sense organs is a continuation of the sensory system’s filtering function, by means of which the manifold sense qualities are selected before the act of seeing can take place. This subconscious analysis by the cortical sensory fields, unlike the organization of the invertebrate brain, is not directly connected to motoric functions or effectors. The neural representations or cortical sensory detectors are the neural carrier or code for the sense qualities, which must be decoded into the original symbolic information in order to be invested with semantic meaning.
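
The division of labor among specialized fields can be pictured schematically. In the sketch below (Python; the assignment of features to generically named fields is a simplification for illustration, not an anatomical claim), several analyzers operate separately and in parallel on one and the same incoming signal:

```python
# Schematic picture of parallel cortical analysis: separate specialized
# fields each extract one aspect of the same incoming signal.
def make_analyzer(feature):
    def analyze(signal):
        return f"{feature}: {signal.get(feature, 'not detected')}"
    return analyze

visual_fields = {
    "field_1": make_analyzer("light_difference"),
    "field_2": make_analyzer("color"),
    "field_3": make_analyzer("movement"),
    "field_4": make_analyzer("shape_and_contour"),
}

signal = {"light_difference": "high contrast", "color": "red",
          "movement": "leftward", "shape_and_contour": "round"}

# Each field analyzes the same signal separately; no analyzer is wired
# directly to an effector.
for field, analyze in visual_fields.items():
    print(field, "->", analyze(signal))
```
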
The Preattentive Phase

Preconscious, preattentive analysis precedes the first storage of information and conscious perception; it has a latency period of about 60 ms. The signals are transmitted to the sensory fields of the cortex by way of the lemniscate tract of the spinal cord, crossing two synapses. This process has been most precisely studied for the visual system.

During the preattentive orientation phase, the organism (more precisely, its central control system) and the stimulus excite primary arousal of the activation system itself and of the sensory fields. The body and its senses become aligned with the stimulus via the sensomotoric aminergic and cholinergic paths of the reticular brain stem, which probably releases the neurotransmitters noradrenaline, dopamine, serotonin, and acetylcholine into the extracellular cortical fields, raising the excitation level of certain areas in preparation for the uptake and processing of sensory signals. Furthermore, by way of branches of the sensory tracts to the reticular system, the stimulus induces a higher state of excitation in select groups of neurons. In the cerebral cortex, this leads to so-called expectation potentials, which increase gradually until the level of activation of the sensory areas becomes high enough to receive and process sensory signals. With a latency of 70 to 500 ms, this preattentive preactivation phase then proceeds, with the components N100 to P300 of the endogenous or exogenous event-related potentials, to a state of conscious attention. During the preattentive phase, the subconscious transformation of sensory cells into sensory detectors by the sensory signals sets in, and the sensory neuronal groups must be primed for this function. Only after such preparation can the sensory apparatus be aligned with the stimulus and turned to it centrifugally, so that perception may occur. Experts still disagree about the latency period that elapses between stimulation and conscious perception; in contrast to the 60 ms mentioned above, Libet found a latency of 500 ms. In any case, it is certain that more time elapses between stimulus and conscious perception than the signal needs to travel from the periphery to the cortex, even if it must cross two or three synapses. The brain needs this time in order to transform the signals into detectors and align them centrifugally with the stimulus.

During the preconscious sensory impression of the preattentive phase of perception, the sensory stimulus triggers the formation of detectors in the cortex. In other words, a neuron or group of neurons is attuned by signals of the sensory system to a certain sense quality, for which the cell or cell group may then function as a detector. Since this detector function is stored both by facilitation and in a pattern of excitation, it may be referred to as a code for and carrier of sense qualities.

Preattentive orientation proceeds subconsciously at the level of the nervous system. Not until sensory perception is attained can attention focus upon information as an object with which it can operate; only when this level is reached does preattention make the transition to the conscious attention of a cognitive system.
The Reticulo-Thalamo-Cortical System (= Activation System)

In the preattentive phase, the task of the sensory system, which includes the sensory fields of the cortex, is to analyze and filter stimuli and to align the filtered sense qualities with the stimulus. Preattentive orientation precedes conscious sensation; it is the focusing, concentration, or strengthening of the excitation or activation of a neuronal field with sensomotoric functions. This activation of attention proceeds from the activating system and the nonspecific excitation which turns sensomotoric fields on and off, and involves activated groups of neurons in its functional unit. The relationship between the activation system and attention is so close that together they are referred to as the attention system. Some of its manifold, reciprocal pathways of excitation extend from the brain stem across the limbic system to the prefrontal cortex; another path runs from the reticular system of the brain stem across the intralaminary or nonspecific thalamic nuclei to the upper layers and to layer VI of the cortical columns, which are joined by the lemniscate sensory tracts in layer IV (Newman/Baars 1993).

Since the activation system has been mentioned several times, a brief introduction to this neuroanatomic innovation in vertebrates is necessary. Not until 1949 did G. Moruzzi and H. W. Magoun discover in the brain stem a structure apparently devoid of specific sensory or motoric function, which is why it had been overlooked for so long. However, the role it plays is a crucial one. Gradually it became evident that this structure serves as a central activating system that both monitors and regulates the level of excitation of the entire organism. It is conjoined with the limbic system, and through it with the autonomic nervous system and the hypothalamus, to form a functional unit extending to the nonspecific and intralaminary thalamic nuclei and communicating via two tracts with cortical structures, especially the limbic prefrontal brain. The activating system contains its own nonspecific excitation tracts, by way of which it monitors and regulates not only itself, but also sensory and motoric functions. Because of its preeminence and the control function it exerts, it is a sort of metasystem within the central nervous system.

The attention system is served by neurons in the parietal, temporal, and frontal cortex as well as in the region of the supplementary motoric areas in field 6; the best-known example is the frontal visual field. In the immediate vicinity of these sensory fields with attention functions are the sensory hand-arm field and the like, all of which serve to align the body and sensory systems with the stimulus. There are several visual fields (prefrontal, supplementary, and parietal fields); the same is true of the other sensory systems. There are also several hand-arm fields in the immediate vicinity of the visual fields. This proximity suggests a coupling of eye-hand-arm control by the activation system. The premotoric cells of the hand-arm field (the anterior part of field 6) discharge during intentional hand movements, such as conscious grasping, and when the mouth is used for similar intentional movements. These neurons fired even when the ipsilateral arm or the mouth was used, indicating that they do not merely reflect muscular activity; as further evidence, when the same muscles were engaged in movements that were not intentional, the neurons remained silent. Stimulation of the arm-hand fields elicited coordinated, stereotypic movements of the contralateral arm. These fields of selective attention serve to align the body and senses toward the stimulus (G. M. Edelman et al. 1990). These and the observations described above support the notion that the activation system has a whole roster of secondary sensomotoric fields at its disposal for vision, hearing, etc., distributed all over the cortex, when exercising its function of sensomotoric attention and coordination. The process of sensory perception and awareness begins in such secondary fields, which are subordinated to the metasystem. By way of these cortical fields, which are connected to the superior colliculi and the reticular nuclei of the brain stem, the muscles of the sensory receptors are aligned toward the stimulus and adjusted so as to be able to follow a moving stimulus. This has been studied in detail for visual processes (Ch. J. Bruce 1990). The next question is how visual processes become seeing, and how the other senses elicit conscious awareness and perception.

The development of symbolic information was possible only in organisms in which drive and behavior are concentrated, to some degree, in a central reticulo-thalamo-cortical activating system, which makes the organism capable of activity.

The contention that the activating system truly participates in conscious sensory perception and recognition, memory, and imagination is supported by several uncontroversial findings:

  1. If the nonspecific impulses between the intralaminary thalamic nuclei and the cortical sensory fields are blocked, consciousness is lost; the same thing happens when the connection between the reticular system of the brain stem and the nonspecific thalamic nuclei is completely interrupted.
  2. If the collaterals, i.e., the branches, of the sensory tract to the reticular nuclei of the mammalian brain stem are interrupted, the animal ceases to react to stimuli, although signals still reach the intact cortex, where they can be detected (D. B. Lindsley 1957).
  3. If the reticular system of the midbrain is severed, the decerebrated animals lose the capability of attentive, conscious, centrally regulated behavior (S. Grillner 1990).
  4. The prerequisite for conscious behavior in humans is simultaneous activation of the cortical columns of the sensory fields, i.e., of the upper layers or of layer VI by the nonspecific excitation of the activating system, and of layer IV by specific sensory excitation. If either of these tracts is interrupted, conscious perception ceases (J. Newman and B. J. Baars 1993).

Therefore, conscious behavior evidently results from the synchronous interaction of two systems, namely, the reticulo-thalamo-cortical activating system (also referred to as the metasystem) and the specific sensomotoric system.
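
As an illustration only, under my own simplifying assumptions, the prerequisite just stated can be written as a coincidence condition: a cortical column contributes to conscious perception only when the specific input to layer IV and the nonspecific input to the upper layers or layer VI are active at the same time.

    # Illustrative sketch: boolean stand-ins for the two excitation tracts.
    def conscious_perception(specific_layer_iv: bool, nonspecific_layers: bool) -> bool:
        """Conscious perception as a coincidence of specific and nonspecific excitation."""
        return specific_layer_iv and nonspecific_layers

    assert conscious_perception(True, True)       # both tracts intact and synchronous
    assert not conscious_perception(True, False)  # activation system blocked: no consciousness
    assert not conscious_perception(False, True)  # specific sensory tract interrupted

The lesion findings listed above correspond to the cases in which one of the two inputs is missing.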

Most neurophysiologists concerned with explaining consciousness now recognize the role of the reticular activation system in conscious processes of attention, sensory perception, and memory. However, instead of explaining how the neural network and its processes elicit conscious behavior, Edelman, Crick, and many others offer masterly descriptions of the neural events that accompany conscious behavior. These descriptions remain within the confines of psychophysical parallelism, which lacks appropriate categories to which the role of the reticulo-thalamo-cortical activation system, for example, may be assigned within the more comprehensive system of the organism as a whole. Such descriptions and analyses remain at the level of the neuronal network and its processes, which run in parallel to conscious processes. In other words, it is not enough to verify, within psychophysical parallelism, the existence of synchronous interaction between the nonspecific activation system and the specific sensory system during conscious behavior. It is essential to demonstrate the active regulatory and monitoring functions exercised by the reticulo-thalamo-cortical activation system on the specific sensory apparatus, including the cortical sensory fields involved in conscious processes (e.g., feeling, perception, memory, etc.), in order to supersede the level of psychophysical parallelism, since these systemic properties overstep the limitations imposed by the properties of the neuronal network.

Without Interaction with the External Stimulus, the Neural Code Cannot Be Deciphered

Although the preattentive sensory impression that precedes conscious perception, and that serves in the formation of cortical sensory detectors and neuronal carriers of information by analyzing input signals in the various sensory fields, has frequently been studied, documented, and proven by neuropsychologists and neurophysiologists, the significance of this event has largely escaped attention. Nevertheless, the explanatory model for perception presented here stipulates preattentive analysis of stimuli before the activating system is able to align the sensory system with its appropriately attuned filters centrifugally toward the stimulus, from which it may decode the sense qualities. Many reputable researchers believe that the sensory fields of the cortex not only represent the indispensable analyzers of the stimulus signals, but go beyond that to actually generate sense qualities, for example, the categories of color in the visual system. In support of this notion, they refer to the observation that malfunction of the sensory fields causes the corresponding sense qualities to disappear. This observation, of course, is unquestioned, but the interpretation is subject to doubt; for although the cortical analyzer may be an indispensable prerequisite for sensory perception, it is not the only one. The sensory system, with its cortical sensory detectors attuned to the stimulus, still must be aligned with the physical stimulus in order to decode the sense qualities. Sensory qualities are generated and perceived by the system as a whole only when the physical stimulus meets the detector and information carrier attuned to it in a feedback excitation circuit.

In contrast, S. Zeki, among others, attributes to the sensory fields of the cortex the ability to generate various sense qualities such as light, color, tonality, and scent (“transforming the signals reaching it to generate constructs that are the property of the brain, not of the world outside, and thus in a sense labeling the unlabeled features of the world in its own code”). Naturally, this would be the simplest explanation; but it is refuted by the fact that people born blind or deaf cannot be made to see or hear by electrical stimulation of their intact sensory fields. In other words, it is not enough for stimulus signals to simply arrive at the sensory fields of the brain, be analyzed there, and be transformed into detectors of selected sense qualities by the cortical filters. In addition, the sensory detectors and neural carriers of information thus produced must be confronted with the stimulus, which must be present if the sensory system with its adjusted filters is to extract the sense qualities from the physical stimulus. This applies, of course, only to the elementary, nonspatial sense qualities.

When the sensory system and the reticular activation system report a stimulus and simultaneously activate the corresponding cortical sensory detector, the activation system aligns the cortical detector and its sensory system toward the stimulus. Corticofugal influences modulating the afferent impulses from the periphery have been reported in a number of publications (G. D. Dawson 1958; K. E. Hagbarth and D. J. B. Kerr 1954; G. E. Mangun and S. A. Hillyard 1990, pp. 271 ff.). This centrifugal control involves the following events: The sensory system permits the stimulus to appear only through its filter; that is, the sensory system understands only its own projection of the stimulus, namely, the sense qualities it generates itself. However, these are not arbitrary products of the brain, as some presume. The symbolic information, that is, the sense qualities, can be generated by the sensory system only if the physical stimulus is actually present to interact with it. Symbols invented by the brain would be self-contradictory, for they would represent no other physical reality. As already mentioned, electrical stimulation of cortical sensory cells fails to elicit perception of the respective sense qualities in persons born blind or deaf, even if their cortical sensory fields are intact. However, if the organism has already had such sensory experience, e.g., once seen colors or heard sounds, these experiences can be elicited again by electrical stimulation of the cortical storage, as experiments by Penfield, Libet, and others have shown. The initial sensory experience must therefore be gathered in the confrontation and interaction of the sensory system with stimuli from the outside world. This also applies to the so-called internal stimuli of the limbic system, which must first make a detour through interoceptive tracts of the peripheral or autonomic nervous system before they can be felt and perceived as sense qualities by the cortical detectors.

In addition to this evidence, several other observations also contradict the view that stimulus signals are transformed into sense qualities by the brain alone. Finnish researchers found the primary visual field of the cortex in blind people to be utilized by the sense of hearing. “In the deaf, the areas of the temporal lobe in which sounds are normally processed are used instead for processing visual information” (R. Ornstein, R. F. Thompson). In Paris, Michel Imbert and Chr. Matin of Pierre et Marie Curie University interrupted the neural tracts connecting the thalamus (lateral geniculate body) and the visual cortex in a newborn hamster, since in these mammals brain development is not yet complete at birth. The visual nerves were then attached to the somatosensory tracts, which had likewise been cut, so that visual signals were sent to the somatosensory fields of the parietal cortex. After the animal recovered, the researchers were able to derive visual signals from the parietal field; the visual behavior of the hamster did not differ from that of normal animals.

These experiments clearly indicate that light, color, sound, and other sense qualities cannot be generated solely by the sensory fields of the cortex. The properties of analysis and filtering in the cortical fields are developed by interaction with peripheral sensory receptors by way of connections between the receptor fields and the cortical representations. Actual deployment of the filter function of the sensory system is possible only with an external stimulus, and the filter can switch to a generator of sense qualities only by interacting with this complementary part.

The Mechanisms of Generating Information

The symbolic information is generated by the interaction of two material systems, namely, the physical stimulus and the sensory system. In the course of evolution, they have become assimilated and adapted to each other as two complementary systems: the physical properties of stimulation that enter the receptor system and the filters of the sensory systems are adjusted to each other. The sense qualities emerge as products of the interaction between the physical stimulus and the sensory system. When sense qualities are projected onto the physical stimulus, the latter becomes their carrier, for symbolic information needs a material carrier. The sensory system reads or scans the carrier in order to obtain the symbolic information generated within itself.
In mammals, the preconscious generation and transmission of information has been transmuted in that the sensory system is now part of an organism capable of self-regulating behavior. After preconscious adjustment to the stimulus, the central neural governor once again confronts the sensory system with the stimulus, but this time as an organ of attention under the control of the organism’s central regulatory system, i.e., the activating system.

The condition of the sense qualities in the carrier of the physical stimulus is also the only decoded condition of the sense qualities to which the brain, by way of the senses it controls, has direct access in sensation and perception. Without these sensory events, the brain fails to perceive any decoded sense qualities, and without perception of sense qualities there can be no psychological or mental world; that is, there is no differentiation between subject and object until sense qualities are perceived. The self-generated conditions of the sense qualities are hidden from the brain, or kept at an unconscious level, until they confront the sensory system in a physical information carrier as an external object, rendering them accessible. This is made possible, as it were, by a trick of evolution, which has unlimited inventiveness: The same sensory filters that permit the sensory system both to project sense qualities onto the physical stimulus and to utilize the stimulus as its carrier of information also read and perceive the self-generated sense qualities from it, because they fit it like lock and key.

The sensory receptors and the sensory filters are not the only ones having a lock-and-key mechanism consisting of their self-generated sense qualities projected onto the physical stimulus; the cortical sensory detectors, too, are attuned to the sense qualities projected onto the physical stimulus as a key to a lock. The cortical detectors and the sensory filters are complementary systems, and form a functional unit themselves. For the transmission of symbolic information from the outside into the brain, evolutionary processes have led to a chain of complementary systems, along which symbolic information is transmitted and recoded from one level to the next higher one, without ever losing the material carrier, even temporarily. Sensory receptors and cortical sensory detectors are examples of such complementary systems, across which the same symbolic information in the decoded state is transmitted from the physical carrier to its neural code in the cortex. Since the complementarity or tuning between the peripheral receptor and the cortical detector systems is determined during embryonic development and in the subsequent period of learning, the simplest neural frequency code of all is sufficient: on or off, excited or inhibited. If complementary systems are activated, they are tuned in to each other, related to each other, or self-referent.
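
The lock-and-key picture can be made concrete with a small sketch; every name and mapping below is my own illustrative assumption, nothing specified in the text. Because receptor and detector are tuned to the same sense quality in advance, a bare on/off signal suffices: which complementary pair fires carries the meaning.

    # All receptor/detector names and qualities here are invented for illustration.
    # Complementary tuning is fixed beforehand by development and learning.
    receptor_tuning = {"R1": "red", "R2": "green", "R3": "high_pitch"}
    detector_tuning = {"red": "D_red", "green": "D_green", "high_pitch": "D_pitch"}

    def transmit(active_receptor: str) -> str:
        """An 'on' pulse from a tuned receptor excites its complementary detector."""
        quality = receptor_tuning[active_receptor]  # the key...
        return detector_tuning[quality]             # ...fits exactly one lock

    print(transmit("R1"))  # D_red: the bare pulse carried 'red' by virtue of tuning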

In principle, sensation is decoded when the central neural metasystem utilizes the nonspecific activation to align the sensory detector and the sensory system to the stimulus. Upon meeting it, the detector “recognizes” the physical information carrier by means of the tuned-in sense qualities, because they fit together. The long-established lock-and-key mechanism lives on in a more advanced form in this process of recognition, which is reminiscent of recognition of a receptor by a ligand. The information is transmitted by its original carrier, the physical stimulus, to the neural carrier, the detector, by way of an activity circuit with manifold feedback between the peripheral sensory receptors and the cortical sensory detectors.

The stimulus instigates a periodic process. “An optical or acoustical stimulus leads to periodic discharges in the addressed nerve cells”, wrote E. Pöppel. These discharges occur at intervals of about 30 ms, as shown by electroencephalography. Their periodicity enables the cortical structures to analyze the incoming signals, while once again aligning the sensory organ (e.g., the eye) to the physical stimulus, all at the same time. The centripetal and centrifugal excitation of sensation forms the feedback loop, already referred to several times, between the peripheral and cortical systems, and establishes synchronous peripheral decoding and its cortical representations.
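
A toy timing sketch, again my own construction: the roughly 30 ms periodicity gives the feedback loop discrete cycles in which signal analysis (the centripetal half) and re-alignment of the sense organ (the centrifugal half) can proceed in step.

    # Illustrative only: the 30 ms period comes from the text; the loop is a toy.
    CYCLE_MS = 30  # approximate discharge period cited above

    def feedback_cycles(n_cycles: int) -> list[str]:
        """Log each cycle: analysis and re-alignment happen within one period."""
        log = []
        for i in range(n_cycles):
            t = i * CYCLE_MS
            log.append(f"{t:3d} ms: centripetal: analyze incoming signal")
            log.append(f"{t:3d} ms: centrifugal: re-align sense organ to stimulus")
        return log

    for entry in feedback_cycles(4):
        print(entry)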

There is a way to obtain scientific evidence that the neural processes under study actually do involve transmission and processing of sense qualities. It is based not on introspective experiences, but rather on verifiable data, in a sense, meta-data. To mention a few:

  1. Conscious processes of sensation require that both the system of activation and the specific sensory systems are simultaneously operative and interacting.
  2. During the preattentive phase preceding conscious sensation, the cortical sensory detector is formed by an unconscious sensory impression. Without a sensory detector, no perception or experience occurs.
  3. Attention structures in the parietal, prefrontal, and temporal associational cortexes align the sensory systems centrifugally to the sense qualities of the stimulus, which are attuned to the detector.
  4. Sense qualities are not immediately retrievable from the brain without previously having been read or scanned by the sensory organ from the physical stimulus. On the other hand, sensory perception without an intact cortical representation is impossible (cf. “Blind Vision”).
  5. Sensory perception occurs between the periphery and the cortex in a centripetal and centrifugal multiple feedback loop, in which specific and nonspecific impulses are simultaneously dovetailed at different levels.

These and other data give us some knowledge of events of sensation, attention, and other conscious processes. At the same time, they permit us to draw inferences about processes which we cannot observe directly, but which are prerequisites for observable processes. Data of this nature are provided by experimental cognitive psychology.
Evolution developed the solution to a problem that network theoreticians have been working on without success to date. However, the point of departure for evolution was not a mechanical network, but rather an organism with a central activation system. To understand what actually transpires with these feedback signals of the nervous system, one must recognize, behind the feedback excitation loops of sensation, the activity of an organism capable of self-regulating behavior. The origin of symbolic information in the interaction between physical stimulus and sensory system, as well as the developmental stages leading to perception of these sense qualities by the attention of a mammal, can be traced step by step (Hernegger 1995).

Decoding the Neural Code in Sensation

The neural network is a highly organized, complex system of nerve cells that can be broken down all the way to the level of its molecular components for study. The nerve cells have no “inner life”, either individually or as a group; they are capable neither of sensation nor of feeling. First, the activating system must align and prepare the sensory system and the cortical sensory detectors for the environmental stimulus before they can receive and process the sense qualities. Under the guidance and control of the activation system, the sensory apparatus, including the cortical sensory fields, is transformed into its organ of cognition. The transformation is initiated by the prior cortical analysis of signals from the peripheral receptor and the concomitant formation of a cortical sensory detector; the organ of recognition of the activation system can perceive external stimuli through its complementary filter only in the form of sense qualities, for the filter is now also the receptor of the sense qualities it generates itself.

But how does a perceived sense quality become an object of attention of the activation system?

Here, too, the importance and irreplaceability of the cortical sensory detectors is evident, if only because the preattentive sensory impression is represented in the neural code, which is later decoded by way of an excitatory feedback circuit with the perceived sense qualities. In this way, the neural carriers of information in the cortex are given their semantic meanings for the organism’s central controlling system, which can now direct its attention, that is, its nonspecific excitation, to the cortical sensory representations, or include and incorporate the excitation patterns of the decoded sense qualities into its own system. The activation system is actually capable of including neural structures in its functional unit and releasing them again. The inclusion of the sensory apparatus in such a functional unit transforms the sensory apparatus into an organ of perception of the activation system, the representation of the organism as a whole.

Before sensation occurs, the unconscious, preattentive sensory impression involves formation of a cortical representation or sensory detector of sense qualities in the neural code of the nervous system. This code must be decoded for the information to become an object of attention.

Once they have been tuned in to the stimulus, the sensory systems, regulated by the central system of attention, are aimed outward at the stimulus, in order to decode the neural representations or the neural code of the cortex by sensation or perception of sense qualities upon meeting the stimulus. Decoding means transforming one code into another one, or into a “language” which the recipient can “understand”.

The recipient capable of “understanding” the language of sense qualities is not the isolated nervous system, in whose code the information is already stored, but rather the whole organism. Initially, although the sensory systems were directed toward the external environment, the organism was unable to sense, perceive, or recognize anything, for lack of corresponding internal conditions; it was only capable of picking up symbolic information from outside of the central nervous system. For this purpose, it became necessary to transform the sensory system and the sensory cortex into an organ of recognition.

Decoding occurs via the feedback excitation circuit between the sensory receptor and the cortical detector. While the stimulus signals are sent inward to the brain, the brain directs the eye or ear (the sensory receptors) to the outside. By way of the reticular excitation pathways, however, the limbic-autonomic and the peripheral nervous systems, i.e., the entire organism, are involved in this process of sensation, perception, and recognition, especially since somatosensory perception is involved in every other sensation. In sensory perception, feedback occurs between the organism and the nervous system by way of these complicated loops, and not only within the neural network, as contended by Edelman and most neuroscientists who are trying to find an explanation for consciousness. For this reason, the conditions with which the organism responds to sensory perception involve not only the nervous system, but the organism in its entirety. The two spheres are integrated by the feedback loops. Thus the organism is the receiver, for which the neural code must be decoded.

Sensation is reported to the corresponding cortical sensory fields via two separate pathways. The sensory signals reach the brain by way of a tract from the spinal cord. In the brain stem, collaterals branch off to various reticular nuclei of the activation system. The specific sensory tracts proceed further across specific relaying nuclei in the thalamus to the sensory fields of the cortex, but the nonspecific excitation in the reticular system of the brain stem divides into several paths. One such path leads to the part of the forebrain known as the limbic cortex, and another runs parallel to it through the nonspecific intralaminary thalamic nuclei to the same columns of the cortical sensory fields as the specific tracts, but to the upper layers (usually I and II) or to layer VI of the columns, whereas the specific tract terminates on cells in layer IV of the same column. Feedback loops between the periphery and the cortex and between specific and nonspecific excitations synchronize these events.

The feedback excitation circuit of sensation or sensory perception is repeated as long and as often as necessary until a firm linkage between the peripheral picking-up of sense qualities and their cortical representations has developed. It is now known that short-term memory enters a long-term linkage by way of the hippocampal system. However, this association must be continually renewed, either by the same sensory experience or by dreaming (the REM phase of sleep). Complete sensory deprivation causes the brain to create hallucinations, during which, as in dreams, stored patterns are endogenously activated in the absence of a corresponding external stimulus.
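
As a sketch only, with made-up constants, this repetition-and-decay dynamic might be modeled as follows: each pass of the circuit strengthens the linkage toward saturation, while absence of renewal (no repeated experience, no REM sleep) lets it fade.

    # Invented constants; only the qualitative shape (growth vs. decay) matters.
    def feedback_pass(strength: float, gain: float = 0.3) -> float:
        """One pass of the excitation circuit strengthens the linkage (saturating)."""
        return strength + gain * (1.0 - strength)

    def decay(strength: float, rate: float = 0.1) -> float:
        """Without renewal by experience or dreaming, the linkage weakens."""
        return strength * (1.0 - rate)

    linkage = 0.0
    while linkage < 0.9:              # 'as long and as often as necessary'
        linkage = feedback_pass(linkage)
    print(f"firm linkage after repetition: {linkage:.2f}")

    for _ in range(10):               # no renewed experience, no REM sleep
        linkage = decay(linkage)
    print(f"after decay without renewal: {linkage:.2f}")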

The nonspecific neural patterns of long-term memory, which are complementary to the specific patterns, store the attention conditions of the activation system with which the organism perceived the decoding of the sense qualities. These conditions must be renewed again and again by practice and linked to the neural code.

With every new experience there is a tendency to dissociate the sense qualities from the environmental stimulus, making them an autonomous, operant “coin” for the central controlling system. Parallel to this dissociation from the external stimulus, a linkage develops between the decoded sense qualities and their neural code or representations. Every sensation is a transfer of symbolic information from the outside, or from the periphery, to neural representations by way of a pattern of connections, which finally form cortical excitation patterns.

Transformation of the Code of Symbolic Information

Before organisms equipped with sensory systems appeared, the lock-and-key mechanism was the code enabling information to be passed on. In the genes, in the immune system, and in transmission across synapses, this lock-and-key mechanism between ligand and receptor molecule is still to be found.
With the advent of sensory systems in organisms, a completely new kind of information coding appeared, namely, symbolic information as defined at the outset. The transition from an information filter to self-generated, detached information in the form of sense qualities was a fairly complicated process, especially since sense qualities cannot exist without a material carrier. First, for the neural network, the symbolic information contained in the sense qualities was translated into the neural code of nerve impulses and stored as a pattern of excitation of neuron groups. Then the central activating or attention system of the organism had to retranslate the neural code into sensory perception and associate the sense qualities decoded in this way with their cortical representations or carriers.

In the transformation of sense qualities to an object of an activating or attention system, somatosensory perception plays a critical part; it either precedes all sensation and perception, or transpires parallel to it. The body of the organism itself is represented severalfold in the parietal cortex (in areas 1, 2, 3, 5, and 7), and receives stimulus signals from the entire body surface, as well as from joints and muscles, by way of somatosensory senses; these exteroceptive somatic senses are supplemented by the interoceptive senses from the peripheral and autonomic nervous systems. This somatic sense, which is coupled by feedback with the motoric and activation systems, is crucial to the development of consciousness, for the self-reference of the periphery and the cortical equivalents by way of feedback between somatomotoric and somatosensory systems is the framework of all other sensations and perceptions. In other words, once this storing of experience of the body itself begins in the fashion described, it is continually renewed and elaborated. These somatosensory qualities derived from one’s own body become the first “language elements” of the brain. They are simultaneously a state of the body and an object of attention, i.e., the somatosensory qualities are experiences of bodily conditions. The states of the body itself were able to become the object of attention only by being perceived in the way we know as symbolic information about the physical properties of stimuli impinging on the body. These somatosensory sensations are unique, because they can take place even without involvement of other sensations; the condition of one’s own body can be perceived only as symbolic information. In other words, only symbolic information contained in somatosensory qualities can be an object of attention and perceived; somatosensory qualities represent physical and energetic events within the body. In this fashion, an infinite series or infinite regression of conditions is prevented. The initial sensory perception cannot draw upon another condition, sensation, or feeling; it is actually the initiation of a process from which and in which conscious perception originates and happens. The organism perceives its own condition by way of symbolic information of somatosensory qualities as an object of its own attention.

Each sensation and perception can happen only by way of symbolic information of sense qualities, for there is no other way to become an object of attention or sensory cognition. It is naive and unreflective to attribute to the nervous system the ability to directly experience its processes and conditions. Only symbolic information can become an object of attention at which the sensory or cognitive systems are aimed. The only properties of physical events or objects which can be perceived are those which can be transformed into sense qualities. Consciousness and cognition have their wellsprings in this object formation.

Somatosensory perception proceeds along reciprocal pathways of the nonspecific mediodorsal thalamic nucleus to the somatic fields of the parietal cortex, among others. The somatosensory perceptions are connected in a special way, directly and inseparably, with the excitation of the activating system. Self-referring somatosensory decoding is the prerequisite for any subjective experience and the states it entails, for in this case the roles of sense qualities as objects and as states coincide in the decoded sense quality; with somatosensory perception, the organism also has an object of its attention, but the object is a condition of its own body. For this reason, in this context we speak of self-reference. The dual nature of decoded sense qualities as an object and as a state of the attention system may be explained by assuming that the activating system regards the decoded sense qualities as an object of attention, and incorporates them into its own system by way of nonspecific excitation; alternatively, the activation system may extend to include the cortical structures serving as sensory representations. The basis for this contention is the already mentioned fact that sensory qualities do not reach a conscious level until the excitation of the specific sensory systems and the nonspecific activation system unite to produce a state of common, synchronous excitation.

The perception of sense qualities happens via the previously described excitation loops, in various patterns of excitation, in the sensory fields and in the prefrontal, parietal, and temporal cortexes, as well as in the subcortical, reticular, and limbic-autonomic components of the activating system. The organism, which articulates itself in these patterns of excitation, is both carrier and object of the perception; its activating system is the organ by means of which the cortical structures of attention are steered toward the decoding process or toward reactivating stored representations.

The organism, which distributes its nonspecific excitation to various cortical regulatory structures, is therefore what senses, perceives, and feels. If the excitation of the activation system is turned off, the organism ceases to perceive anything. In this way, the organism, or its activating system, is in a state influenced by the process of sensation; this state is not consciously perceived as such, for only its products and the object it is attuned to, i.e., the perceived sense qualities, reach the level of consciousness. However, those sense qualities include somatosensory and interoceptive perceptions, including states of the body and of the autonomic nervous system. The reference to this state of the organism, which is the foundation of conscious perception, is important for understanding the reactivation of memory; for it has been postulated that the program for the reawakening of consciousness is coded in the nonspecific stores. The same condition enables the organism to perceive the decoded sense qualities as the object of its attention.

Before consciousness came into being, there were neither sensations nor feelings, perceptions of sense qualities, nor imagination. Nor was the brain able to generate these psychic events all by itself, so its only option was to take up information from the outside or from the environment and convert it to self-generated sense qualities. The road to conscious perception and cognition led from the filter of the sensory systems through the neural code of the brain to its decoding, based on the interaction of several complementary systems. The nonspatial sense qualities themselves are the elements out of which spatial forms, movements, and orientation of the body are constructed. The information symbol of the nonspatial properties bears no resemblance to the information carrier or the code, which is often a carrier of information as well. However, the brain’s code for space and time properties retains a spatio-temporal similarity, a quasi-isomorphism with the spatial stimulus properties. Several nerve structures in the peripheral receptor, in the thalamus, and in the sensory fields of the cortex serve to analyze it. And these spatial secondary sense qualities are the elements for objects, classes of objects, and entire categories.

With this inexhaustible reservoir of symbolic information, the human brain was now able to creatively construct new mental worlds. The combinatorial possibilities of the elements of symbolic information, i.e., the sense qualities, are just as inexhaustible as the sounds of human speech. As a matter of fact, sense qualities and human language share the same line of development.

Let me recapitulate the critical stages in development toward consciousness:

  1. The origin of the development was the sensory system with filters for sense qualities, the elements of symbolic information.
  2. The sensory system changed with the development of the cortical network and the central driving or activation system, and became a centrally regulated organ.
  3. Every new perception is preceded by a preattentive sensory impression for unconscious analysis of the stimulus signals, resulting in formation of a sensory detector before perception. In the second, conscious phase of sensory perception, the sensory system can therefore be aimed outward and selectively, its filters already tuned in, toward the environmental stimulus. The filters match the sense qualities as a key matches its lock or a template its matrix. The sense qualities gathered in this way are the decoding of the neural code in the cortex. The peripheral process is connected to the sensory target neurons in the cortex by way of a feedback excitation circuit, forming a unit. The long-term connection between the neural code and its decoded sense qualities is established by learning.
  4. The symbolic information, or sense qualities, thus become an object of central attention. This object formation is the origin of cognition and consciousness.
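
Strung together, the four stages just listed suggest the following end-to-end toy sketch; all function names, data shapes, and the success criterion are my own illustrative assumptions, not the author’s formalism.

    # Toy pipeline; stage functions and data shapes are invented for illustration.
    from typing import Optional

    def stage1_filter(stimulus: dict) -> str:
        """Stage 1: the sensory filter extracts a sense quality from the stimulus."""
        return stimulus["quality"]

    def stage2_activate(quality: str) -> dict:
        """Stage 2: the central activation system primes a cortical detector."""
        return {"tuned_to": quality, "primed": True}

    def stage3_decode(detector: dict, stimulus: dict) -> Optional[str]:
        """Stage 3: decoding succeeds only when the tuned detector meets the
        stimulus again (lock and key); no stimulus, no decoding."""
        if detector["primed"] and stimulus.get("quality") == detector["tuned_to"]:
            return detector["tuned_to"]
        return None

    def stage4_attend(decoded: Optional[str]) -> str:
        """Stage 4: the decoded sense quality becomes an object of attention."""
        return f"attending to '{decoded}'" if decoded else "nothing reaches awareness"

    stimulus = {"quality": "red"}
    detector = stage2_activate(stage1_filter(stimulus))
    print(stage4_attend(stage3_decode(detector, stimulus)))           # attending to 'red'
    print(stage4_attend(stage3_decode(detector, {"quality": None})))  # stimulus absent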

The mere description of the neurophysiological substrate of sensation and perception, however comprehensive and detailed, can do no more than relate the observable events that accompany the process of conscious perception. The widely held notion of psychophysical parallelism is content to describe the correlation or parallelism between physical (i.e., neurophysiological) and psychic (i.e., conscious, phenomenal) events, without offering an explanation of how conscious behavior came into being from these neurobiological prerequisites. The neobehaviorists tend to consider the description of the physical, neurobiological events sufficient to explain them. In order to understand what goes on in neurophysiological processes, it was necessary to regard them in a more comprehensive framework of relationships and interactions, in which the central nervous system was not treated as if it were an autonomous entity, separate and isolated from the organism.

We have replaced psychophysical parallelism, which for a century has amassed an incalculably rich collection of observations and data, with a different model that attempts to explain the interaction of various components not reducible to each other, i.e., symbolic information and the nervous system. In our model, the observations of psychophysical parallelism have a new importance and another interpretation; the temporal correlations of inseparable events are now regarded as interactions and interdependencies of systems that generate new products and new systemic properties. The process of sensory perception can be described separately from the standpoints of sensory physiology and perception psychology, and both descriptions are correct. Nevertheless, the same sensory perception can also be described, as it is here, as an information process in a dynamic cybernetic system. All three descriptions are justified, but they answer different questions.

The description presented here does not merely draw upon results of neurophysiological and psychological research; it also integrates them by studying system levels within the organism and how they relate to one another. E. Pöppel formulated this systemic approach as a question: “How do individual system levels in biological systems come into being? How does something higher develop from a lower level?”

Conscious behavior has many facets, and can be defined in quite various ways. On the one hand, it is not an independent being hovering outside the body and transcending the nervous system. On the other hand, in contradiction to the so-called identity theory, it cannot be identical with the nervous system, for the first thing to become conscious is symbolic information about the external world, impinging from the outside and not generated by the nervous system alone.

The process of conscious behavior thus always involves two irreducible elements: a) the recognizing organism, and b) the recognized information, in which, in turn, information about the physical properties of the external stimulus must be differentiated from the self-generated symbol (i.e., the sense quality), by means of which the information is received by the sensory system. The symbolic information therefore goes beyond the neural process and is not reducible to it. The sensory apparatus and the sensomotoric cortex develop increasingly into organs of transmission, analysis, processing, and storage of this symbolic information, which it translates from one code into another during transmission from the peripheral sensory receptor to the cortical network, where finally the cortical representations are decoded into the original language. The symbolic information is what remains; it must not be confused or identified with the nervous system that transmits, processes and encodes it.

The sense qualities have not ceased to fascinate modern thinkers since John Locke (1632–1704). Immanuel Kant (1724–1804) regarded them as subjective forms in which we see things, and which rather tend to interfere with seeing “the things themselves”. In that era, the notion of information was hardly important, but Shannon’s concept of information turned out to be unsuitable in all attempts to apply it to consciousness. It was another train of thought in modern times, embodied by E. Cassirer’s “philosophy of symbolic forms”, Karl Bühler’s “theory of speech”, or Susanne K. Langer’s “symbol in thought, rites, and art”, to name but a few, that paved the way for the notion of symbolic information. This notion probably had little or no influence on Shannon and Weaver as they developed their theory of information. Regarding sense qualities as elements of symbolic information about the physical properties of environmental stimuli opens entirely new perspectives and possible explanations for consciousness research. In this sense, consciousness research is part of the basic science of language theory, linking the origin of human language to phylogenetic development. Conversely, consciousness research profits from the methods and categories of language research, as long as the common fallacy of coupling consciousness with the origin of human speech is avoided, i.e., confusing cause and effect. It is not inconceivable that Shannon’s concept of information and the development of mathematical formalism in theory of information that followed may also be applicable to symbolic information, permitting it to be quantified. Notwithstanding, such quantifying of information should not be confused with a mathematical model explaining consciousness; we are still far away from that.

 

References:

  1. Bruce, C. J.: Integration of sensory and motor signals in primate frontal eye fields. In: G. M. Edelman et al. (eds.) 1990, pp. 261–313.
  2. Buser, P. A., E. Rougel-Buser (eds.): Cerebral Correlates of Conscious Experience. North Holland Publ., Amsterdam 1978.
  3. Dawson, G. D.: The central control of sensory inflow. Proc. Roy. Soc. Med., London 51 (5), 531–535 (1958).
  4. Edelman, G. M., W. Einar Gall, W. M. Cowan (eds.): Signal and Sense. Local and Global Order in Perceptual Maps. Wiley, New York 1990.
  5. Grillner, S.: Neurobiology of vertebrate motor behavior. From flexion reflexes and locomotion to manipulative movements. In: G. M. Edelman et al. (eds.) 1990, pp. 187–208.
  6. Hagbarth, K. E., D. J. B. Kerr: Central influences on spinal afferent conduction. J. Neurophysiol. 17 (3), 295–297 (1954).
  7. Hassler, R.: Interaction of reticular activating system for vigilance and the corticothalamic and pallidal systems for directing awareness and attention under striatal control. In: Buser et al. (eds.) 1978.
  8. Hernegger, R.: Wahrnehmung und Bewußtsein. Ein Diskussionsbeitrag zu den Neurowissenschaften. Spektrum Akademischer Verlag, Berlin–Heidelberg–Oxford 1995.
  9. Hobson, J. A., M. Steriade: Neuronal basis of behavioral state control. In: Mountcastle, V. B., F. E. Bloom (eds.): Handbook of Physiology. The Nervous System, Vol. IV, pp. 701–825. American Physiological Society, Bethesda 1986.
  10. LeDoux, J. E.: Emotional networks in the brain. In: Lewis, M., J. M. Haviland (eds.): Handbook of Emotions. Guilford Press, New York 1993.
  11. Lindsley, D. B.: Psychophysiology and motivation. In: Jones, M. R. (ed.): Nebraska Symposium on Motivation, Vol. 5. University of Nebraska Press, Lincoln 1957.
  12. Mangun, G. E., S. A. Hillyard, in: Scheibel, A. B., A. F. Wechsler (eds.): Neurobiology of Higher Cognitive Function. Guilford Press, New York 1990.
  13. Meric, C., L. Collet: Attention and otoacoustic emissions. Neuroscience and Behavioral Reviews 18 (2), 215–222 (1994).
  14. Newman, J., B. J. Baars: A neural attentional model for access to consciousness: a global workspace perspective. Conceptions in Neuroscience 4 (2), 255–290 (1993).
  15. Ornstein, R., R. F. Thompson: The Amazing Brain. Boston 1984.
  16. Pöppel, E., A. L. Edinghaus: Geheimnisvoller Kosmos Gehirn. München 1994.
  17. Scheibel, A. B.: The brain stem reticular core and sensory function. In: Handbook of Physiology. The Nervous System, Vol. III,1. American Physiological Society, Bethesda 1984.
  18. Scheibel, A. B., A. F. Wechsler (eds.): Neurobiology of Higher Cognitive Function. Guilford Press, New York 1990.
  19. Zeki, S.: Functional specialization in the visual cortex: the generalisation of separate constructs and their multistage integration. In: Edelman, G. M., et al. 1990, pp. 85–130.

R. Hernegger, Change of Paradigms in Consciousness Research: On the Evolution of Consciousness

Consciousness

Explaining the nature of consciousness is one of the most important and perplexing areas of philosophy, but the concept is notoriously ambiguous. The abstract noun “consciousness” is not frequently used by itself in the contemporary literature, but is originally derived from the Latin con (with) and scire (to know). Perhaps the most commonly used contemporary notion of a conscious mental state is captured by Thomas Nagel’s famous “what it is like” sense (Nagel 1974). When I am in a conscious mental state, there is something it is like for me to be in that state from the subjective or first-person point of view. But how are we to understand this? For instance, how is the conscious mental state related to the body? Can consciousness be explained in terms of brain activity? What makes a mental state be a conscious mental state? The problem of consciousness is arguably the most central issue in current philosophy of mind and is also importantly related to major traditional topics in metaphysics, such as the possibility of immortality and the belief in free will. This article focuses on Western theories and conceptions of consciousness, especially as found in contemporary analytic philosophy of mind.

The two broad, traditional and competing theories of mind are dualism and materialism (or physicalism). While there are many versions of each, the former generally holds that the conscious mind or a conscious mental state is non-physical in some sense, whereas the latter holds that, to put it crudely, the mind is the brain, or is caused by neural activity. It is against this general backdrop that many answers to the above questions are formulated and developed. There are also many familiar objections to both materialism and dualism. For example, it is often said that materialism cannot truly explain just how or why some brain states are conscious, and that there is an important “explanatory gap” between mind and matter. On the other hand, dualism faces the problem of explaining how a non-physical substance or mental state can causally interact with the physical body.

Some philosophers attempt to explain consciousness directly in neurophysiological or physical terms, while others offer cognitive theories of consciousness whereby conscious mental states are reduced to some kind of representational relation between mental states and the world. There are a number of such representational theories of consciousness currently on the market, including higher-order theories which hold that what makes a mental state conscious is that the subject is aware of it in some sense. The relationship between consciousness and science is also central in much current theorizing on this topic: How does the brain “bind together” various sensory inputs to produce a unified subjective experience? What are the neural correlates of consciousness? What can be learned from abnormal psychology which might help us to understand normal consciousness? To what extent are animal minds different from human minds? Could an appropriately programmed machine be conscious?

1. Terminological Matters: Various Concepts of Consciousness

The concept of consciousness is notoriously ambiguous. It is important first to make several distinctions and to define related terms. The abstract noun “consciousness” is not often used in the contemporary literature, though it should be noted that it is originally derived from the Latin con (with) and scire (to know). Thus, “consciousness” has etymological ties to one’s ability to know and perceive, and should not be confused with conscience, which has the much more specific moral connotation of knowing when one has done or is doing something wrong. Through consciousness, one can have knowledge of the external world or one’s own mental states. The primary contemporary interest lies more in the use of the expressions “x is conscious” or “x is conscious of y.” Under the former category, perhaps most important is the distinction between state and creature consciousness (Rosenthal 1993a). We sometimes speak of an individual mental state, such as a pain or perception, as conscious. On the other hand, we also often speak of organisms or creatures as conscious, such as when we say “human beings are conscious” or “dogs are conscious.” Creature consciousness is also simply meant to refer to the fact that an organism is awake, as opposed to sleeping or in a coma. However, some kind of state consciousness is often implied by creature consciousness, that is, the organism is having conscious mental states. Due to the lack of a direct object in the expression “x is conscious,” this is usually referred to as intransitive consciousness, in contrast to transitive consciousness where the locution “x is conscious of y” is used (Rosenthal 1993a, 1997). Most contemporary theories of consciousness are aimed at explaining state consciousness; that is, explaining what makes a mental state a conscious mental state.

It might seem that “conscious” is synonymous with, say, “awareness” or “experience” or “attention.” However, it is crucial to recognize that this is not generally accepted today. For example, though perhaps somewhat atypical, one might hold that there are even unconscious experiences, depending of course on how the term “experience” is defined (Carruthers 2000). More common is the belief that we can be aware of external objects in some unconscious sense, for example, during cases of subliminal perception. The expression “conscious awareness” does not therefore seem to be redundant. Finally, it is not clear that consciousness ought to be restricted to attention. It seems plausible to suppose that one is conscious (in some sense) of objects in one’s peripheral visual field even though one is only attending to some narrow (focal) set of objects within that visual field.

Perhaps the most fundamental and commonly used notion of “conscious” is captured by Thomas Nagel’s famous “what it is like” sense (Nagel 1974). When I am in a conscious mental state, there is “something it is like” for me to be in that state from the subjective or first-person point of view. When I am, for example, smelling a rose or having a conscious visual experience, there is something it “seems” or “feels” like from my perspective. An organism, such as a bat, is conscious if it is able to experience the outer world through its (echo-locatory) senses. There is also something it is like to be a conscious creature whereas there is nothing it is like to be, for example, a table or tree. This is primarily the sense of “conscious state” that will be used throughout this entry. There are still, though, a cluster of expressions and terms related to Nagel’s sense, and some authors simply stipulate the way that they use such terms. For example, philosophers sometimes refer to conscious states as phenomenal or qualitative states. More technically, philosophers often view such states as having qualitative properties called “qualia” (singular, quale). There is significant disagreement over the nature, and even the existence, of qualia, but they are perhaps most frequently understood as the felt properties or qualities of conscious states.

Ned Block (1995) makes an often cited distinction between phenomenal consciousness (or “phenomenality”) and access consciousness. The former is very much in line with the Nagelian notion described above. However, Block also defines the quite different notion of access consciousness in terms of a mental state’s relationship with other mental states; for example, a mental state’s “availability for use in reasoning and rationally guiding speech and action” (Block 1995: 227). This would, for example, count a visual perception as (access) conscious not because it has the “what it’s likeness” of phenomenal states, but rather because it carries visual information which is generally available for use by the organism, regardless of whether or not it has any qualitative properties. Access consciousness is therefore more of a functional notion; that is, concerned with what such states do. Although this concept of consciousness is certainly very important in cognitive science and philosophy of mind generally, not everyone agrees that access consciousness deserves to be called “consciousness” in any important sense. Block himself argues that neither sense of consciousness implies the other, while others urge that there is a more intimate connection between the two.

Finally, it is helpful to distinguish between consciousness and self-consciousness, which plausibly involves some kind of awareness or consciousness of one’s own mental states (instead of something out in the world). Self-consciousness arguably comes in degrees of sophistication ranging from minimal bodily self-awareness to the ability to reason and reflect on one’s own mental states, such as one’s beliefs and desires. Some important historical figures have even held that consciousness entails some form of self-consciousness (Kant 1781/1965, Sartre 1956), a view shared by some contemporary philosophers (Gennaro 1996a, Kriegel 2004).

 

2. Some History on the Topic

Interest in the nature of conscious experience has no doubt been around for as long as there have been reflective humans. It would be impossible here to survey the entire history, but a few highlights are in order. In the history of Western philosophy, which is the focus of this entry, important writings on human nature and the soul and mind go back to ancient philosophers, such as Plato. More sophisticated work on the nature of consciousness and perception can be found in the work of Plato’s most famous student Aristotle (see Caston 2002), and then throughout the later Medieval period. It is, however, with the work of René Descartes (1596-1650) and his successors in the early modern period of philosophy that consciousness and the relationship between the mind and body took center stage. As we shall see, Descartes argued that the mind is a non-physical substance distinct from the body. He also did not believe in the existence of unconscious mental states, a view certainly not widely held today. Descartes defined “thinking” very broadly to include virtually every kind of mental state and urged that consciousness is essential to thought. Our mental states are, according to Descartes, infallibly transparent to introspection. John Locke (1689/1975) held a similar position regarding the connection between mentality and consciousness, but was far less committed on the exact metaphysical nature of the mind.

Perhaps the most important philosopher of the period explicitly to endorse the existence of unconscious mental states was G.W. Leibniz (1686/1991, 1720/1925). Although Leibniz also believed in the immaterial nature of mental substances (which he called “monads”), he recognized the existence of what he called “petites perceptions,” which are basically unconscious perceptions. He also importantly distinguished between perception and apperception, roughly the difference between outer-directed consciousness and self-consciousness (see Gennaro 1999 for some discussion). The most important detailed theory of mind in the early modern period was developed by Immanuel Kant. His main work, the Critique of Pure Reason (1781/1965), is as dense as it is important, and cannot easily be summarized in this context. Although he owes a great debt to his immediate predecessors, Kant is arguably the most important philosopher since Plato and Aristotle and is highly relevant today. Kant basically thought that an adequate account of phenomenal consciousness involved far more than any of his predecessors had considered. There are important mental structures which are “presupposed” in conscious experience, and Kant presented an elaborate theory as to what those structures are, which, in turn, had other important implications. Like Leibniz, he also saw the need to postulate the existence of unconscious mental states and mechanisms in order to provide an adequate theory of mind (Kitcher 1990 and Brook 1994 are two excellent books on Kant’s theory of mind).

Over the past one hundred years or so, however, research on consciousness has taken off in many important directions. In psychology, despite the virtual banishment of consciousness by behaviorist psychologists (e.g., Skinner 1953), there were those deeply interested in consciousness and in various introspective (or “first-person”) methods of investigating the mind. The writings of such figures as Wilhelm Wundt (1897), William James (1890) and Edward Titchener (1901) are good examples of this approach. Franz Brentano (1874/1973) also had a profound effect on some contemporary theories of consciousness. Similar introspectionist approaches were used by those in the so-called “phenomenological” tradition in philosophy, such as in the writings of Edmund Husserl (1913/1931, 1929/1960) and Martin Heidegger (1927/1962). The work of Sigmund Freud was very important, at minimum, in bringing about the near universal acceptance of the existence of unconscious mental states and processes.

It must, however, be kept in mind that none of the above had very much scientific knowledge about the detailed workings of the brain. The relatively recent development of neurophysiology is, in part, also responsible for the unprecedented interdisciplinary research interest in consciousness, particularly since the 1980s. There are now several important journals devoted entirely to the study of consciousness: Consciousness and Cognition, Journal of Consciousness Studies, and Psyche. There are also major annual conferences sponsored by worldwide professional organizations, such as the Association for the Scientific Study of Consciousness, and an entire book series called “Advances in Consciousness Research” published by John Benjamins. (For a small sample of introductory texts and important anthologies, see Kim 1996, Gennaro 1996b, Block et al. 1997, Seager 1999, Chalmers 2002, Baars et al. 2003, Blackmore 2004, Campbell 2005.)

3. The Metaphysics of Consciousness: Materialism vs. Dualism

Metaphysics is the branch of philosophy concerned with the ultimate nature of reality. There are two broad traditional and competing metaphysical views concerning the nature of the mind and conscious mental states: dualism and materialism. While there are many versions of each, the former generally holds that the conscious mind or a conscious mental state is non-physical in some sense. On the other hand, materialists hold that the mind is the brain, or, more accurately, that conscious mental activity is identical with neural activity. It is important to recognize that by non-physical, dualists do not merely mean “not visible to the naked eye.” Many physical things fit this description, such as the atoms which make up the air in a typical room. For something to be non-physical, it must literally be outside the realm of physics; that is, not in space at all and undetectable in principle by the instruments of physics. It is equally important to recognize that the category “physical” is broader than the category “material.” Materialists are called such because there is the tendency to view the brain, a material thing, as the most likely physical candidate to identify with the mind. However, something might be physical but not material in this sense, such as an electromagnetic or energy field. One might therefore instead be a “physicalist” in some broader sense and still not a dualist. Thus, to say that the mind is non-physical is to say something much stronger than that it is non-material. Dualists, then, tend to believe that conscious mental states or minds are radically different from anything in the physical world at all.

a. Dualism: General Support and Related Issues

There are a number of reasons why some version of dualism has been held throughout the centuries. For one thing, especially from the introspective or first-person perspective, our conscious mental states just do not seem like physical things or processes. That is, when we reflect on our conscious perceptions, pains, and desires, they do not seem to be physical in any sense. Consciousness seems to be a unique aspect of the world not to be understood in any physical way. Although materialists will urge that this completely ignores the more scientific third-person perspective on the nature of consciousness and mind, this idea continues to have force for many today. Indeed, it is arguably the crucial underlying intuition behind historically significant “conceivability arguments” against materialism and for dualism. Such arguments typically reason from the premise that one can conceive of one’s conscious states existing without one’s body or, conversely, that one can imagine one’s own physical duplicate without consciousness at all (see section 3b.iv). The metaphysical conclusion ultimately drawn is that consciousness cannot be identical with anything physical, partly because there is no essential conceptual connection between the mental and the physical. Arguments such as these go back to Descartes and continue to be used today in various ways (Kripke 1972, Chalmers 1996), but it is highly controversial whether they succeed in showing that materialism is false. Materialists have replied in various ways to such arguments and the relevant literature has grown dramatically in recent years.

Historically, there is also the clear link between dualism and a belief in immortality, and hence a more theistic perspective than one tends to find among materialists. Indeed, belief in dualism is often explicitly theologically motivated. If the conscious mind is not physical, it seems more plausible to believe in the possibility of life after bodily death. On the other hand, if conscious mental activity is identical with brain activity, then it would seem that when all brain activity ceases, so does all conscious experience, and thus there is no immortality. After all, what do many people believe continues after bodily death? Presumably, one’s own conscious thoughts, memories, experiences, beliefs, and so on. There is perhaps a similar historical connection to a belief in free will, which is of course a major topic in its own right. For our purposes, it suffices to say that, on some definitions of what it is to act freely, such ability seems almost “supernatural” in the sense that one’s conscious decisions can alter the otherwise deterministic sequence of events in nature. To put it another way: If we are entirely physical beings, as the materialist holds, then mustn’t all of the brain activity and behavior in question be determined by the laws of nature? Although materialism may not logically rule out immortality or free will, materialists will often reply that such traditional, perhaps even outdated or pre-scientific beliefs simply ought to be rejected to the extent that they conflict with materialism. After all, if the weight of the evidence points toward materialism and away from dualism, then so much the worse for those related views.

One might wonder “even if the mind is physical, what about the soul?” Maybe it’s the soul, not the mind, which is non-physical, as one might be told in many religious traditions. While it is true that the term “soul” (or “spirit”) is often used instead of “mind” in such religious contexts, the problem is that it is unclear just how the soul is supposed to differ from the mind. The terms are often even used interchangeably in many historical texts and by many philosophers because it is unclear what else the soul could be other than “the mental substance.” It is difficult to describe the soul in any way that doesn’t make it sound like what we mean by the mind. After all, that’s what many believe goes on after bodily death; namely, conscious mental activity. Granted, the term “soul” carries a more theological connotation, but it doesn’t follow that the words “soul” and “mind” refer to entirely different things. Somewhat related to the issue of immortality, near-death experiences are also cited as evidence for dualism and immortality. Such patients report a peaceful movement toward a light through a tunnel-like structure, or report being able to see doctors working on their bodies while hovering over them in an emergency room (sometimes akin to what is called an “out-of-body experience”). In response, materialists will point out that such experiences can be artificially induced in various experimental situations, and that starving the brain of oxygen is known to cause hallucinations.

Various paranormal and psychic phenomena, such as clairvoyance, faith healing, and mind-reading, are sometimes also cited as evidence for dualism. However, materialists (and even many dualists) will likely first be skeptical of the alleged phenomena themselves, for numerous reasons: there are many modern-day charlatans who should make us seriously question whether there really are such phenomena or mental abilities in the first place. Second, it is not quite clear just how dualism follows from such phenomena even if they are genuine. A materialist, or physicalist at least, might insist that though such phenomena are puzzling and perhaps currently difficult to explain in physical terms, they are nonetheless ultimately physical in nature; for example, having to do with very unusual transfers of energy in the physical world. The dualist advantage is perhaps not as obvious as one might think, and we need not jump to supernatural conclusions so quickly.

i. Substance Dualism and Objections

Interactionist Dualism or simply “interactionism” is the most common form of “substance dualism” and its name derives from the widely accepted fact that mental states and bodily states causally interact with each other. For example, my desire to drink something cold causes my body to move to the refrigerator and get something to drink and, conversely, kicking me in the shin will cause me to feel a pain and get angry. Due to Descartes’ influence, it is also sometimes referred to as “Cartesian dualism.” Knowing nothing about just where such causal interaction could take place, Descartes speculated that it was through the pineal gland, a now almost humorous conjecture. But a modern day interactionist would certainly wish to treat various areas of the brain as the location of such interactions.

Three serious objections are briefly worth noting here. The first is simply the question of just how such radically different substances could causally interact: how does something non-physical causally interact with something physical, such as the brain? No such explanation is forthcoming, or is perhaps even possible, according to materialists. Moreover, if causation involves a transfer of energy from cause to effect, then how is that possible if the mind is really non-physical? Gilbert Ryle (1949) mockingly calls the Cartesian view about the nature of mind a belief in the “ghost in the machine.” Secondly, assuming that some such energy transfer makes any sense at all, it is also often alleged that interactionism is inconsistent with the scientifically well-established Conservation of Energy principle, which says that the total amount of energy in the universe, or any controlled part of it, remains constant. Any loss of energy in the cause must be passed along as a corresponding gain of energy in the effect, as in standard billiard ball examples. But if interactionism is true, then when mental events cause physical events, energy would literally come into the physical world; conversely, when bodily events cause mental events, energy would literally go out of the physical world. At the least, a very peculiar and unique notion of energy is involved, unless one wished, even more radically, to deny the conservation principle itself. Third, some materialists also appeal to the well-known fact that brain damage (even to very specific areas of the brain) causes mental deficits as a serious objection to interactionism (and thus as support for materialism). This has of course been known for many centuries, but the level of detailed knowledge has increased dramatically in recent years. A dualist might reply that such phenomena do not absolutely refute her metaphysical position, since it could be said that damage to the brain simply causes corresponding damage to the mind. However, this raises a host of further questions: Why not opt for the simpler explanation, namely that brain damage causes mental damage because mental processes simply are brain processes? If the non-physical mind is damaged when brain damage occurs, what becomes of one’s mind on the dualist’s conception of an afterlife? Will the severe amnesic at the end of life on Earth retain such a deficit in the afterlife? And if proper mental functioning still depends on proper brain functioning, is dualism really any better placed to offer hope for immortality?

It should be noted that there is also another less popular form of substance dualism called parallelism, which denies the causal interaction between the non-physical mental and physical bodily realms. It seems fair to say that it encounters even more serious objections than interactionism.

ii. Other Forms of Dualism

While a detailed survey of all varieties of dualism is beyond the scope of this entry, it is at least important to note here that the main and most popular form of dualism today is called property dualism. Substance dualism has largely fallen out of favor, at least in most philosophical circles, though there are important exceptions (e.g., Swinburne 1986, Foster 1996), and it often continues to be tied to various theological positions. Property dualism, on the other hand, is a more modest version of dualism, holding that there are mental properties (that is, characteristics or aspects of things) that are neither identical with nor reducible to physical properties. There are actually several different kinds of property dualism, but what they have in common is the idea that conscious properties, such as the color qualia involved in a conscious visual perception, cannot be explained in purely physical terms and, thus, are not themselves to be identified with any brain state or process.

Two other views worth mentioning are epiphenomenalism and panpsychism. The latter is the somewhat eccentric view that all things in physical reality, even down to micro-particles, have some mental properties. All substances have a mental aspect, though it is not always clear exactly how to characterize or test such a claim. Epiphenomenalism holds that mental events are caused by brain events but those mental events are mere “epiphenomena” which do not, in turn, cause anything physical at all, despite appearances to the contrary (for a recent defense, see Robinson 2004).

Finally, although not a form of dualism, idealism holds that there are only immaterial mental substances, a view more common in the Eastern tradition. The most prominent Western proponent of idealism was the 18th-century empiricist George Berkeley. The idealist agrees with the substance dualist that minds are non-physical, but denies the existence of mind-independent physical substances altogether. Such a view faces a number of serious objections, and it also requires a belief in the existence of God.

b. Materialism: General Support

Some form of materialism is probably much more widely held today than in centuries past. No doubt part of the reason for this has to do with the explosion in scientific knowledge about the workings of the brain and its intimate connection with consciousness, including the close connection between brain damage and various states of consciousness. Brain death is now the main criterion for when someone dies. Stimulation of specific areas of the brain results in modality-specific conscious experiences. Indeed, materialism often seems to be a working assumption in neurophysiology. Imagine saying to a neuroscientist “you are not really studying the conscious mind itself” when she is examining the workings of the brain during an fMRI. The idea is that science is showing us that conscious mental states, such as visual perceptions, are simply identical with certain neuro-chemical brain processes, much as the science of chemistry taught us that water just is H2O.

There are also theoretical factors on the side of materialism, such as adherence to the so-called “principle of simplicity,” which says that if two theories can equally explain a given phenomenon, then we should accept the one which posits fewer objects or forces. In this case, even if dualism could equally explain consciousness (which would of course be disputed by materialists), materialism is clearly the simpler theory insofar as it does not posit any objects or processes over and above physical ones. Materialists will wonder why there is a need to believe in the existence of such mysterious non-physical entities. Moreover, in the aftermath of the Darwinian revolution, it would seem that materialism is on even stronger ground provided that one accepts basic evolutionary theory and the notion that most animals are conscious. Given the similarities between the more primitive parts of the human brain and the brains of other animals, it seems most natural to conclude that, through evolution, increasing layers of brain areas correspond to increased mental abilities. For example, having a well-developed prefrontal cortex allows humans to reason and plan in ways not available to dogs and cats. It also seems fairly uncontroversial to hold that we should be materialists about the minds of animals. If so, then it would be odd indeed to hold that non-physical conscious states suddenly appear on the scene with humans.

There are still, however, a number of much discussed and important objections to materialism, most of which question the notion that materialism can adequately explain conscious experience.

i. Objection 1: The Explanatory Gap and The Hard Problem

Joseph Levine (1983) coined the expression “the explanatory gap” to express a difficulty for any materialistic attempt to explain consciousness. Although he is not concerned to reject the metaphysics of materialism, Levine gives eloquent expression to the idea that there is a key gap in our ability to explain the connection between phenomenal properties and brain properties (see also Levine 1993, 2001). The basic problem is that it is, at least at present, very difficult for us to understand the relationship between brain properties and phenomenal properties in any explanatorily satisfying way, especially given the fact that it seems possible for one to be present without the other. There is an odd kind of arbitrariness involved: Why or how does some particular brain process produce that particular taste or visual sensation? It is difficult to see any real explanatory connection between specific conscious states and brain states in a way that explains just how or why the former are identical with the latter. There is therefore an explanatory gap between the physical and mental. Levine argues that this difficulty in explaining consciousness is unique; that is, we do not have similar worries about other scientific identities, such as that “water is H2O” or that “heat is mean molecular kinetic energy.” There is “an important sense in which we can’t really understand how [materialism] could be true” (2001: 68).

David Chalmers (1995) has articulated a similar worry by using the catchy phrase “the hard problem of consciousness,” which basically refers to the difficulty of explaining just how physical processes in the brain give rise to subjective conscious experiences. The “really hard problem is the problem of experience…How can we explain why there is something it is like to entertain a mental image, or to experience an emotion?” (1995: 201) Others have made similar points, as Chalmers acknowledges, but reference to the phrase “the hard problem” has now become commonplace in the literature. Unlike Levine, however, Chalmers is much more inclined to draw anti-materialist metaphysical conclusions from these and other considerations. Chalmers usefully distinguishes the hard problem of consciousness from what he calls the (relatively) “easy problems” of consciousness, such as the ability to discriminate and categorize stimuli, the ability of a cognitive system to access its own internal states, and the difference between wakefulness and sleep. The easy problems generally have more to do with the functions of consciousness, but Chalmers urges that solving them does not touch the hard problem of phenomenal consciousness. Most philosophers, according to Chalmers, are really only addressing the easy problems, perhaps merely with something like Block’s “access consciousness” in mind. Their theories ignore phenomenal consciousness.

There are many responses by materialists to the above charges, but it is worth emphasizing that Levine, at least, does not reject the metaphysics of materialism. Instead, he sees the “explanatory gap [as] primarily an epistemological problem” (2001: 10). That is, it is primarily a problem having to do with knowledge or understanding. This concession is still important at least to the extent that one is concerned with the larger related metaphysical issues discussed in section 3a, such as the possibility of immortality.

Perhaps most important for the materialist, however, is recognition of the fact that different concepts can pick out the same property or object in the world (Loar 1990, 1997). Out in the world there is only the one “stuff,” which we can conceptualize either as “water” or as “H2O.” The traditional distinction, made most notably by Gottlob Frege in the late 19th century, between “meaning” (or “sense”) and “reference” is also relevant here. Two or more concepts, which can have different meanings, can refer to the same property or object, much like “Venus” and “The Morning Star.” Materialists, then, explain that it is essential to distinguish between mental properties and our concepts of those properties. By analogy, there are so-called “phenomenal concepts,” which use a phenomenal or “first-person” property to refer to some conscious mental state, such as a sensation of red. In contrast, we can also use various concepts couched in physical or neurophysiological terms to refer to that same mental state from the third-person point of view. There is thus but one conscious mental state which can be conceptualized in two different ways: either by employing first-person experiential phenomenal concepts or by employing third-person neurophysiological concepts. It may then just be a “brute fact” about the world that there are such identities, and the appearance of arbitrariness between brain properties and mental properties is just that: an apparent problem leading many to wonder about the alleged explanatory gap. Qualia would then still be identical to physical properties. Moreover, this response provides a diagnosis for why there even seems to be such a gap; namely, that we use very different concepts to pick out the same property. Science will be able, in principle, to close the gap and solve the hard problem of consciousness in a way analogous to how we came to understand why “water is H2O” or “heat is mean molecular kinetic energy,” an understanding that was lacking centuries ago. Maybe the hard problem isn’t so hard after all; it will just take some more time. After all, the science of chemistry didn’t develop overnight, and we are relatively early in the history of neurophysiology and our understanding of phenomenal consciousness. (See Shear 1997 for many more specific responses to the hard problem, but also for Chalmers’ counter-replies.)

ii. Objection 2: The Knowledge Argument

There is a pair of very widely discussed, and arguably related, objections to materialism which come from the seminal writings of Thomas Nagel (1974) and Frank Jackson (1982, 1986). These arguments, especially Jackson’s, have come to be known as examples of the “knowledge argument” against materialism, due to their clear emphasis on the epistemological (that is, knowledge-related) limitations of materialism. Like Levine, Nagel does not reject the metaphysics of materialism. Jackson had originally intended for his argument to yield a dualistic conclusion, but he no longer holds that view. The general pattern of each argument is to assume that all the physical facts are known about some conscious mind or conscious experience. Yet, the argument goes, not all is known about the mind or experience. It is then inferred that the missing knowledge is non-physical in some sense, which is surely an anti-materialist conclusion.

Nagel imagines a future where we know everything physical there is to know about some other conscious creature’s mind, such as a bat. However, it seems clear that we would still not know something crucial; namely, “what it is like to be a bat.” It will not do to imagine what it is like for us to be a bat. We would still not know what it is like to be a bat from the bat’s subjective or first-person point of view. The idea, then, is that if we accept the hypothesis that we know all of the physical facts about bat minds, and yet some knowledge about bat minds is left out, then materialism is inherently flawed when it comes to explaining consciousness. Even in an ideal future in which everything physical is known by us, something would still be left out. Jackson’s somewhat similar, but no less influential, argument begins by asking us to imagine a future where a person, Mary, is kept in a black and white room from birth during which time she becomes a brilliant neuroscientist and an expert on color perception. Mary never sees red, for example, but she learns all of the physical facts and everything neurophysiological about human color vision. Eventually she is released from the room and sees red for the first time. Jackson argues that it is clear that Mary comes to learn something new; namely, to use Nagel’s famous phrase, what it is like to experience red. This is a new piece of knowledge and hence she must have come to know some non-physical fact (since, by hypothesis, she already knew all of the physical facts). Thus, not all knowledge about the conscious mind is physical knowledge.
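
Laid out schematically (a common textbook reconstruction rather than Jackson’s exact formulation), the knowledge argument runs:

1. While in the room, Mary knows all of the physical facts about human color vision.
2. Upon leaving the room and seeing red for the first time, Mary learns something new.
3. Therefore, what Mary learns is not a physical fact.
4. Therefore, not all facts are physical facts, and materialism is incomplete or false.

As discussed below, materialist replies typically target the step from 2 to 3.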

The influence and the quantity of work that these ideas have generated cannot be exaggerated. Numerous materialist responses to Nagel’s argument have been presented (such as Van Gulick 1985), and there is now a very useful anthology devoted entirely to Jackson’s knowledge argument (Ludlow et al. 2004). Some materialists have wondered if we should concede up front that Mary would be able to imagine the color red even before leaving the room, so that maybe she wouldn’t even be surprised upon seeing red for the first time. Various suspicions about the nature and effectiveness of such thought experiments also usually accompany this response. More commonly, however, materialists reply by arguing that Mary does not learn a new fact when seeing red for the first time, but rather learns the same fact in a different way. Recalling the distinction made in section 3b.i between concepts and objects or properties, the materialist will urge that there is only the one physical fact about color vision, but there are two ways to come to know it: either by employing neurophysiological concepts or by actually undergoing the relevant experience and so employing phenomenal concepts. We might say that Mary, upon leaving the black and white room, becomes acquainted with the same neural property as before, but only now from the first-person point of view. The property itself isn’t new; only the perspective, or what philosophers sometimes call the “mode of presentation,” is different. In short, coming to learn or know something new does not entail learning some new fact about the world. Analogies are again given in other less controversial areas; for example, one can come to know about some historical fact or event either by reading a (reliable) third-person historical account or by having observed the event oneself. But there is still only the one objective fact, under two different descriptions. Finally, it is crucial to remember that, according to most, the metaphysics of materialism remains unaffected. Drawing a metaphysical conclusion from such purely epistemological premises is always a questionable practice. Nagel’s argument doesn’t show that bat mental states are not identical with bat brain states. Indeed, a materialist might even expect the conclusion that Nagel draws; after all, given that our brains are so different from bat brains, it almost seems natural for there to be certain aspects of bat experience that we could never fully comprehend. Only the bat actually undergoes the relevant brain processes. Similarly, Jackson’s argument doesn’t show that Mary’s color experience is distinct from her brain processes.

Despite the plethora of materialist responses, vigorous debate continues as there are those who still think that something profound must always be missing from any materialist attempt to explain consciousness; namely, that understanding subjective phenomenal consciousness is an inherently first-person activity which cannot be captured by any objective third-person scientific means, no matter how much scientific knowledge is accumulated. Some knowledge about consciousness is essentially limited to first-person knowledge. Such a sense, no doubt, continues to fuel the related anti-materialist intuitions raised in the previous section. Perhaps consciousness is simply a fundamental or irreducible part of nature in some sense (Chalmers 1996). (For more see Van Gulick 1993.)

iii. Objection 3: Mysterianism

Finally, some go so far as to argue that we are simply not capable of solving the problem of consciousness (McGinn 1989, 1991, 1995). In short, “mysterians” believe that the hard problem can never be solved because of human cognitive limitations; the explanatory gap can never be filled. Once again, however, McGinn does not reject the metaphysics of materialism, but rather argues that we are “cognitively closed” with respect to this problem, much as a rat or dog is cognitively incapable of solving, or even understanding, calculus problems. More specifically, McGinn claims that we are cognitively closed as to how the brain produces conscious awareness. McGinn concedes that some brain property produces conscious experience, but we cannot understand how this is so or even know what that brain property is. Our concept-forming mechanisms simply will not allow us to grasp the physical and causal basis of consciousness. We are not conceptually suited to be able to do so.

McGinn does not entirely rest his argument on past failed attempts at explaining consciousness in materialist terms; instead, he presents another argument for his admittedly pessimistic conclusion. McGinn observes that we do not have a mental faculty that can access both consciousness and the brain. We access consciousness through introspection or the first-person perspective, but our access to the brain is through the use of outer spatial senses (e.g., vision) or a more third-person perspective. Thus we have no way to access both the brain and consciousness together, and therefore any explanatory link between them is forever beyond our reach.

Materialist responses are numerous. First, one might wonder why we can’t combine the two perspectives within certain experimental contexts. Both first-person and third-person scientific data about the brain and consciousness can be acquired and used to solve the hard problem. Even if a single person cannot grasp consciousness from both perspectives at the same time, why can’t a plausible physicalist theory emerge from such a combined approach? Presumably, McGinn would say that we are not capable of putting such a theory together in any appropriate way. Second, despite McGinn’s protests to the contrary, many will view the problem of explaining consciousness as a merely temporary limit of our theorizing, and not something which is unsolvable in principle (Dennett 1991). Third, it may be that McGinn expects too much; namely, grasping some causal link between the brain and consciousness. After all, if conscious mental states are simply identical to brain states, then there may simply be a “brute fact” that really does not need any further explaining. Indeed, this is sometimes also said in response to the explanatory gap and the hard problem, as we saw earlier. It may even be that some form of dualism is presupposed in McGinn’s argument, to the extent that brain states are said to “cause” or “give rise to” consciousness, instead of using the language of identity. Fourth, McGinn’s analogy to lower animals and mathematics is not quite accurate. Rats, for example, have no concept whatsoever of calculus. It is not as if they can grasp it to some extent but just haven’t figured out the answer to some particular problem within mathematics. Rats are just completely oblivious to calculus problems. On the other hand, we humans obviously do have some grasp on consciousness and on the workings of the brain—just see the references at the end of this entry! It is not clear, then, why we should accept the extremely pessimistic and universally negative conclusion that we can never discover the answer to the problem of consciousness, or, more specifically, why we could never understand the link between consciousness and the brain.

iv. Objection 4: Zombies

Unlike many of the above objections to materialism, the appeal to the possibility of zombies is often taken both as a problem for materialism and as a more positive argument for some form of dualism, such as property dualism. The philosophical notion of a “zombie” basically refers to conceivable creatures which are physically indistinguishable from us but lack consciousness entirely (Chalmers 1996). It certainly seems logically possible for there to be such creatures: “the conceivability of zombies seems…obvious to me…While this possibility is probably empirically impossible, it certainly seems that a coherent situation is described; I can discern no contradiction in the description” (Chalmers 1996: 96). Philosophers often contrast what is logically possible (in the sense of “that which is not self-contradictory”) with what is empirically possible given the actual laws of nature. Thus, it is logically possible for me to jump fifty feet in the air, but not empirically possible. Philosophers also often use the notion of “possible worlds,” i.e., different ways that the world might have been, in describing such non-actual situations or possibilities. The objection, then, typically proceeds from such a possibility to the conclusion that materialism is false, because materialism would seem to rule out that possibility. It has been fairly widely accepted (since Kripke 1972) that all identity statements are necessarily true (that is, true in all possible worlds), and the same should therefore go for mind-brain identity claims. Since the possibility of zombies shows that mind-brain identity claims are not necessarily true, we should conclude that materialism is false. [See Identity Theory.]
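
Put a bit more formally (a standard reconstruction of the reasoning, not a quotation from Chalmers, with $P$ standing for the complete physical truth about a creature and $Q$ for the claim that it is conscious):

\[
\begin{array}{ll}
1. & P \wedge \neg Q \text{ is conceivable (a zombie world involves no contradiction).}\\
2. & \text{If } P \wedge \neg Q \text{ is conceivable, then } \Diamond(P \wedge \neg Q).\\
3. & \text{If materialism is true, then } \Box(P \rightarrow Q) \text{ (identities hold necessarily; Kripke 1972).}\\
4. & \Diamond(P \wedge \neg Q) \text{ is incompatible with } \Box(P \rightarrow Q).\\
\therefore & \text{Materialism is false.}
\end{array}
\]

The materialist replies sketched below target premises 1 and 2 in particular.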

It is impossible to do justice to all of the subtleties here. The literature in response to zombie arguments, and related “conceivability” arguments, is enormous (see, for example, Hill 1997, Hill and McLaughlin 1999, Papineau 1998, 2002, Balog 1999, Block and Stalnaker 1999, Loar 1999, Yablo 1999, Perry 2001, Botterell 2001). A few lines of reply are as follows: First, it is sometimes objected that the conceivability of something does not really entail its possibility. Perhaps we can also conceive of water not being H2O, since there seems to be no logical contradiction in doing so, but, according to received wisdom from Kripke, that is really impossible. Perhaps, then, some things just seem possible but really aren’t. Much of the debate centers on various alleged similarities or dissimilarities between the mind-brain and water-H2O cases (or other such scientific identities). Indeed, the entire issue of the exact relationship between “conceivability” and “possibility” is the subject of an important recent anthology (Gendler and Hawthorne 2002). Second, even if zombies are conceivable in the sense of logically possible, how can we draw a substantial metaphysical conclusion about the actual world? There is often suspicion on the part of materialists about what, if anything, such philosophers’ “thought experiments” can teach us about the nature of our minds. It seems that one could take virtually any philosophical or scientific theory about almost anything, conceive that it is possibly false, and then conclude that it is actually false. Something, perhaps, is generally wrong with this way of reasoning. Third, as we saw earlier (3b.i), there may be a very good reason why such zombie scenarios seem possible; namely, that we do not (at least, not yet) see what the necessary connection is between neural events and conscious mental events. On the one side, we are dealing with scientific third-person concepts and, on the other, we are employing phenomenal concepts. We are, perhaps, simply not yet in a position to understand such a necessary connection completely.

Debate and discussion of all four objections remain very active.

v. Varieties of Materialism

Despite the apparent simplicity of materialism, say, in terms of the identity between mental states and neural states, the fact is that there are many different forms of materialism. While a detailed survey of all varieties is beyond the scope of this entry, it is at least important to acknowledge the commonly drawn distinction between two kinds of “identity theory”: token-token and type-type materialism. Type-type identity theory is the stronger thesis and says that mental properties, such as “having a desire to drink some water” or “being in pain,” are literally identical with a brain property of some kind. Such identities were originally meant to be understood as on a par with, for example, the scientific identity between “being water” and “being composed of H2O” (Place 1956, Smart 1959). However, this view historically came under serious assault because it seems to rule out the so-called “multiple realizability” of conscious mental states. The idea is simply that it seems perfectly possible for there to be other conscious beings (e.g., aliens, radically different animals) who can have those same mental states but who are also radically different from us physiologically (Fodor 1974). It seems that commitment to type-type identity theory leads to the undesirable result that only organisms with brains like ours can have conscious states. Somewhat more technically, most materialists wish to leave room for the possibility that mental properties can be “instantiated” in different kinds of organisms. (But for more recent defenses of type-type identity theory see Hill and McLaughlin 1999, Papineau 1994, 1995, 1998, Polger 2004.) As a consequence, a more modest “token-token” identity theory has become preferable to many materialists. This view simply holds that each particular conscious mental event in some organism is identical with some particular brain process or event in that organism. This preserves much of what the materialist wants while allowing for the multiple realizability of conscious states, because both the human and the alien can still have a conscious desire for something to drink while each mental event is identical with a (different) physical state in each organism.

Taking the notion of multiple realizability very seriously has also led many to embrace functionalism, which is the view that conscious mental states should really only be identified with the functional role they play within an organism. For example, conscious pains are defined more in terms of inputs and outputs, such as being caused by bodily damage and causing avoidance behavior, as well as in terms of their relationships to other mental states. Functionalism is normally viewed as a form of materialism, since virtually all functionalists also believe, like the token-token theorist, that something physical ultimately realizes that functional state in the organism; but functionalism does not, by itself, entail that materialism is true. Critics of functionalism, however, have long argued that such purely functional accounts cannot adequately explain the essential “feel” of conscious states, or that it seems possible to have two functionally equivalent creatures, one of whom lacks qualia entirely (Block 1980a, 1980b, Chalmers 1996; see also Shoemaker 1975, 1981).

Some materialists even deny the very existence of mind and mental states altogether, at least in the sense that the very concept of consciousness is muddled (Wilkes 1984, 1988) or in the sense that the mentalistic notions found in folk psychology, such as desires and beliefs, will eventually be eliminated and replaced by physicalistic terms as neurophysiology matures (Churchland 1983). This is meant as analogous to past eliminations based on deeper scientific understanding; for example, we no longer need to speak of “ether” or “phlogiston.” Other eliminativists, more modestly, argue that there is no such thing as qualia when they are defined in certain problematic ways (Dennett 1988).

Finally, it should also be noted that not all materialists believe that conscious mentality can be explained in terms of the physical, at least in the sense that the former cannot be “reduced” to the latter. On this view, materialism is true as an ontological or metaphysical doctrine, but facts about the mind cannot be deduced from facts about the physical world (Boyd 1980, Van Gulick 1992). In some ways, this might be viewed as a relatively harmless variation on materialist themes, but others object to the very coherence of this form of materialism (Kim 1987, 1998). Indeed, the line between such “non-reductive materialism” and property dualism is not always easy to draw, partly because the entire notion of “reduction” is ambiguous and a very complex topic in its own right. On a related front, some materialists are happy enough to talk about a somewhat weaker “supervenience” relation between mind and matter. Although “supervenience” is a highly technical notion with many variations, the idea is basically one of dependence (instead of identity); for example, that the mental depends on the physical in the sense that any mental change must be accompanied by some physical change (see Kim 1993).
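
One standard way to make the dependence claim precise (a schematic formulation in the spirit of Kim 1993, not a quotation) is:

\[
\Box\, \forall x\, \forall y\, \bigl( x \approx_{\mathrm{phys}} y \;\rightarrow\; x \approx_{\mathrm{ment}} y \bigr),
\]

where $x \approx_{\mathrm{phys}} y$ says that $x$ and $y$ are physically indiscernible and $x \approx_{\mathrm{ment}} y$ that they are mentally indiscernible; in short, necessarily, there is no mental difference without some physical difference.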

4. Specific Theories of Consciousness

Most specific theories of consciousness tend to be reductionist in some sense. The classic idea is that consciousness, or individual conscious mental states, can be explained in terms of something else. This section will focus on several prominent contemporary reductionist theories. We should, however, distinguish between those who attempt such a reduction directly in physicalistic terms, such as neurophysiological terms, and those who do so in mentalistic terms, such as by using unconscious mental states or other cognitive notions.

a. Neural Theories

The more direct reductionist approach can be seen in various, more specific, neural theories of consciousness. Perhaps the best known is the theory offered by Francis Crick and Christof Koch (1990; see also Crick 1994, Koch 2004). The basic idea is that mental states become conscious when large numbers of neurons fire in synchrony, all with oscillations within the 35-75 hertz range (that is, 35-75 cycles per second). However, many philosophers and scientists have put forth other candidates for what, specifically, to identify in the brain with consciousness. This vast enterprise has come to be known as the search for the “neural correlates of consciousness,” or NCCs (see section 5b below for more). The overall idea is to show how one or more specific kinds of neuro-chemical activity can underlie and explain conscious mental activity (Metzinger 2000). Of course, mere “correlation” is not enough for a fully adequate neural theory, and explaining just what counts as a NCC turns out to be more difficult than one might think (Chalmers 2000). Even Crick and Koch have acknowledged that they, at best, provide a necessary condition for consciousness, and that such firing patterns are not automatically sufficient for having conscious experience.

b. Representational Theories of Consciousness

Many current theories attempt to reduce consciousness in mentalistic terms. One broadly popular approach along these lines is to reduce consciousness to “mental representations” of some kind. The notion of a “representation” is of course very general and can be applied to photographs, signs, and various natural objects, such as the rings inside a tree. Much of what goes on in the brain, however, might also be understood in a representational way; for example, as mental events representing outer objects partly because they are caused by such objects in, say, cases of veridical visual perception. More specifically, philosophers will often call such representational mental states “intentional states” which have representational content; that is, mental states which are “about something” or “directed at something” as when one has a thought about the house or a perception of the tree. Although intentional states are sometimes contrasted with phenomenal states, such as pains and color experiences, it is clear that many conscious states, such as visual perceptions, have both phenomenal and intentional properties. It should be noted that the relation between intentionality and consciousness is itself a major ongoing area of dispute, with some arguing that genuine intentionality actually presupposes consciousness in some way (Searle 1992, Siewert 1998, Horgan and Tienson 2002) while most representationalists insist that intentionality is prior to consciousness.

The general view that we can explain conscious mental states in terms of representational or intentional states is called “representationalism.” Although not automatically reductionist in spirit, most versions of representationalism do indeed attempt such a reduction. Most representationalists, then, believe that there is room for a kind of “second-step” reduction to be filled in later by neuroscience. The other related motivation for representational theories of consciousness is that many believe that an account of representation or intentionality can more easily be given in naturalistic terms, such as causal theories whereby mental states are understood as representing outer objects in virtue of some reliable causal connection. The idea, then, is that if consciousness can be explained in representational terms and representation can be understood in purely physical terms, then there is the promise of a reductionist and naturalistic theory of consciousness. Most generally, however, we can say that a representationalist will typically hold that the phenomenal properties of experience (that is, the “qualia” or “what it is like” of experience, or “phenomenal character”) can be explained in terms of the experiences’ representational properties. Put another way, conscious mental states have no mental properties other than their representational properties. Two conscious states with all the same representational properties will not differ phenomenally. For example, when I look at the blue sky, what it is like for me to have a conscious experience of the sky is simply identical with my experience’s representation of the blue sky.

i. First-Order Representationalism

A first-order representational (FOR) theory of consciousness attempts to explain conscious experience primarily in terms of world-directed (or first-order) intentional states. Probably the two most cited FOR theories of consciousness are those of Fred Dretske (1995) and Michael Tye (1995, 2000), though there are many others as well (e.g., Harman 1990, Kirk 1994, Byrne 2001, Thau 2002, Droege 2003). Tye’s theory is more fully worked out and so will be the focus of this section. Like other FOR theorists, Tye holds that the representational content of my conscious experience (that is, what my experience is about or directed at) is identical with the phenomenal properties of experience. Aside from reductionistic motivations, Tye and other FOR representationalists often use the somewhat technical notion of the “transparency of experience” as support for their view (Harman 1990). This is an argument based on the phenomenological first-person observation, which goes back to Moore (1903), that when one turns one’s attention away from, say, the blue sky and onto one’s experience itself, one is still only aware of the blueness of the sky. The experience itself is not blue; rather, one “sees right through” one’s experience to its representational properties, and there is nothing else to one’s experience over and above such properties.

Whatever the merits and exact nature of the argument from transparency (see Kind 2003), it is clear, of course, that not all mental representations are conscious, so the key question eventually becomes: What exactly distinguishes conscious from unconscious mental states (or representations)? What makes a mental state a conscious mental state? Here Tye defends what he calls “PANIC theory.” The acronym “PANIC” stands for poised, abstract, non-conceptual, intentional content. Without probing into every aspect of PANIC theory, Tye holds that at least some of the representational content in question is non-conceptual (N), which is to say that the subject can lack the concept for the properties represented by the experience in question, such as an experience of a certain shade of red that one has never seen before. (Actually, the exact nature or even existence of non-conceptual content of experience is itself a highly debated and difficult issue in philosophy of mind. See Gunther 2003.) Conscious states clearly must also have “intentional content” (IC) for any representationalist. Tye also asserts that such content is “abstract” (A) and not necessarily about particular concrete objects. This condition is needed to handle cases of hallucinations, where there are no concrete objects at all or cases where different objects look phenomenally alike. Perhaps most important for mental states to be conscious, however, is that such content must be “poised” (P), which is an importantly functional notion. The “key idea is that experiences and feelings…stand ready and available to make a direct impact on beliefs and/or desires. For example…feeling hungry… has an immediate cognitive effect, namely, the desire to eat….States with nonconceptual content that are not so poised lack phenomenal character [because]…they arise too early, as it were, in the information processing” (Tye 2000: 62).

One objection to Tye’s theory is that it does not really address the hard problem of phenomenal consciousness (see section 3b.i). This is partly because what really seems to be doing most of the work on Tye’s PANIC account is the very functional sounding “poised” notion, which is perhaps closer to Block’s access consciousness (see section 1) and is therefore not necessarily able to explain phenomenal consciousness (see Kriegel 2002). In short, it is difficult to see just how Tye’s PANIC account might not equally apply to unconscious representations and thus how it really explains phenomenal consciousness.

Other standard objections to Tye’s theory, as well as to other FOR accounts, include the concern that it does not cover all kinds of conscious states. Some conscious states seem not to be “about” anything, such as pains, anxiety, or after-images, and so would be non-representational conscious states. If so, then conscious experience cannot generally be explained in terms of representational properties (Block 1996). Tye responds that pains, itches, and the like do represent, in the sense that they represent parts of the body. And after-images, hallucinations, and the like either misrepresent (which is still a kind of representation) or the conscious subject still takes them to have representational properties from the first-person point of view. Indeed, Tye (2000) admirably goes to great lengths and argues convincingly in response to a whole host of alleged counter-examples to representationalism. Chief among them, historically, are various hypothetical cases of inverted qualia (see Shoemaker 1982), the mere possibility of which is sometimes taken as devastating to representationalism. These are cases where behaviorally indistinguishable individuals have inverted color perceptions of objects, such as when person A visually experiences a lemon the way that person B experiences a ripe tomato with respect to color, and so on for all yellow and red objects. Isn’t it possible that there are two individuals whose color experiences are inverted with respect to the objects of perception? (For more on the importance of color in philosophy, see Hardin 1986.)

A somewhat different twist on the inverted spectrum is famously put forth in Block’s (1990) Inverted Earth case. On Inverted Earth every object has the complementary color to the one it has here, but we are asked to imagine that a person is equipped with color-inverting lenses and then sent to Inverted Earth completely ignorant of those facts. Since the color inversions cancel out, the phenomenal experiences remain the same, yet there certainly seem to be different representational properties of objects involved. The strategy on the part of critics, in short, is to think of counter-examples (either actual or hypothetical) whereby there is a difference between the phenomenal properties in experience and the relevant representational properties in the world. Such objections can, perhaps, be answered by Tye and others in various ways, but significant debate continues (Macpherson 2005). Intuitions also dramatically differ as to the very plausibility and value of such thought experiments. (For more, see Seager 1999, chapters 6 and 7. See also Chalmers 2004 for an excellent discussion of the dizzying array of possible representationalist positions.)

ii. Higher-Order Representationalism

As we have seen, one question that should be answered by any theory of consciousness is: What makes a mental state a conscious mental state? There is a long tradition that has attempted to understand consciousness in terms of some kind of higher-order awareness. For example, John Locke (1689/1975) once said that “consciousness is the perception of what passes in a man’s own mind.” This intuition has been revived by a number of philosophers (Rosenthal 1986, 1993b, 1997, 2000, 2004; Gennaro 1996a; Armstrong 1968, 1981; Lycan 1996, 2001). In general, the idea is that what makes a mental state conscious is that it is the object of some kind of higher-order representation (HOR). A mental state M becomes conscious when there is a HOR of M. A HOR is a “meta-psychological” state, i.e., a mental state directed at another mental state. So, for example, my desire to write a good encyclopedia entry becomes conscious when I am (non-inferentially) “aware” of the desire. Intuitively, it seems that conscious states, as opposed to unconscious ones, are mental states that I am “aware of” in some sense. Any theory which attempts to explain consciousness in terms of higher-order states is known as a higher-order (HO) theory of consciousness. It is best initially to use the more neutral term “representation” because there are a number of different kinds of higher-order theory, depending upon how one characterizes the HOR in question. HO theories, thus, attempt to explain consciousness in mentalistic terms, that is, by reference to such notions as “thoughts” and “awareness.” Conscious mental states arise when two unconscious mental states are related in a certain specific way; namely, that one of them (the HOR) is directed at the other (M). HO theorists are united in the belief that their approach can better explain consciousness than any purely FOR theory, which has significant difficulty in explaining the difference between unconscious and conscious mental states.

There are various kinds of HO theory with the most common division between higher-order thought (HOT) theories and higher-order perception (HOP) theories. HOT theorists, such as David M. Rosenthal, think it is better to understand the HOR as a thought of some kind. HOTs are treated as cognitive states involving some kind of conceptual component. HOP theorists urge that the HOR is a perceptual or experiential state of some kind (Lycan 1996) which does not require the kind of conceptual content invoked by HOT theorists. Partly due to Kant (1781/1965), HOP theory is sometimes referred to as “inner sense theory” as a way of emphasizing its sensory or perceptual aspect. Although HOT and HOP theorists agree on the need for a HOR theory of consciousness, they do sometimes argue for the superiority of their respective positions (such as in Rosenthal 2004 and Lycan 2004). Some philosophers, however, have argued that the difference between these theories is perhaps not as important or as clear as some think it is (Güzeldere 1995, Gennaro 1996a, Van Gulick 2000).

A common initial objection to HOR theories is that they are circular and lead to an infinite regress. It might seem that the HOT theory results in circularity by defining consciousness in terms of HOTs. It also might seem that an infinite regress results because a conscious mental state must be accompanied by a HOT, which, in turn, must be accompanied by another HOT ad infinitum. However, the standard reply is that, when a conscious mental state is a first-order world-directed state, the higher-order thought (HOT) is not itself conscious; otherwise, circularity and an infinite regress would follow. When the HOT is itself conscious, there is a yet higher-order (or third-order) thought directed at the second-order state. In this case, we have introspection, which involves a conscious HOT directed at an inner mental state. When one introspects, one’s attention is directed back into one’s mind. For example, what makes my desire to write a good entry a conscious first-order desire is that there is a (non-conscious) HOT directed at the desire. In this case, my conscious focus is directed at the entry and my computer screen, so I am not consciously aware of having the HOT from the first-person point of view. When I introspect that desire, however, I then have a conscious HOT (accompanied by a yet higher, third-order, HOT) directed at the desire itself (see Rosenthal 1986).

Peter Carruthers (2000) has proposed another possibility within HO theory; namely, that it is better for various reasons to think of the HOTs as dispositional states instead of the standard view that the HOTs are actual, though he also understands his “dispositional HOT theory” to be a form of HOP theory (Carruthers 2004). The basic idea is that the conscious status of an experience is due to its availability to higher-order thought. So “conscious experience occurs when perceptual contents are fed into a special short-term buffer memory store, whose function is to make those contents available to cause HOTs about themselves.” (Carruthers 2000: 228). Some first-order perceptual contents are available to a higher-order “theory of mind mechanism,” which transforms those representational contents into conscious contents. Thus, no actual HOT occurs. Instead, according to Carruthers, some perceptual states acquire a dual intentional content; for example, a conscious experience of red not only has a first-order content of “red,” but also has the higher-order content “seems red” or “experience of red.” Carruthers also makes interesting use of so-called “consumer semantics” in order to fill out his theory of phenomenal consciousness. The content of a mental state depends, in part, on the powers of the organisms which “consume” that state, e.g., the kinds of inferences which the organism can make when it is in that state. Daniel Dennett (1991) is sometimes credited with an earlier version of a dispositional account (see Carruthers 2000, chapter ten). Carruthers’ dispositional theory is often criticized by those who, among other things, do not see how the mere disposition toward a mental state can render it conscious (Rosenthal 2004; see also Gennaro 2004; for more, see Consciousness, Higher Order Theories of.)

It is worth briefly noting a few typical objections to HO theories (many of which can be found in Byrne 1997): First, and perhaps most common, is that various animals (and even infants) are not likely to have the conceptual sophistication required for HOTs, and so the theory would render animal (and infant) consciousness very unlikely (Dretske 1995, Seager 2004). Are cats and dogs capable of having complex higher-order thoughts such as “I am in mental state M”? Although most who bring forth this objection are not HO theorists, Peter Carruthers (1989) is one HO theorist who actually embraces the conclusion that (most) animals do not have phenomenal consciousness. Gennaro (1993, 1996a) has replied to Carruthers on this point; for example, it is argued that the HOTs need not be as sophisticated as they might initially appear, and that there is ample comparative neurophysiological evidence supporting the conclusion that animals have conscious mental states. Most HO theorists do not wish to accept the absence of animal or infant consciousness as a consequence of holding the theory. The debate continues, however, in Carruthers (2000, 2005) and Gennaro (2004).

A second objection has been referred to as the “problem of the rock” (Stubenberg 1998) and the “generality problem” (Van Gulick 2000, 2004), but it is originally due to Alvin Goldman (Goldman 1993). When I have a thought about a rock, it is certainly not true that the rock becomes conscious. So why should I suppose that a mental state becomes conscious when I think about it? This is puzzling to many, and the objection forces HO theorists to explain just how adding the HO state changes an unconscious state into a conscious one. There have been, however, a number of responses to this kind of objection (Rosenthal 1997; Lycan 1996; Van Gulick 2000, 2004; Gennaro 2005). A common theme is that there is a principled difference in the objects of the HO states in question. Rocks and the like are not mental states in the first place, and so HO theorists are first and foremost trying to explain how a mental state becomes conscious. The objects of the HO states must be “in the head.”
Third, the above leads somewhat naturally to an objection related to Chalmers’ hard problem (section 3b.i). It might be asked just how exactly any HO theory really explains the subjective or phenomenal aspect of conscious experience. How or why does a mental state come to have a first-person qualitative “what it is like” aspect by virtue of the presence of a HOR directed at it? It is probably fair to say that HO theorists have been slow to address this problem, though a number of overlapping responses have emerged (see also Gennaro 2005 for more extensive treatment). Some argue that this objection misconstrues the main and more modest purpose of (at least, their) HO theories. The claim is that HO theories are theories of consciousness only in the sense that they are attempting to explain what differentiates conscious from unconscious states, i.e., in terms of a higher-order awareness of some kind. A full account of “qualitative properties” or “sensory qualities” (which can themselves be non-conscious) can be found elsewhere in their work, but is independent of their theory of consciousness (Rosenthal 1991, Lycan 1996, 2001). Thus, a full explanation of phenomenal consciousness does require more than a HO theory, but that is no objection to HO theories as such. Another response is that proponents of the hard problem unjustly raise the bar as to what would count as a viable explanation of consciousness so that any such reductivist attempt would inevitably fall short (Carruthers 2000). Part of the problem, then, is a lack of clarity about what would even count as an explanation of consciousness (Van Gulick 1995; see also section 3b). Moreover, anyone familiar with the literature knows that there are significant terminological difficulties in the use of various crucial terms which sometimes inhibits genuine progress (but see Byrne 2004 for some helpful clarification).

A fourth important objection to HO approaches is the question of how such theories can explain cases where the HO state might misrepresent the lower-order (LO) mental state (Byrne 1997, Neander 1998, Levine 2001). After all, if we have a representational relation between two states, it seems possible for misrepresentation or malfunction to occur. If it does, then what explanation can be offered by the HO theorist? If my LO state registers a red percept and my HO state registers a thought about something green due, say, to some neural misfiring, then what happens? It seems that problems loom for any answer given by a HO theorist and the cause of the problem has to do with the very nature of the HO theorist’s belief that there is a representational relation between the LO and HO states. For example, if the HO theorist takes the option that the resulting conscious experience is reddish, then it seems that the HO state plays no role in determining the qualitative character of the experience. This objection forces HO theorists to be clearer about just how to view the relationship between the LO and HO states. (For one reply, see Gennaro 2004.) Debate is ongoing and significant both on varieties of HO theory and in terms of the above objections (see Gennaro 2004a). There is also interdisciplinary interest in how various HO theories might be realized in the brain.

iii. Hybrid Representational Accounts

A related and increasingly popular version of representational theory holds that the meta-psychological state in question should be understood as intrinsic to (or part of) an overall complex conscious state. This stands in contrast to the standard view that the HO state is extrinsic to (i.e., entirely distinct from) its target mental state. The assumption, made by Rosenthal for example, about the extrinsic nature of the meta-thought has increasingly come under attack, and thus various hybrid representational theories can be found in the literature. One motivation for this movement is growing dissatisfaction with standard HO theory’s ability to handle some of the objections addressed in the previous section. Another reason is renewed interest in a view somewhat closer to the one held by Franz Brentano (1874/1973) and various other followers, normally associated with the phenomenological tradition (Husserl 1913/1931, 1929/1960; Sartre 1956; see also Smith 1986, 2004). To varying degrees, these views have in common the idea that conscious mental states, in some sense, represent themselves, which then still involves having a thought about a mental state, just not a distinct or separate state. Thus, when one has a conscious desire for a cold glass of water, one is also aware that one is in that very state. The conscious desire both represents the glass of water and itself. It is this “self-representing” which makes the state conscious.
These theories can go by various names, which sometimes seem in conflict, and have added significantly in recent years to the acronyms which abound in the literature. For example, Gennaro (1996a, 2002, 2004, 2006) has argued that, when one has a first-order conscious state, the HOT is better viewed as intrinsic to the target state, so that we have a complex conscious state with parts. Gennaro calls this the “wide intrinsicality view” (WIV) and he also argues that Jean-Paul Sartre’s theory of consciousness can be understood in this way (Gennaro 2002).

Gennaro holds that conscious mental states should be understood (as Kant might have put it today) as global brain states which are combinations of passively received perceptual input and presupposed higher-order conceptual activity directed at that input. Higher-order concepts in the meta-psychological thoughts are presupposed in having first-order conscious states. Robert Van Gulick (2000, 2004, 2006) has also explored the alternative that the HO state is part of an overall global conscious state. He calls such states “HOGS” (Higher-Order Global States) whereby a lower-order unconscious state is “recruited” into a larger state, which becomes conscious partly due to the implicit self-awareness that one is in the lower-order state. Both Gennaro and Van Gulick have suggested that conscious states can be understood materialistically as global states of the brain, and it would be better to treat the first-order state as part of the larger complex brain state. This general approach is also forcefully advocated in a series of papers by Uriah Kriegel (such as Kriegel 2003a, 2003b, 2005, 2006) and is even the subject of an entire anthology debating its merits (Kriegel and Williford 2006). Kriegel has used several different names for his “neo-Brentanian theory,” such as the SOMT (Same-Order Monitoring Theory) and, more recently, the “self-representational theory of consciousness.” To be sure, the notion of a mental state representing itself or a mental state with one part representing another part is in need of further development and is perhaps somewhat mysterious. Nonetheless, there is agreement among these authors that conscious mental states are, in some important sense, reflexive or self-directed. And, once again, there is keen interest in developing this model in a way that coheres with the latest neurophysiological research on consciousness. A point of emphasis is on the concept of global meta-representation within a complex brain state, and attempts are underway to identify just how such an account can be realized in the brain.

It is worth mentioning that this idea was also briefly explored by Thomas Metzinger, who focused on the fact that consciousness “is something that unifies or synthesizes experience” (Metzinger 1995: 454). Metzinger calls this the process of “higher-order binding” and thus uses the acronym HOB. Others who hold some form of the self-representational view include Kobes (1995), Caston (2002), Williford (2006), and Brook and Raymont (2006), and even Carruthers’ (2000) theory can be viewed in this light since he contends that conscious states have two representational contents. Thomas Natsoulas also has a series of papers defending a similar view, beginning with Natsoulas 1996. Some authors (such as Gennaro) view this hybrid position to be a modified version of HOT theory; indeed, Rosenthal (2004) has called it “intrinsic higher-order theory.” Van Gulick also clearly wishes to preserve the HO in his HOGS. Others, such as Kriegel, are not inclined to call their views “higher-order” at all. To some extent, this is a terminological dispute, but, despite important similarities, there are also subtle differences between these hybrid alternatives. Like HO theorists, however, those who advocate this general approach all take very seriously the notion that a conscious mental state M is a state that subject S is (non-inferentially) aware that S is in. By contrast, one is obviously not aware of one’s unconscious mental states. Thus, there are various attempts to make sense of and elaborate upon this key intuition in a way that is, as it were, “in-between” standard FO and HO theory. (See also Lurz 2003 and 2004 for yet another interesting hybrid account.)

c. Other Cognitive Theories

Aside from the explicitly representational approaches discussed above, there are also related attempts to explain consciousness in other cognitive terms. The two most prominent such theories are worth describing here:
Daniel Dennett (1991, 2005) has put forth what he calls the Multiple Drafts Model (MDM) of consciousness. Although similar in some ways to representationalism, Dennett is most concerned that materialists avoid falling prey to what he calls the “myth of the Cartesian theater,” the notion that there is some privileged place in the brain where everything comes together to produce conscious experience. Instead, the MDM holds that all kinds of mental activity occur in the brain by parallel processes of interpretation, all of which are under frequent revision. The MDM rejects the idea of some “self” as an inner observer; rather, the self is the product or construction of a narrative which emerges over time. Dennett is also well known for rejecting the very assumption that there is a clear line to be drawn between conscious and unconscious mental states in terms of the problematic notion of “qualia.” He influentially rejects strong emphasis on any phenomenological or first-person approach to investigating consciousness, advocating instead what he calls “heterophenomenology” according to which we should follow a more neutral path “leading from objective physical science and its insistence on the third person point of view, to a method of phenomenological description that can (in principle) do justice to the most private and ineffable subjective experiences.” (1991: 72)

Bernard Baars’ Global Workspace Theory (GWT) model of consciousness is probably the most influential theory proposed among psychologists (Baars 1988, 1997). The basic idea and metaphor is that we should think of the entire cognitive system as built on a “blackboard architecture” which is a kind of global workspace. According to GWT, unconscious processes and mental states compete for the spotlight of attention, from which information is “broadcast globally” throughout the system. Consciousness consists in such global broadcasting and is therefore also, according to Baars, an important functional and biological adaptation. We might say that consciousness is thus created by a kind of global access to select bits of information in the brain and nervous system. Despite Baars’ frequent use of “theater” and “spotlight” metaphors, he argues that his view does not entail the presence of the material Cartesian theater that Dennett is so concerned to avoid. It is, in any case, an empirical matter just how the brain performs the functions he describes, such as the mechanisms underlying attention.
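The competition-and-broadcast metaphor can be sketched in a toy loop, in which specialist processes bid for the spotlight and the winner’s content is distributed to all the rest. The specialist names and random “activations” are invented for the example; this is not Baars’ actual architecture, only an illustration of the metaphor.

```python
# Toy global-workspace loop: specialist processes compete for the workspace;
# the winning content is "broadcast" to all the others. Purely illustrative.

import random

class Specialist:
    def __init__(self, name):
        self.name = name
        self.inbox = []   # globally broadcast contents accumulate here

    def bid(self):
        """Compete for the workspace with a random activation level."""
        return random.random()

    def receive(self, content):
        self.inbox.append(content)

specialists = [Specialist(n) for n in ("vision", "audition", "memory", "planning")]

for step in range(3):
    # Competition: the most activated specialist wins the spotlight of attention.
    winner = max(specialists, key=lambda s: s.bid())
    content = f"{winner.name}-content@{step}"
    # Global broadcast: every other specialist now has access to the content.
    for s in specialists:
        if s is not winner:
            s.receive(content)
    print(f"step {step}: broadcast {content}")
```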

Objections to these cognitive theories include the charge that they do not really address the hard problem of consciousness (as described in section 3b.i), but only the “easy” problems. Dennett is also often accused of explaining away consciousness rather than really explaining it. It is also interesting to think about Baars’ GWT in light of Block’s distinction between access and phenomenal consciousness (see section 1). Does Baars’ theory only address access consciousness rather than the more difficult to explain phenomenal consciousness? (Two other psychological cognitive theories worth noting are the ones proposed by George Mandler 1975 and Tim Shallice 1988.)

d. Quantum Approaches

Finally, there are those who look deep beneath the neural level to the field of quantum mechanics, basically the study of sub-atomic particles, to find the key to unlocking the mysteries of consciousness. The bizarre world of quantum physics is quite different from the deterministic world of classical physics, and a major area of research in its own right. Such authors place the locus of consciousness at a very fundamental physical level. This somewhat radical, though exciting, option is explored most notably by physicist Roger Penrose (1989, 1994) and anesthesiologist Stuart Hameroff (1998). The basic idea is that consciousness arises through quantum effects which occur in subcellular neural structures known as microtubules, protein structures that form part of the cell’s cytoskeleton. There are also other quantum approaches which aim to explain the coherence of consciousness (Marshall and Zohar 1990) or use the “holistic” nature of quantum mechanics to explain consciousness (Silberstein 1998, 2001). It is difficult to assess these somewhat exotic approaches at present. Given the puzzling and often very counterintuitive nature of quantum physics, it is unclear whether such approaches will prove scientifically valuable in explaining consciousness. One concern is simply that these authors are trying to explain one puzzling phenomenon (consciousness) in terms of another mysterious natural phenomenon (quantum effects). Thus, the thinking seems to go, perhaps the two are essentially related somehow and other physicalistic accounts are looking in the wrong place, such as at the neuro-chemical level. Although many attempts to explain consciousness rely on conjecture or speculation, quantum approaches may carry speculation the furthest. Of course, this doesn’t mean that some such theory isn’t correct. One exciting aspect of this approach is the interdisciplinary interest it has generated among physicists and other scientists in the problem of consciousness.

5. Consciousness and Science: Key Issues

Over the past two decades there has been an explosion of interdisciplinary work in the science of consciousness. Some of the credit must go to the groundbreaking 1986 book by Patricia Churchland entitled Neurophilosophy. In this section, three of the most important such areas are addressed.

a. The Unity of Consciousness/The Binding Problem

Conscious experience seems to be “unified” in an important sense; this crucial feature of consciousness played an important role in the philosophy of Kant who argued that unified conscious experience must be the product of the (presupposed) synthesizing work of the mind. Getting clear about exactly what is meant by the “unity of consciousness” and explaining how the brain achieves such unity has become a central topic in the study of consciousness. There are, no doubt, many different senses of “unity” (see Tye 2003; Bayne and Chalmers 2003), but perhaps most common is the notion that, from the first-person point of view, we experience the world in an integrated way and as a single phenomenal field of experience. (For an important anthology on the subject, see Cleeremans 2003.) However, when one looks at how the brain processes information, one only sees discrete regions of the cortex processing separate aspects of perceptual objects. Even different aspects of the same object, such as its color and shape, are processed in different parts of the brain. Given that there is no “Cartesian theater” in the brain where all this information comes together, the problem arises as to just how the resulting conscious experience is unified. What mechanisms allow us to experience the world in such a unified way? What happens when this unity breaks down, as in various pathological cases? The “problem of integrating the information processed by different regions of the brain is known as the binding problem” (Cleeremans 2003: 1). Thus, the so-called “binding problem” is inextricably linked to explaining the unity of consciousness. As was seen earlier with neural theories (section 4a) and as will be seen below on the neural correlates of consciousness (5b), some attempts to solve the binding problem have to do with trying to isolate the precise brain mechanisms responsible for consciousness. For example, Crick and Koch’s (1990) idea that synchronous neural firings are (at least) necessary for consciousness can also be viewed as an attempt to explain how disparate neural networks bind together separate pieces of information to produce unified subjective conscious experience. Perhaps the binding problem and the hard problem of consciousness (section 3b.i) are very closely connected. If the binding problem can be solved, then we arguably have identified the elusive neural correlate of consciousness and have, therefore, perhaps even solved the hard problem. In addition, perhaps the explanatory gap between third-person scientific knowledge and first-person unified conscious experience can also be bridged. Thus, this exciting area of inquiry is central to some of the deepest questions in the philosophical and scientific exploration of consciousness.
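Crick and Koch’s synchrony proposal, mentioned just above and again in section 5b, can be caricatured in a few lines of code: features processed in separate streams carry a tag for the phase of the oscillation cycle on which they fired, and features sharing a phase tag are grouped as one object. This is a deliberate cartoon of temporal binding under invented values, not Crick and Koch’s actual model.

```python
# Cartoon of binding-by-synchrony: features from separate processing streams
# are tagged with an oscillation phase; features that share a phase tag are
# grouped ("bound") into one object representation. Invented toy values.

from collections import defaultdict

# (feature, phase tag of the oscillation cycle it fired on)
features = [
    ("red",   0.2), ("circle", 0.2),   # fired in synchrony -> one object
    ("green", 0.7), ("square", 0.7),   # a second synchronous group
]

def bind(features, tolerance=0.05):
    """Group features whose phase tags match within a tolerance."""
    objects = defaultdict(list)
    for feature, phase in features:
        objects[round(phase / tolerance)].append(feature)
    return list(objects.values())

print(bind(features))  # [['red', 'circle'], ['green', 'square']]
```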

b. The Neural Correlates of Consciousness (NCCs)

As was seen earlier in discussing neural theories of consciousness (section 4a), the search for the so-called “neural correlates of consciousness” (NCCs) is a major preoccupation of philosophers and scientists alike (Metzinger 2000). Narrowing down the precise brain property responsible for consciousness is a different and far more difficult enterprise than merely holding a generic belief in some form of materialism. One leading candidate is offered by Francis Crick and Christof Koch (1990; see also Crick 1994, Koch 2004). The basic idea is that mental states become conscious when large numbers of neurons all fire in synchrony with one another (oscillations within the 35-75 hertz range, that is, 35-75 cycles per second). Currently, one method used is simply to study some aspect of neural functioning with sophisticated detection equipment (such as MRIs and PET scans) and then correlate it with first-person reports of conscious experience. Another method is to study the difference in brain activity between those under anesthesia and those not under any such influence. A detailed survey would be impossible to give here, but a number of other candidates for the NCC have emerged over the past two decades, including reentrant cortical feedback loops in the neural circuitry throughout the brain (Edelman 1989, Edelman and Tononi 2000), NMDA-mediated transient neural assemblies (Flohr 1995), and emotive somatosensory homeostatic processes in the frontal lobe (Damasio 1999). To elaborate briefly on Flohr’s theory, the idea is that anesthetics destroy conscious mental activity because they interfere with the functioning of NMDA synapses between neurons, which are those that are dependent on N-methyl-D-aspartate receptors. These and other NCCs are explored at length in Metzinger (2000). Such investigation remains a significant and growing part of current scientific research in the field.

One problem with some of the above candidates is determining exactly how they are related to consciousness. For example, although a case can be made that some of them are necessary for conscious mentality, it is unclear that they are sufficient. That is, some of the above seem to occur unconsciously as well. And pinning down a narrow enough necessary condition is not as easy as it might seem. Another general worry is with the very use of the term “correlate.” As any philosopher, scientist, and even undergraduate student should know, saying that “A is correlated with B” is rather weak (though it is an important first step), especially if one wishes to establish the stronger identity claim between consciousness and neural activity. Even if such a correlation can be established, we cannot automatically conclude that there is an identity relation. Perhaps A causes B or B causes A, and that’s why we find the correlation. Even most dualists can accept such interpretations. Maybe there is some other neural process C which causes both A and B. “Correlation” is not even the same as “cause,” let alone enough to establish “identity.” Finally, some NCCs are not even necessarily put forth as candidates for all conscious states, but rather for certain specific kinds of consciousness (e.g., visual).
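The common-cause worry in particular can be made vivid with a toy simulation (the numbers are arbitrary and purely illustrative): a hidden variable C drives both A and B, and A and B then correlate strongly even though neither is identical to, nor a cause of, the other.

```python
# Correlation without identity: a hidden common cause C drives both A and B.
# A and B end up highly correlated even though neither causes the other.

import random
import statistics

random.seed(0)
C = [random.gauss(0, 1) for _ in range(10_000)]    # hidden common cause
A = [c + random.gauss(0, 0.3) for c in C]          # stand-in "neural" measure
B = [c + random.gauss(0, 0.3) for c in C]          # stand-in "report" measure

def pearson(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

print(f"corr(A, B) = {pearson(A, B):.2f}")  # high (~0.9), with no direct A-B link
```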

c. Philosophical Psychopathology

Philosophers have long been intrigued by disorders of the mind and consciousness. Part of the interest is presumably that if we can understand how consciousness goes wrong, then that can help us to theorize about the normally functioning mind. Going back at least as far as John Locke (1689/1975), there has been some discussion about the philosophical implications of multiple personality disorder (MPD), which is now called “dissociative identity disorder” (DID). Questions abound: Could there be two centers of consciousness in one body? What makes a person the same person over time? What makes a person a person at any given time? These questions are closely linked to the traditional philosophical problem of personal identity, which is also importantly related to some aspects of consciousness research. Much the same can be said for memory disorders, such as various forms of amnesia (see Gennaro 1996a, chapter 9). Does consciousness require some kind of autobiographical memory or psychological continuity? On a related front, there is significant interest in experimental results from patients who have undergone a commissurotomy, which is usually performed to relieve symptoms of severe epilepsy when all else fails. During this procedure, the nerve fibers connecting the two brain hemispheres are cut, resulting in so-called “split-brain” patients.

Philosophical interest is so high that there is now a book series called Philosophical Psychopathology published by MIT Press. Another rich source of information comes from the provocative and accessible writings of neurologists on a whole host of psychopathologies, most notably Oliver Sacks (starting with his 1987 book) and, more recently, V. S. Ramachandran (2004; see also Ramachandran and Blakeslee 1998). Another launching point came from the discovery of the phenomenon known as “blindsight” (Weiskrantz 1986), which is very frequently discussed in the philosophical literature regarding its implications for consciousness. Blindsight patients are blind in a well-defined part of the visual field (due to cortical damage), yet, when forced, can guess the location or orientation of an object in the blind field with a higher than expected degree of accuracy.

There is also philosophical interest in many other disorders, such as phantom limb pain (where one feels pain in a missing or amputated limb), various agnosias (such as visual agnosia where one is not capable of visually recognizing everyday objects), and anosognosia (which is denial of illness, such as when one claims that a paralyzed limb is still functioning, or when one denies that one is blind). These phenomena raise a number of important philosophical questions and have forced philosophers to rethink some very basic assumptions about the nature of mind and consciousness. Much has also recently been learned about autism and various forms of schizophrenia. A common view is that these disorders involve some kind of deficit in self-consciousness or in one’s ability to use certain self-concepts. (For a nice review article, see Graham 2002.) Synesthesia is also a fascinating abnormal phenomenon, although not really a “pathological” condition as such (Cytowic 2003). Those with synesthesia literally have taste sensations when seeing certain shapes or have color sensations when hearing certain sounds. It is thus an often bizarre mixing of incoming sensory input via different modalities.
One of the exciting results of this relatively new sub-field is the important interdisciplinary interest that it has generated among philosophers, psychologists, and scientists.

6. Animal and Machine Consciousness

Two final areas of interest involve animal and machine consciousness. In the former case it is clear that we have come a long way from the Cartesian view that animals are mere “automata” and that they do not even have conscious experience (perhaps partly because they do not have immortal souls). In addition to the obviously significant behavioral similarities between humans and many animals, much more is known today about other physiological similarities, such as brain and DNA structures. To be sure, there are important differences as well, and there are, no doubt, some genuinely difficult “grey areas” where one might have legitimate doubts about the consciousness of certain animals or organisms, such as small rodents, some birds and fish, and especially various insects.

Nonetheless, it seems fair to say that most philosophers today readily accept the fact that a significant portion of the animal kingdom is capable of having conscious mental states, though there are still notable exceptions to that rule (Carruthers 2000, 2005). Of course, this is not to say that various animals can have all of the same kinds of sophisticated conscious states enjoyed by human beings, such as reflecting on philosophical and mathematical problems, enjoying artworks, thinking about the vast universe or the distant past, and so on. However, it still seems reasonable to believe that animals can have at least some conscious states, from rudimentary pains to various perceptual states and perhaps even to some level of self-consciousness. A number of key areas are under continuing investigation. For example, to what extent can animals recognize themselves, such as in a mirror, in order to demonstrate some level of self-awareness? To what extent can animals deceive or empathize with other animals, either of which would indicate awareness of the minds of others? These and other important questions are at the center of much current theorizing about animal cognition. (See Keenan et al. 2003 and Beckoff et al. 2002.) In some ways, the problem of knowing about animal minds is an interesting sub-area of the traditional epistemological “problem of other minds”: How do we even know that other humans have conscious minds? What justifies such a belief?

The possibility of machine (or robot) consciousness has intrigued philosophers and non-philosophers alike for decades. Could a machine really think or be conscious? Could a robot really subjectively experience the smelling of a rose or the feeling of pain? One important early launching point was a well-known paper by the mathematician Alan Turing (1950) which proposed what has come to be known as the “Turing test” for machine intelligence and thought (and perhaps consciousness as well). The basic idea is that if a machine could fool an interrogator (who could not see the machine) into thinking that it was human, then we should say it thinks or, at least, has intelligence. However, Turing was probably overly optimistic; it is doubtful that anything even today can pass the Turing Test, as most programs are specialized and have very narrow uses. One cannot ask the machine about virtually anything, as Turing had envisioned. Moreover, even if a machine or robot could pass the Turing Test, many remain very skeptical as to whether or not this demonstrates genuine machine thinking, let alone consciousness. For one thing, many philosophers would not take such purely behavioral (e.g., linguistic) evidence to support the conclusion that machines are capable of having phenomenal first-person experiences. Merely using words like “red” doesn’t ensure that there is the corresponding sensation of red or any real grasp of the meaning of “red.” Turing himself considered numerous objections and offered his own replies, many of which are still debated today.

Another much discussed argument is John Searle’s (1980) famous Chinese Room Argument, which has spawned an enormous amount of literature since its original publication (see also Searle 1984; Preston and Bishop 2002). Searle is concerned to reject what he calls “strong AI” which is the view that suitably programmed computers literally have a mind, that is, they really understand language and actually have other mental capacities similar to humans. This is contrasted with “weak AI” which is the view that computers are merely useful tools for studying the mind. The gist of Searle’s argument is that he imagines himself running a program for using Chinese and then shows that he does not understand Chinese; therefore, strong AI is false; that is, running the program does not result in any real understanding (or thought or consciousness, by implication). Searle supports his argument against strong AI by utilizing a thought experiment whereby he is in a room and follows English instructions for manipulating Chinese symbols in order to produce appropriate answers to questions in Chinese. Searle argues that, despite the appearance of understanding Chinese (say, from outside the room), he does not understand Chinese at all. He does not thereby know Chinese, but is merely manipulating symbols on the basis of syntax alone. Since this is what computers do, no computer, merely by following a program, genuinely understands anything. Searle replies to numerous possible criticisms in his original paper (which also comes with extensive peer commentary), but suffice it to say that not everyone is satisfied with his responses. For example, it might be argued that the entire room or “system” understands Chinese if we are forced to use Searle’s analogy and thought experiment. Each part of the room doesn’t understand Chinese (including Searle himself) but the entire system does, which includes the instructions and so on. Searle’s larger argument, however, is that one cannot get semantics (meaning) from syntax (formal symbol manipulation).
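The closing point, that syntax alone does not yield semantics, is easy to dramatize in code. The sketch below maps input symbol strings to output symbol strings by pure shape matching; nothing in it represents what any symbol means. The two-entry rule book is invented for the example (Searle’s thought experiment involves no actual program).

```python
# Pure symbol manipulation in the spirit of the Chinese Room: the "rule book"
# pairs uninterpreted input shapes with uninterpreted output shapes. Nothing
# in the program represents what any symbol means.

RULE_BOOK = {
    "你好吗": "我很好",          # the program never "knows" these mean
    "你叫什么名字": "我叫房间",   # "How are you?" / "I'm fine", and so on
}

def room(squiggles: str) -> str:
    """Match the input shape against the rule book; return the paired shape."""
    return RULE_BOOK.get(squiggles, "对不起")  # default shape when no rule applies

print(room("你好吗"))  # emits an appropriate-looking shape, understanding nothing
```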

Despite heavy criticism of the argument, two central issues are raised by Searle which continue to be of deep interest. First, how and when does one distinguish mere “simulation” of some mental activity from genuine “duplication”? Searle’s view is that computers are, at best, merely simulating understanding and thought, not really duplicating it. Much like we might say that a computerized hurricane simulation does not duplicate a real hurricane, Searle insists the same goes for any alleged computer “mental” activity. We do after all distinguish between real diamonds or leather and mere simulations which are just not the real thing. Second, and perhaps even more important, when considering just why computers really can’t think or be conscious, Searle interestingly reverts back to a biologically based argument. In essence, he says that computers or robots are just not made of the right stuff with the right kind of “causal powers” to produce genuine thought or consciousness. After all, even a materialist does not have to allow that any kind of physical stuff can produce consciousness any more than any type of physical substance can, say, conduct electricity. Of course, this raises a whole host of other questions which go to the heart of the metaphysics of consciousness. To what extent must an organism or system be physiologically like us in order to be conscious? Why is having a certain biological or chemical make up necessary for consciousness? Why exactly couldn’t an appropriately built robot be capable of having conscious mental states? How could we even know either way? However one answers these questions, it seems that building a truly conscious Commander Data is, at best, still just science fiction.

In any case, the growing areas of cognitive science and artificial intelligence are major fields within philosophy of mind and can importantly bear on philosophical questions of consciousness. Much current research focuses on how to program a computer to model the workings of the human brain, such as with so-called “neural (or connectionist) networks.”
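As a minimal illustration of what such a connectionist network looks like, here is a two-layer network trained by gradient descent on the classic XOR problem. All hyperparameters are arbitrary illustrative choices, and with most random initializations the network converges; nothing here models an actual brain.

```python
# A minimal connectionist network: one hidden layer, trained by gradient
# descent on XOR. Illustrative only; real models of brain function are
# vastly larger and differently structured.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros((1, 4))   # input -> hidden
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros((1, 1))   # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)                  # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)       # backpropagated output error
    d_h = (d_out @ W2.T) * h * (1 - h)        # hidden-layer error
    W2 -= 0.5 * h.T @ d_out                   # gradient-descent updates
    b2 -= 0.5 * d_out.sum(0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(0, keepdims=True)

print(out.round(2).ravel())   # typically close to [0, 1, 1, 0]
```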
7. References and Further Reading

Armstrong, D. A Materialist Theory of Mind. London: Routledge and Kegan Paul, 1968.
Armstrong, D. “What is Consciousness?” In The Nature of Mind. Ithaca, NY: Cornell University Press, 1981.
Baars, B. A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press, 1988.
Baars, B. In the Theater of Consciousness. New York: Oxford University Press, 1997.
Baars, B., Banks, W., and Newman, J. eds. Essential Sources in the Scientific Study of Consciousness. Cambridge, MA: MIT Press, 2003.
Balog, K. “Conceivability, Possibility, and the Mind-Body Problem.” In Philosophical Review 108: 497-528, 1999.
Bayne, T. & Chalmers, D. “What is the Unity of Consciousness?” In Cleeremans, 2003.
Beckoff, M., Allen, C., and Burghardt, G. The Cognitive Animal: Empirical and Theoretical Perspectives on Animal Cognition. Cambridge, MA: MIT Press, 2002.
Blackmore, S. Consciousness: An Introduction. Oxford: Oxford University Press, 2004.
Block, N. “Troubles with Functionalism.” In Readings in the Philosophy of Psychology, Volume 1, Ned Block, ed., Cambridge, MA: Harvard University Press, 1980a.
Block, N. “Are Absent Qualia Impossible?” Philosophical Review 89: 257-74, 1980b.
Block, N. “Inverted Earth.” In Philosophical Perspectives, 4, J. Tomberlin, ed., Atascadero, CA: Ridgeview Publishing Company, 1990.
Block, N. “On a Confusion about the Function of Consciousness.” In Behavioral and Brain Sciences 18: 227-47, 1995.
Block, N. “Mental Paint and Mental Latex.” In E. Villanueva, ed. Perception. Atascadero, CA: Ridgeview, 1996.
Block, N., Flanagan, O. & Guzeldere, G. eds. The Nature of Consciousness. Cambridge, MA: MIT Press, 1997.
Block, N. & Stalnaker, R. “Conceptual Analysis, Dualism, and the Explanatory Gap.” Philosophical Review 108: 1-46, 1999.
Botterell, A. “Conceiving what is not there.” In Journal of Consciousness Studies 8 (8): 21-42, 2001.
Boyd, R. “Materialism without Reductionism: What Physicalism does not entail.” In N. Block, ed. Readings in the Philosophy of Psychology, Vol.1. Cambridge, MA: Harvard University Press, 1980.
Brentano, F. Psychology from an Empirical Standpoint. New York: Humanities, 1874/1973.
Brook, A. Kant and the Mind. New York: Cambridge University Press, 1994.
Brook, A. & Raymont, P. A Unified Theory of Consciousness. Forthcoming, 2006.
Byrne, A. “Some like it HOT: Consciousness and Higher-Order Thoughts.” In Philosophical Studies 86:103-29, 1997.
Byrne, A. “Intentionalism Defended.” In Philosophical Review 110: 199-240, 2001.
Byrne, A. “What Phenomenal Consciousness is like.” In Gennaro 2004a.
Campbell, N. A Brief Introduction to the Philosophy of Mind. Ontario: Broadview, 2004.
Carruthers, P. “Brute Experience.” In Journal of Philosophy 86: 258-269, 1989.
Carruthers, P. Phenomenal Consciousness. Cambridge: Cambridge University Press, 2000.
Carruthers, P. “HOP over FOR, HOT Theory.” In Gennaro 2004a.
Carruthers, P. Consciousness: Essays from a Higher-Order Perspective. New York: Oxford University Press, 2005.
Caston, V. “Aristotle on Consciousness.” Mind 111: 751-815, 2002.
Chalmers, D.J. “Facing up to the Problem of Consciousness.” In Journal of Consciousness Studies 2:200-19, 1995.
Chalmers, D.J. The Conscious Mind. Oxford: Oxford University Press, 1996.
Chalmers, D.J. “What is a Neural Correlate of Consciousness?” In Metzinger 2000.
Chalmers, D.J. Philosophy of Mind: Classical and Contemporary Readings. New York: Oxford University Press, 2002.
Chalmers, D.J. “The Representational Character of Experience.” In B. Leiter ed. The Future for Philosophy. Oxford: Oxford University Press, 2004.
Churchland, P. S. “Consciousness: the Transmutation of a Concept.” In Pacific Philosophical Quarterly 64: 80-95, 1983.
Churchland, P. S. Neurophilosophy. Cambridge, MA: MIT Press, 1986.
Cleeremans, A. The Unity of Consciousness: Binding, Integration and Dissociation. Oxford: Oxford University Press, 2003.
Crick, F. and Koch, C. “Toward a Neurobiological Theory of Consciousness.” In Seminars in Neuroscience 2: 263-75, 1990.
Crick, F. H. The Astonishing Hypothesis: The Scientific Search for the Soul. New York: Scribners, 1994.
Cytowic, R. The Man Who Tasted Shapes. Cambridge, MA: MIT Press, 2003.
Damasio, A. The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt, 1999.
Dennett, D. C. “Quining Qualia.” In A. Marcel & E. Bisiach eds. Consciousness in Contemporary Science. New York: Oxford University Press, 1988.
Dennett, D.C. Consciousness Explained. Boston: Little, Brown, and Co, 1991.
Dennett, D. C. Sweet Dreams. Cambridge, MA: MIT Press, 2005.
Dretske, F. Naturalizing the Mind. Cambridge, MA: MIT Press, 1995.
Droege, P. Caging the Beast. Philadelphia & Amsterdam: John Benjamins Publishers, 2003.
Edelman, G. The Remembered Present: A Biological Theory of Consciousness. New York: Basic Books, 1989.
Edelman, G. & Tononi, G. “Reentry and the Dynamic Core: Neural Correlates of Conscious Experience.” In Metzinger 2000.
Flohr, H. “An Information Processing Theory of Anesthesia.” In Neuropsychologia 33: 9, 1169-80, 1995.
Fodor, J. “Special Sciences.” In Synthese 28, 77-115, 1974.
Foster, J. The Immaterial Self: A Defence of the Cartesian Dualist Conception of Mind. London: Routledge, 1996.
Gendler, T. & Hawthorne, J. eds. Conceivability and Possibility. Oxford: Oxford University Press, 2002.
Gennaro, R.J. “Brute Experience and the Higher-Order Thought Theory of Consciousness.” In Philosophical Papers 22: 51-69, 1993.
Gennaro, R.J. Consciousness and Self-consciousness: A Defense of the Higher-Order Thought Theory of Consciousness. Amsterdam & Philadelphia: John Benjamins, 1996a.
Gennaro, R.J. Mind and Brain: A Dialogue on the Mind-Body Problem. Indianapolis: Hackett Publishing Company, 1996b.
Gennaro, R.J. “Leibniz on Consciousness and Self Consciousness.” In R. Gennaro & C. Huenemann, eds. New Essays on the Rationalists. New York: Oxford University Press, 1999.
Gennaro, R.J. “Jean-Paul Sartre and the HOT Theory of Consciousness.” In Canadian Journal of Philosophy 32: 293-330, 2002.
Gennaro, R.J. “Higher-Order Thoughts, Animal Consciousness, and Misrepresentation: A Reply to Carruthers and Levine,” 2004.  In Gennaro 2004a.
Gennaro, R.J., ed. Higher-Order Theories of Consciousness: An Anthology. Amsterdam and Philadelphia: John Benjamins, 2004a.
Gennaro, R.J. “The HOT Theory of Consciousness: Between a Rock and a Hard Place?” In Journal of Consciousness Studies 12 (2): 3-21, 2005.
Gennaro, R.J. “Between Pure Self-referentialism and the (extrinsic) HOT Theory of Consciousness.” In Kriegel and Williford 2006.
Goldman, A. “Consciousness, Folk Psychology and Cognitive Science.” In Consciousness and Cognition 2: 264-82, 1993.
Graham, G. “Recent Work in Philosophical Psychopathology.” In American Philosophical Quarterly 39: 109-134, 2002.
Gunther, Y. ed. Essays on Nonconceptual Content. Cambridge, MA: MIT Press, 2003.
Guzeldere, G. “Is Consciousness the Perception of what passes in one’s own Mind?” In Metzinger 1995.
Hameroff, S. “Quantum Computation in Brain Microtubules? The Penrose-Hameroff ‘Orch OR’ Model of Consciousness.” In Philosophical Transactions Royal Society London A 356: 1869-96, 1998.
Hardin, C. Color for Philosophers. Indianapolis: Hackett, 1986.
Harman, G. “The Intrinsic Quality of Experience.” In J. Tomberlin, ed. Philosophical Perspectives, 4. Atascadero, CA: Ridgeview Publishing, 1990.
Heidegger, M. Being and Time (Sein und Zeit). Translated by J. Macquarrie and E. Robinson. New York: Harper and Row, 1927/1962.
Hill, C. S. “Imaginability, Conceivability, Possibility, and the Mind-Body Problem.” In Philosophical Studies 87: 61-85, 1997.
Hill, C. and McLaughlin, B. “There are fewer things in Reality than are dreamt of in Chalmers’ Philosophy.” In Philosophy and Phenomenological Research 59: 445-54, 1998.
Horgan, T. and Tienson, J. “The Intentionality of Phenomenology and the Phenomenology of Intentionality.” In Chalmers 2002.
Husserl, E. Ideas: General Introduction to Pure Phenomenology (Ideen au einer reinen Phänomenologie und phänomenologischen Philosophie). Translated by W. Boyce Gibson. New York: MacMillan, 1913/1931.
Husserl, E. Cartesian Meditations: an Introduction to Phenomenology. Translated by Dorian Cairns. The Hague: M. Nijhoff, 1929/1960.
Jackson, F. “Epiphenomenal Qualia.” In Philosophical Quarterly 32: 127-136, 1982.
Jackson, F. “What Mary didn’t Know.” In Journal of Philosophy 83: 291-5, 1986.
James, W. The Principles of Psychology. New York: Henry Holt & Company, 1890.
Kant, I. Critique of Pure Reason. Translated by N. Kemp Smith. New York: MacMillan, 1781/1965.
Keenan, J., Gallup, G., and Falk, D. The Face in the Mirror. New York: HarperCollins, 2003.
Kim, J. “The Myth of Non-Reductive Physicalism.” In Proceedings and Addresses of the American Philosophical Association, 1987.
Kim, J. Supervenience and Mind. Cambridge: Cambridge University Press, 1993.
Kim, J. Mind in a Physical World. Cambridge, MA: MIT Press, 1998.
Kind, A. “What’s so Transparent about Transparency?” In Philosophical Studies 115: 225-244, 2003.
Kirk, R. Raw Feeling. New York: Oxford University Press, 1994.
Kitcher, P. Kant’s Transcendental Psychology. New York: Oxford University Press, 1990.
Kobes, B. “Telic Higher-Order Thoughts and Moore’s Paradox.” In Philosophical Perspectives 9: 291-312, 1995.
Koch, C. The Quest for Consciousness: A Neurobiological Approach. Englewood, CO: Roberts and Company, 2004.
Kriegel, U. “PANIC Theory and the Prospects for a Representational Theory of Phenomenal Consciousness.” In Philosophical Psychology 15: 55-64, 2002.
Kriegel, U. “Consciousness, Higher-Order Content, and the Individuation of Vehicles.” In Synthese 134: 477-504, 2003a.
Kriegel, U. “Consciousness as Intransitive Self-Consciousness: Two Views and an Argument.” In Canadian Journal of Philosophy 33: 103-132, 2003b.
Kriegel, U. “Consciousness and Self-Consciousness.” In The Monist 87: 182-205, 2004.
Kriegel, U. “Naturalizing Subjective Character.” In Philosophy and Phenomenological Research, forthcoming.
Kriegel, U. “The Same Order Monitoring Theory of Consciousness.” In Kriegel and Williford 2006.
Kriegel, U. & Williford, K. Self-Representational Approaches to Consciousness. Cambridge, MA: MIT Press, 2006.
Kripke, S. Naming and Necessity. Cambridge, MA: Harvard University Press, 1972.
Leibniz, G. W. Discourse on Metaphysics. Translated by D. Garber and R. Ariew. Indianapolis: Hackett, 1686/1991.
Leibniz, G. W. The Monadology. Translated by R. Latta. London: Oxford University Press, 1720/1925.
Levine, J. “Materialism and Qualia: the Explanatory Gap.” In Pacific Philosophical Quarterly 64,354-361, 1983.
Levine, J. “On Leaving out what it’s like.” In M. Davies and G. Humphreys, eds. Consciousness: Psychological and Philosophical Essays. Oxford: Blackwell, 1993.
Levine, J. Purple Haze: The Puzzle of Consciousness. New York: Oxford University Press, 2001.
Loar, B. “Phenomenal States.” In Philosophical Perspectives 4, 81-108, 1990.
Loar, B. “Phenomenal States”. In N. Block, O. Flanagan, and G. Guzeldere eds. The Nature of Consciousness. Cambridge, MA: MIT Press, 1997.
Loar, B. “David Chalmers’s The Conscious Mind.” Philosophy and Phenomenological Research 59: 465-72, 1999.
Locke, J. An Essay Concerning Human Understanding. Ed. P. Nidditch. Oxford: Clarendon, 1689/1975.
Ludlow, P., Nagasawa, Y, & Stoljar, D. eds. There’s Something about Mary. Cambridge, MA: MIT Press, 2004.
Lurz, R. “Neither HOT nor COLD: An Alternative Account of Consciousness.” In Psyche 9, 2003.
Lurz, R. “Either FOR or HOR: A False Dichotomy.” In Gennaro 2004a.
Lycan, W.G. Consciousness and Experience. Cambridge, MA: MIT Press, 1996.
Lycan, W.G. “A Simple Argument for a Higher-Order Representation Theory of Consciousness.” Analysis 61: 3-4, 2001.
Lycan, W.G. “The Superiority of HOP to HOT.” In Gennaro 2004a.
Macpherson, F. “Colour Inversion Problems for Representationalism.” In Philosophy and Phenomenological Research 70: 127-52, 2005.
Mandler, G. Mind and Emotion. New York: Wiley, 1975.
Marshall, J. and Zohar, D. The Quantum Self: Human Nature and Consciousness Defined by the New Physics. New York: Morrow, 1990.
McGinn, C. “Can we solve the Mind-Body Problem?” In Mind 98:349-66, 1989.
McGinn, C. The Problem of Consciousness. Oxford: Blackwell, 1991.
McGinn, C. “Consciousness and Space.” In Metzinger 1995.
Metzinger, T. ed. Conscious Experience. Paderborn: Ferdinand Schöningh, 1995.
Metzinger, T. ed. Neural Correlates of Consciousness: Empirical and Conceptual Questions. Cambridge, MA: MIT Press, 2000.
Moore, G. E. “The Refutation of Idealism.” In G. E. Moore Philosophical Studies. Totowa, NJ: Littlefield, Adams, and Company, 1903.
Nagel, T. “What is it like to be a Bat?” In Philosophical Review 83: 435-456, 1974.
Natsoulas, T. “The Case for Intrinsic Theory I. An Introduction.” In The Journal of Mind and Behavior 17: 267-286, 1996.
Neander, K. “The Division of Phenomenal Labor: A Problem for Representational Theories of Consciousness.” In Philosophical Perspectives 12: 411-434, 1998.
Papineau, D. Philosophical Naturalism. Oxford: Blackwell, 1994.
Papineau, D. “The Antipathetic Fallacy and the Boundaries of Consciousness.” In Metzinger 1995.
Papineau, D. “Mind the Gap.” In J. Tomberlin, ed. Philosophical Perspectives 12. Atascadero, CA: Ridgeview Publishing Company, 1998.
Papineau, D. Thinking about Consciousness. Oxford: Oxford University Press, 2002.
Perry, J. Knowledge, Possibility, and Consciousness. Cambridge, MA: MIT Press, 2001.
Penrose, R. The Emperor’s New Mind: Computers, Minds and the Laws of Physics. Oxford: Oxford University Press, 1989.
Penrose, R. Shadows of the Mind. Oxford: Oxford University Press, 1994.
Place, U. T. “Is Consciousness a Brain Process?” In British Journal of Psychology 47: 44-50, 1956.
Polger, T. Natural Minds. Cambridge, MA: MIT Press, 2004.
Preston, J. and Bishop, M. eds. Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. New York: Oxford University Press, 2002.
Ramachandran, V.S. A Brief Tour of Human Consciousness. New York: PI Press, 2004.
Ramachandran, V.S. and Blakeslee, S. Phantoms in the Brain. New York: Harper Collins, 1998.
Robinson, W.S. Understanding Phenomenal Consciousness. New York: Cambridge University Press, 2004.
Rosenthal, D. M. “Two Concepts of Consciousness.” In Philosophical Studies 49:329-59, 1986.
Rosenthal, D. M. “The Independence of Consciousness and Sensory Quality.” In E. Villanueva, ed. Consciousness. Atascadero, CA: Ridgeview Publishing, 1991.
Rosenthal, D.M. “State Consciousness and Transitive Consciousness.” In Consciousness and Cognition 2: 355-63, 1993a.
Rosenthal, D. M. “Thinking that one thinks.” In M. Davies and G. Humphreys, eds. Consciousness: Psychological and Philosophical Essays. Oxford: Blackwell, 1993b.
Rosenthal, D. M. “A Theory of Consciousness.” In N. Block, O. Flanagan, and G. Guzeldere, eds. The Nature of Consciousness. Cambridge, MA: MIT Press, 1997.
Rosenthal, D. M. “Introspection and Self-Interpretation.” In Philosophical Topics 28: 201-33, 2000.
Rosenthal, D. M. “Varieties of Higher-Order Theory.” In Gennaro 2004a.
Ryle, G. The Concept of Mind. London: Hutchinson and Company, 1949.
Sacks, O. The Man Who Mistook His Wife for a Hat and Other Essays. New York: Harper and Row, 1987.
Sartre, J.P. Being and Nothingness. Trans. Hazel Barnes. New York: Philosophical Library, 1956.
Seager, W. Theories of Consciousness. London: Routledge, 1999.
Seager, W. “A Cold Look at HOT Theory.” In Gennaro 2004a.
Searle, J. “Minds, Brains, and Programs.” In Behavioral and Brain Sciences 3: 417-57, 1980.
Searle, J. Minds, Brains and Science. Cambridge, MA: Harvard University Press, 1984.
Searle, J. The Rediscovery of the Mind. Cambridge. MA: MIT Press, 1992.
Siewert, C. The Significance of Consciousness. Princeton, NJ: Princeton University Press, 1998.
Shallice, T. From Neuropsychology to Mental Structure. Cambridge: Cambridge University Press, 1988.
Shear, J. ed. Explaining Consciousness: The Hard Problem. Cambridge, MA: MIT Press, 1997.
Shoemaker, S. “Functionalism and Qualia.” In Philosophical Studies, 27, 291-315, 1975.
Shoemaker, S. “Absent Qualia are Impossible.” In Philosophical Review 90, 581-99, 1981.
Shoemaker, S. “The Inverted Spectrum.” In Journal of Philosophy, 79, 357-381, 1982.
Silberstein, M. “Emergence and the Mind-Body Problem.” In Journal of Consciousness Studies 5: 464-82, 1998.
Silberstein, M. “Converging on Emergence: Consciousness, Causation and Explanation.” In Journal of Consciousness Studies 8: 61-98, 2001.
Skinner, B. F. Science and Human Behavior. New York: MacMillan, 1953.
Smart, J.J.C. “Sensations and Brain Processes.” In Philosophical Review 68: 141-56, 1959.
Smith, D.W. “The Structure of (self-)consciousness.” In Topoi 5: 149-56, 1986.
Smith, D.W. Mind World: Essays in Phenomenology and Ontology. Cambridge, MA: Cambridge University Press, 2004.
Stubenberg, L. Consciousness and Qualia. Philadelphia & Amsterdam: John Benjamins Publishers, 1998.
Swinburne, R. The Evolution of the Soul. Oxford: Oxford University Press, 1986.
Thau, M. Consciousness and Cognition. Oxford: Oxford University Press, 2002.
Titchener, E. An Outline of Psychology. New York: Macmillan, 1901.
Turing, A. “Computing Machinery and Intelligence.” In Mind 59: 433-60, 1950.
Tye, M. Ten Problems of Consciousness. Cambridge, MA: MIT Press, 1995.
Tye, M. Consciousness, Color, and Content. Cambridge, MA: MIT Press, 2000.
Tye, M. Consciousness and Persons. Cambridge, MA: MIT Press, 2003.
Van Gulick, R. “Physicalism and the Subjectivity of the Mental.” In Philosophical Topics 13, 51-70, 1985.
Van Gulick, R. “Nonreductive Materialism and Intertheoretical Constraint.” In A. Beckermann, H. Flohr, J. Kim, eds. Emergence and Reduction. Berlin and New York: De Gruyter, 1992.
Van Gulick, R. “Understanding the Phenomenal Mind: Are we all just armadillos?” In M. Davies and G. Humphreys, eds., Consciousness: Psychological and Philosophical Essays. Oxford: Blackwell, 1993.
Van Gulick, R. “What would count as Explaining Consciousness?” In Metzinger 1995.
Van Gulick, R. “Inward and Upward: Reflection, Introspection and Self-Awareness.” In Philosophical Topics 28: 275-305, 2000.
Van Gulick, R. “Higher-Order Global States HOGS: An Alternative Higher-Order Model of Consciousness.” In Gennaro 2004a.
Van Gulick, R. “Mirror Mirror – is that all?” In Kriegel and Williford 2006.
Weiskrantz, L. Blindsight. Oxford: Clarendon, 1986.
Wilkes, K. V. “Is Consciousness Important?” In British Journal for the Philosophy of Science 35: 223-43, 1984.
Wilkes, K. V. “Yishi, Duo, Us and Consciousness.” In A. Marcel & E. Bisiach, eds., Consciousness in Contemporary Science. Oxford: Oxford University Press, 1988.
Williford, K. “The Self-Representational Structure of Consciousness.” In Kriegel and Williford 2006.
Wundt, W. Outlines of Psychology. Leipzig: W. Engelmann, 1897.
Yablo, S. “Concepts and Consciousness.” In Philosophy and Phenomenological Research 59: 455-63, 1999.

Rocco J. Gennaro, Email: rocco@indstate.edu, Indiana State University

Cognition, Consciousness, and Physics

REVIEW OF: Roger Penrose (1994) Shadows of the Mind. New York: Oxford University Press.

1. Introduction

1.1 Physics is surely the most beautiful of the sciences, and it is esthetically tempting to suppose that two of the great scientific mysteries we confront today, observer effects in quantum mechanics and conscious experience, are in fact the same. Roger Penrose is an admirable contributor to modern physics and mathematics, and his new book, Shadows of the Mind (SOTM), offers us some brilliant intellectual fireworks — which, for me at least, faded rapidly on further examination.

1.2 I felt disappointed for several reasons, but an obvious one is this: Is consciousness really a physics problem? Penrose writes,

A scientific world-view which does not profoundly come to terms with the problem of conscious minds can have no serious pretensions of completeness. Consciousness is part of our universe, so any physical theory which makes no proper place for it falls fundamentally short of providing a genuine description of the world. I would maintain that there is yet no physical, biological, or computational theory that comes very close to explaining our consciousness … (emphasis added)

1.3 Having spent 17 years of my life trying to do precisely what Penrose suggests has not and cannot be done, I found this point a bit disconcerting. But even more surprising was the claim that consciousness is a problem in physics. The conscious beings we see around us are the products of billions of years of biological evolution. We interact with them — with each other — at a level that is best described as psychological. All of our evidence regarding consciousness depends upon reports of personal experiences, and observation of our own perception, memories, attention, imagery, and the like. The evidence therefore would seem to be exclusively psychobiological. We will come back to this question.

1.4 The argument in SOTM comes down to two theses and a statement of faith. The first thesis I will call the “Turing Impossibility Proof,” and the second, the “Quantum Promissory Note”. The statement of faith involves classical Platonism of the mathematical variety, founded in a sense of certainty and wonder at the amazing success of mathematical thought over the last 25 centuries, and the extraordinary ability of mathematical formalisms to yield deep insight into scientific questions (SOTM, p. 413). This view may be captured by Einstein’s well-known saying that “the miraculous thing about the universe is that it is comprehensible.” While I share Penrose’s admiration for mathematics, I do not share the belief in the absolute nature of mathematical thought that leads him to postulate a realm of special conscious insight requiring no empirical investigation to be understood.

1.5 After considering the argument of SOTM I will briefly sketch the current scientific alternative, the emerging psychobiology of consciousness (see Baars, 1988, 1994; Edelman, 1989; Newman and Baars, 1993; Schacter, 1990; Gazzaniga, 1994). Though the large body of current evidence can be stated in purely objective terms, I will strive to demonstrate the phenomena by appealing to the reader’s personal experience, such as your consciousness of the words on this page, the inner speech that often goes with the act of reading carefully, and so on. Such demonstrations help to establish that we are indeed talking about consciousness as such.

2. Has Science Failed To Understand Consciousness?

2.1 Central to SOTM is Penrose’s contention that contemporary science has failed to understand consciousness. There is more than a little truth to that — if we exclude the last decade — but it is based on a great historical misunderstanding: It assumes that psychologists and biologists have tried to understand human experience with anything like the persistence and talent routinely devoted to memory, language, and perception. The plain fact is that we have not treated the issue seriously until very recently. It may be difficult for physicists to understand this — current physics does not seem to be intimidated by anything — but the subject of conscious experience, the great core question of traditional philosophy, has simply been taboo in psychology and biology for most of this century. I agree with John Searle that this is a scandalous fact, which should be a great source of embarrassment to us in cognitive psychology and neuroscience. But no one familiar with the field could doubt it. As Crick and Koch (1992) have written, “For many years after James penned The Principles of Psychology (1890) . . . most cognitive scientists ignored consciousness, as did almost all neuroscientists. The problem was felt to be either purely ‘philosophical’ or too elusive to study experimentally. . . . In our opinion, such timidity is ridiculous.”

2.2 Fortunately the era of avoidance is visibly fading. First-order theories are now available, and have not by any means been disproved (Baars, 1983, 1988, and in press; Crick & Koch, 1992; Edelman, 1989; Gazzaniga, 1994; Schacter, 1990; Kinsbourne, 1993; etc.). In fact, there are significant commonalities among contemporary theories of consciousness, so that one could imagine a single, integrative hybrid theory with relative ease. But Penrose does not deal with this literature at all.

2.3 Has science failed, and do we need a scientific revolution? Given the fact that we have barely begun to apply normal science to the topic, Penrose’s call for a scientific revolution seems premature at best. There is yet nothing to revolt against. Of course we should be ready to challenge our current assumptions. But it has not been established by any means that ordinary garden-variety conscious experience cannot be explained through a diligent pursuit of normal science.

3. A Critique Of The Turing “Impossibility Proof”

3.1 Impossibility arguments have a mixed record in science. On one side is the proof scribbled on the back of an envelope by physicists in the Manhattan Project, showing that the first enriched uranium explosion would not trigger a chain reaction destroying the planet. But notice that this was not a purely mathematical proof; it was a physical-chemical-mathematical reductio, with a very well-established, indispensable empirical basis. On the side of pure mathematics, we have such historical examples as Bishop Berkeley’s disproof of Newton’s use of infinitesimals in the calculus. Berkeley was mathematically right but the point was empirically irrelevant; physicists used the flawed calculus for two hundred years with great scientific success, until the nineteenth-century theory of limits and convergent series resolved the paradox.

3.2 Even more empirically irrelevant was Zeno’s famous Paradox, which seemed to show that we cannot walk a whole step, since we must first cover half a step, then half of half a step, then half of the remaining distance, and the like, never reaching the whole intended step. Zeno of Elea used this clever argument to prove to the astonishment of the world that motion was impossible. But that did not paralyze commerce. Ships sailed, people walked, and camels trudged calmly on their way doing the formally impossible thing for a couple of thousand years until the formal solution emerged. And of course we have more than a century of mathematical reductios claiming that Darwinian evolution is impossible if you combine all the a priori probabilities of carbon chains evolving into DNA and ending up with thee and me. These reductios on behalf of divine Creation still appear with regularity, but the biological evidence is so strong that they are not even considered.
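
For the record, the formal solution is simply the convergence of the geometric series: Zeno’s infinitely many half-steps sum to one finite step,

$$\sum_{n=1}^{\infty} \frac{1}{2^{n}} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1.$$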

3.3 The problem is of course that a mathematical model is only as good as its assumptions, and those depend upon the quality of the evidence. The whole Turing Machine debate and its putative implications for consciousness is in my opinion a great distraction from the sober scientific job of gathering evidence and developing theory about the psychobiology of consciousness (e.g., Baars, 1988; 1994). The notion that the Turing argument actually tells us something scientifically useful is amazingly vulnerable. After all, the theory assumes an abstract automaton blessed with infinite time, infinite memory, and an environment that imposes no resource constraints. The brain is a massively parallel organ with 100 billion simultaneously active neurons, but the Turing Machine is at the extreme end of serial machines. This may be why the Turing topic appears nowhere in the psychobiological literature; it seems primarily limited to philosophy and the general intellectual media.
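
To make concrete just how austere this model is, here is a minimal sketch of a Turing Machine step loop: one head, one tape, one transition table, strictly serial. (The Python code and the toy machine are my own illustration, not anything from SOTM.)

    # A minimal deterministic single-tape Turing Machine. Illustrative only.
    def run_turing_machine(transitions, tape, state="start", max_steps=1000):
        # transitions: (state, symbol) -> (new_state, new_symbol, move),
        # where move is -1 (left), 0 (stay) or +1 (right).
        cells = dict(enumerate(tape))   # sparse tape; blanks default to "_"
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                return state, cells
            symbol = cells.get(head, "_")
            state, cells[head], move = transitions[(state, symbol)]
            head += move
        return state, cells  # gave up; no general test can say if it ever halts

    # Toy machine: flips each bit until it reads a blank, then halts.
    flipper = {
        ("start", "0"): ("start", "1", +1),
        ("start", "1"): ("start", "0", +1),
        ("start", "_"): ("halt", "_", 0),
    }
    print(run_turing_machine(flipper, "0110"))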

3.4 Finally, it turns out that all current cognitive and neural models are formal Turing equivalents. That means the mathematical theory is useless in the critical task of choosing between models that are quite different computationally and on the evidence. It does not distinguish between neural nets and symbolic architectures for example, as radically different as they are in practice. But that is exactly the challenge we face today: choosing between theories based on their fit with the evidence. Here the theory of automata is no help at all.

3.5 A small but telling fact about Penrose’s book caught my attention: of its more than 400 references, fewer than forty address the psychology or biology of consciousness. But all our evidence on the subject is psychological and, to a lesser extent, biological! It appears that Penrose’s topic is not consciousness in the ordinary psychoneural sense, like waking up in the morning from a deep sleep or listening to music. How the positive proposals in SOTM relate to normal psychobiological consciousness is only addressed in terms of a technical hypothesis. Stuart Hameroff, an anesthesiologist at the University of Arizona currently working with Penrose, has proposed that general anesthetics interact with neurons via quantum level events in neural microtubules, which transport chemicals down axons and dendrites. It is an interesting idea, but it is by no means accepted, and there are many alternative hypotheses about anesthetics. But it is a real hypothesis: testable, relevant to the issue of consciousness, and directly aimed at the quantum level.

3.6 Penrose calls attention to the inability of Turing Machines to know when to stop a possibly nonterminating computation. This is a form of the Goedel Theorem, from which Penrose draws the following conclusion: “Human mathematicians are not using a knowably sound algorithm in order to ascertain mathematical truth.” That is to say, if humans can propose a Halting Rule which turns out to be demonstrably correct, and if we take Turing Machines as models of mathematicians, then the ability of mathematicians to come up with Halting Rules shows that their mental processes are not Turing-computable.
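
The undecidability behind this can be stated in a few lines. The sketch below is the standard textbook diagonalization, written in Python only for concreteness; the decider 'halts' is hypothetical, which is the whole point.

    # Classic diagonalization: suppose a perfect halting decider existed.
    def halts(program, argument):
        # Hypothetical oracle: True iff program(argument) would halt.
        raise NotImplementedError("no total, correct decider can exist")

    def diagonal(program):
        # Do the opposite of whatever 'halts' predicts about self-application.
        if halts(program, program):
            while True:          # predicted to halt? loop forever instead
                pass
        return "done"            # predicted to loop? halt at once

    # diagonal(diagonal) contradicts any answer 'halts' could give:
    # True means it loops forever; False means it halts at once. So no
    # such 'halts' exists. This is the limit Penrose builds his argument on.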

3.7 I’m troubled by this argument, because all of the cognitive studies I know of human formal reasoning and logic show that humans will take any shortcut available to find a plausible answer for a formal problem; actually following out formalisms mentally is rare in practice, even among scientists and engineers. Human beings are not algorithmic creatures; they prefer by far to use heuristic, fly-by-the-seat-of-your-pants analogies to situations they know well. Even experts typically use heuristic shortcuts. Furthermore, the apparent reductio of Penrose’s claim has a straightforward alternative explanation, namely that one of the premises is plain wrong. The implication psychologically is not that people are fancier than any Turing Machine, but that they are much sloppier than any explicit algorithm, and yet do quite well in many cases.

3.8 The fact that people can walk is an effective counter to Zeno’s Paradox. The fact that people can talk in sentences was Chomsky’s counter to stimulus-response theories of language. Now we know that people can in many cases find Halting Rules. It’s not that human processes are noncomputable by a real computer — numerous mental processes have been simulated with computers, including some formidable ones like playing competitive chess — but rather that the formal straitjacket of Turing Machinery is simply the wrong model to apply. This is the fallacy in trying to attribute rigorous all-or-none logical reasoning to ordinary human beings, who are pragmatic, heuristic, cost-benefit gamblers when it comes to solving formal problems.

3.9 Penrose proceeds to deduce that consciousness is noncomputable by Turing standards. But even this claim is based only on intuition; the argument has the form, “mathematicians have an astonishingly good record gaining fundamental insights into a variety of formal systems; this is obviously impossible for a Turing automaton; hence mathematicians themselves cannot be modeled by such automatons.” From a psychobiological point of view the success of mathematical intuition more likely reflects the nervous system’s excellent heuristics for discovering patterns in the world. The brain appears to have sophisticated knowledge of space, for example, which may in turn allow deep geometrical intuitions to occur with great accuracy in talented individuals. In effect, we may put a billion years of brain evolution of spatial processing to good use if we are fortunate enough to be mathematically talented.

4. The Quantum Promissory Note

4.1 Having argued that Turing machines cannot account for mathematical intuition, Penrose develops the idea that Quantum Mechanics will provide a solution. QM is the crown jewel of modern theoretical physics, an endless source of insight and speculation. It shows extraordinary observer paradoxes. Consciousness is a mysterious something human observers have, and many people leap to the inference that the two observer mysteries must be the same. But this is at best a leap of faith. It is much too facile: observations of quantum events are not made directly by human beings but by such devices as Geiger counters with no consciousness in any reasonable sense of the word. Conscious experience, so far as we know, is limited to huge biological nervous systems, produced over a billion years of evolution.

4.2 There is no precedent for physicists deriving from QM any macrolevel phenomenon such as a chair or a flower or a wad of chewing gum, much less a nervous system with 100 billion neurons. Why then should we believe that one can derive psychobiological consciousness from QM? QM has not been shown to give any psychological answers. Conscious experience as we know it in humans has no resemblance to recording the collapse of a quantum wave packet. Let’s not confuse the mysteries of QM with the question of the reader’s perception of this printed phrase, or the inner sound of these words!

4.3 What can we make of Penrose’s Quantum Promissory Note? All scientific programs are promissory notes, making projections about the future and betting on what we may possibly find. The Darwin program was a promissory note, the Human Genome project is, as are particle physics and consciousness research. How do you place your bets? Is there a track record? Is there any evidence?

5. Treating Consciousness As A Variable: The Evidence For Consciousness As Such

5.1 We are barely at the point of agreeing on the real scientific questions, and on the kind of theory that could address them. On the matter of evidence, Baars (1983, 1988, 1994 and in press), Libet (1985) and others have argued that empirical constraints bearing on consciousness involve a close comparison of very similar conscious and unconscious processes. As elsewhere in science, we can only study a phenomenon if we can treat it as a variable. Many scientific breakthroughs result from the realization that previously assumed constants, like atmospheric pressure, frictionless movement, the uniformity of space, the velocity and mass of the Newtonian universe, and the like, were actually variables, and that is the aim here. In the case of consciousness we can conduct a contrastive analysis, comparing waking with sleep, coma, and general anesthesia; subliminal with supraliminal perception; habituated with novel stimuli; attended with nonattended streams of information; recalled with nonrecalled memories; and the like. In all these cases there is evidence that the conscious and unconscious events are comparable in many respects, so that we can validly probe for the essential differences between otherwise similar conscious and unconscious events (see Greenwald, 1992; Weiskrantz, 1986; Schacter, 1990).

5.2 This “method of contrastive analysis” is much like the experimental method: We can examine closely comparable cases that differ only in respect to consciousness, so that consciousness becomes, in effect, a variable. However, instead of dealing with only one experimental data set, contrastive analysis involves entire categories of well-established phenomena, summarizing numerous experimental studies. In this way we can highlight the variables that constrain consciousness over a very wide range of cases. The resulting robust pattern of evidence places major constraints on theory (Baars, 1988; in press).
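
One crude way to picture the logic (my own illustration, not Baars’s notation): each row pairs a conscious phenomenon with its closely matched unconscious counterpart, and theory must explain what differs within each row.

    # Contrastive analysis as data: matched conscious/unconscious phenomena.
    # The pairs are the review's own examples; the structure is just an
    # illustration of treating consciousness as a variable.
    CONTRASTIVE_PAIRS = [
        ("waking",                  "sleep, coma, general anesthesia"),
        ("supraliminal perception", "subliminal perception"),
        ("novel stimuli",           "habituated stimuli"),
        ("attended streams",        "nonattended streams"),
        ("recalled memories",       "nonrecalled memories"),
    ]
    for conscious, unconscious in CONTRASTIVE_PAIRS:
        print(f"{conscious:25s} vs {unconscious}")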

6. Can Penrose Deal With Unconscious Information Processing?

6.1 Like many psychologists before 1900, Penrose appears to deny unconscious mental processes altogether. This is apparently because his real criterion is introspective access to the world of formal ideas. But introspection is impossible for unconscious events, and so the tendency for those who rely on introspection alone is to disbelieve the vast domain of unconscious processes.

6.2 Unconscious processing can be inferred from numerous sources of objective evidence. The simplest case is the great multitude of your memories that are currently unconscious. You can now recall this morning’s breakfast — but what happened to that memory before you brought it to mind? There is much evidence that even before recall the memory of breakfast was still represented in the nervous system, though not consciously. For example, we know that unconscious memories can influence other processes without ever coming to mind. If you had orange juice for breakfast today you may switch to milk tomorrow, even without bringing today’s juice to mind. A compelling case can be made for unconscious representation of habituated stimuli, of memories before and after recall, automatic skills, implicit learning, the rules of syntax, unattended speech, presupposed knowledge, preconscious input processing, and many other phenomena. In recent years a growing body of neurophysiological evidence has provided convergent confirmation of these claims. Researchers still argue about some of the particulars, but it is widely agreed that given adequate evidence, unconscious processes may be inferred.

6.3 What is the critical difference then between comparable conscious and unconscious processes? There are several, but perhaps the most significant one is that conscious percepts and images can trigger access to unanticipated knowledge sources. It is as if the conscious event is broadcast to memory, skill control, decision-making functions, anomaly detectors, and the like, allowing us to match the input with related memories, use it as a cue for skilled actions or decisions, and detect problems in the input. At a broad architectural level, conscious representations seem to provide access to multiple knowledge sources in the nervous system, while unconscious ones seem to be relatively isolated. The same conclusion follows from other contrastive analyses (see Baars, 1988).
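
What is described here is the core of Baars’s Global Workspace model. A toy sketch of the broadcast idea (my own drastic simplification, not code from any of the cited works):

    # Toy "global broadcast": one conscious content is made available to
    # many otherwise isolated processors. Illustration only.
    class Processor:
        def __init__(self, name):
            self.name = name
        def receive(self, content):
            print(f"{self.name} processes: {content!r}")

    audience = [Processor("memory"), Processor("skill control"),
                Processor("decision making"), Processor("anomaly detection")]

    def broadcast(conscious_content):
        # Unconscious contents stay local; conscious ones go system-wide.
        for processor in audience:
            processor.receive(conscious_content)

    broadcast("percept: the word on this page")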

6.4 None of this evidence appears to fit in the SOTM framework, because it has no role for unconscious but vitally important information processing. This is a major point on which the great weight of psychobiological evidence and SOTM are fundamentally at odds.

7. The Emerging Psychobiology Of Consciousness

7.1 The really daring idea in contemporary science is that consciousness may be understandable without miracles, just as Darwin’s revolutionary idea was that biological variation could be understood as a purely natural phenomenon. We are beginning to see human conscious experience as a major biological adaptation, with multiple functions. It seems as if a conscious event becomes available throughout the brain to the neural mechanisms of memory, skill control, decision-making, anomaly detection, and the like, allowing us to match our experiences with related memories, use them as cues for skilled actions or decisions, and detect anomalies in them. By comparison, unconscious events seem to be relatively isolated. Thus consciousness is not just any kind of knowledge: It is knowledge that is widely distributed, that triggers off widespread unconscious processing, has multiple integrative and coordinating functions, aids in decision-making, problem-solving and action control, and provides information to a self-system.

8. Conclusion

8.1 I don’t know if consciousness has some profound metaphysical relation to physics. Science is notoriously unpredictable over the long term, and there are tricky mind-body paradoxes that may ultimately demand a radical solution. But at this point in the vexed history of the problem there is little question about the preferable scientific approach. It is not to try to solve the mind-body problem first — that effort has a poor track record — or to pursue lovely but implausible speculations. It is simply to do good science using consciousness as a variable, and investigating its relations to other psychobiological variables.

References

Baars, B.J. (1983). Conscious contents provide the nervous system with coherent, global information. In R. Davidson, G. Schwartz, & D. Shapiro (Eds.), Consciousness and self-regulation, 3, 45-76. New York: Plenum Press.

Baars, B.J. (1988) A cognitive theory of consciousness. Cambridge, UK: Cambridge University Press.

Baars, B.J. (1994) A thoroughly empirical approach to consciousness. PSYCHE 1(6) [80 paragraphs] URL: http://psyche.cs.monash.edu.au/volume1/psyche-94-1-6-contrastive-1-baars.html

Baars, B.J. (in press) Consciousness regained: The new science of human experience. Oxford, UK: Oxford University Press.

Crick, F.H.C. & Koch, C. (1992) The problem of consciousness, Scientific American, 267(3), 153-159.

Edelman, G. (1989) The remembered present: A biological theory of consciousness. NY: Basic Books.

Gazzaniga, M. (1994) Cognitive neuroscience. Cambridge, MA: MIT Press.

Greenwald, A. (1992). New Look 3: Unconscious cognition reclaimed. American Psychologist, 47(6), 766-779.

James, W. (1890/1983). The principles of psychology. Cambridge, MA: Harvard University Press.

Kinsbourne, M. (1993). Integrated field model of consciousness. In G. R. Bock & J. Marsh (Eds.), CIBA symposium on experimental and theoretical studies of consciousness (pp. 51-60). London: Wiley Interscience.

Libet, B. (1985) Unconscious cerebral initiative and the role of conscious will in voluntary action. Behavioral and Brain Sciences, 8, 529-66.

Newman, J., & Baars, B. J. (1993). A neural attentional model for access to consciousness: A Global Workspace perspective. Concepts In Neuroscience, 2(3), 3-25.

Penrose, R. (1994) Shadows of the mind. Oxford, UK: Oxford University Press.

Schacter, D. L. (1990). Toward a cognitive neuropsychology of awareness: Implicit knowledge and anosognosia. Journal of Clinical and Experimental Neuropsychology, 12(1), 155-178.

Weiskrantz, L. (1986) Blindsight: A single case and its implications. Oxford, UK: Clarendon Press.

Can Physics Provide a Theory of Consciousness? A Review of Shadows of the Mind by Roger Penrose by Bernard J. Baars, baars@cogsci.berkeley.edu, Copyright (c) Bernard J. Baars 1995

Social Neuroscience

Continuing with the theme of basic science, here is another article that should be in the armoury of the rationalist.

What is social neuroscience?

Social neuroscience may be broadly defined as the exploration of the neurological underpinnings of the processes traditionally examined by, but not limited to, social psychology. This broad description provides a starting point from which we may examine the neuroscience of social behavior and cognition.

However, we see this definition as a guide, rather than as a rule, and, as such, we see this field as inclusive rather than exclusive. The behaviors and cognitions studied under the umbrella “social” are diverse. From complex human interactions to the most basic animal relationships, social research is an expansive, diverse, and complex domain. Likewise, exploring the neurological underpinnings allows for equally assorted and varied lines of research. The combination of the areas reflects such diversity, with research performed in domains as wide-reaching as the maternal behavior of knockout mice and endocast examinations of early Australopithecus.

Guiding social neuroscience research, whatever one chooses as the definition, should be our desire to understand the complex and dynamic relationship between the brain (and its related systems) and social interaction. Historically, these fields (i.e., neuroscience and social psychology) interacted only weakly, with few formal ties between the two. However, and not unnoticed by ourselves or our suspected readership, when the fields do combine, the resulting research is inevitably exciting and meaningful, not just to academics, but to the general public as well.

These underlying concepts are difficult to research. For instance, many social psychologists emphasize situationism (based on the belief in the significance of the situation) as opposed to personality, although many recognize the combination of situation and personality as the best predictor of social behavior (Fiske, 2004). While the former aspect should not be underestimated, it is not so easy to set up ecologically valid situations in the lab. Theoretically, assessing personality traits and correlating them with task performance and biological markers should pose no great challenge. These constructs, however, are rarely stable and they are dependent on so many mitigating variables that designing experiments remains a challenging process. It is often these mitigating variables that open up the field to other disciplines. Therefore, defining the “social” aspect of social neuroscience as including only the field of social psychology limits the true definition. From the ancient Greek and Egyptian philosophers to modern-day mathematical and computational modelers, studying social behavior and cognition is an inclusive endeavor. This point is made clear when one casually surveys the educational backgrounds of social neuroscientists. Further, collaborative efforts in this field exist between biochemists and philosophers, anthropologists and neurologists, physicists and sociologists. While clichéd, it is true that social neuroscience succeeds because ideas are exchanged so freely.

One main challenge of social neuroscience is that social psychology and its related disciplines involve psychological constructs, such as moral dilemmas, empathy, or self-regulation, that are difficult to map directly onto neural processes. These constructs often need to be deconstructed (Cacioppo, Berntson, Lorig, Norris, Rickett, & Nusbaum, 2003). Further, given the complexity of social interaction in humans, social neuroscience research needs to combine and integrate multiple levels of analysis across different domains (Ochsner & Lieberman, 2001). Social neuroscience requires a systems approach rather than a single level of analysis. We strongly believe that social and biological approaches, when bridged, can achieve a more accurate understanding of human behavior.

Ralph Adolphs has written an authoritative introduction to the foundational issues of social neuroscience (2003) that can serve as a guide to state-of-the-art research in this new field.

What are the tools of social neuroscience research?

The tools of the social neuroscientist are seemingly limited only by the imagination of the researcher. From Wada tests and split-brain studies to performing MRIs on chimpanzees and examining the hippocampal volumes of voles, social neuroscience encourages and invites creative uses of traditional methodologies and the development of new techniques. Those who perform research in our field become inventive, by choice, by necessity, or both. Such manipulations are not only our future, but our historic past as well. The saying “It is a poor craftsman that blames his tools” is particularly relevant here. As our tools advance, we must not lose sight of the creativity of experimental design or of how theoretical considerations can guide our next advance.

The development of functional neuroimaging, including positron emission tomography (PET), functional magnetic resonance imaging (fMRI), event-related potentials (ERP), and magnetoencephalography (MEG), holds tremendous promise for the understanding of social cognition (Raichle, 2003). The task of functional brain imaging is to identify multiple regions, and their temporal relationships, associated with the performance of well-designed tasks. However, the real question is whether the function of these areas can be associated with computations (i.e., elementary operations) that are useful in the analysis of mental functions (see Posner, 2003). For instance, the right inferior parietal cortex, at the junction with the posterior temporal cortex (TPJ), seems to be involved not only in theory of mind (Saxe & Wexler, 2005) but also in distinguishing the perspectives of the self from those of others, an ability that is relevant to knowing that the contents of other people’s minds can be different from our own (Decety & Grèzes, in press). Thus the basic computation performed by this region may be related to lower-level processing of socially relevant signals, and may not be specific to mental state attribution per se.

One drawback of neuroimaging research is that it can be perceived as the new phrenology (see Uttal, 2003) and it may give an over-simplistic account of the neuroscience of social cognition and behavior. With neuroimaging, there are gimmicks and trends, claims that extend beyond the research, and debates that can reach fever pitch levels over seemingly mundane differences. While hardly unique to our field, we encounter the danger of labeling parts of the brain as the “love center” or the area responsible for psychopathological behavior. In this sense, we are certainly flirting with a new phrenology. Therefore, we agree with our sensible colleagues who remind us to replicate and rely on all of the tools at our disposal.

Ironically, fMRI has been particularly plagued by these problems (and in some cases continues to be) because so many researchers realized its amazing potential. Early fMRI studies often took on the form of, “let’s throw the subjects in the scanner and see what happens”. Criticisms were harsh, and often justified. Yet, over the last decade, we have seen this technique mature and, without those early studies, which often involved social questions, there would have been little interest in investing in the technique. As unimaginable as it may have been 10 years ago, there are now countless fMRI scanners in the basements of psychology buildings. The maturing of fMRI has been exciting and wonderful to observe and, as social neuroscientists, it has helped us address many questions previously unanswerable.

Functional MRI has become very popular with both the scientific community and the general public. However, it would be misleading to think it is the only neuroimaging tool. Certainly other neuroimaging techniques provide their own advantages. The spatial resolution in multiple dimensions puts fMRI in its own class. Yet ERP research continues to be valuable because it too has its own advantages, including temporal resolution and flexibility. PET and MEG carry their own advantages as well and, like fMRI, each has advanced social neuroscience. Anatomical MRI also plays a role in social neuroscience, as does computerized tomography (CT).

The appeal of neuroimaging is clear. Humans can be non-invasively tested in various situations, including on-line interaction with another partner; experiments can be designed, variables can be manipulated, studies can be replicated, etc. Yet the disadvantages are significant. No single technique offers both high spatial and high temporal resolution (single-cell methods, which are invasive and limited in the populations that can be tested, require extensive a priori knowledge of where to look within the nervous system). In addition, most neuroimaging data are correlational, and on their own they generally do not describe the causal role of the regions in a more general network.

Lesion studies in non-humans can fill the gap in establishing causal relationships. Thus animal studies are important and, in many senses, they remain the core of the field. Because of the variability of neuroimaging studies and methods, we deem lesion studies more important today than ever before. However, many of the social behaviors we are interested in, such as language and sexual behaviors, differ so significantly across species that these studies also limit our understanding.

Human case studies, as well as psychiatric, neurological, and psychological conditions can be additional methods for advancing our understanding of social neuroscience. These patient studies allow us to observe the relationship between social behavior and neurological systems. These studies often serve to spur neuroimaging research, or confirm the correlational findings of such research. While these studies are difficult (and many times impossible) to design with precisely manipulated variables, they too add to the tools of investigation for the social neuroscientist. Transcranial magnetic stimulation constitutes another approach to investigate the causal role of a region, and in many cases it can help to establish or confirm behaviors observed in patients. By employing a “virtual lesion” or creating a “virtual patient”, this technique can provide valuable information. Limited by penetration depth and spatial resolution, this technique should also be seen as one of many tools for understanding the complexities of brain–behavior relationships.

People and sexually reproducing animals need each other in order to survive, almost assuredly from the individual perspective, and as a certainty at the species level. It is clear that an assortment of social mechanisms are adaptive and that such mechanisms lend differential reproductive benefits. The individual that successfully passes on to future generations adaptive social neural mechanisms may secure a reproductive advantage. Yet, the variability of such abilities and capabilities and the proximate and ultimate origins of such abilities remain somewhat mysterious. In our species, the environment changes so rapidly that these relationships remain difficult to map. Yet, gaining an evolutionary perspective allows for a broader understanding of our current state. What may have been adaptive in one environment may not be so in another.

The gene encodes information that is usually (but not always) beneficial for the survival of the organism. DNA gives rise to brains, and brains give rise to behavior. Certainly the field of social neuroscience owes its past, but more importantly its future, to genetics. Genetics continues to expand into our field in inventive and meaningful ways. The future will bring us inventive twin studies, correlational genome examinations in humans, and more exciting knockout studies. We, like many in the field, are particularly excited by the role that genetics will play in our future.

The impact of social neuroscience

Beyond the clear impact of social neuroscience in various academic domains, including education, about which we are all excited, we must carefully consider how society uses research findings from social neuroscience. There is a tendency in public journals to report over-simplistic interpretations of complex issues. As Wolpe put it, “history has shown us again and again that society tends to use science to reinforce the moral assumptions and biases of the cultural moment. There is clearly a role for a thoughtful social neuroscience, where findings become part of considered policymaking around controversial issues. For example, research into addiction has provided new perspectives and tools for policymakers willing to use them. But if scientists are not clear about the scope and nature of their work, eager policymakers can seize preliminary and speculative findings and implement programs unsupported by the science itself” (Wolpe, 2004, p. 1032).

Importantly, Farah (2002) has raised a number of neuroethical issues for the future of neuroscience that should be of concern to all of us. Social neuroscience is already starting to track the neural signatures of sophisticated mental states such as truth versus lie, veridical versus false memory, style of moral reasoning or the likelihood of aggressive behavior. As social neuroscience develops, it will certainly challenge our ways of thinking about responsibility and blame, and have an impact on social policies. However, we must truly be responsible in this domain as well. While we should never have our research guided by politics, or other external pressures, we should apply the same standards to ourselves. Again, these issues are not unique to social neuroscience, but they are concerns certainly worthy of consideration as this field expands.

It is a pleasure and an honor to serve as members of this great community. The field has housed some of the most brilliant and dynamic academics, and it continues to attract the brightest scholars and thinkers. We look forward to where this field is headed, attracted to its future by the richness of its past. We also look forward to the growth of Social Neuroscience and, as such, to each reader’s contribution, be it through the reporting of research, through debate, or through passing on a copy that eventually winds up in the hands of a student, inspiring him or her to enter the field. We thank all of you and we wish you continued success.

References

[1] Adolphs, R. (2003) Investigating the cognitive neuroscience of social behavior, Neuropsychologia, 41, pp. 119–126.
[2] Cacioppo, J. T. and Berntson, G. G. (2005) Social neuroscience, Hove, UK: Psychology Press.
[3] Cacioppo, J. T., Berntson, G. G., Lorig, T. S., Norris, C. J., Rickett, E. and Nusbaum, H. (2003) Just because you’re imaging the brain doesn’t mean you can stop using your head: A primer and set of first principles, Journal of Personality and Social Psychology, 85, pp. 650–661.
[4] Decety, J. and Grèzes, J. (in press) The power of simulation: Imagining one’s own and other’s behavior, Brain Research.
[5] Farah, M. J. (2002) Emerging ethical issues in neuroscience, Nature Neuroscience, 5, pp. 1123–1129.
[6] Fiske, S. T. (2004) Social beings, Danvers, MA: Wiley.
[7] Frith, C. D. and Wolpert, D. (2004) The neuroscience of social interaction: Decoding, imitating and influencing the actions of others. New York: Oxford University Press.
[8] Ochsner, K. N. and Lieberman, M. D. (2001) The emergence of social cognitive neuroscience, American Psychologist, 56, pp. 717–734.
[9] Posner, M. I. (2003) Imaging a science of mind, Trends in Cognitive Sciences, 7, pp. 450–453.
[10] Raichle, M. E. (2003) Social neuroscience: A role for brain imaging, Political Psychology, 24, pp. 759–764.
[11] Saxe, R. and Wexler, A. (2005) Making sense of another mind: The role of the right temporo-parietal junction, Neuropsychologia, 43, pp. 1391–1399.
[12] Uttal, W. R. (2003) The new phrenology: The limits of localizing cognitive processes in the brain, Cambridge, MA: MIT Press.
[13] Wolpe, P. R. (2004) Ethics and social policy in research on the neuroscience of human sexuality, Nature Neuroscience, 7, pp. 1031–1033.

from Social Neuroscience, Jean Decety (Editor) and Julian Paul Keenan (Deputy Editor).© 2006 Psychology Press.

Spirituality & Science

British psychiatry has largely focused on the biology of mental disorder, supported over recent years by advances in the neurosciences. There has been a somewhat awkward fit with psychology, since psychology is based on the concept of mind, and how the mind and brain are related is far from clear. The view taken by many is to regard mind as epiphenomenal, on the basis that the brain itself is somehow generating consciousness.

In this model of the psyche, there is no need to postulate a soul. We are nothing but the product of our genes, as Richard Dawkins (1976) would have us believe. Such an assertion comes at the tail end of an epoch that began 300 years ago with the intellectual giants René Descartes and Isaac Newton. Descartes set down a lasting blueprint for science: that he would hold nothing to be true unless he could prove to his satisfaction that it was true. Newton laid the foundation of a mechanical universe, in which time is absolute and space is structured according to the laws of motion, a cosmos of stars and planets all held in place by the forces of momentum and gravitation.

Both Descartes and Newton were deeply religious men. Descartes’ famous saying, “Cogito ergo sum”, led him simply to argue that God had created two classes of substance, a mental world and a physical world, while Newton spent more time engrossed in his alchemical researches than working out the laws of motion. Yet their discoveries led to an enduring split between religion and science with which we live to this day. The Church could no longer claim to understand how the universe worked, for its mediaeval cosmology had been swept aside. As the mental and physical worlds drifted further apart, God became a shadowy figure behind the scenes, whose only function was winding up the mainspring of the universe. In the past 100 years, the science of psychology has redefined the mental world along essentially humanist lines, a mind-set that can be traced back to Sigmund Freud (1927), who saw religion as a massive defence against neurosis. Even Carl Jung was careful to stay within the bounds of psychology when defining the soul as “the living thing in Man, that which lives of itself and causes life” (1959: p. 26).

Our patients have no such reservations. We know from a survey carried out by the Mental Health Foundation (Faulkner, 1997) that over 50% of service users hold religious or spiritual beliefs that they see as important in helping them cope with mental illness, yet do not feel free, as they would wish, to discuss these beliefs with the psychiatrist. Need there be such a divide between psychiatrists and their patients? If we care to look at some of the advances in physics over the past 75 years, we find good cause to think again.

In the light of quantum mechanics, Newton’s view of a physical world that is substantial, fixed and independent of mind is no longer tenable. For example, the famous wave–particle experiment shows that when a beam of light is shone through a narrow slit so that it falls on a particle detector, subatomic packets of light called quanta strike the detector screen like miniature bullets. Change the apparatus to two slits side by side and the light coming through the slits generates a wave interference pattern, just as ripples criss-cross when two stones are dropped side by side into a pond. Particles become waves and waves become particles. Both of these dimensional realities have equal validity and cannot be divorced from the consciousness of the participant–observer. This is but a window onto a greater vista, for current superstring theory postulates many more dimensions than our local space–time can accommodate.
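
In standard quantum-mechanical notation (an editorial gloss, not the author’s own formula), the two-slit pattern arises because amplitudes, not probabilities, are what add:

$$P_{12}(x) = \left|\psi_1(x) + \psi_2(x)\right|^2 = \left|\psi_1(x)\right|^2 + \left|\psi_2(x)\right|^2 + 2\,\operatorname{Re}\left[\psi_1^*(x)\,\psi_2(x)\right]$$

The cross term is the interference pattern; blocking one slit removes it, leaving the particle-like sum $P_1(x) + P_2(x)$.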

No longer is the electron thought of as a particle that orbits the nucleus like a miniature solar system. Instead, it is conceptualised as ‘virtual’, being smeared throughout all space in a quantum wave that only collapses as a particle into our physical space–time when the consciousness of the observer is engaged in the act of measurement. Nor can its velocity and position ever both be known at the same time, for when the quantum wave collapses, there is only a statistical probability that the electron will turn up where it is expected. It may just materialise hundreds, thousands or even millions of miles away. When it does so, it arrives at that place instantaneously, transcending the limits of both space and time.
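
The velocity–position trade-off mentioned here is Heisenberg’s uncertainty relation (again an editorial gloss, in standard notation):

$$\Delta x \, \Delta p \geq \frac{\hbar}{2}$$

where $\Delta x$ and $\Delta p$ are the statistical spreads of position and momentum, and $\hbar$ is the reduced Planck constant. Here is what three eminent physicists have to say.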

“The fundamental process of nature lies outside space–time but generates events that can be located in space–time.” (Stapp, 1977: p. 202)

“Ultimately, the entire universe (with all its particles, including those constituting human beings, their laboratories, observing instruments, etc.) has to be understood as a single undivided whole, in which analysis into separately and independently existent parts has no fundamental status.” (Bohm, 1983: p. 174)

“The universe exists as formless potentia in myriad possible branches in the transcendent domain and becomes manifest only when observed by conscious beings.” (Goswami, 1993: p. 141)

When consciousness collapses the wave function into the space–time of our perceptual world, mind and matter arise simultaneously, like two sides of one coin. The brain, of course, is crucial in this; mind, the capacity for individual self-awareness, is constellated with each physical self. Consciousness is then perpetuated through repeated further collapse of the wave function. (The process can be compared with the individual frames of a film flowing together to create movement.) In this way, we are continually generating what we think of as ‘reality’, characterised by memories, our personal histories and an enduring sense of identity. (Fortunately for us, our shared world of sense perception has structural stability, not because it is independent of consciousness but because the probability wave from which it arises has been collectively generated by all conscious beings throughout time.)

Quantum effects show up most readily at the subatomic level, but empirical research into large-scale systems has also demonstrated that mind can influence matter. For example, random number generators have been shown, over thousands of trials, to yield scores correlating with the mental intention of the experimenter (Schmidt, 1987). More striking still are those unaccountable events we call miracles. Since the wave function contains, in potentia, all that ever was, is and shall be, there is no limit in principle to what is possible. Why should not a mind of such exceptional power as that of Jesus collapse the wave uniquely and thereby turn water into wine?
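
For readers who want to see what “scores correlating with mental intention over thousands of trials” amounts to statistically, the standard measure is a binomial z-score; the short computation below is an editorial illustration only, and says nothing about whether the reported deviations are real.

    import math

    def rng_z_score(hits, trials, p=0.5):
        # How far the observed hit count sits from chance expectation,
        # in standard deviations, for a binary random number generator.
        expected = trials * p
        sd = math.sqrt(trials * p * (1 - p))
        return (hits - expected) / sd

    # e.g. 50,500 hits in 100,000 binary trials:
    print(rng_z_score(50_500, 100_000))  # about 3.16 sd above chance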

Evidence for the non-locality of consciousness was first demonstrated over 25 years ago, when it was shown that experimental subjects who are emotionally attuned can synchronise their brain waves at a distance from each other (Targ & Puthoff, 1974). Remote viewing and precognition have since been firmly established on an empirical basis (Radin, 1997). The efficacy of prayer has been researched (Byrd, 1988), as have more than 150 controlled studies on healing (Benor, 1992). Such findings merit the epithet ‘paranormal’ only if we view them through Newtonian glasses. Who can therefore say what does not exist in the quantum domain, from the supreme consciousness we call God, to those sensed presences (often of the newly departed) that psychiatrists refer to as pseudo-hallucinations, down to unruly spirits that, according to the traditions of many societies, blight the lives of those they persecute?

When we enquire into the beliefs our patients hold, such matters deserve to be discussed with a genuinely open mind. We do not have the answers and indeed our patients may sometimes be closer to the truth than we know. Nor are we required to affirm a particular religious or spiritual viewpoint but simply to treat the often strange experiences told us by our patients as authentic. This can sometimes be uncomfortable, for we are trained to judge with confidence the difference between fantasy and reality and to diagnose accordingly. Yet it comes a whole lot easier once we concede the limitations of space–time, which we can do by taking an unprejudiced intellectual position or experientially through spiritual practice.

People in sound mental health, who sense that beyond the doors of perception lies a greater world, can use such awareness to enrich their lives, be it through prayer, mediumship or mystical reverie. But where there is mental turmoil, whatever its cause, that same sensitivity brings profound distress (Powell, 1998, 2000). Then the psychiatrist who takes into account biological, psychological and spiritual aspects alike is well placed to help. The stigma that so often burdens our patients is not only the result of social opprobrium. It is fuelled by the experience of estrangement from humankind, one that we as psychiatrists can surely help to overcome.

References

Benor, D. (1992) Healing Research: Holistic Energy Medicine and Spirituality. Munich: Helix.

Bohm, D. (1983) Wholeness and the Implicate Order. London: Ark Paperbacks.

Byrd, R. C. (1988) Positive therapeutic effects of intercessory prayer in a coronary care unit population. Southern Medical Journal, 81, 826–829.

Dawkins, R. (1976) The Selfish Gene. Oxford: Oxford University Press.

Faulkner, A. (1997) Knowing Our Own Minds. London: Mental Health Foundation.

Freud, S. (1927) The Future of an Illusion. Reprinted (1953–1974) in The Standard Edition of the Complete Psychological Works of Sigmund Freud (ed. and trans. J. Strachey), vol. 21. London: Hogarth Press.

Goswami, A. (1993) The Self-Aware Universe. New York: Putnam.

Jung, C. (1959) Archetypes and the collective unconscious. In The Collected Works of C. G. Jung (eds H. Read, M. Fordham & G. Adler, trans. R. F. C. Hull), vol. 9, part 1. London: Routledge and Kegan Paul.

Powell, A. (1998) Soul consciousness and human suffering: psychotherapeutic approaches to healing. Journal of Alternative and Complementary Medicine, 4, 101–108.

Powell, A. (2000) Beyond space and time – the unbounded psyche. In Brain and Beyond. Edinburgh: Floris Books (in press).

Radin, D. (1997) The Conscious Universe: The Scientific Truth of Psychic Phenomena. New York: Harper Edge.

Schmidt, H. (1987) The strange properties of psychokinesis. Journal of Scientific Exploration, 1, 103–118.

Stapp, H. P. (1977) Are superluminal connections necessary? Nuovo Cimento, 40B, 191–204.

Targ, R. & Puthoff, H. E. (1974) Information transmission under conditions of sensory shielding. Nature, 251, 602–607.

(Spirituality and science: a personal view, by Andrew Powell. Andrew Powell is former consultant psychotherapist and honorary senior lecturer at the Warneford Hospital and University of Oxford. He is Chair of the Spirituality and Psychiatry Special Interest Group, Royal College of Psychiatrists. Correspondence: c/o Sue Duncan, Royal College of Psychiatrists, 17 Belgrave Square, London SW1X 8PG.)