How Could I Be Wrong? How Wrong Could I Be?

One of the striking, even amusing, spectacles to be enjoyed at the many workshops and conferences on consciousness these days is the breathtaking overconfidence with which laypeople hold forth about the nature of consciousness – their own in particular, but everybody’s by extrapolation. Everybody’s an expert on consciousness, it seems, and it doesn’t take any knowledge of experimental findings to secure the home truths these people enunciate with such conviction.

One of my goals over the years has been to shatter that complacency, and secure the scientific study of consciousness on a proper footing. There is no proposition about one’s own or anybody else’s conscious experience that is immune to error, unlikely as that error might be.  I have come to suspect that refusal to accept this really quite bland denial of what would be miraculous if true lies behind most if not all the elaboration of fantastical doctrines about consciousness recently defended. This refusal fuels the arguments about the conceivability of zombies, the importance of a first-person science of consciousness, intrinsic intentionality and various other hastily erected roadblocks to progress in the science of consciousness.

You can’t have infallibility about your own consciousness. Period. But you can get close – close enough to explain why it seems so powerfully as if you do. First of all, the intentional stance (Dennett, 1971, 1987) guarantees that any entity that is voluminously and reliably predictable as an intentional system will have a set of beliefs (including the most intimate beliefs about its personal experiences) that are mainly true. So each of us can be confident that in general what we believe about our conscious experiences will have an interpretation according to which we are, in the main, right. How wrong could I be? Not that wrong. Not about most things. There has to be a way of nudging the interpretation of your manifold of beliefs about your experience so that it comes out largely innocent of error, though this might not be an interpretation you yourself would be inclined to endorse. This is not a metaphysical gift, a proof that we live in the best of all possible worlds. It is something that automatically falls out of the methodology: when adopting the intentional stance, one casts about for a maximally charitable (truth-rendering) interpretation, and there is bound to be one if the entity in question is hale and hearty in its way.

But it does not follow from this happy fact that there is a path or method we can follow to isolate some privileged set of guaranteed-true beliefs. No matter how certain you are that p, it may turn out that p is one of those relatively rare errors of yours, an illusion, even if not a grand illusion. But we can get closer, too.  Once you have an intentional system with a capacity for communicating in a natural language, it offers itself as a candidate for the rather special role of self-describer, not infallible but incorrigible in a limited way: it may be wrong, but there may be no way to correct it. There may be no truth-preserving interpretation of all of its expressed opinions (Dennett, 1978, 1991) about its mental life, but those expressed opinions may be the best source we could have about what it is like to be it. A version of this idea was made (in-)famous by Richard Rorty back in his earlier incarnation as an analytic philosopher, and has been defended by me more recently in The Case for Rorts (Dennett, 2000). There I argue that if, for instance, Cog, the humanoid robot being developed by Rodney Brooks and his colleagues at MIT, were ever to master English, its own declarations about its subjectivity would systematically tend to trump the third-person opinions of its makers, even though they would be armed, in the limit, with perfect information about the micro-mechanical implementation of that subjectivity. This, too, falls out of the methodology of the intentional stance, which is the only way (I claim) to attribute content to the states of anything.

The price we pay for this near-infallibility is that our heterophenomenological worlds may have to be immersed in a bath of metaphor in order to come out mainly true. That is, our sincere avowals may have to be rather drastically reconstrued in order to come out literally true. For instance, when we sincerely tell our interrogators about the mental images we’re manipulating, we may not think we’re talking about convolutions of data-structures in our brain – we may well think we’re talking about immaterial ectoplasmic composites, or intrinsic qualia, or quantum-perturbations in our microtubules! But if the interrogators rudely override these ideological glosses and disclaimers of ours and forcibly re-interpret our propositions as actually being about such data-structure convolution, these propositions will turn out to be, in the main, almost all true, and moreover deeply informative about the ways we solve problems, think about the world, and fuel our subjective opinions in general. (In this regard, there is nothing special about the brain and its processes; if you tell the doctor that you have a certain sort of traveling pain in your gut, your doctor may well decide that you’re actually talking about your appendix, whatever you may think you’re talking about, and act accordingly.)

Since we are such reflective and reflexive creatures, we can participate in the adjustment of the attributions of our own beliefs, and a familiar philosophical move turns out to be just such reflective self-re-adjustment, but not a useful one. Suppose you say you know just what beer tastes like to you now, and you are quite sure you remember what beer tasted like to you the first time you tasted it, and you can compare, you say, the way it tastes now to the way it tasted then. Suppose you declare the taste to be the same. You are then asked: Does anything at all follow from this subjective similarity in the way of further, objectively detectable similarities? For instance, does this taste today have the same higher-order effects on you as it used to have? Does it make you as happy or as depressed, or does it enhance or diminish your capacity to discriminate colors, or retrieve synonyms, or remember the names of your childhood friends, or . . . ? Or have your other, surrounding dispositions and habits changed so much in the interim that it is not to be expected that the very same taste (the same quale, one may venture to say, pretending to know what one is talking about) would have any of the same effects at this later date? You may very well express ignorance about all such implications. All you know, you declare, is that this beer now tastes just like that first beer did (at least in some ineffable, intrinsic regard) whether or not it has any of the same further effects or functions. But by explicitly jettisoning all such implications from your proposition, you manage to guarantee that it has been reduced to a vacuity. You have jealously guarded your infallibility by seeing to it that you’ve adjusted the content of your claim all the way down to zero. You can’t be wrong, because there’s nothing left to be right or wrong about.

This move is always available, but it availeth nought. It makes no difference, by the way, whether you said the beer tastes the same or different; the same point goes through if you insist it tastes different now. Once your declaration is stripped of all powers of implication, it is an empty assertion, a mere demonstration that this is how you fancy talking at this moment. Another version of this self-vacating move can be seen, somewhat more starkly, in a reaction some folks opt for when they have it demonstrated to them that their color vision doesn’t extend to the far peripheries of their visual fields: they declare that, on the contrary, their color vision in the sense of color experience does indeed extend to the outer limits of their phenomenal fields; they just disavow any implications about what this color experience they enjoy might enable them to do – e.g., identify by name the colors of the objects there to be experienced! They are right, of course, that it does not follow from the proposition that one is having color experiences that one can identify the colors thus experienced, or do better than chance in answering ‘same-different?’ questions, or use color differences to detect shapes (as in a color-blindness test), to take the most obvious further effects. But if nothing follows from the claim that their peripheral field is experienced as colored, their purported disagreement with the researchers’ claim that their peripheral field lacks color altogether evaporates.

O’Regan and Noë (2001) argue that my heterophenomenology makes the mistake of convicting naive subjects of succumbing to a grand illusion.

But is it true that normal perceivers think of their visual fields this way [as in sharp detail and uniform focus from the center out to the periphery]? Do normal perceivers really make this error? We think not. … normal perceivers do not have ideological commitments concerning the resolution of the visual field. Rather, they take the world to be solid, dense, detailed and present, and they take themselves to be embedded in and thus to have access to the world. [pXXX]

My response to this was:

Then why do normal perceivers express such surprise when their attention is drawn to facts about the low resolution (and loss of color vision, etc.) of their visual peripheries? Surprise is a wonderful dependent variable, and should be used more often in experiments; it is easy to measure and is a telling betrayal of the subject’s having expected something else. These expectations are, indeed, an overshooting of the proper expectations of a normally embedded perceiver-agent; people shouldn’t have these expectations, but they do. People are shocked, incredulous, dismayed; they often laugh and shriek when I demonstrate the effects to them for the first time. (Dennett, 2001, pXXXX)

O’Regan and Noë (see also Noë, Pessoa and Thompson, 2000; Noë, 2001; and Noë and O’Regan, forthcoming) are right that it need not seem to people that they have a detailed picture of the world in their heads. But typically it does. It also need not seem to them that they are not zombies, but typically it does. People like to have ideological commitments. They are inveterate amateur theorizers about what is going on in their heads, and they can be mighty wrong when they set out on these paths.

For instance, quite a few theorizers are very, very sure that they have something that they sometimes call original intentionality. They are prepared to agree that interpretive adjustments can enhance the reliability of the so-called reports of the so-called content of the so-called mental states of a robot like Cog, because those internal states have only derived intentionality, but they are of the heartfelt opinion that we human beings, in contrast, have the real stuff: we are endowed with genuine mental states that have content quite independently of any such charitable scheme of interpretation.  That’s how it seems to them, but they are wrong.

How could they be wrong? They could be wrong about this because they could be wrong about anything because they are not gods. How wrong could they be?  Until we excuse them for their excesses and re-interpret their extravagant claims in the light of good third-person science, they can be utterly, bizarrely wrong. Once they relinquish their ill-considered grip on the myth of first-person authority and recognize that their limited incorrigibility depends on the liberal application of a principle of charity by third-person observers who know more than they do about what is going on in their own heads, they can become invaluable, irreplaceable informants in the investigation of human consciousness.

References:

  • Dennett, 1971, Intentional Systems, Journal of Philosophy, 68, pp. 87–106.
  • Dennett, 1978, How to Change your Mind, in Brainstorms, Cambridge, MA: MIT Press.
  • Dennett, 1987, The Intentional Stance, Cambridge, MA: MIT Press.
  • Dennett, 1991, Consciousness Explained, Boston: Little, Brown; London: Allen Lane, 1992.
  • Dennett, 2000, The Case for Rorts, in Robert Brandom, ed., Rorty and his Critics, Oxford: Blackwell.
  • Dennett, 2001, Surprise, surprise, commentary on O’Regan and Noë, 2001, Behavioral and Brain Sciences, 24, 5, pp. xxxx.
  • O’Regan and Noë, 2001, Behavioral and Brain Sciences, 24, 5, pp. xxxxx.
  • Noë, A., Pessoa, L. and Thompson, E. (2000) Beyond the grand illusion: what change blindness really teaches us about vision. Visual Cognition, 7, pp. 93–106.
  • Noë, A. (2001) Experience and the active mind. Synthese, 129, pp. 41–60.
  • Noë, A. and O’Regan, J. K. Perception, attention and the grand illusion. Psyche, 6 (15). URL: http://psyche.cs.monash.edu.au/v6/psyche‑6‑15‑noe.html

Special issue of Journal of Consciousness Studies on The Grand Illusion, January 13, 2002. How Could I Be Wrong? How Wrong Could I Be? Daniel C. Dennett, Center for Cognitive Studies, Tufts University, Medford, MA 02155.

Consciousness & Illusion

What is all this? What is all this stuff around me; this stream of experiences that I seem to be having all the time?

Throughout history there have been people who say it is all illusion. I think they may be right. But if they are right, what could this mean? If you just say “It’s all an illusion” this gets you nowhere – except that a whole lot of other questions appear. Why should we all be victims of an illusion, instead of seeing things the way they really are? What sort of illusion is it anyway? Why is it like that and not some other way? Is it possible to see through the illusion? And if so, what happens next?

These are difficult questions, but if the stream of consciousness is an illusion we should be trying to answer them, rather than more conventional questions about consciousness. I shall explore these questions, though I cannot claim that I will answer them. In doing so I shall rely on two methods. First, there are the methods of science, based on theorising and hypothesis testing – on doing experiments to find out how the world works. Second, there is disciplined observation – watching experience as it happens to find out how it really seems. This sounds odd. You might say that your own experience is infallible – that if you say it is like this for you then no one can prove you wrong. I only suggest you look a bit more carefully. Perhaps then it won’t seem quite the way you thought it did before. I suggest that both these methods are helpful for penetrating the illusion – if illusion it is.

We must be clear what is meant by the word ‘illusion’. An illusion is not something that does not exist, like a phantom or phlogiston. Rather, it is something that is not what it appears to be, like a visual illusion or a mirage. When I say that consciousness is an illusion I do not mean that consciousness does not exist. I mean that consciousness is not what it appears to be. If it seems to be a continuous stream of rich and detailed experiences, happening one after the other to a conscious person, this is the illusion.

What’s the problem?

For a drastic solution like ‘it’s all an illusion’ even to be worth considering, there has to be a serious problem. There is. Essentially it is the ancient mind-body problem, which recurs in different guises in different times. Victorian thinkers referred to the gulf between mind and brain as the ‘great chasm’ or the ‘fathomless abyss’. Advances in neuroscience and artificial intelligence have changed the focus of the problem to what Chalmers (1995) calls the ‘hard problem’ – that is, to explain how subjective experience arises from the objective activity of brain cells.

Many people say that the hard problem does not exist, or that it is a pseudo-problem. I think they fall into two categories – those few who have seen the depths of the problem and come up with some insight into it, and those who just skate over the abyss. The latter group might heed Nagel’s advice when he says “Certain forms of perplexity—for example, about freedom, knowledge, and the meaning of life—seem to me to embody more insight than any of the supposed solutions to those problems.” (Nagel 1986 p 4).

This perplexity can easily be found. For example, pick up any object – a cup of tea or a pen will do – and just look, smell, and feel its texture. Do you believe there is a real objective cup there, with actual tea in it, made of atoms and molecules? Aren’t you also having a private subjective experience of the cup and the taste of the tea – the ‘what it is like’ for you? What is this experience made of? It seems to be something completely different from actual tea and molecules. When the objective world out there and our subjective experiences of it seem to be such different kinds of thing, how can one be caused by, or arise from, or even depend upon, the other?

The intractability and longevity of these problems suggests to me that we are making a fundamental mistake in the way we think about consciousness – perhaps right at the very beginning. So where is the beginning? For William James – whose 1890 Principles of Psychology is deservedly a classic – the beginning is our undeniable experience of the ‘stream of consciousness’; that unbroken, ever-changing flow of ideas, perceptions, feelings, and emotions that make up our lives.

In a famous passage he says “Consciousness … does not appear to itself chopped up in bits. … it flows. A ‘river’ or a ‘stream’ are the metaphors by which it is most naturally described. In talking of it hereafter, let us call it the stream of thought, of consciousness, or of subjective life.” (James, 1890, i, 239). He referred to the stream of consciousness as “… the ultimate fact for psychology.” (James 1890, i, p 360).

James took introspection as his starting method, and the stream of consciousness as its object. “Introspective Observation is what we have to rely on first and foremost and always. The word introspection need hardly be defined – it means, of course, the looking into our own minds and reporting what we there discover. Every one agrees that we there discover states of consciousness. … I regard this belief as the most fundamental of all the postulates of Psychology, and shall discard all curious inquiries about its certainty as too metaphysical for the scope of this book.” (1890, i, p 185).

He quotes at length from Mr. Shadworth Hodgson, who says “What I find when I look at my consciousness at all is that what I cannot divest myself of, or not have in consciousness, if I have any consciousness at all, is a sequence of different feelings. I may shut my eyes and keep perfectly still, and try not to contribute anything of my own will; but whether I think or do not think, whether I perceive external things or not, I always have a succession of different feelings. … Not to have the succession of different feelings is not to be conscious at all.” (quoted in James 1890, i, p 230)

James adds “Such a description as this can awaken no possible protest from any one.” I am going to protest. I shall challenge two aspects of the traditional stream; first that it has rich and detailed contents, and second that there is one continuous sequence of contents.

But before we go any further it is worth considering how it seems to you. I say this because sometimes people propose novel solutions to difficult problems only to find that everyone else says – ‘Oh I knew that all along’. So it is helpful to decide what you do think first. Many people say that it feels something like this. I feel as though I am somewhere inside my head looking out. I can see and hear and feel and think. The impressions come along in an endless stream; pictures, sounds, feelings, mental images and thoughts appear in my consciousness and then disappear again. This is my ‘stream of consciousness’ and I am the continuous conscious self who experiences it.

If this is how it seems to you then you probably also believe that at any given time there have to be contents of your conscious stream – some things that are ‘in’ your consciousness and others that are not. So, if you ask the question ‘what am I conscious of now?’ or ‘what was I conscious of at time t?’ then there has to be an answer. You might like to consider at this point whether you think there does have to be an answer.

For many years now I have been getting my students to ask themselves, as many times as possible every day, “Am I conscious now?”. Typically they find the task unexpectedly hard to do, and hard to remember to do. But when they do it, it has some very odd effects. First, they often report that they always seem to be conscious when they ask the question but become less and less sure about whether they were conscious a moment before. With more practice they say that asking the question itself makes them more conscious, and that they can extend this consciousness from a few seconds to perhaps a minute or two. What does this say about consciousness the rest of the time?

Just this starting exercise (we go on to various elaborations of it as the course progresses) begins to change many students’ assumptions about their own experience. In particular they become less sure that there are always contents in their stream of consciousness. How does it seem to you? It is worth deciding at the outset because this is what I am going to deny. I suggest that there is no stream of consciousness. And there is no definite answer to the question ‘What am I conscious of now?’. Being conscious is just not like that.

I shall try to explain why, using examples from two senses; vision and hearing.

The Stream of Vision

When we open our eyes and look around it seems as though we are experiencing a rich and ever-changing picture of the world; what I shall call our ‘stream of vision’. Probably many of us go further and develop some sort of theory about what is going on – something like this perhaps.

“When we look around the world, unconscious processes in the brain build up a more and more detailed representation of what is out there. Each glance provides a bit more information to add to the picture. This rich mental representation is what we see at any time. As long as we are looking around there is a continuous stream of such pictures. This is our visual experience.”

There are at least two threads of theory here. The first is the idea that there is a unified stream of conscious visual impressions to be explained, what Damasio (1999) calls ‘the movie-in-the-brain’. The second is the idea that seeing means having internal mental pictures – that the world is represented in our heads. People have thought this way at least for several centuries, perhaps since Leonardo da Vinci first described the eye as a camera obscura and Kepler explained the optics of the eye (Lindberg 1976). Descartes’ famous sketches showed how images of the outside world appear in the non-material mind and James, like his Victorian contemporaries, simply assumed that seeing involves creating mental representations. Similarly, conventional cognitive psychology has treated vision as a process of constructing representations.

Perhaps these assumptions seem unremarkable, but they land us in difficulty as soon as we appreciate that much of vision is unconscious. We seem forced to distinguish between conscious and unconscious processing; between representations that are ‘in’ the stream of consciousness and those that are ‘outside’ it. Processes seem to start out unconscious and then ‘enter consciousness’ or ‘become conscious’. But if all of them are representations built by the activity of neurons, what is the difference? What makes some into conscious representations and others not?

Almost every theory of consciousness we have confronts this problem and most try to solve it. For example, global workspace (GW) theories (e.g. Baars 1988) explicitly have a functional space, the workspace, which is a serial working memory in which the conscious processing occurs. According to Baars, information in the GW is made available (or displayed, or broadcast) to an unconscious audience in the rest of the brain. The ‘difference’ is that processing in the GW is conscious and that outside of it is not.

There are many varieties of GWT. In Dennett’s (2001) ‘fame in the brain’ metaphor, as in his earlier multiple drafts theory (Dennett 1991, and see below), becoming conscious means contributing to some output or result (fame is the aftermath, not something additional to it). But in many versions of GWT being conscious is equated with being available, or on display, to the rest of the system (e.g. Baars 1988, Dehaene and Naccache 2001). The question remains: the experiences in the stream of consciousness are those that are available to the rest of the system, so why does this availability turn previously unconscious physical processes into subjective experiences?

As several authors have pointed out there seems to be a consensus emerging in favour of GWTs. I believe the consensus is wrong. GWTs are doomed because they try to explain something that does not exist – a stream of conscious experiences emerging from the unconscious processes in the brain.

The same problem pervades the whole enterprise of searching for the neural correlates of consciousness. For example, Kanwisher (2001) suggests that the neural correlates of the contents of visual awareness are represented in the ventral pathway – assuming, as do many others, that visual awareness has contents and that those contents are representations. Crick asks “What is the ‘neural correlate’ of visual awareness? Where are these ‘awareness neurons’ – are they in a few places or all over the brain – and do they behave in any special way?” One might think that these are rhetorical questions, but he goes on “… this knowledge may help us to locate the awareness neurons we are looking for.” (Crick 1994, p 204). Clearly he, like others, is searching for the neural correlates of that stream of conscious visual experiences. He admits that “… so far we can locate no single region in which the neural activity corresponds exactly to the vivid picture of the world we see in front of our eyes.” (Crick 1994, p 159). Nevertheless he obviously assumes that there is such a “vivid picture”. What if there is not? In this case he, and others, are hunting for something that can never be found.

I suggest that there is no stream of vivid pictures that appear in consciousness. There is no movie-in-the-brain. There is no stream of vision. And if we think there is we are victims of the grand illusion.

Change blindness is the most obvious evidence against the stream of vision. In 1991 Dennett reported unpublished experiments by Grimes, who used an eye tracker to detect people’s eye movements and then change the picture they were looking at just when they moved their eyes. The changes were so large and obvious that under normal circumstances they could hardly be missed, but when they were made during saccades, the changes went unnoticed. It subsequently turned out that expensive eye trackers are not necessary. I suggested moving the whole picture instead, and this produced the same effects (Blackmore, Brelstaff, Nelson & Troscianko 1995). Other, even simpler, methods have since been developed, and change blindness has been observed with brief blank flashes between pictures, with image flicker, during cuts in movies or during blinks (Simons 2000).

That the findings are genuinely surprising is confirmed in experiments in which people were asked to predict whether they or others would notice the changes. A large metacognitive error was found – that is, people grossly overestimated their own and others’ ability to detect change (Levin, Momen & Drivdahl 2000). James long ago noted something similar: that we fail to notice that we overlook things. “It is true that we may sometimes be tempted to exclaim, when once a lot of hitherto unnoticed details of the object lie before us, ‘How could we ever have been ignorant of these things and yet have felt the object, or drawn the conclusion, as if it were a continuum, a plenum? There would have been gaps – but we felt no gaps.’” (p 488).

Change blindness is not confined to artificial laboratory conditions. Simons and Levin (1998) produced a comparable effect in the real world with some clever choreography. In one study an experimenter approached a pedestrian on the campus of Cornell University to ask for directions. While they talked, two men rudely carried a door between them. The first experimenter grabbed the back of the door and the person who had been carrying it let go and took over the conversation. Only half of the pedestrians noticed the substitution. Again, when people are asked whether they think they would detect such a change they are convinced that they would – but they are wrong.

Change blindness could also have serious consequences in ordinary life. For example, O’Regan, Rensink and Clark (1999) showed that dangerous mistakes can be made by drivers or pilots when change blindness is induced by mudsplashes on the windscreen.

Further experiments have shown that attention is required to notice a change. For example there is the related phenomenon of ‘inattentional blindness’ (Mack & Rock 1998) in which people attending to one item of a display fail to detect the appearance of unexpected new items, even when these are clearly visible or in the centre of the visual field. However, though attention is necessary to detect change, it is not sufficient. Levin and Simons (1997) created short movies in which various objects were changed, some in arbitrary locations and others in the centre of attention. In one case the sole actor in the movie went to answer the phone. There was a cut in which the camera angle changed and a different person picked up the phone. Only a third of the observers detected the change.

What do these results mean? They certainly suggest that from one saccade to the next we do not store nearly as much information as was previously thought. If the information were stored we would surely notice the change. So the ‘stream of vision’ theory I described at the start has to be false. The richness of our visual world is an illusion (Blackmore et al., 1995). Yet obviously something is retained, otherwise there could be no sense of continuity and we would not even notice if the entire scene changed. Theorists vary in how much, and what sort of, information they claim is retained.

Perhaps the simplest interpretation is given by Simons and Levin (1997). During each visual fixation we experience a rich and detailed visual world. This picture is only detailed in the centre, but it is nevertheless a rich visual experience. From that we extract the meaning or gist of the scene. Then when we move our eyes the detailed picture is thrown away and a new one substituted, but if the gist remains the same our perceptual system assumes the details are the same and so we do not notice changes. This, they argue, makes sense in the rapidly changing and complex world we live in. We get a phenomenal experience of continuity without too much confusion.

Slightly more radical is Rensink’s (2000) view. He suggests that observers never form a complete representation of the world around them – not even during fixations. Rather, perception involves ‘virtual representation’; representations of objects are formed one at a time as needed, and they do not accumulate. The impression of more is given because a new object can always be made ‘just in time’. In this way an illusion of richness and continuity is created.

Finally, O’Regan (1992) goes even further in demolishing the ordinary view of seeing. He suggests that there is no need for internal representations at all because the world can be used as an external memory, or as its own best model – we can always look again. This interpretation fits with moves towards embodied cognition (e.g. Varela, Thompson and Rosch, 1991) and towards animate vision in artificial intelligence (Clark 1999), in which mind, body and world work together, and sensing is intertwined with acting. It is also related to the sensorimotor theory of perception proposed by O’Regan and Noë (in press). On this view seeing is a way of acting, of exploring the environment. Conscious visual experiences are generated not by building representations but by mastering sensorimotor contingencies. What remains between saccades is not a picture of the world, but the information needed for further exploration. A study by Karn and Hayhoe (2000) confirms that spatial information required to control eye movements is retained across saccades. This kind of theory is dramatically different from existing theories of perception. It entails no representation of the world at all.

It is not yet clear which of these interpretations, if any, is correct but there is no doubt about the basic phenomenon and its main implication. Theories that try to explain the contents of the stream of vision are misguided. There is no stable, rich visual representation in our minds that could be the contents of the stream of consciousness.

Yet it seems there is, doesn’t it? Well, does it? We return here to the problem of the supposed infallibility of our own private experiences. Each of us can glibly say ‘Well, I know what my experience is like, and it is a stream of visual pictures of the world, and nothing you say can take away my experience’. What then do we make of the experiments that suggest that anyone who says this is simply wrong?

I suggest that we all need to look again – and look very hard, with persistence and practice. Experimental scientists tend to eschew personal practice of this kind. Yet I suggest we should encourage it for two reasons. First, we cannot avoid bringing implicit theories to bear on how we view our own experiences and what we say about them. So perhaps we should do this explicitly. As we study theories of consciousness, we can try out the proposals against the way it seems to us. As we do so our own experience changes – I would say deepens. As an example, take theories about change blindness. Many people find the evidence surprising because they are sure that they have rich visual pictures in their mind whenever they are looking at something. If you ask “What am I conscious of now?” again and again, this certainty begins to fall apart, and the change blindness evidence seems less surprising. This must surely help us to become better critics. At the very least it will help us to avoid dismissing theories of consciousness because of false assumptions we make about our own experiences.

The second reason is that this kind of practice can give rise to completely new hypotheses about consciousness. And this in turn can lead to testable predictions and new experiments. If these are derived from a deeper understanding of one’s own awareness then they are more likely to be productive than those based on the mistake of believing in the stream of consciousness.

Note that what I am proposing here is first person practice – first person discipline – first person methods of inquiry. But the results of all this practice will be words and actions; saying things to oneself and others. This endeavour only becomes science when it is put to use in this way and it is then, of course, third person science.

How does one do it? There have been many methods developed for taking ‘the view from within’ (Varela and Shear 1999) but I am suggesting something quite simple here. Having learned about the results of the change blindness research we should look hard and persistently at our own visual experiences. Right now is there a rich picture here in my experience? If there seems to be, something must be wrong, so what is wrong? Look again, and again. After many years of doing this kind of practice, every day, it no longer seems to me that there is a stream of vision, as I described at the start. The research has changed not only my intellectual understanding of vision but the very experience of seeing itself.

The stream of sounds

Listening to what is going on, it might seem as though there is a stream of sounds to match the stream of pictures. Suppose we are listening to a conversation, then turn our attention to the music in the background, and then to the conversation again. We may say that at first the conversation was in the conscious stream while the music remained unconscious, then they reversed, and so on. If asked ‘What sounds were in your stream of consciousness at a particular time?’ you might be sure that there definitely was an answer, even if you can’t exactly remember what it was. This follows from the idea that there is a stream of consciousness, and sounds must either be in it or not.

Some simple everyday experiences cast doubt on this natural view. To take a much used favourite, imagine you are reading and just as you turn the page you become aware that the clock is striking. You hadn’t noticed it before but now you feel as though you were aware of it all along. You can even remember that it has struck four times already and you can now go on counting. What has happened here? Were the first three ‘dongs’ really outside the stream (unconscious) and have now been pulled out of memory and put in the stream? If so what was happening when the first one struck, while you were still reading? Was the sound out of the stream at the time, but after you turned the page it just felt as though it had been in there all along – with the contents of the previous page – even though it wasn’t really? Or have you gone back in time and changed the contents of the stream retrospectively? Or what? You might think up some other elaborations to make sense of it but I don’t think any will be very simple or convincing (in the same spirit Dennett (1991) contrasts Orwellian with Stalinesque revisions). The trouble all comes about because of the idea that there is a stream of consciousness and things are either in or out of it.

There are many other examples one could use to show the same thing. For example, in a noisy room full of people talking you may suddenly switch your attention because someone has said “Guess who I saw with Anya the other day – it was Bernard”. You prick up your ears – surely not – you think. At this point you seem to have been aware of the whole sentence as it was spoken. But were you really? The fact is that you would never have noticed it at all if she had concluded the sentence with a name that meant nothing to you.

Even simpler than this is the problem with all speech. You need to accumulate a lot of serial information before the meaning of a sentence becomes unambiguous. What was in the stream of consciousness while all this was happening? Was it just meaningless words? Gobbledegook? Did it switch from gobbledegook to words half way through? It doesn’t feel like that. It feels as though you listened and heard a meaningful sentence as it went along, but this is impossible.

Or take just one word, or listen to a blackbird trill its song. Only once the trill is complete, the word finished, can you know what it was that you heard. What was in the stream of consciousness before this point? Would it help to go even smaller, to try to break the stream down into its constituent bits? Perhaps there is a stream of raw feels, or indivisible bits of conscious stuff out of which the larger chunks are made. The introspectionists assumed this must be the case and tried – in vain – to find the units. James did a thorough job of disposing of such ideas in 1890, concluding “No one ever had a simple sensation by itself” (James 1890, i, 224), and there have been many objections since. There is no easy way to answer these questions about what really was in the stream of consciousness at a given time. Perhaps the idea of a stream of consciousness is itself the problem.

Of course we should have known all this. Dennett (1991) pointed out much the same using the colour phi phenomenon and the cutaneous rabbit. To produce colour phi a red light is flashed in one place and then a green light flashed a short distance away. Even on the first trial, observers do not see two distinct lights flashing, but one moving light that changes from red to green somewhere in the middle. But how could they have known what colour the light was going to turn into? If we think in terms of the stream of consciousness we are forced to wonder what was in the stream when the light seemed to be in the middle – before the second light came on.

There’s something backwards about all this. As though consciousness is somehow trailing along behind or making things up after the fact. Libet’s well-known experiments showed that about half a second of continuous cortical activity is required for consciousness, so consciousness cannot be instant. But we should not conclude that there is a stream of consciousness that runs along half a second behind the real world; this still wouldn’t solve the chiming clock problem. Instead I suggest that the problem lies with the whole idea of the stream.

Dennett (1991) formulated this in terms of the Cartesian Theatre – that non-existent place where consciousness happens – where everything comes together and I watch the private show (my stream of experiences) in my own theatre of the mind. He referred to those who believe in the existence of the Cartesian Theatre as Cartesian materialists. Most contemporary consciousness researchers deny being Cartesian materialists. Typically they say that they do not believe that ‘everything comes together’ at a point in the brain, or even a particular area in the brain. For example, in most global workspace theories (GWTs) the activity of the global workspace is widely distributed in the brain. In Edelman and Tononi’s (2000) theory the activity of groups of neurons in a widely distributed dynamic core underlies conscious experience.

However, many of these same theorists use phrases that imply a show in the non-existent theatre: phrases such as ‘the information in consciousness’, ‘items enter consciousness’, ‘representations become conscious’, or ‘the contents of consciousness’. But consciousness is not a container – whether distributed or not. And if there is no answer to the question ‘What is in my consciousness now?’, such phrases presuppose something that does not exist. Of course it is difficult to write clearly about consciousness, and people may write this way when they do not really mean to imply a show in a Cartesian Theatre. Nevertheless, we should beware these phrases. If there is an answer to the question ‘What is in my consciousness now?’ then it makes sense to speak of things ‘entering consciousness’ and so on. If there is no answer, it does not.

How can there not be an answer? How can there not be a stream of consciousness or a show in the theatre of the mind? Baars claims that “all of our unified models of mental functioning today are theater metaphors; it is essentially all we have” (1997, 7). But it is not. It is possible to think about consciousness in other ways – I would say not just possible but necessary.

Dennett’s own suggestion is the theory of multiple drafts. Put simply it is this. At any time there are multiple constructions of various sorts going on in the brain – multiple parallel descriptions of what’s going on. None of these is ‘in’ consciousness while others are ‘out’ of it. Rather, whenever a probe is put in – for example a question asked or a behaviour precipitated – a narrative is created. The rest of the time there are lots of contenders in various stages of revision in different parts of the brain, and no final version. As he puts it “there are no fixed facts about the stream of consciousness independent of particular probes”.  “Just what we are conscious of within any particular time duration is not defined independently of the probes we use to precipitate a narrative about that period. Since these narratives are under continual revision, there is no single narrative that counts as the canonical version, … the events that happened in the stream of consciousness of the subject.” (Dennett 1991 p 136)

I would put it slightly differently. I want to replace our familiar idea of a stream of consciousness with that of illusory backwards streams. At any time in the brain a whole lot of different things are going on. None of these is either ‘in’ or ‘out’ of consciousness, so we don’t need to explain the ‘difference’ between conscious and unconscious processing. Every so often something happens to create what seems to have been a stream. For example, we ask “Am I conscious now?”. At this point a retrospective story is concocted about what was in the stream of consciousness a moment before, together with a self who was apparently experiencing it. Of course there was neither a conscious self nor a stream, but it now seems as though there was. This process goes on all the time with new stories being concocted whenever required. At any time that we bother to look, or ask ourselves about it, it seems as though there is a stream of consciousness going on. When we don’t bother to ask, or to look, it doesn’t, but then we don’t notice so it doesn’t matter. This way the grand illusion is concocted.

There are some odd implications of this view. First, as far as neuroscience is concerned we should not expect always to find one global workspace, or other unified correlate of the contents of consciousness. With particular sorts of probes there may, for a time, be such a global unification but at other times there may be several integrated patterns going on simultaneously, any of which might end up being retrospectively counted as contents of a stream of consciousness. Second, the backwards streams may overlap with impunity. Information from one ongoing process may end up in one stream, while information from another parallel process ends up in a different stream precipitated a bit later but referring to things that were going on simultaneously. There is no requirement for there really to be only one conscious stream at a time – even though it ends up seeming that way.

This is particularly helpful for thinking about the stream of sounds because sounds only make sense when information is integrated over appreciable lengths of time. As an example, imagine you are sitting in the garden and can hear a passing car, a bird singing, and some children shouting in the distance, and that you switch attention rapidly between them. If there were one stream of consciousness then each time attention switched you would have to wait while enough information came into the stream to identify the sound – to hear it as a passing car. In fact attention can switch much faster than this. A new backwards stream can be created very quickly and the information it uses may overlap with that used in another stream a moment later, and another, and so on. So at time t was the bird song really in your stream of consciousness or was it the children’s shouting? There is no answer.

Is it really this way? Do you want to protest that it doesn’t seem this way? As with vision it is possible to look harder into one’s own experience of sound and the results can be quite strange. Thinking about the chiming clocks, and listening as sounds come and go, the once-obvious linear stream begins to disappear.

Looking harder

I have suggested that we need to look hard into our own experience, but what does this mean? How can we look? If the models sketched above are correct then looking means putting in a probe and this precipitates a backwards stream. So we cannot catch ourselves not seeming to be having a stream of consciousness. As William James so aptly put it “The attempt at introspective analysis in these cases is in fact like seizing a spinning top to catch its motion, or trying to turn up the gas quickly enough to see how the darkness looks.” (James, 1890, i, 244).

The modern equivalent is the metaphor of the fridge door. Is the light always on inside the fridge?  You may keep opening the door, as quickly as you can, but you can never catch it out – every time you open it, the light is on.

Things, however, are not quite that bad for the stream of consciousness. We do, after all, have those obvious examples such as the chiming clock and the meaningless half a word to go on. And we can build on this. But it takes practice.

What kind of practice? A good start is calming the mind. There are many meditation traditions whose aim is to see the mind for what it really is, and all of these begin with calming the mind. You might say that at first it is more like a raging torrent or even a stormy ocean than a stream. To see whether there even is a stream we need to slow everything down. This is not easy. Indeed it can take many years of diligent practice, though some people seem to be able to do it much more easily than others. Nevertheless, with a calm mind it is easier to concentrate, and to concentrate for longer.

Now we can ask “What am I hearing now?”. At first there seems always to be an answer. “I am hearing the traffic” or “I am hearing myself ask the question in my head”. But with practice the answer becomes less obvious. It is possible to pick up the threads of various sounds (the clock ticking, the traffic, one’s own breathing, the people shouting across the road) and notice in each case that you seem to have been hearing it for some time. When you get good at this it seems obvious that you can give more than one answer to the question “What was I hearing at time t?”. When you can do this there no longer seems to be a single stream of sounds.

My purpose here is not to say that this new way of hearing is right, or even better than the previous way. After all, I might be inventing some idiosyncratic delusion of my own. My intention is to show that there are other ways of experiencing the world, and finding them can help us throw off the false assumptions that are holding back our study of consciousness. If we can find a personal way out of always believing we are experiencing a stream of consciousness, then we are less likely to keep getting stuck in the Cartesian Theatre.

I asked at the outset ‘What is all this? What is all this stuff – all this experience that I seem to be having, all the time?’. I have now arrived at the answer that all this stuff is a grand illusion. This has not solved the problems of consciousness, but at least it tells us that there is no point trying to explain the difference between things that are in consciousness and those that are not because there is no such difference. And it is a waste of time trying to explain the contents of the stream of consciousness because the stream of consciousness does not exist. 

References

  1. Baars, B.J. (1988) A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press.
  2. Baars, B.J. (1997) In the Theatre of Consciousness: The Workspace of the Mind. New York: Oxford University Press.
  3. Blackmore, S.J., Brelstaff, G., Nelson, K. and Troscianko, T. (1995) Is the richness of our visual world an illusion? Transsaccadic memory for complex scenes. Perception, 24, 1075-1081.
  4. Chalmers, D.J. (1995) Facing up to the problem of consciousness. Journal of Consciousness Studies, 2, 200-219.
  5. Clark, A. (1997) Being There: Putting brain, body, and world together again. Cambridge, MA: MIT Press.
  6. Crick, F. (1994) The Astonishing Hypothesis. New York: Scribner’s.
  7. Damasio, A. (1999) The Feeling of What Happens: Body, emotion and the making of consciousness. London: Heinemann.
  8. Dehaene, S. and Naccache, L. (2001) Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework. Cognition, 79, 1-37.
  9. Dennett, D.C. (1991) Consciousness Explained. London: Little, Brown & Co.
  10. Edelman, G.M. and Tononi, G. (2000) Consciousness: How matter becomes imagination. London: Penguin.
  11. James, W. (1890) The Principles of Psychology. London: Macmillan.
  12. Kanwisher, N. (2001) Neural events and perceptual awareness. Cognition, 79, 89-113.
  13. Karn, K. and Hayhoe, M. (2000) Memory representations guide targeting eye movements in a natural task. Visual Cognition, 7, 673-703.
  14. Levin, D.T., Momen, N. and Drivdahl, S.B. (2000) Change blindness blindness: The metacognitive error of overestimating change-detection ability. Visual Cognition, 7, 397-412.
  15. Levin, D.T. and Simons, D.J. (1997) Failure to detect changes to attended objects in motion pictures. Psychonomic Bulletin and Review, 4, 501-506.
  16. Levine, J. (1983) Materialism and qualia: The explanatory gap. Pacific Philosophical Quarterly, 64, 354-361.
  17. Lindberg, D.C. (1976) Theories of Vision from Al-Kindi to Kepler. Chicago: University of Chicago Press.
  18. Mack, A. and Rock, I. (1998) Inattentional Blindness. Cambridge, MA: MIT Press.
  19. Nagel, T. (1974) What is it like to be a bat? Philosophical Review, 83, 435-450.
  20. Nagel, T. (1986) The View from Nowhere. New York: Oxford University Press.
  21. O’Regan, J.K. (1992) Solving the “real” mysteries of visual perception: The world as an outside memory. Canadian Journal of Psychology, 46, 461-488.
  22. O’Regan, J.K. and Noë, A. (in press) A sensorimotor theory of vision. Behavioral and Brain Sciences.
  23. O’Regan, J.K., Rensink, R.A. and Clark, J.J. (1999) Change-blindness as a result of “mudsplashes”. Nature, 398, 34.
  24. Rensink, R.A. (2000) The dynamic representation of scenes. Visual Cognition, 7, 17-42.
  25. Simons, D.J. (2000) Current approaches to change blindness. Visual Cognition, 7, 1-15.
  26. Simons, D.J. and Levin, D.T. (1997) Change blindness. Trends in Cognitive Sciences, 1, 261-267.
  27. Simons, D.J. and Levin, D.T. (1998) Failure to detect changes to people during real-world interaction. Psychonomic Bulletin and Review, 5, 644-649.
  28. Varela, F.J. and Shear, J. (1999) The View from Within: First person approaches to the study of consciousness. Thorverton, Devon: Imprint Academic.
  29. Varela, F.J., Thompson, E. and Rosch, E. (1991) The Embodied Mind. London: MIT Press.

There is no stream of consciousness – This paper is published in the Journal of Consciousness Studies, Volume 9, numbers 5-6, which is devoted to the Grand Illusion. See http://www.imprint.co.uk/jcs/. This paper is based on a conference presentation by Dr Susan Blackmore at ‘Towards a Science of Consciousness 2001’, Skövde, Sweden, 7-11 August 2001.

Twelve Varieties of Subjectivity

 

Subjectivity is a theme common to many of those philosophers eager to deflate the ambitions of cognitive science. The claim is that persons differ from all other things in that they cannot be exhaustively described in the third person. Any attempt to do so will fail to capture something about every human being that is essentially subjective. This expression covers many things, and the word sounds all the more impressive for the fact that the things it purportedly designates are lumped into a very mixed bag. When lumped together as if they constituted one hugely complex problem, they tend to induce a sense of hopelessness. Which is exactly what some of the champions of subjectivity count on to preserve its mystery and irreducibility.

Among the champions in question, some of the most famous are Taylor (1989), Nagel (1986), and Searle (1992, 1997).

Here is a sampling of some of their claims. First, Nagel:

“[T]he purely objective conception will leave something out [viz., the subjective content of “I am Thomas Nagel”] which is both true and remarkable” (Nagel 1986, 64).

Next, Charles Taylor:

There are certain things which are generally held true of objects of scientific study which don’t hold of the self:….
1. The object of study is to be taken “absolutely”, that is, not in its meaning for us or any other subject,… (“objectively”).
2. The object is what it is independent of any descriptions or interpretations offered by any subjects
3. The object can in principle be captured in explicit description;
4. The object can in principle be described without reference to its surroundings. (Taylor 1989, 33-34).

Next, Searle:

Conscious mental states and processes have a special feature not possessed by other natural phenomena, namely subjectivity…. [M]uch of the bankruptcy of most work in the philosophy of mind and a great deal of the sterility of academic psychoanalysis over the past fifty years … have come from a persistent failure to recognize and come to terms with the fact that the ontology of the mental is an irreducibly first-person ontology. (Searle 1992, 93, 95)

Consider, for example, the statement ‘I now have a pain in my lower back.’ That statement is completely objective in the sense that it is made true by the existence of an objective fact…. However, the phenomenon itself, the actual pain, itself, has a subjective mode of existence. (Searle 1992, 94)

To these claims about the irreducibility of subjectivity, two forms of resistance are possible. One is to claim that the multifarious problems posed by consciousness and subjectivity actually all reduce to one. That one may or may not be currently soluble, but at least one has one problem and not many. That strategy is adverted to (though not adopted) in a recent book by Lycan, who defends “a weak version of Brentano’s doctrine that the mental and the intentional are one and the same…. It would follow that once representation itself is (eventually) understood, … I do not think there will be any ‘problem of consciousness’ left” (Lycan 1996, 11)[1]. A similar strategy may be implied in the recent books by Tye (1995) and Dretske (1995) defending a representationalist theory of consciousness.

The line I propose to pursue here is the opposite. It starts from the consideration that big mysteries are sometimes made of a lot of little tricks, and so might yield to a divide-and-conquer strategy. I suspect this is true of the mysteries of consciousness: if the “problem of consciousness” is not one, but many, and if each one can be successfully dismissed or solved along naturalistic lines, then by this different route we shall reach the same goal, of bringing it about that there not be any ‘problem of consciousness’ left.

I do not aim to demonstrate this large claim here. I concentrate only on the term “subjectivity”, and propose merely to make a start on the first phase, consisting in drawing up a list of ostensibly different problems of subjectivity. If some should turn out to be reducible to others, so much the better. But if not, then each variety of subjectivity might be tackled singly, and this might indeed contribute to a “natural history” of the human mind, in such a way as to bring it all under the aegis of science.

In his perceptive comments on the version of the present paper as presented to the ISCC conference, Jean-Michel Roy urged that by concentrating on the diversity of claims using the word ‘subjective’ made by philosophers, I risked missing the point which only a proper conceptual or phenomenological analysis could reveal. But the concept lives in what people use it to mean. No conceptual analysis, therefore, can avoid taking into account what those who use the concept have used it to do. Phenomenological analysis suffers from similar problems[2]. Still, one might ask what the root word ‘subject’ itself suggests as to what the core of subjectivity might mean. Two sufficient conditions then suggest themselves: either that we are talking about items to which only the subject has epistemic access, or that we are talking about items that are ontologically distinct in somehow pertaining only to the subject, as claimed by Searle in the passage just cited. The former can happily be conceded by any materialist. The latter begs the essential question of whether it makes sense to speak of an ontological category which is essentially defined in terms of conditions on epistemic access to it. But these features do not exhaust the claims made for subjectivity and its consequences for our understanding of the mental. That is my justification for undertaking the sort of botanizing I propose in what follows.

To give the flavour of the strategy to which this botanizing is supposed to contribute, here is an example of how confusion between various senses of subjectivity can be misleading.

Berkeley argued against the distinction between primary and secondary qualities on the ground that all are equally “ideas existing only in the mind” (Berkeley 1957, 27, 30). The subjectivity of secondary qualities, in the sense of their relativity to the observer’s mind, can be shown to attach equally to primary qualities. If we resist the idealist conclusion, we can re-interpret this remark as implying that the perception of all qualities depends on the interaction between the external world and the state of the subject’s sense-organs. Berkeley’s argument assumes that if the appearances of things are relative to the sensory and conceptual apparatus of the perceiver, this entails that their attribution to the outside world is mere projection, with no objective correlates beyond themselves. The argument conflates phenomenology — the quality of experience — relativity to an observer, and projection — the attribution of a property to the outside world which is actually entirely resident in or manufactured by the observer. This conflation is plausible in the extreme case in which some quality attributed by an observer to a target depends totally on the perceiver and not at all on the target. For there is then nothing to the property in question except the observer’s experience of it, and relativity collapses into projection. But no lesser degree of relativity can effect this collapse. At most, Berkeley’s arguments show that the conflation of these different senses of subjectivity leads to idealism. This is not what the modern champions of subjectivity intend, but it may turn out to be the logical consequence of their strategy nevertheless. To some of us, this is reason enough for avoiding the conflation.

Phenomenology, relativity and projection are only three of the possibly distinct senses of subjectivity that have been adduced against materialism. In what follows I distinguish twelve basic varieties — senses, readings, interpretations, or aspects — of subjectivity or ‘the subjective’. Some, as we shall see, might easily be further divided. Moreover, I am not confident that they are exhaustive. But I remain unconvinced that any form of “irreducible subjectivity” presents an obstacle to physicalism, and I offer the hope that tackling each variety singly may make it easier to pre-empt their use as a medieval mace to whack wicked reductionists over the head with. Here, then, is my list.

1. Perspective.

An individual is somewhere in space-time, and not somewhere else. Except for God, of course, who was invented to instantiate all contradictions in blessed harmony. He’s everywhere and everywhen, though at the same time, as it were, not in time or space[3]. But the upshot of this is that every individual has a point of view, a perspective, and apprehends the world, so far as it can apprehend the world, from somewhere and not nowhere[4] (Nagel 1986). If taken in isolation, the feature of being somewhere in particular affects all kinds of individuals, not just humans. But only those individuals that can view something can presumably have a point of view. Thus Searle again:

Subjectivity has the further consequence that all of my conscious forms of intentionality that give me information about the world independent of myself are always from a special point of view. The world itself has no point of view, but my access to the world through my conscious states is always perspectival. (ibid. 95).[5]

In itself, however, that could be true of any other living thing. Nor is it a requirement to be alive: an artificial eye has a point of view. More generally, as shown in the excellent discussion of this subject in Proust (1997), aspectuality can be seen as a consequence of mere differences of informational channels, and therefore doesn’t require any level of consciousness.

Perspective might itself be of two kinds. This can be seen by asking: Does a still camera have a genuine point of view? One reason to deny this is that for a still camera there is nothing that corresponds to the difference between locality in time and locality in space. For a living individual, these pose slightly different problems. For there are different ways in which we might care about the effects of our actions in distant space and at different times. Time is asymmetrical in this sense (among others): we care more, or quite differently, about what happens in the future than about what happened in the past. But although the things we care about may, of course, be unevenly distributed, space has no uniformly privileged direction. So temporal perspectivity seems to constitute a more serious species of subjectivity than the spatial kind.

Now perspectivity is sometimes equated with subjectivity in general, as suggested in the last quotation from Searle above. Yet subjectivity is also associated with the self, and the temporal form of perspectivity actually causes problems for the view that my self is my subjectivity. This is because changes in perspective, especially in temporal perspective, change the relative value of different prospects. For example, as (Ainslie 1992) has pointed out, we seem to discount the future at a hyperbolic rate, so that the closer prospect can surpass the more distant in apparent value, rather as a low building can loom higher than a tall one when one is up close to the former. Where such changes occur, which perspective is the right one, that is, truly mine? Are there as many individual selves as there are perspectives? In a recent article, Galen Strawson answers in the affirmative: each of us is many brief, material, successive selves, he says, strung like pearls on a string (Strawson 1997). Before him, Derek Parfit (1971, 1984) is famous for advocating a similar view.
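Ainslie's preference-reversal point can be made concrete with a small numerical sketch. The reward values, delays, and discount parameter below are invented for illustration; the only substantive assumption is the hyperbolic shape of the discount function, value / (1 + k · delay), which is the form Ainslie describes.

```python
def hyperbolic(value, delay, k=1.0):
    """Hyperbolically discounted present value: value / (1 + k * delay)."""
    return value / (1 + k * delay)

# An illustrative pair of rewards: a smaller one available sooner,
# a larger one available later (all numbers are made up).
small, t_small = 50, 1
large, t_large = 100, 5

# Seen from a distance (both delays pushed back by 10 time units),
# the larger, later reward is preferred...
assert hyperbolic(large, t_large + 10) > hyperbolic(small, t_small + 10)

# ...but up close the nearer reward "looms higher", like the low
# building in front of the tall one, and the preference reverses:
assert hyperbolic(small, t_small) > hyperbolic(large, t_large)
```

With exponential discounting no such reversal can occur, since time-shifting both delays preserves the preference order; that is why the hyperbolic shape matters to the argument.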

Suppose I get my friends solemnly to promise to put me gently to death when I become gaga, because I would rather die than be gaga. What if, once I become gaga, my priorities change? Now I don’t want to die: I would rather live and be gaga. Do my friends still “owe” me euthanasia, against my present wishes? Actually, the answer is No either way, but for a different reason in each case. If I’m a different self from what I was when they made their promise, then you can’t be bound by him (i.e. me-then) to do anything to or for me (i.e. me-now). But if I’m the same person, then I can now relieve you of your obligation to me if I change my mind. The facts about perspective, then, appear to be neutral in practice between the Parfitean and a traditional concept of the self, though the two concepts seem to differ significantly in theory.[6]

Note, however, that in articulating the problem of the asymmetry of time we have to introduce an additional factor: what’s involved is not just being at a certain place and time, but envisaging what is seen from that point of view as affording possibilities for agency. Make that our second form or aspect of subjectivity.

2. Agency

Agency is presumably not an aspect of human individual subjectivity that we concede to inanimate individuals. As a human, one experiences oneself as having the power to choose and act. The fact of being an agent, as has often been stressed,[7] is a form of subjectivity in the precise sense that I, the subject, and I alone can decide what I will do, although — depending on your own particular stance on the tricky antinomies of free-will — all sorts of circumstances can determine what I in fact end up deciding. Whether I am free to decide to do or not to do A, compatibly with your ability to predict which I will do, is a conundrum that I won’t discuss. I’ll only assert the obvious, namely that whether or not you can predict what I will do does not change the fact that I do in fact experience myself as deciding.

Perhaps this apparent fact about the irreducibility of decision is really just an effect of perspective. From my own point of view as an agent, I can’t take my reason for action to causally-determine my action without failing to decide; but failing to decide is just another decision. (Compare: I never can directly see my own face: would I be right in concluding that my face is different from everyone else’s in some crucial way that makes it invisible?)

The first locus of the claim that agency is a form of subjectivity is probably Descartes[8], though he didn’t say it in so many words. But it takes just a little teasing out to get from the claim that the will is infinite to the present thesis. The infinitude of the will is unfortunately compatible with complete powerlessness. So the measure of the will’s freedom has nothing to do with its effectiveness in bringing about any change in the world. Furthermore, this infinite freedom says nothing about the origins of our desires. An admittedly simplistic argument suggests that the infinitude of the will’s freedom is also compatible with there being absolutely none of my desires that originates in myself. For whatever my desire, I cannot deny that it might have been different had I had different genes or a different life. In other words, my desire must have come from causes ultimately traceable to my genes and to my environment. But since I am not in any sense the author of either my genes or my environment, it seems to follow that I’m not the author of my own desires either. Whatever one may think of this somewhat fishy argument, it remains true that the “freedom of the will,” which I’m equating with the subjectivity of agency, cannot be denied: whenever I am made conscious of a set of possible choices, choosing is not so much something I can do, as something I cannot forbear to do, regardless of the origins of my grounds for making it or of whether my choice makes any difference to what results.

3. Titularity or ownness.

One of the specific ways in which my power of agency is “essentially subjective” is that my actions are mine in a peculiar sense of the word. Sergio Moravia (1995) has labeled “titularity” the fact that my mental attributes (including but not limited to qualia) are my own in a unique sense of ownership. This sense of ownership is indeed peculiar. It is different from the sense in which I own my bicycle, different from the sense in which I own my hair; different from the sense in which, on some views, I own myself and no other person can logically own me; and different again from the sense in which I “own myself” and no one else can (ethically) own me. For if it is unethical for some person to own another, then it is not metaphysically impossible. But it would seem to be not merely unethical, but metaphysically (or logically) impossible for the slave-owner to own his slave’s experiences.[9] The point has been made by Tye (1995, 10-11, 71ff), who distinguishes two problems raised for materialism by this feature. We might call these the two conditions of special ownership. One is that every mental state necessarily belongs to someone or other; and the second is that every mental state necessarily belongs to whoever it belongs to and not anyone else.

This is a particularly good example of the mystifying function of these declarations of subjectivity. For the special sense of ownership involved here is not, in fact, exclusive to mental states, but belongs to a large class of predicates. It was described long ago by Aristotle, in connection with what commentators have named “dependent particulars”. There are two senses in which we can talk about the whiteness of this paper: one refers to a specific shade of white, and in that sense the whiteness of this paper might also belong to, or characterize, some other surface. But in another sense it is logically impossible that the whiteness of this paper should belong to anything else. We can reidentify this paper, even if it has changed colour, but there is no way that we can reidentify its whiteness independently of it. The paper exists independently of its whiteness, but not vice versa[10] (Aristotle 1963, 1a25-27). Tye points out that non-mental actions, such as one person’s laughter, or her walk, also meet both the conditions of special ownership. Events, even those involving no agency at all, exhibit the same feature. My pen’s falling to the floor is not something that logically could pertain to nothing, nor is it something that could pertain to anything other than my pen.

Besides titularity, the legal notion of ownership involves two features that have figured prominently in characterizations of what it means for something to belong to me. One is that I have a special right to use it: I have, as the phrase goes, privileged access to it. The other is that I have the right to exclude others from my property. You might call this the right of privacy, and where it concerns my beliefs about myself, it amounts to their incorrigibility by anyone else. These two, then, constitute the next two forms of subjectivity. They are commonly confused, at least in the terms used to describe them. But if we keep in mind the difference between access and exclusion, it seems plain that they are indeed separable doctrines. I might enforce my right of access to my property, while not excluding anybody else. The converse seems to make less psychological sense, but is not logically impossible.

4. Privileged access.

It was long a dogma of the philosophy of mind that one of the defining characteristics of mental states was their privacy, that is, their inaccessibility to other observers. Tye sees this as one of the aspects of ownership: “My pains, for example, are necessarily private to me. You could not feel any of my pains.” (Tye 1990, 71). But clearly privacy is distinct from that other feature of ownership, privileged access. This may well be one of the features that the champions of subjectivity have in mind, but nowadays the issue of access is not generally regarded as clear in either direction. This is partly due, no doubt, to the influence of Wittgenstein’s attack on private languages. On the one hand, we have gotten used to talking about mental states which are clearly enough my own, but to which I have no access either because they are repressed in the “Freudian Unconscious” or because they pertain to the “Helmholtzian Unconscious” (Johnson-Laird 1988, 354). Conversely, in the light of some recent thought experiments, the impossibility of accessing another person’s mental states can no longer be asserted without begging just the sorts of question at issue in debates about materialism.[11]

5. The incorrigibility of appearance.

One of the political privileges of privacy, in the sense in which we speak of a right to privacy, is the right to keep others out. Under the last rubric I have focused on the subject’s access (which turns out to be dubious). What then of the subject’s converse right to exclude others?

Objective reality is more than meets the eye. No one subject, it seems, is ever in a position to exclude others from all facets of Reality. “Mere” appearances, as we call them, on the other hand, are subjective. We inherit from Plato one of the reasons for drawing this contrast: appearances change, while reality supposedly stays the same. But this is quite wrong-headed. If you were aware of an “appearance” which never changed, it would be a pretty sure sign that you were having a hallucination. Some of the changes in appearances are due to perspective, which I’ve already talked about; and if you didn’t see something in perspective and from your own point of view, that would prove that it was not objective. If I thought I saw a circle from the side and it appeared circular, then I’d have to conclude it wasn’t really a circle. The supposed subjectivity of appearance, then, lies in its incorrigibility. It is only I that am incorrigible about what appears to me; no one else has equal authority. Conversely, what seems to be the case is the only thing on which (for example) Descartes allows that I am incorrigible (Descartes 1984-1985, 29). Incorrigibility thus emerges as a form of subjectivity independent of those already listed, because it is logically possible that propositions concerning perspective, agency, ownership, and even privacy (and also qualitative experience and seeing-as, which we shall get to in a moment) might all be corrigible on the basis of objective evidence. Incorrigibility has a low status these days: most philosophers agree that if any candidate presented itself it would turn out to be an illusion (Gopnik 1993). An inverse relationship holds between the empirical content of a claim and the degree of its certainty. It is a plausible principle, even if we do not cling to strict verificationist or falsificationist dogmas, that there is a direct correlation between corrigibility and content (Sellars 1963).

6. Proprioceptive sense.

Among the things I own in some peculiar sense, though not in the peculiar sense just discussed, is my own body. Herein lies one more trademark of subjectivity. In the most common case, the proprioceptive “sense” designates the awareness one has of the position of one’s limbs. Try this: close your eyes and touch your nose with your index finger. You may miss, but not by much. Ramachandran and Hirstein have described a delightful experiment in which I can actually find the tip of my nose to be displaced to where the tip of your nose is:

[T]he subject sits in a chair blindfolded, with an accomplice sitting at his right side…. facing in the same direction. The experimenter then stands near the subject, and with his left hand takes hold of the subject’s left index finger and uses it to repeatedly and randomly tap and stroke the nose of the accomplice, while at the same time, using his right hand, he taps and strokes the subject’s nose in precisely the same manner, and in perfect synchrony. After a few seconds of this procedure, the subject develops the uncanny illusion that his nose has either been dislocated, or has been stretched out several feet…..” (Ramachandran and Hirstein 1997, 452).

What I find particularly intriguing about this illusion is that in fact this “sense” which guided your hand is not a sense at all, insofar as it has no “organ”. What’s more, it is clearly a form of subjectivity, insofar as only the subject can make the relevant observation. We can’t have someone else’s phantom nose illusion. But it’s not just a quale or bundle of qualia.

Here again one might divide even more finely. For the special proprioceptive consciousness of one’s own face seems to form a distinct class by itself. It is not so easily explained as the nose-displacement illusion, and, even more interestingly, Jonathan Cole has described severe disturbances in self-concept and interaction with others in patients suffering from “Möbius syndrome”, which involves an inability to move any of the muscles of facial expression (Cole 1998). This inability is described as inhibiting the development of a sense of self, no doubt largely because it makes impossible the facial imitation which, from the earliest days of a baby’s life, establishes one’s sense of one’s own emotions in some sort of concert with the emotions of others. Cole cites (Meltzoff and Gopnik 1993)’s observation of imitation in infants as suggesting “that in early experience babies learn something of emotion, and how it is experienced, by taking the facial expressions of others and, by imitation, feeling their own faces to be like others” (Cole 1997, 481). More of this under a later heading (see “The subjectivity in intersubjectivity” below). At this point I wish only to point out that no such mechanism could make sense unless there existed the sort of pseudo-sixth sense that is proprioceptive perception of one’s own face.

But is this a problem comforting to the mysterians? No. On the contrary, all these proprioceptive phenomena, both common and exotic, are highly suggestive of the physical, neurological mechanisms likely to give rise to them.

7. Ipseity.

When I refer to myself, I am not just referring to the person who happens to be me. This is a point developed in a number of papers by H.-N. Castaneda (e.g. Castaneda 1988) and by John Perry (1979). The latter’s vivid example has him noticing a trail of sugar in the supermarket. He identifies its source as someone whose cart contains a leaking bag of sugar, who is unaware of it, and who has apparently been all over the supermarket. But for a long time he fails to identify the person thus “identified” with himself.[12]

Is knowing that I (Ronnie) am I a real piece of knowledge? God couldn’t know it, though he could know that the writer of the previous sentence is Ronnie, or any number of other statements identifying me with myself under two different descriptions.

A similar point could be made about perspective: since God is everywhere, he necessarily lacks perspective. Is this a limitation on God’s supposed omniscience? Whatever the answer, it is tempting to think that ipseity is merely a side-effect of perspective. Tempting, but wrong: for the facts of perspective are entailed by the existence of spatio-temporal particulars; not so ipseity, since it would be theoretically possible for all information I have about myself to be devoid of perspective, and for all my desires to be formulated in entirely general terms. Sober and Wilson have suggested that ipseity is an adaptive trait which allows a self-interested individual to channel benefits to itself without having to burden itself with large amounts of discriminatory information. “This speculation,” they add, “entails a small irony. People use the concept of “I” to formulate the thought that they are unique. Yet, part of the reason that people have this concept is that they are not unique….” (Sober and Wilson 1998, 214, 350). Sober and Wilson also correctly point out that ipseity (which they call “self-recognition”) differs from “self-awareness” in that “self-recognition does not require that the individual be a psychologist”, i.e. think of themselves as having beliefs and desires. (Sober and Wilson 1998, 216).

8. Tone or colour.

What is it like to be you? It’s not obvious that Descartes was right about the transparency of that consciousness, nor that there isn’t anything it’s like to be me, nor that it is somehow reducible to all the others, or to some subset of qualia. An individual tone, or colour, is thus subjective in what seems to be yet another irreducible sense. Nevertheless, the colour of my life may supervene on many physical properties, just as the colour of a surface supervenes on a number of properties of texture, light, and relational properties computed by our visual system in ways determined by complex ecological factors (Thompson 1995). What is distinct about this form of subjectivity is that it concerns not sensory experience in general, but one’s experience of oneself in particular. It is precisely not reducible to ipseity, however, if the contrast I just borrowed from Sober and Wilson between ipseity and uniqueness is a real one. My feeling-of-being-me may well be different from anyone else’s analogous feeling, and indeed is likely to be so insofar as it supervenes on a number of factors that determine different aspects of our experience of ourselves.

9. The subjectivity in intersubjectivity.

My identity is, in part, intersubjective. I mean by this that it is causally constituted by my being able to gauge the state of my own mind, and particularly my own emotions, in interaction with others. I note three aspects of this interaction. First, grown-ups tell children what they feel, more or less effectively, resulting in adults who know more or less what they feel. I am not sure quite how to analyse the capacity to be so trained to recognize one’s own emotions; obviously it presupposes that there must be something that one is being trained to recognize, but it doesn’t follow that there must already be full-fledged emotions awaiting recognition. For it may be, as Sue Campbell has stressed, that expressing emotions contributes crucially to the determination of the emotion, both in the sense of bringing the emotion into being and in the sense of making it to be just this emotion and not another (Campbell 1998). Again, there would be no such possibility were there not some subjective side to the interpersonal transaction which is an expression of emotion. Second, one creates one’s subjective sense of one’s own emotions by comparing them with those of others. But this suggests something of a paradox, since subjectivity seems to be a precondition, or presupposition, of its own cause. For there surely can’t be inter-subjective engagement if there are no subjectivities between which the engagement takes place. The solution to this puzzle no doubt lies in a developmental and dynamic perspective, allowing us to see how the intersubjective and the subjective appreciation of one’s emotional self develop together. One piece of the puzzle may lie in a third observation: Meltzoff and Gopnik (1993) have shown that infants are able to imitate facial expressions in the very earliest days of their lives, at a stage when one would be disinclined to ascribe to them anything like a sense of self.
Nevertheless it would be surprising if this capacity didn’t play a role in the acquisition of emotional expressions and through them of a sense of self. Jonathan Cole puts it this way:

Through imitation a face can be assimilated from visual experience, through proprioception, into felt experience: something can be taken from being “out there” in another, to being “in me” (Cole 1998, 110)

If so, then certain forms of interaction would lie at the base of the acquisition of subjectivity in some of the other senses I have sketched.

10. Projection.

Though most mentions of subjectivity are intended to prove our species’ superiority, it does sometimes happen that the connotation of subjectivity is negative. One such case is where what’s intended is projective illusion or “projection” in the Freudian sense. Now projection is a pathological condition to the extent that it represents a mistake: an illusion in which an unacknowledged characteristic of oneself is ascribed to others, though it isn’t actually there in them. But there are closely related phenomena that carry no such stigma. Indeed, some people have suggested that simulation (a sort of systematic but non-self-deceptive projection) plays a crucial role in our understanding of other people (Gordon 1986, 1992; Goldman 1992; contra, see Stich and Nichols 1992).[13]

11. Seeing-as.

Among the subtleties of my perceptual point of view and of my experiences (to which I return in the next section), lies a facet of subjectivity which does not appear to be exhausted by the previous descriptions. This is the fact that what I perceive, I commonly perceive-as-something-or-other. When I see a duck-rabbit, how many qualia do I see? In the paper cited above, Ramachandran and Hirstein go so far as to propose the following unabashedly functional hypothesis: when I look at a duck-rabbit figure, I can “see-it-as” only one (at least one at a time), because that’s precisely among the functions of consciousness: to fix “irrevocably” what we see so as to make it possible for us to undertake unambiguous action. Though Hamlet says conscience makes cowards of us all, yet it is consciousness, on this hypothesis, that has the job of keeping us from the Hamlet syndrome.

To see something as a so-and-so, is to see it in a way that involves intentionality. Yet some sensations are sometimes held to be experienced non-intentionally: the typical example is pain.

Pains, however, are sensed as painful. Indeed, (Kripke 1980) used this fact to revive an old argument against the identity theory of sensations and brain states. That classical argument went as follows:

A painful sensation S is not just painful, but essentially painful. Any neural process N with which we might claim to identify it, however, could only be contingently painful. Therefore, by Leibniz’s law, N couldn’t be identical with S.

In order to appreciate the modifications that Kripke brought to this argument, recall that he loosened the bonds linking the analytic, the a priori, and the necessary, and their contraries, the synthetic, the a posteriori and the contingent. His analysis requires us to appreciate that some statements, such as the identity statement ‘Water is H2O’, or ‘heat is mean molecular kinetic energy’, might be both necessary and a posteriori. The necessity involved is ontological, not epistemic: Kripke allows that there is nothing wrong with the statement that for all we knew there might have been some different physical process responsible for the feeling of heat. But as it happens, there isn’t and there couldn’t be, compatibly with the laws of nature (Kripke 1980, 333).

Kripke’s doctrine actually constitutes a minor puzzle in the history of contemporary philosophy. For to have thus relaxed the link between what can be known a priori and what is necessary opens the door to an identity theory immune to the old arguments about what can be imagined to be identical. Yet no sooner did Kripke open this door than he tried to push it shut: nothing is more certain, he argued, than the fact that to be painful is a necessary property of any pain. So if N is really identical to S, then it must be the case that N is necessarily painful too. And that Kripke finds incredible, on the ground that while in the case of heat there is something, namely the process of molecular motion, between the sensation and the heat, there is no analogue to this “something in the middle” in the case of pain. For the essence of pain is nothing but the quale felt by the sufferer (339).

But what’s wrong with the other possibility? Why not say that the neural process N is indeed necessarily painful? Since Kripke has duly shown that some necessary truths can be known a posteriori, the burden of proof is now on him to show that this isn’t one such necessary truth. All that we need grant his argument is the analyticity of ‘pain is painful’, and the fact that any physiological state’s painfulness, by contrast, is synthetic. Given that, however, it remains perfectly possible that some physiological state really is necessarily painful just as water is necessarily H2O, regardless of the fact that neither, in advance of scientific knowledge, can be expected to seem analytic.

12. Phenomenal experience.

I have left till last the most hotly contested of subjectivity’s battlefields. Chalmers puts the centrality of qualia in these terms:

The problem of explaining these phenomenal qualities is just the problem of consciousness. This is the really hard part of the mind-body problem (Chalmers 1996, 4).

But it seems to be characteristic of those who take this form of subjectivity as central that any attempt at explaining our talk about qualia in materialist terms is taken as a refusal to take the hard problem seriously. Dennett, for example, has repeatedly been accused of denying that we are conscious: “Dennett thinks there are no such things as qualia, subjective experiences, first-person phenomena, or any of the rest of it.” (Searle 1997, 99). Apart from being admirably unambiguous about the “charge” against Dennett, this is a nice example of how the vast and unspecified “rest of it” is thrown into the same bag as qualia. Dennett has repeatedly denied denying that we have conscious experience, but since he has indeed also “quined qualia” (Dennett 1990), it’s clear without assessing the argument that Searle’s mere assertion of the intuition that qualia just are the “data” doesn’t settle the matter. Dennett doesn’t deny that we are conscious, or that we have experiences, “and the rest of it.” He just claims that the philosophical mystery made about qualia can be dispelled once it is kept separate from other theoretical issues that surround it. He has also argued that once one focuses on the functions of qualia — which can perfectly well be discussed in the third person — there is nothing left for the ineffably private qualia to be.

The issue of qualia seems to me open to a vice-grip strategy, which consists in squeezing the irreducibility of qualia between two complementary poles, the materialist identity theory and functionalism. Both are reductive in the sense that they propose third-person accounts of the qualia in question.

(1) Materialism: Churchland has recently argued that the identity theory of qualia can be rehabilitated. His strategy is illustrated by showing how the colour solid is isomorphic to the solid generated by the three-dimensional structure of the antagonistic receptors which receive input from the three types of cones. The essential strategy here consists in the challenge: “what more do you want than full formal coherence between the physiological mechanisms and the phenomenological structure of the colour solid (or mutatis mutandis)?” (Churchland and Churchland 1998). If it is then objected that correlations don’t establish identity, the objector owes an account of what more is required. The answer can only be that two objects, however perfectly correlated, might differ in their causes and effects. The argument is then ready to be turned over to the functionalist.
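The structural-isomorphism point can be illustrated with a toy model. The weights below are invented rather than physiologically accurate; what matters for the Churchlands' challenge is only that three cone activations map systematically onto coordinates in an opponent-process space, so that sameness and difference of position in the "colour solid" track sameness and difference at the level of the mechanism.

```python
def opponent_coords(L, M, S):
    """Map three cone activations onto toy opponent axes:
    red-green, blue-yellow, and luminance (weights invented)."""
    red_green = L - M
    blue_yellow = S - (L + M) / 2
    luminance = (L + M + S) / 3
    return (red_green, blue_yellow, luminance)

# Two stimuli differing only in S-cone activation occupy the same
# red-green coordinate but different blue-yellow coordinates, so a
# phenomenal difference along one axis of the colour solid lines up
# with a mechanistic difference in one input channel:
a = opponent_coords(0.8, 0.8, 0.2)
b = opponent_coords(0.8, 0.8, 0.9)
assert a[0] == b[0] and a[1] != b[1]
```

The sketch is, of course, only a picture of what "full formal coherence" between phenomenology and physiology would look like, not evidence that the actual mapping has this form.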

(2) Functionalism: This is also best summed up in a challenge, the zombie challenge: if you can imagine some being whose reactions to a given scene (sound, sight, stimulus, or whatever) are like yours in every possible way (including synesthetic associations, recollections evoked, etc.), can you really also imagine that this being might differ from you merely in lacking qualitative experience? If you can, then subjective consciousness, as such, is strictly epiphenomenal in a sense so strong as to make it “a concept that has no utility whatsoever” (Dennett 1991, 402).

At this point, an objector might suggest that functional equivalence is not enough, since two functionally equivalent items might be substantially different. But this objector is one that can safely be left in the hands of the identity theorist, who can once again appeal to the structural correlations between qualia and their physiological underpinnings.

At the risk of giving this huge debate cavalierly short shrift, the vice-grip strategy seems to me sufficiently promising to suggest that the subjectivity of qualia does not actually constitute an insolubly “hard problem”.

Conclusion

Some of these “senses” or “aspects” of subjectivity may be redundant. I certainly am not confident that none could, by means of some ingenious argument, be reduced or assimilated to others. But to establish conclusively that they are all distinct would involve 66 pairwise comparisons (12 × 11 / 2): I leave this as a rich mine of thesis topics for future doctoral students to explore. All I will rest on now is the thought that there are plausible, non-mystifying avenues of research open on each of the twelve forms of subjectivity I have described, and that in such piecemeal solutions lies the hope of solving the so-called problem of subjective consciousness.
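For what it is worth, the count of comparisons is easy to verify: with twelve candidate forms, the number of unordered pairs is twelve choose two.

```python
from math import comb

# Number of unordered pairs among twelve proposed forms of subjectivity:
print(comb(12, 2))  # → 66
```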

NOTES
Lycan’s own strategy, however, is much like the one I espouse, though he concentrates on anatomizing the term consciousness rather than the term subjectivity. See Lycan (1996, 2-7).
See “Contre la phénoménologie”, forthcoming, available at http://www.chass.utoronto.ca/sousa/contrephen.html.
Christians solved this problem with the usual combination of ingenuity and absurdity: God, while eternal, becomes incarnate, just to show that it’s not logically impossible for Him to say things like “Thank Goodness it’s Friday.”
Nagel, in The View from Nowhere, conflates several types of subjectivity, though I haven’t ascertained that all twelve can be found there. Not surprisingly, subjectivity turns out to be something of a mystery, and an irreducible one at that.
Note how the language here supports the idea that we are dealing with a single and mysterious “ontological mode”: by speaking of a “consequence” of subjectivity, rather than a component, type, or sense of subjectivity, Searle suggests that we can have a prior knowledge of this “ontological mode” and that we are merely drawing out its implications.
The dilemma I have just articulated leaves out what may appear to be the most poignant case: namely where I am already too gaga to express any opinion. In those cases, however, we might justify giving a “living will” authority by default even in a Parfitean world, as a sort of legal fiction, just as we grant authority by default to nearest relatives for certain other decisions affecting the welfare of the incompetent.
By the existentialists, by Stuart Hampshire in his Thought and Action (Hampshire 1983) and by Charles Taylor, in Sources of the Self (Taylor 1989) among others.
Descartes’s cogito can also be credited with stressing subjectivity in a sense I’m not sure how to classify. This emerges from the consideration that if we try to paraphrase his argument in Med. II as a syllogism with a universal major premise, “Whatever thinks, exists,” the argument will collapse because the premise is false, since thinking is done admirably well by many a fictional character.
In the phenomenological literature, there is supposed to be a “second phenomenological reduction” that has to do with ownness; but since I have never understood what that is, I confine myself to non-phenomenological sources.
Cf. Aristotle, Categories 2: “There is, lastly, a class of things which are neither present in a subject nor predicable of a subject, such as the individual man or the individual horse. But, to speak more generally, that which is individual and has the character of a unit is never predicable of a subject. Yet in some cases there is nothing to prevent such being present in a subject. Thus a certain point of grammatical knowledge is present in a subject.”
One set of thought experiments is attributed to Zuboff in (Tye 1995, 78 ff). Ramachandran and Hirstein (1997) offer similar speculations, in which judicious rewiring of brain connections results in two people sharing or exchanging experiences. They infer that sensations are not, in principle, unobservable by others. The contrary appearance, they argue, is due simply to the fact that in ordinary situations I can only have access to the experiences of others through “translation”. But if my brain were wired in just the right way to yours, I would have direct access to your mental states.
A novel the name of which I’ve forgotten tells the story of a detective who suffers from amnesia and gradually comes to the conclusion that the criminal he is tracking is he himself.
For further debate see Davies and Stone (1995).
REFERENCES

Ainslie, G. (1992). Picoeconomics: The Strategic Interaction of Successive Motivational States Within the Person. Cambridge: Cambridge University Press.

Aristotle. (1963). Categories and de Interpretatione (J. Ackrill, trans and notes). Oxford: Oxford University Press.

Berkeley, G. (1957). A Treatise Concerning the Principles of Human Knowledge (C. M. Turbayne, Ed.). Liberal Arts Press. Indianapolis, New York: Bobbs-Merrill.

Campbell, S. (1998). Interpreting the Personal: Expression and the Formation of Feeling. Ithaca: Cornell University Press.

Castaneda, H.-N. (1988). Self-consciousness, demonstrative reference, and self-ascription. In J. E. Tomberlin (Ed.), Philosophical Perspectives, pp. 405-454. Atascadero: Ridgeview.

Chalmers, D. J. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford; New York: Oxford University Press.

Churchland, P. M. & Churchland, P. S. (1998). On the Contrary: Critical Essays. Cambridge, MA: MIT Press.

Cole, J. (1997). On ‘being faceless’: Selfhood and facial embodiment. Journal of Consciousness Studies, 4(5-6), 467-484.

Cole, J. (1998). About Face. Cambridge, MA: MIT Press.

Davies, M. & Stone, T. (Eds.). (1995). Mental Simulation: Evaluations and Applications. Readings in Mind and Language. Oxford: Blackwell.

Dennett, D. (1990). Quining qualia. In Mind and Cognition: A Reader, pp. 519-547. Oxford: Blackwell.

Dennett, D. C. (1991). Consciousness Explained. Boston, Toronto, London: Little, Brown.

Descartes, R. (1984-85). The Philosophical Writings of Descartes (J. Cottingham, R. Stoothoff & D. Murdoch, Trans.). Cambridge: Cambridge University Press. Original work published 1649.

Dretske, F. (1995). Naturalizing the Mind. Cambridge, MA: MIT Press.

Goldman, A. I. (1992). In defense of the simulation theory. Mind and Language, 7, 104-119.

Gopnik, A. (1993) How we know our own minds: the illusion of first-person knowledge of intentionality. Behavioral and Brain Sciences, 16, 145-171.

Gordon, R. M. (1986). Folk psychology and simulation. Mind and Language, 1, 158-171.

Gordon, R. M. (1992). The simulation theory: Objections and misconceptions. Mind and Language, 7, 11-34.

Hampshire, S. (1983). Thought and Action, 2nd ed. Notre Dame, Indiana: University of Notre Dame Press.

Johnson-Laird, P. N. (1988). The Computer and the Mind: An Introduction to Cognitive Science. Cambridge, MA: Harvard University Press.

Kripke, S. A. (1980). Naming and Necessity. Cambridge, MA: Harvard University Press.

Lycan, W. (1996). Consciousness and Experience. Cambridge, MA: MIT Press.

Meltzoff, A. & Gopnik, A. (1993). The role of imitation in understanding persons and developing a theory of mind. In S. Baron-Cohen, H. Tager-Flusberg & D. Cohen (eds), Understanding Other Minds: Perspectives from Autism. Oxford: Oxford University Press.

Moravia, S. (1995). The Enigma of the Mind (Trans. from the Italian L’enigma della mente). Cambridge: Cambridge University Press. Original work published 1986.

Nagel, T. (1986). The View from Nowhere. Oxford: Oxford University Press.

Parfit, D. (1971). Personal identity. Philosophical Review, 80.

Parfit, D. (1984). Reasons and Persons. Oxford: Oxford University Press .

Perry, J. (1979). The problem of the essential indexical. Nous, 13, 3-21.

Proust, J. (1997) Comment l’esprit vient aux bêtes : essai sur la représentation. Paris: NRF Essais.

Ramachandran, V. & Hirstein, W. (1997). Three laws of qualia: What neurology tells us about the biological functions of consciousness. Journal of Consciousness Studies, 4(5-6), 429-457.

Searle, J. R. (1992). The Rediscovery of the Mind. Cambridge, MA: MIT Press. A Bradford Book.

Searle, J. R. (1997). The Mystery of Consciousness: Including Exchanges with Daniel C. Dennett and David J. Chalmers. London: Granta.

Sellars, W. (1963). Science, Perception and Reality. New York: Humanities Press.

Sober, E. & Wilson, D. S. (1998). Unto Others: The Evolution and Psychology of Unselfish Behavior. Cambridge, MA: Harvard University Press.

Stich, S. & Nichols, S. (1992). Folk psychology: Simulation or tacit theory? Mind and Language, 7, 35-71.

Strawson, G. (1997). The self. Journal of Consciousness Studies, 4(5/6), 405-428.

Taylor, C. (1989). Sources of the Self. Cambridge, MA: Harvard University Press.

Thompson, E. (1995). Colour Vision: A Study in Cognitive Science and the Philosophy of Perception. London: Routledge.

Tye, M. (1990). A representational theory of pains and their phenomenal character. In J. Tomberlin (Ed.), Philosophical Perspectives, 9, pp. 223-239. Atascadero: Ridgeview Publishing.

Tye, M. (1995). Ten Problems of Consciousness. Cambridge, MA: MIT Press.

Twelve Varieties of Subjectivity: Dividing in Hopes of Conquest, © Ronald de Sousa, University of Toronto. Penultimate draft of a paper published in Knowledge, Language, and Representation: Proceedings of the ICCS Conference, San Sebastian, Spain, May 15, 1999, ed. J. M. Larrazabal and L. A. Pérez Miranda, Kluwer, 2002.

Role of Consciousness

In this paper, a theoretical account of the functional role of consciousness in the cognitive system of normal subjects is developed. The account is based upon an approach to consciousness that is drawn from the phenomenological tradition. On this approach, consciousness is essentially peripheral self-awareness, in a sense to be duly explained. It will be argued that the functional role of consciousness, so construed, is to provide the subject with just enough information about her ongoing experience to make it possible for her to easily obtain as much more information as she may need. The argument for this account of consciousness’ functional role will proceed in three main stages. First, the phenomenological approach to consciousness as peripheral self-awareness will be expounded and endorsed. Second, an account of the functional role of peripheral perceptual awareness will be offered. Finally, the account of the functional role of peripheral self-awareness will be obtained by straightforward extension from the functional role of peripheral perceptual awareness.

For many, the ultimate goal of scientific research into consciousness is to identify the neural correlate of consciousness – to uncover the neurological “seat” of consciousness in the brain. There are many ways scientific investigation can proceed in pursuit of such a goal. Perhaps the most straightforward way is as follows: first find out what it is that consciousness does, then find out what structure or process in the brain does just that; one would then be justified in identifying the structure or process in question as the seat of consciousness.[1]

            This approach requires, as a first order of business, a comprehensive account of what consciousness does, that is, of the functional role of consciousness in the cognitive system of a normal subject. In order to understand what consciousness does, however, we must first agree on what consciousness is. In what follows, I adopt a specific view of this matter, a view drawn from the phenomenological tradition. On this view, consciousness is a form of peripheral self-awareness. What is meant by the concept of peripheral self-awareness, and what the emerging conception of consciousness is, will be elucidated in due course. In any event, on the phenomenological approach to consciousness adopted here, the functional role of consciousness is given by that of peripheral self-awareness. The latter is what I propose to discuss in the present paper.

Various accounts of the functional significance of consciousness already exist, both in the scientific literature and in the philosophical one. Most of these accounts, however, rest content with pointing out a number of cognitive functions consciousness is somehow involved in. But this falls short of precisely distilling the singular functional contribution of consciousness to any process or state in which it is present. An account attempting to do that will be offered in §5 below. On this account, the precise functional role of consciousness is to provide the subject with just enough information about her ongoing experience to make it possible for her to quickly and effortlessly obtain as much more information as she may happen to need.

            The argument will proceed as follows. In §§1-2, three constraints on the adequacy of an account of the functional role of consciousness will be set out. In §2, the phenomenological approach to consciousness in terms of peripheral self-awareness will be expounded and endorsed. In §3, I will expand on the notion of peripheral awareness, and in particular peripheral self-awareness. In §4, the functional role of peripheral awareness in general will be discussed. This will naturally lead to a discussion, in §5, of the functional role of peripheral self-awareness in particular. The account developed in §5 will be compared and contrasted with several other accounts of the functional role of consciousness in §6.

 

1.      The Functional Role of Consciousness and Functionalism About Consciousness

Mental states and events are rarely (if ever) idle. They normally bring about other mental states and events, as well as certain actions, and they are themselves brought about by other mental states and events, as well as certain physiological conditions. The set of causes and effects that surround a mental state is commonly referred to as the state’s functional role.

            The functional role of a mental state depends on how the state is. The picture is this: the state has various properties, F1, …, Fn, and each property Fi contributes something to (or modifies somehow) the state’s fund of causal powers. One of the properties that some mental states have and some do not is consciousness. We should expect consciousness to contribute something to the fund of causal powers of the mental states that exemplify it. It is not incoherent, of course, to maintain that the property of being conscious does not contribute anything to a mental state’s fund of causal powers – that consciousness is causally inert, or epiphenomenal.[2] But that is an extremely unlikely possibility, a non-starter to say the least. In all likelihood, consciousness has some functional significance, and there is a contribution it makes to mental states that have it.

            In this paper, I will assume that consciousness does have a functional role.[3] As such, consciousness adds something to the mental states that exemplify it. On the other hand, it is implausible to suppose that consciousness is nothing but that “addition.” In other words, it is implausible that a functionalist approach to consciousness could be made to work. In general, functionalism is the view that mental states and properties can be identified with their functional role in the subject’s cognitive economy (Putnam 1967, Lewis 1972).[4] With regard to consciousness, the thesis is that consciousness can be identified with its functional role, that is, that a mental state’s property of being conscious is just the property of having the kind of functional profile we find in conscious states but not in unconscious states (Dennett 1981).

A principled problem for functionalism is that functional role is a dispositional notion, whereas many mental states are categorical. Functional role is a dispositional notion, in that the causal powers of a mental state are what they are independently of whether the state actually manifests them. A mental state’s functional role is a matter of its subject’s disposition to do (or undergo) certain things, not a matter of the subject’s actually doing (or undergoing) those things. But where there is a disposition there must be a categorical basis for it. When an object or state is disposed a certain way, there is a reason why it is so disposed. There must be something about it that grounds the disposition. Now, many mental states appear to be precisely the categorical bases for certain dispositions, rather than the dispositions themselves. It is because the subject is in the mental state she is in that she is disposed the way she is, not the other way round. Such mental states are not just functional role, then; they are what plays, or grounds, the functional role.

There may be some mental states that are plausibly construed as nothing but the relevant bundles of dispositions. A subject’s tacit belief that there are birds in China is plausibly identified with a set of dispositions; there appears to be no need to posit a concrete item that underlies those dispositions. This is because nothing needs to actually happen with a subject who tacitly believes that there are birds in China. But many mental states are not like that. A subject’s conscious experience of the blue sky is more than a set of dispositions. Here there is a concrete item that underlies the relevant dispositions. Something does actually happen with a subject when she has the experience. In virtue of having a conscious experience of the blue sky, the subject is disposed to do (or undergo) certain things. But there is more to the subject’s having the conscious experience than her being so disposed. Indeed, it is precisely because the subject has her experience that she is disposed the way she is. The experience is the reason for the disposition, it is its categorical basis.

There are two points to retain from the foregoing discussion. First, to engage in a search for the functional role of consciousness is not to subscribe to a functionalist approach to consciousness. Second, understanding the functional role of consciousness requires two things. It requires, first of all, understanding how a subject’s having a conscious mental state disposes her (in ways that having an unconscious mental state does not). That is, it requires that the functional role of consciousness be correctly identified. And it requires, on top of that, understanding what it is about a mental state’s being conscious that endows it with this particular functional role. That is, it requires understanding why consciousness has just the functional role it does. This latter requirement is of the first importance. Our conception of consciousness must make it possible for us to see what it is about consciousness that yields the kinds of dispositions associated with conscious states and not with unconscious states. It must allow us not only to identify the functional role of consciousness, but also to explain it.

If consciousness were nothing more than a bundle of dispositions, there would be no question as to why consciousness is associated with just those dispositions. Consciousness would just be those dispositions. But because consciousness is more than a bundle of dispositions – because it is the categorical basis of those dispositions – there are two separate questions that arise in relation to its functional role: what does consciousness do, and why is that what consciousness does? The latter arises because, when we claim that consciousness underlies certain dispositions, we assume that there is a reason why these are the dispositions it underlies. The matter can hardly be completely arbitrary, a fluke of nature. Therefore, unless functionalism about consciousness is embraced, both questions must be answered. Conversely, functionalism about consciousness necessarily fails to explain why consciousness has the functional role it does, and is to that extent unsatisfactory. A more satisfactory account of consciousness would meet both our theoretical requirements: it would both identify and explain the functional role of consciousness.[5] Let us call the former the identification requirement and the latter the explanation requirement.[6]

 

2.      A Phenomenological Approach to Consciousness

When discussing the functional role of consciousness, it is important to distinguish the role of conscious states from the role of consciousness proper. As noted in the previous section, the causal powers of mental states are determined by these states’ properties. Each property a mental state exemplifies contributes something to the state’s fund of causal powers. Clearly, then, some of the causal powers of a conscious state are not contributed to it by its property of being conscious, but by its other properties. They are powers the state has, but not in virtue of being conscious. It would have them even if it were not conscious. Therefore, it is important that we distinguish between the causal powers that a conscious state has and the causal powers it has precisely in virtue of being conscious. Let us refer to the latter as the causal powers of consciousness proper. These are the powers contributed to a conscious state specifically by its property of being conscious.

            Consider a subject’s conscious perception of the words “terror alert” in the newspaper. Such a conscious experience is likely to raise the subject’s level of anxiety. But it is unclear that the rise is due to the fact that the subject’s perception is conscious. Indeed, data on the effects of subliminal perception on emotion suggests that an unconscious perception of the same stimulus would also raise the subject’s level of anxiety.[7] This suggests that while the subject’s perception of the words “terror alert” has the causal powers to raise the level of anxiety, it is not in virtue of being conscious that it has those causal powers. The conscious perception’s power to raise the level of anxiety is not a function of consciousness proper.

            An account of the functional role of consciousness must target the causal powers of consciousness proper. It must distill the singular contribution of consciousness itself to the fund of causal powers of conscious states. Our concern is not with the causal powers of mental states that happen to be conscious, but with the causal powers conscious states have because they are conscious. This constitutes a third requirement on an adequate account of the functional role of consciousness; let us call it the singularity requirement.

            To meet the singularity requirement, we must get clear on what consciousness proper is. What is the property mental states have when and only when they are conscious, and in virtue of which they are conscious? Oceans of ink have been spilled in recent years in search of an answer. A thorough discussion of the matter will require that we focus exclusively on it. For this reason, in this paper I adopt somewhat dogmatically a view of what consciousness is. Although I will do the minimum to justify that adoption, my main goal is to explore the implications of the view for the question of functional role.

            The view I will adopt is drawn from the phenomenological tradition. It is well known that Brentano (1874) proposed intentionality as the mark of the mental. It is less well known that he proposed self-directed intentionality as the mark of the conscious. For Brentano, a mental state is conscious when, and only when, it is intentionally directed at itself. Moreover, it is in virtue of being thus directed at itself that the state is conscious.[8] When a person is consciously aware of, say, a tree, she has a mental state that is intentionally directed both at the tree and at itself. Thus every conscious state includes within it an awareness of itself.

Normally, when a person is consciously aware of a tree, the focus of her awareness is the tree, not her awareness of the tree. In this respect, the self-directed intentionality enjoys a lower status, in a sense, than the outward-directed intentionality. To accommodate this fact, Brentano distinguished between primary intentionality and secondary intentionality.[9] Primary intentionality is a conscious state’s directedness at the main object of awareness, whereas secondary intentionality is its directedness toward objects that are outside the focal center of awareness.

The upshot is that for Brentano, a mental state is conscious when it exhibits secondary self-directed intentionality, that is, when it is secondarily directed at itself. This conception of consciousness has subsequently become commonplace in the phenomenological tradition, through Brentano’s influence on Husserl (1928), who defended a similar view.[10] The view was then embraced by Sartre (1943), Henry (1963), Gurwitsch (1985), and the members of the Heidelberg School in Germany.[11]

As I said above, I will not present a detailed defense of the phenomenological conception of consciousness. But let me indicate the main source for its plausibility. At first approximation, a conscious state is a state the subject is aware of having.[12] When I have a conscious experience of the blue sky, I am aware of having my experience. The experience does not just take place in me, it is also for me – in the sense that I am aware of its taking place. If I were completely unaware of perceiving the sky, the perception would have been unconscious. Conscious mental states are not sub-personal states, which we “host” in an impersonal sort of way, without being aware of them.

To be sure, we can readily have conscious experiences without becoming wholly consumed with them. Thus, I can have my conscious experience of the sky when glancing at it inadvertently. In that case, I am not aware of my experience in a very focused way. However, I am necessarily aware of my experience in some way; otherwise it would not be conscious. Therefore, in this case I am aware of my experience in some sort of unfocused way. Upon reflection, most of our conscious experiences are of this sort: they are not experiences we dwell on in a very focused and deliberate way. Normally, when we have a conscious experience of the sky, we do not concentrate on our experience, but on the sky itself. Normal conscious states are thus states of which we are aware in an unfocused way.

By way of clarifying the matter, let us distinguish three ways in which a subject may be related to one of her mental states, M. A subject may be either (i) completely unaware of M, or (ii) focally aware of M, or (iii) peripherally aware of M. Mental states the subject is completely unaware of are unconscious states. Only mental states the subject is aware of are conscious. Normally, the subject is only peripherally aware of her conscious mental states, though it may also happen that she is focally aware of a conscious state.[13]

Observe, however, that when a subject becomes focally aware of one of her mental states, it is not only the state in question that is conscious, but also that very state of focal awareness.[14] Since every conscious state is a state one is aware of having, this focal awareness – being a conscious state – must be itself a state the subject is aware of having. So the subject must be either focally aware of this focal awareness or peripherally aware of it; she cannot be completely unaware of her focal awareness. However, if the subject is focally aware of this focal awareness, her focal awareness of the focal awareness would also be conscious, and therefore the subject would have to be aware of it too. To avoid an infinite regress of focal awarenesses, at some point one of the subject’s states of focal awareness must be such that the subject is not focally aware of having it. Yet being a conscious state it would have to be a state the subject is aware of. Therefore, the subject would have to be peripherally aware of that state. This peripheral awareness will cap the regress of focal awarenesses. It appears, then, that in every episode of our mental life in which we harbor a conscious state, we must be peripherally aware of at least one of our mental states. The same is not true of focal awareness: when I have my inadvertent experience of the sky, I am not focally aware of any of my mental states. Therefore, it is peripheral awareness of one of the subject’s mental states that is present when and only when the subject harbors a conscious state. So an account of the functional role of consciousness proper would have to identify and explain the functional role of this sort of peripheral awareness.

            In the next section, we will have occasion to clarify further the notion of peripheral awareness. As we will see, a subject can be peripherally aware not only of her own mental states, but of external stimuli as well. To distinguish peripheral awareness of external stimuli from peripheral awareness of one of one’s own mental states, let us call the latter peripheral self-awareness. On the phenomenological conception of consciousness, such peripheral self-awareness is constituted by secondary self-directed intentionality.[15]

In conclusion, an adequate account of the functional role of consciousness must not only meet the identification requirement and the explanation requirement, but also the singularity requirement. If peripheral self-awareness is indeed what is present when and only when a subject is undergoing a conscious episode, then meeting the singularity requirement would involve accounting for the functional role of peripheral self-awareness. That is, the identification and explanation of the singular contribution of consciousness to the fund of causal powers of conscious states would require the identification and explanation of the functional role of peripheral self-awareness.

 

3.      Focal Awareness and Peripheral Awareness

The distinction between focal and peripheral awareness does not apply only to awareness of one’s own mental states. It applies to awareness of external stimuli as well.

Consider the phenomenon of peripheral vision. When I look at the laptop in front of me, I am focally aware of the laptop. But in the periphery of my visual field appear other objects: books on the right side of my desk, printouts on the left side of my desk, etc. My awareness of these objects is not nearly as clear or as accurate as my awareness of the laptop I am focusing on, but it would be a mistake to say that I am completely unaware of these objects. The status of the books and printouts on my desk vis-à-vis my perceptual experience is unlike the status of the table in the living room, which I cannot perceive and am completely unaware of. To distinguish among the status of the laptop, the status of the books and printouts, and the status of the living-room table, we must again introduce a distinction between focal and peripheral awareness, and say that I have focal awareness of the laptop, peripheral awareness of the books and the printouts, and no awareness of the living-room table.[16]

The same tripartite distinction applies to perceptual experiences in non-visual modalities. Suppose you are listening to Brahms’ Piano Concerto No. 1. Your auditory perception of the piano is bound to be more focused than your perception of the cellos, or for that matter, of the cars driving by your window. That is, you are focally aware of the piano and only peripherally aware of the cellos and the cars.

Competition for the focus of awareness is not restricted to stimuli from the same modality. My current conscious experience is focused (visually) on the laptop before me, but it has many peripheral elements, only some of which are visual. I have visual peripheral awareness of the books and printouts on my desk, but also auditory peripheral awareness of the cars outside my window, olfactory peripheral awareness of burned toast, tactual peripheral awareness of the chair I am sitting on, etc. All these bits of awareness form part of a single overall experience. The focus of my overall awareness is the laptop, which is presented visually, but I am peripherally aware of a myriad of external stimuli presented in other modalities.

It was to capture the richness of peripheral awareness and its place in normal conscious experience that James (1890) introduced the notion of the fringe of consciousness. Similar notions have been developed by other psychologists, including within the phenomenological tradition. Brentano’s notion of secondary awareness, Husserl’s notion of non-thematic consciousness, Sartre’s notion of non-positional consciousness, and Gurwitsch’s notion of marginal consciousness are all supposed to capture the same phenomenon.[17]

Interestingly, some of the elements in the fringe of consciousness are altogether non-perceptual. Particularly conspicuous are emotional and mood-related elements. If I am in a good mood as I am having my conscious experience of the laptop, the experience will include, in its periphery, a certain feeling of cheerfulness. There are also intellectual elements in the fringe of consciousness, such as the so-called “feeling-of-knowing” and “rightness” phenomena (Mangan 2001).

On the phenomenological conception of consciousness proper laid out in the previous section, another important element in the fringe of consciousness is awareness of the subject’s current experience. When I have my conscious experience of my laptop, I am peripherally aware of the books and printouts on my desk, the cars outside my window, the chair I am sitting on, etc., but I am also peripherally aware of having that very experience. This sort of self-awareness is a peripheral element in my conscious experience; it is peripheral self-awareness.[18]

Some readers may object that they cannot find anything like peripheral self-awareness in their phenomenology. Now, it is quite difficult to see how to erect an argument for the very existence of peripheral self-awareness, but let me note two things. First, in §5 I will argue that the functional role of peripheral self-awareness is such that there are good reasons to expect that something like it would emerge over the course of evolution. Second, rejecting the notion of peripheral self-awareness would force us into an unhappy dilemma: either we allow that there can be conscious states the subject is unaware of having, or we claim that all conscious states are states the subject is focally aware of having. To my mind, both horns of this dilemma are worse options than admitting the existence of peripheral self-awareness.

 

4.      The Functional Role of Peripheral Awareness

Even those disinclined to countenance peripheral self-awareness admit the existence of peripheral visual awareness. Yet the latter should not be taken for granted. The fact that our visual system employs peripheral awareness is not a brute, arbitrary fact. There are reasons for it.[19]

            Our cognitive system handles an inordinate amount of information. The flow of stimulation facing it is too torrential to take in indiscriminately. The system must therefore develop strategies for managing the flux of incoming information. The mechanism that mediates this management task is, in effect, what we know as attention.[20] There are many possible strategies the cognitive system could adopt – many ways the attention mechanism could be designed – and only some of them make room for peripheral visual awareness.

Suppose a subject faces a scene with five distinct visual stimuli: A, B, C, D, and E. The subject’s attention must somehow be distributed among these stimuli. At the two extremes are the following two strategies. One would have the subject distribute her attention evenly among the five stimuli, so that each stimulus is granted 20% of the subject’s overall attention resources; let us call this the “20/20 strategy.” The other would have the subject devote the entirety of her attention resources to a single salient stimulus to the exclusion of all others, in which case the relevant stimulus, say C, would be granted 100% of the subject’s resources, while A, B, D, and E would be granted 0%; let us call this the “100/0 strategy.” In-between these two extremes are any number of more flexible strategies. Consider only the following three: (i) the “60/10 strategy,” in which C is granted 60% of the resources and A, B, D, and E are granted 10% each; (ii) the “28/18 strategy,” in which C is granted 28% of the resources and A, B, D, and E are granted 18% each; and (iii) the “35/10 strategy,” in which two different stimuli, say C and D, are treated as salient and granted 35% of the resources, while A, B, and E are granted 10% each.
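The five strategies just described can be laid out schematically. The sketch below is purely illustrative: the stimulus labels, the choice of C (and C/D) as salient, and the numeric shares come from the text, while the dictionary representation and the criterion for "making room for" peripheral awareness (a nonzero share strictly below the maximum share) are our own gloss.

```python
# Toy representation of the attention-allocation strategies described above.
stimuli = ["A", "B", "C", "D", "E"]

strategies = {
    "20/20": {s: 20 for s in stimuli},                          # even split
    "100/0": {s: (100 if s == "C" else 0) for s in stimuli},    # single winner
    "60/10": {s: (60 if s == "C" else 10) for s in stimuli},    # one focus + periphery
    "28/18": {s: (28 if s == "C" else 18) for s in stimuli},    # weak focus
    "35/10": {s: (35 if s in ("C", "D") else 10) for s in stimuli},  # two foci
}

def admits_peripheral_awareness(dist):
    """A strategy makes room for peripheral awareness just in case some
    stimuli receive a nonzero share strictly below the maximum share."""
    top = max(dist.values())
    return any(0 < v < top for v in dist.values())

for name, dist in strategies.items():
    assert sum(dist.values()) == 100  # shares exhaust the attention budget
    print(name, admits_peripheral_awareness(dist))
```

On this criterion only the three intermediate strategies (60/10, 28/18, 35/10) come out as admitting peripheral awareness, which matches the verdict reached in prose below.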

The strategy our visual system actually employs is something along the lines of the 60/10 strategy. This strategy has three key features: it allows for only one center of attention; the attention it grants to the elements outside that focal center is more or less equal; and it grants considerably more attention to the center than to the various elements in the periphery. When I look at the desktop before me, my visual experience has only one center of attention, namely, the desktop; it grants more or less equal attention to the two elements in the periphery, namely, the books on the right side of the desk and the printouts on the left side; and the attention it grants to the desktop is considerably greater than that which it grants to the books and the printouts. Each of the other models misrepresents one feature or another of such an ordinary experience. The 20/20 strategy implies that my awareness of the books and printouts is just as focused as my awareness of the desktop before me, which is patently false. The 100/0 strategy implies that I am completely unaware of the books and printouts, which is again false. The 28/18 strategy misrepresents the contrast between my awareness of the desktop and my awareness of the books or printouts: the real contrast in awareness is much sharper than it suggests. And the 35/10 strategy wrongly implies that my visual experience has two separate focal centers.[21] (There may – or may not – be highly abnormal experiences in which there are two independent centers of attention – say, one at 36 degrees on the right side of the subject’s visual field and one at 15 degrees on the left side of the visual field – but a normal experience is clearly unlike that. Normal experience has a single focal center.)[22]

The above treatment of the possible strategies for managing the information overload facing the visual system (and perforce the cognitive system) is of course an oversimplification. But it serves to highlight two important things. First, the existence of peripheral visual awareness is a contingent fact. In the 100/0 strategy, for instance, there is no such thing as peripheral awareness: the subject is either focally aware of a stimulus or completely unaware of it.[23] In a way, the 20/20 strategy likewise dispenses with peripheral awareness, as it admits no distinction between focal center and periphery.[24] Only the three other strategies make room for the notion of peripheral awareness.

Second, if the 60/10 strategy (or something like it) has won the day over the other possible candidates, there must be a reason for that. The 60/10 strategy has apparently been selected for, through evolution (and perhaps also learning), and this suggests that there must be some functional advantages to it.[25]

What are these functional advantages? It is impossible to answer this question without engaging in all-out speculation. In the remainder of this section, I offer my own hypothesis, but doing full justice to the issue at hand would be impossible here. I will only pursue the hypothesis to the extent that it may help illuminate, in the next section, the question of the functional role of peripheral self-awareness.

The distribution of attention resources in the 60/10 strategy accomplishes two things. First, with regard to the stimuli at the attentional periphery, it provides the subject with just enough information to know where to get more information. And second, by keeping the amount of information about the periphery to the minimum needed for knowing where to get more information, it leaves enough resources for the center of attention to provide the subject with rich and detailed information about the salient stimulus. On this hypothesis, the functional role of peripheral awareness is to give the subject “leads” as to how to obtain more detailed information about any of the peripheral stimuli, without encumbering the system overmuch. By doing so, peripheral awareness enhances the availability of rich and detailed information about those stimuli. Peripheral visual awareness thus serves as a gateway, as it were, to focal visual awareness: it smooths out – facilitates – the process of assuming focal awareness of a stimulus (Mangan 1993, 2001).

Consider the subject’s position with regard to stimulus E, of which she is peripherally aware, and an object F, of which she is completely unaware. If the subject suddenly requires fuller information about E, she can readily obtain it simply by turning her gaze onto it. That is, the subject has enough information about E to be able to quickly and effortlessly obtain more information about it. By contrast, if she is in need of information about F, she has to engage in a “search” of some sort after the information needed. Her current visual experience offers her no leads as to where she might find the information she needs about F. (Such leads may be present in memory, or could be extracted by reasoning, but they are not to be found in the subject’s visual experience itself.) Peripheral awareness of a stimulus thus allows the subject to spend much less energy and time to become focally aware of the stimulus and obtain detailed information about it. It makes that information much more available and usable to the subject.

 

5.      The Functional Role of Peripheral Self-Awareness

The hypothesis delineated above, concerning the functional significance of peripheral visual awareness, suggests a simple extension to the case of peripheral self-awareness. The subject’s peripheral awareness of her ongoing experience makes detailed information about the experience much more available to the subject than it would otherwise have been. More specifically, it gives the subject just enough information about her current experience to know how to get more information quickly and effortlessly, should the need arise.

More accurately stated, the suggestion is that when, and only when, a mental state M is conscious, and the subject is thus peripherally aware of M, the subject possesses just enough information about M to make it possible for her to easily (i.e., quickly and effortlessly) obtain fuller information about M. Compare the subject’s position with regard to some unconscious state of hers, a state of which she is completely unaware. If the subject should happen to need detailed information about that unconscious state, she would have to engage in certain energy- and time-consuming activities to retrieve that information.

            It is important to stress that the information provided by peripheral self-awareness concerns the experience itself, not the objects of the experience. Consider again my laptop experience. In having my experience, I am focally aware of the laptop and peripherally aware of at least three things: the books on the right side of my desk, the printouts on the left side, and my very experience of all this. My peripheral awareness of the books provides me with just enough information about the books to know how to get more information about them. My peripheral awareness of having the experience provides me with just enough information to know how to get more information – not about the laptop or books, but about the very experiencing of the laptop and books.[26]

            Peripheral self-awareness is a constant element in the fringe of consciousness: we are at least minimally aware of our ongoing experience throughout our waking life. This continuous awareness we have of our experience multiplies the functional significance of the awareness. The fact that at every moment of our waking life we have just enough information about our current experience to get as much further information as we should need means that our ongoing experience is an “open source” of information for all other modules and local mechanisms in the cognitive system. This is the basis of the idea that consciousness makes information globally available throughout the system. Baars (1988) puts it in what I think is a misleading way by saying that consciousness “broadcasts” information through the whole system; I would put it the other way around, saying that consciousness “invites” the whole system to grab that information.

It is not hard to see, on this picture, why peripheral self-awareness is a good thing to have. Consciousness is often described as a monitoring device, a device that allows us to gather and process detailed information about our very mechanisms of gathering and processing information (Lycan 1996). On the picture here defended, this is inaccurate: consciousness is not the monitoring device itself, but a gateway to the monitoring device. Consciousness does not give us detailed information about our inner goings-on, but rather makes it easy for us to get such detailed information whenever we want, by giving us just enough information about our concurrent inner goings-on to know how to get fuller information.[27] However, even though consciousness is not itself the monitoring device, the functional benefits of having a monitoring device – detecting malfunction in the processes of information gathering and processing, integrating disparate bits of information into a coherent whole, etc.[28] – also explain the benefit of having a gateway to the monitoring device. Whatever the function of the monitoring device itself, the function of consciousness is to give the subject “leads” that would prompt and facilitate the deployment of monitoring as need arises.

            The fact that peripheral self-awareness is a good thing to have may help us counter the objection, brought up at the end of §3, that there is no such thing as peripheral self-awareness. If peripheral self-awareness is a good thing to have, it is unsurprising that it should appear in the course of evolution. To be sure, the fact that a feature is good to have does not necessitate its evolution. But given that the existence of neither peripheral awareness itself nor self-awareness itself is in contention, it is hard to motivate the idea that something like peripheral self-awareness would not come into existence.[29]

            The account I have defended offers the following answer to the question of identification: the functional role of consciousness proper is to give the subject just enough information to know how to easily obtain fuller information about her concurrent experience. Against the background of §§3-4, the answer to the question of explanation should be clear: the reason consciousness has just this sort of functional role is that consciousness is essentially peripheral self-awareness, and peripheral self-awareness involves just this sort of functional role; the reason peripheral self-awareness involves just this sort of functional role is that it is a form of peripheral awareness, and this is the kind of functional role peripheral awareness in general has; and the reason peripheral awareness in general has just this kind of functional role has to do with the cognitive system’s strategy for dealing with the information overload it faces.

(This model explains both why there is such a thing as peripheral self-awareness and why peripheral self-awareness plays the functional role of giving the subject just enough information about her ongoing experience to be able to easily obtain fuller information. The key point is that providing the subject with just this sort of information is not what consciousness is, but what consciousness does. What consciousness is is peripheral self-awareness, that is, peripheral awareness of one’s concurrent experience. So in this account consciousness is not identified with the providing of the information, but is rather the categorical basis for it.)

            In conclusion, the account of the functional role of consciousness here proposed may be summarized in terms of the following three tenets:

 

  1. A mental state M is conscious when and only when the subject is peripherally aware of M.[30]
  2. The functional role of consciousness is to give the subject just enough information to know how to quickly and effortlessly obtain rich and detailed information about her concurrent experience.
  3. The reason this is the functional role of consciousness is that the cognitive system’s strategy for dealing with information overload employs peripheral awareness, a variety of which is peripheral self-awareness (hence consciousness), and the functional role of peripheral awareness in general is to give the subject just enough information to know how to get fuller information about whatever the subject is thereby aware of.

 

The three tenets satisfy our three requirements on an account of the functional role of consciousness. (1) is intended to meet the singularity requirement: it says what consciousness proper is. (2) is intended to meet the identification requirement: it says what the functional role of consciousness is. (3) is intended to meet the explanation requirement: it makes a claim as to why it is that consciousness has just the functional role attributed to it in (2).[31]

 

6.      Other Approaches to the Functional Role of Consciousness

Before closing, I would like to situate the account I have defended in relation to other central accounts of the functional role of consciousness. The purpose is not so much to argue against these other accounts as to illustrate the force of the present account.

            According to Baars (1997), consciousness does a good number of things: it prioritizes the cognitive system’s concerns, facilitates problem-solving, decision-making, and executive control, serves to optimize the trade-off between organization and flexibility, helps recruit and control actions, detects errors and edits action plans, creates access to the self, facilitates learning and adaptation, and in general “increase[s] access between otherwise separate sources of information.”[32] (1997: 162-3)

            There are two problems with Baars’ account. First, the functions he cites are not peculiar to consciousness. There is no question that conscious mental states are involved in all those things. But it is far from clear that conscious states perform any of these functions precisely in virtue of being conscious. By putting together this list, Baars is not distilling the singular functional significance of consciousness proper, but simply enumerating the functions performed by mental states which happen to be conscious. That is to say, Baars’ account fails to meet the singularity requirement. Second, all the specific functions Baars cites are monitoring functions. If the account offered in the previous section is correct, monitoring functions do not characterize consciousness proper, although consciousness does enhance the performance of those functions (by serving as a gateway to monitoring).

            Another common error is to misconstrue the relation between consciousness and its functional role. Consider Block’s (1995) distinction between what he calls phenomenal consciousness and access consciousness. Phenomenal consciousness is consciousness proper, the truly mysterious phenomenon we all want to understand. Access consciousness is, by contrast, a functional notion: a mental state “is access-conscious if it is poised for free use in reasoning and for direct ‘rational’ control of action and speech.” (1995: 382)

One problem with Block’s distinction is that any function we may wish to attribute to phenomenal consciousness would be more appropriately attributed to access consciousness, leaving phenomenal consciousness devoid of functional significance (Chalmers 1997). The source of this unhappy consequence is the notion that phenomenal and access consciousness are two separate phenomena sitting side by side at the same theoretical level. In reality, access consciousness appears to be the functional role of phenomenal consciousness. The relation between phenomenal and access consciousness is therefore the relation of player to role: phenomenal consciousness plays access consciousness, if you will. Once we construe access consciousness as the functional role of phenomenal consciousness, we can attribute again any function we may wish to phenomenal consciousness: the function is construed as part of access consciousness and is therefore performed by phenomenal consciousness. The conceptual confusion caused by Block’s distinction is overcome.

Another problematic aspect of Block’s views here is his particular characterization of access consciousness, the functional role of consciousness proper. On the account offered in the previous section, it is quite true that conscious states are poised for free use in reasoning and control. But this is a secondary function of theirs. The primary function of consciousness is to give the subject just enough information to know how to easily obtain detailed information about her concurrent experience. The secondary function identified by Block is a result of two factors: the primary function and the fact that peripheral self-awareness is constant throughout our waking life. That is to say, Block’s account offers an incorrect identification of the functional role of consciousness and therefore fails to meet the identification requirement.

Tye (2000) also identifies the functional role of consciousness in terms of poise for use in rational control and deliberation. More specifically, he claims that “experiences and feelings, qua bearers of phenomenal character…stand ready and available to make a direct impact on beliefs and/or desires.”[33] (2000: 62)

If the account defended in §5 is on the right track, then Tye’s identification of the functional role of consciousness is at least incomplete, as it leaves out the function consciousness has in giving the subject basic information about her concurrent experience. Furthermore, unless a lot rides on the phrase “stand ready and available,” the role identified by Tye is routinely played by unconscious perceptions (which do of course make an impact on beliefs and desires). So Tye’s account appears to fail the identification requirement as well.

According to Tye’s representational theory of consciousness, conscious states are essentially representational, in that what makes them the conscious states they are is their representational content. One major difficulty facing the representational theory is that, on the face of it, every stimulus can be represented either consciously or unconsciously, so the difference between conscious and unconscious states is not found in their representational properties (Kriegel 2002). Tye’s response is to claim that conscious representations, unlike unconscious representations, are functionally poised in the way described above.[34] The problem with this response is that it leaves Tye with no way to explain the functional role of conscious states. By claiming that what distinguishes conscious from unconscious states is functional role, Tye is effectively embracing a functionalist account of consciousness proper. But as we saw in §1, a functionalist account of consciousness proper is incapable of explaining why consciousness has just the functional role it has, since it identifies consciousness with the role in question, rather than construing consciousness as the categorical basis for it. Therefore, Tye’s account also fails to meet the explanation requirement.

One of the most interesting empirical findings about the function of consciousness is Libet’s (1985). Libet instructed his subjects to flex their right hand muscle and to note the moment at which their intention to flex the muscle was formed, with the goal of finding out the temporal relationship between (i) muscle activation, (ii) onset of the neurological cause of muscle activation, and (iii) the conscious intention to flex one’s muscle. Libet found that the neurological cause of muscle activation precedes the conscious intention to flex the muscle by about 350 milliseconds and the muscle activation itself by 550 milliseconds. That is, the conscious intention to flex one’s muscle is formed when the causal process leading to the muscle activation is already well underway. This suggests that consciousness proper does not have the function of initiating the causal process leading to the muscle activation, and is therefore not the cause of the intended act. According to Libet, the only thing consciousness can do is undercut the causal process at its final stages. That is, the only role consciousness has is that of “vetoing” the production of the act or allowing it to go through.
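Libet's reported timings can be set on a single timeline. In the sketch below, the variable names and the choice to anchor t = 0 at the onset of the neurological cause (the readiness potential) are ours; the numbers are those reported in the text.

```python
# Libet's timing data laid out on one timeline (milliseconds).
neural_onset = 0            # onset of the neurological cause of the movement
conscious_intention = 350   # conscious intention follows neural onset by ~350 ms
muscle_activation = 550     # muscle activation follows neural onset by ~550 ms

# The conscious intention arrives while the causal process is well underway,
# leaving a window of roughly 200 ms before the muscle moves: the window in
# which, on Libet's reading, a "veto" could still occur.
veto_window = muscle_activation - conscious_intention
print(veto_window)  # 200
```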

            The phenomenological approach to consciousness proper we have taken in §2 starts from the assumption that conscious states are states we are aware of having. This means that a mental state must exist for some time before it becomes conscious, since the awareness of the state in question necessarily takes some time to form. Now, it is only to be expected that the state in question should be able to perform at least some of its functions before it becomes conscious. In many processes, the state can readily play a causal role independently of the subject’s awareness of it. So it is unsurprising that consciousness proper should have only a small role to play in such processes (Rosenthal 2002b). What would be surprising is for consciousness to play that limited role in all or most cognitive processes. But this cannot be established by Libet’s experiment. One overlooked factor in Libet’s experiment is the functional role of the subjects’ conscious intention to follow the experimenter’s instructions (Flanagan 1992). This introduces two limitations on Libet’s findings. First, we do not know what the causal role of the conscious intention to follow the experimenter’s instructions is in the production of muscle activation. Second, we do not know what causal role a conscious intention to flex one’s muscle plays when it is not preceded by a conscious intention to follow certain instructions related to flexing one’s muscle. Given that the majority of instances of muscle flexing involve a single conscious intention (rather than a succession of two separate but related conscious intentions), we do not as yet know what the functional role of the conscious intention to flex one’s muscle is in the majority of instances.

In any case, observe that Libet’s findings bear only on the role of consciousness vis-à-vis motor output. But internal states of the cognitive system can bring about not only motor output, but also further internal states.[35] On the account defended here, the latter is more central to the functional role of consciousness. The fact that a subject is peripherally aware of her mental states plays a role in bringing about states of focal awareness of those mental states, and more generally a role in the operation of internal monitoring processes. 

The account of the functional role of consciousness I defended in §5 is thus different in clear and significant ways from other accounts to be found in the literature on consciousness, including some leading accounts in the psychological, philosophical, and neuroscientific literature.

 

7.      Conclusion

In this article, I have developed a novel account of the functional role of consciousness. This account identifies a very specific function which it claims characterizes the singular contribution of consciousness to the fund of causal powers of conscious states, and embeds this identification in a larger explanatory account of the purpose and operation of attention. According to the account I have offered, when a mental state M is conscious, its subject has just enough information about M to be able to easily obtain fuller information about it.

The account is grounded in empirical considerations but is quite speculative, in that it depends on a number of unargued-for assumptions. As such, it is a “risky” account, an account whose plausibility may be undermined at several junctures. At the same time, none of the assumptions made above is flagrantly implausible. So at the very least, the account of the functional role of consciousness here defended offers a viable alternative to the accounts currently on offer in the literature on consciousness.

In any event, if one does accept the phenomenological conception of consciousness, the account proposed here of its functional role is hard to deny. Conversely, the fact that a clear and precise account of the functional significance of consciousness follows rather straightforwardly from the phenomenological conception of consciousness in terms of peripheral self-awareness is a testimony to the theoretical force of the phenomenological conception.

References

  • Baars, B. 1988. A Cognitive Theory of Consciousness. Cambridge: Cambridge UP.
  • Baars, B. 1997. In the Theater of Consciousness: The Workspace of the Mind. Oxford and New York: Oxford UP.
  • Baron-Cohen, S. 1995. Mindblindness. Cambridge MA: MIT Press.
  • Block, N. J. 1995. “On a Confusion About the Function of Consciousness.” Behavioral and Brain Sciences 18: 227-247. Reprinted in N. J. Block, O. Flanagan, and G. Guzeldere (eds.), The Nature of Consciousness: Philosophical Debates, Cambridge MA: MIT Press, 1997.
  • Brentano, F. 1874. Psychology from an Empirical Standpoint. Ed. O. Kraus. Ed. of English edition L. L. McAlister, 1973. Translation A. C. Rancurello, D. B. Terrell, and L. L. McAlister. London: Routledge and Kegan Paul.
  • Broadbent, D. E. 1958. Perception and Communication. London: Pergamon Press.
  • Brough, J. B. 1972. “The Emergence of an Absolute Consciousness in Husserl’s Early Writings on Time-Consciousness.” Man and World 5 (1972): 298-326.
  • Carruthers, P. 2000. Phenomenal Consciousness. Cambridge: Cambridge UP.
  • Carruthers, P. 2002. “The Evolution of Consciousness.” In P. Carruthers and A. Chamberlin (eds.), Evolution and the Human Mind, Cambridge: Cambridge UP.
  • Chalmers, D. J. 1997. “Availability: The Cognitive Basis of Consciousness?” Behavioral and Brain Sciences 20: 148-149.
  • Dennett, D. C. 1981. “Towards a Cognitive Theory of Consciousness.” In his Brainstorms, Brighton: Harvester.
  • Dixon, N. F. 1971. Subliminal Perception: The Nature of a Controversy. London: McGraw-Hill.
  • Flanagan, O. 1992. “Conscious Inessentialism and the Epiphenomenalist Suspicion.” In his Consciousness Reconsidered, Cambridge MA: MIT Press.
  • Frank, M. 1995. “Mental Familiarity and Epistemic Self-Ascription.” Common Knowledge 4 (1995): 30-50.
  • Gennaro, R. 2002. “Jean-Paul Sartre and the HOT Theory of Consciousness.” Canadian Journal of Philosophy 32: 293-330.
  • Gurwitsch, A. 1985. Marginal Consciousness. Athens, OH: Ohio UP.
  • Henrich, D. 1966. “Fichte’s Original Insight.” Translation D. R. Lachterman. Contemporary German Philosophy 1 (1982): 15-53.
  • Henry, M. 1963. The Essence of Manifestation. Translation G. Etzkorn. The Hague: Nijhoff, 1973.
  • Husserl, E. 1928. Phenomenology of Internal Time-Consciousness. Ed. M. Heidegger, trans. J. S. Churchill, Bloomington IN: Indiana UP, 1964.
  • James, W. 1890. The Principles of Psychology (2 vols.). London: Macmillan (second edition, 1918).
  • Kim, J. 1998. Mind in a Physical World. Cambridge MA: MIT Press.
  • Kriegel, U. 2002. “PANIC Theory and the Prospects for a Representationalist Theory of Phenomenal Consciousness.” Philosophical Psychology 15: 55-64.
  • Kriegel U. 2003a. “Consciousness as Sensory Quality and as Implicit Self-Awareness.” Phenomenology and the Cognitive Sciences 2 (2003): 1-26.
  • Kriegel, U. 2003b. “Consciousness, Higher-Order Content, and the Individuation of Vehicles.” Synthese 134: 477-504.
  • Levine, J. 2001. Purple Haze: The Puzzle of Consciousness. Oxford and New York: Oxford UP.
  • Lewis, D. 1972. “Psychophysical and Theoretical Identifications.” Australasian Journal of Philosophy 50: 249-258.
  • Libet, B. 1985. “Unconscious Cerebral Initiative and the Role of Conscious Will in Voluntary Action.” Behavioral and Brain Sciences 8: 529-566.
  • Lycan, W. G. 1996. Consciousness and Experience. Cambridge, MA: MIT Press.
  • Mangan, B. 1993. “Taking Phenomenology Seriously: The ‘Fringe’ and its Implications for Cognitive Research.” Consciousness and Cognition 2: 89-108.
  • Mangan, B. 2001. “Sensation’s Ghost: The Non-Sensory ‘Fringe’ of Consciousness.” Psyche 7(18). http://psyche.cs.monash.edu.au/v7/psyche-7-18-mangan.html
  • Moray, N. 1969. Listening and Attention. Harmondsworth: Penguin Books.
  • Natsoulas, T. 1996b. “The Case for Intrinsic Theory: II. An Examination of a Conception of Consciousness4 as Intrinsic, Necessary, and Concomitant.” Journal of Mind and Behavior 17: 369-390.
  • Natsoulas, T. 1999. “The Case for Intrinsic Theory: IV. An Argument from How Conscious4 Mental-Occurrence Instances Seem.” Journal of Mind and Behavior 20 (1999): 257-276.
  • Nichols, S. and S. Stich 2003. “How to Read Your Own Mind: A Cognitive Theory of Self-Consciousness.” In Q. Smith and A. Jokic (eds.), Consciousness: New Philosophical Perspectives. Oxford and New York: Oxford UP.
  • Putnam, H. 1967. “The Nature of Mental States.” Originally published as “Psychological Predicates,” in W. H. Capitan and D. D. Merrill (eds.), Art, Mind, and Religion. Reprinted in D. M. Rosenthal (ed.), The Nature of Mind. Oxford: Oxford UP.
  • Rosenthal, D. M. 1986. “Two Concepts of Consciousness.” Philosophical Studies 49: 329-359.
  • Rosenthal, D. M. 1990. “A Theory of Consciousness.” ZiF Technical Report 40, Bielefeld, Germany. Reprinted in N. J. Block, O. Flanagan, and G. Guzeldere (eds.), The Nature of Consciousness: Philosophical Debates. Cambridge MA: MIT Press, 1997.
  • Rosenthal, D. M. 2002a. “Explaining Consciousness.” In D. J. Chalmers (ed.), Philosophy of Mind: Classical and Contemporary Readings. Oxford and New York: Oxford UP.
  • Rosenthal, D. M. 2002b. “The Timing of Consciousness.” Consciousness and Cognition 11: 215-220.
  • Sartre, J.-P. 1943. L’Être et le néant. Paris: Gallimard.
  • Silverman, L. H., A. Martin, R. Ungaro, and E. Mendelsohn 1978. “Effect of Subliminal Stimulation of Symbiotic Fantasies on Behavior Modification Treatment of Obesity.” Journal of Consulting and Clinical Psychology 46: 432-441.
  • Smith, D. W. 1986. “The Structure of (Self-)Consciousness.” Topoi 5: 149-156.
  • Smith, D. W. 1989. The Circle of Acquaintance. Dordrecht: Kluwer Academic Publishers.
  • Sokolowski, R. 1974. Husserlian Meditations. Evanston, IL: Northwestern UP.
  • Sturma, D. 1995. “Self-Consciousness and the Philosophy of Mind: A Kantian Reconsideration.” Proceedings of the Eighth International Kant Congress, Vol. 1, Milwaukee WI: Marquette UP.
  • Thomasson, A. L. 2000. “After Brentano: A One-Level Theory of Consciousness.” European Journal of Philosophy 8: 190-209.
  • Tye, M. 2000. Consciousness, Color, and Content. Cambridge MA: MIT Press.
  • Van Gulick, R. 1992. “Consciousness May Still Have a Processing Role to Play.” Behavioral and Brain Sciences 14: 699-700.
  • Velmans, M. 1992. “Is Human Information Processing Conscious?” Behavioral and Brain Sciences 14: 651-669.
  • Weiskrantz, L. 1986. Blindsight. Oxford: Oxford UP.
  • Wider, K. 1997. The Bodily Nature of Consciousness: Sartre and Contemporary Philosophy of Mind. Ithaca, NY: Cornell UP.
  • Zahavi, D. 1998a. “Brentano and Husserl on Self-Awareness.” Études Phénoménologiques 27-8: 127-169.
  • Zahavi, D. 1998b. “The Fracture in Self-Awareness.” In D. Zahavi (ed.), Self-Awareness, Temporality, and Alterity. Dordrecht: Kluwer Academic Publishers.
  • Zahavi, D. 1999. Self-awareness and Alterity. Evanston, IL: Northwestern UP.

 

[1] According to Kim (1998), this is how all scientific reduction proceeds. Thus, the reduction of water to H2O proceeded according to the same “plan”: in a first stage, water was “functionalized,” meaning that its causes and effects were studied; in a second stage, H2O was studied till it was known to have just those causes and effects singled out in the first stage; finally, water was identified with H2O on this basis.
[2] This seems to be Velmans’ (1992) view, for instance.
[3] For concrete argumentation in favor of the causal efficacy of consciousness, see Flanagan 1992, and Van Gulick 1992. According to Kim (1998), all phenomena must be causally efficacious, hence not epiphenomenal, because of what he calls “Alexander’s dictum”: to be is to be causally efficacious. If Alexander’s dictum is correct, nothing can be completely causally inert. If so, either consciousness is not epiphenomenal, or there is no such thing as consciousness.
[4] Functionalism is not the view that mental states and events have a functional role – that is almost beyond dispute. What functionalism claims is that there is nothing more to a mental state or event beyond its functional role.
[5] In other words, the discussion of this section paves the way for a certain argument against functionalism about consciousness, namely, the argument that functionalism necessarily fails to explain the functional role of consciousness.
[6] In this paper, however, I am less interested in the causes of consciousness and more in its effects. The notion of functional role relates equally to the causes and effects of whatever plays the role, but the ‘causes’ part is of lesser interest to me here.
[7] For very concrete effects of subliminal perception on anxiety, see Silverman et al. 1978. For more general discussion of subliminal perception and its functional significance, see Dixon 1971. Another well known form of unconscious perception which retains some of the causal powers of conscious perception is blindsight (see Weiskrantz 1986). Unless the function of consciousness is implausibly duplicated, such that another mechanism has exactly the function consciousness has, any function a blindsighted subject can execute in response to her blindsighted perceptions must thereby not be part of the function of consciousness proper.
[8] For close interpretations of Brentano along these lines, see Smith (1986, 1989), Zahavi (1998a, 1999), Thomasson (2000), and Kriegel (2003a, 2003b).
[9] He writes (Brentano 1874: 153-4): “[Every conscious act] includes within it a consciousness of itself. Therefore, every [conscious] act, no matter how simple, has a double object, a primary and a secondary object. The simplest act, for example the act of hearing, has as its primary object the sound, and for its secondary object, itself, the mental phenomenon in which the sound is heard.”
[10] This is not to say that there are no important differences between Husserl’s and Brentano’s views. For a comparison of their respective views, see Zahavi (1998a). For other discussions of Husserl’s view, see Brough (1972), Sokolowski (1974), Smith (1989), and Zahavi (1999).
[11] Again, each of these views is importantly dissimilar to Brentano’s original view and to each other. But they all share the same general outlook. For discussion of Sartre’s view, see Wider (1997), Zahavi (1999), and Gennaro (2003). For discussion of Henry’s view, see Zahavi (1998b, 1999). For discussion of Gurwitsch’s view, see Natsoulas (1999). For work by members of the so-called Heidelberg School, see Henrich (1966), Frank (1995), and Sturma (1996).
[12] See Smith 1986, Rosenthal 1986, 2002a, Lycan 1996, Carruthers 2000, and Levine 2001.
[13] Focal awareness of our conscious states characterizes the more reflective, or introspective, moments of our mental life. When a person introspects, she focuses on her conscious state. When she starts focusing on something else, her state either becomes unconscious, or she retains a peripheral awareness of it.
[14] I am assuming that focal awareness is always conscious (i.e., that states of focal awareness are conscious states). This is admittedly not an indubitable assumption, but a full defense of it would take us too far afield.
[15] In the sense in which I am using the term, peripheral self-awareness is not necessarily peripheral awareness of oneself. Rather, it is peripheral awareness of a mental state, event, or process going on within oneself. This does not mean that peripheral self-awareness cannot be awareness of the self. Self-awareness in the sense in which I am using the term may be either awareness of oneself or merely awareness of one of one’s mental states – or both. We need not commit to any particular view here, although there are good independent reasons to think that peripheral self-awareness does involve awareness of the self (see Rosenthal 1990 and Kriegel 2003b). In any event, it is clear that peripheral self-awareness, as construed in the phenomenological tradition, does include reference to the self.
[16] In the case of visual perception, the distinction between focal and peripheral awareness is what cognitive scientists refer to as the distinction between foveal vision and peripheral vision. Foveal vision is vision of stimuli presented to the fovea, a tiny central part of the retina covering an angle of about two degrees of the visual field; peripheral vision is vision of stimuli outside that central part of the visual field.
[17] The same phenomenon was referred to by Husserl (1928) as non-thematic consciousness and by Sartre (1943) as non-positional consciousness.
[18] Indeed, peripheral self-awareness seems to be a constant element in the fringe of consciousness. This must be the case if peripheral self-awareness is indeed what consciousness proper is. Peripheral self-awareness is then necessarily an element in every conscious state, since it is what makes the state conscious.
[19] The functional analysis of peripheral awareness that I will develop in this section owes much to the work of Bruce Mangan (1993, 2001).
[20] At least this conception of attention has been widely accepted since Broadbent’s (1958) seminal work on attention. See also Moray 1969.
[21] It may happen that two adjacent stimuli form part of a single center of focus for the subject, but this situation is not a case in which the experience has two independent focal centers. To make sure that the example in the text brings the point across, we may stipulate that A, B, C, D, and E are so distant from each other that no two of them could form part of a larger, compound stimulus which would be the focal center of attention.
[22] There are other possible strategies that would misrepresent other features of normal experience. Consider the strategy that grants 60% of attention to C, 2% of attention to A, 8% to B, 8% to D, and 22% to E. It violates the principle that all elements in the periphery are granted more or less equal attention, which is a feature of the 60/10 strategy. We need not – should not – require that the amount of attention granted to all peripheral elements be exactly identical, of course, but the variations seem to be rather small.
[23] Note, furthermore, that there are conditions under which peripheral awareness is actually extinguished. When a subject comes close to passing out, for instance, more and more of her peripheral visual field goes dark, starting at the very edge and drawing nearer the center. The moment before passing out, the subject remains aware only of foveated stimuli (i.e., stimuli presented in foveal vision), while her entire peripheral visual field lies in darkness. It appears that the system, being under duress, cannot afford to expend any resources whatsoever on peripheral awareness. The presence of peripheral awareness is the norm, then, but hardly a necessity.
[24] Although we might understand the notion of peripheral awareness in such a way that the 20/20 strategy entails that all (or at any rate most) awareness is peripheral. I think this would be a mistake, but let us not dwell on this issue. The possibility of the 100/0 strategy is sufficient to establish that there is no deep necessity in the existence of peripheral awareness.
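The contrast drawn here between the 60/10 strategy, the skewed strategy of note 22, and the 100/0 strategy can be made concrete in a small sketch. The percentage figures come from the notes above; the validity check, the dictionary names, and the 5-point tolerance are our own illustrative assumptions:

```python
# Toy model of the attention-distribution strategies discussed in the notes.
# A "strategy" assigns shares of attention (summing to 100%) to five
# distant stimuli A-E, with C at the focal center.

def is_valid_strategy(shares):
    """A distribution is valid only if it exhausts the attention budget."""
    return abs(sum(shares.values()) - 100) < 1e-9

# The 60/10 strategy: 60% to the focal center, 10% to each peripheral stimulus.
sixty_ten = {"A": 10, "B": 10, "C": 60, "D": 10, "E": 10}

# The 100/0 strategy: all attention to the focal center, none to the periphery.
hundred_zero = {"A": 0, "B": 0, "C": 100, "D": 0, "E": 0}

# The skewed strategy of note 22, which violates rough equality in the periphery.
skewed = {"A": 2, "B": 8, "C": 60, "D": 8, "E": 22}

def periphery_roughly_equal(shares, focus="C", tolerance=5):
    """Check note 22's constraint: peripheral shares vary only slightly.
    The tolerance value is an arbitrary stand-in for 'rather small' variations."""
    peripheral = [v for k, v in shares.items() if k != focus]
    return max(peripheral) - min(peripheral) <= tolerance

print(is_valid_strategy(sixty_ten))        # True
print(periphery_roughly_equal(sixty_ten))  # True
print(periphery_roughly_equal(skewed))     # False
```

On this sketch, all three strategies are coherent ways of spending the attention budget; only the equality check separates the 60/10 strategy from the skewed one.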
[25] It does not matter for our purposes whether the 60/10 strategy is based in a mechanism that is cognitive in nature or biologically hardwired. It is probably a little bit of both, but in any event the mechanism – whether cognitive, biological, or mixed – has been selected for due to its adaptational value.
[26] There is a question as to what precisely one is aware of in peripheral self-awareness. Am I peripherally aware of my entire experience, including the peripheral elements in it, or only of the focal center of the experience? For instance, am I peripherally aware of my peripheral awareness of the books, or only of my focal awareness of the desktop? I will not broach this issue here, as it does not seem to bear on the issue of the functional role of peripheral self-awareness (at least not at the level at which I am interested in it).
[27] I am construing here the notion of a monitoring device in a relatively restrictive way, i.e., as describing a mechanism that gives the subject focused, rich information on its own processes and states. There is also a more relaxed usage, in which any mechanism that gives the subject some sort of information on its own states and processes is a monitoring mechanism. In this more relaxed sense, consciousness as portrayed in this paper does qualify as a monitoring mechanism.
[28] For a fuller list, see the discussion of Baars’ (1997) account of the functional role of consciousness at the beginning of §6. For more on the functional significance of a monitoring module, see Baron-Cohen 1995, Carruthers 2000, 2002, Nichols and Stich 2003.
[29] If we accept the common conception of evolution as a process of variation-and-retention, we may say that the fact that a feature is good to have does suggest that it will be retained, although it does not guarantee that it will appear through variation in the first place. The fact that peripheral awareness and self-awareness surely exist, however, suggests that the basic building blocks for peripheral self-awareness have been in place, so that the appearance of peripheral self-awareness through variation should be expected.
[30] At least this is normally or typically so. In some cases, M may be conscious when the subject is peripherally aware of a chain of focal awarenesses leading up to M.
[31] It might be objected that the sort of functional role attributed to consciousness in the present paper could in principle be performed by an unconscious mechanism, and this would defy the singularity requirement. This objection would be misguided, however. The singularity requirement is intended to rule out functions that conscious states have, but not in virtue of being conscious. It is not intended to rule out functions that unconscious states could in principle have but do not in fact have.
[32] This list is obtained by bringing together the titles of different sections in Chapter 8 of Baars 1997.
[33] Note that Tye stresses that this is the functional role of conscious experience precisely qua conscious experiences – suggesting that he has the singularity requirement in mind.
[34] About blindsighted perception, Tye writes: “It is worth noting that, given an appropriate elucidation of the ‘poised’ condition, blindsight poses no threat to the representationalist view… What is missing, on [my] theory, is the presence of appropriately poised, nonconceptual, representational states. There are nonconceptual states, no doubt representationally impoverished, that make a cognitive difference… But there is no complete, unified representation of the visual field, the content of which is poised to make direct difference in beliefs.” (Tye 2000: 62-3)
[35] Thus, a thought that it is raining can play a causal role in taking an umbrella, which is a motor output, but it can also play a causal role in producing the thought that it has been raining for the past week, which is not a motor output but a further internal state.

 

The Functional Role of Consciousness (A Phenomenological Approach), Uriah Kriegel, University of Arizona, Phenomenology and the Cognitive Sciences 4 (2004): 171-193.

Evolution of Consciousness

Even the simplest organisms, such as those consisting of but a single cell, interact with their environments. As metabolic systems in a balanced steady-state, all organisms must obtain nutrition from their surroundings. As they do not live in a vacuum, organisms are also in constant contact with the water or air around them, and they are also exposed to solar radiation and other electromagnetic and chemical influences. The long-term interaction between organisms and environmental stimuli resulted in the development of various sensory systems for detecting the diverse external stimuli on which the organisms rely for food or which they must avoid as dangerous. In both cases, a sensory apparatus had to be developed which, via the interneurons, automatically provided signals to the motoric cells for inherent responses of flight or approach.

The Phylogenesis of Symbolic Information

It is necessary to recall these ancient interactions between organisms and their surroundings because they gave rise to the development of sensory systems appropriate for the physical stimuli. However, whereas environmental stimuli in the form of energy and food were ingested, the sensory apparatus evolved into organs which did not take in the stimulus itself, but rather received information about it. Only in plants do photoreceptors still serve as a source of energy. As the environment of multicellular organisms expanded, and stimuli to which organisms had to react in order to survive became more varied, the processes of trial-and-error and natural selection led to the development of stimulus filters in the form of receptor systems which reacted only to combinations and sets of stimuli that were of importance to the organism. These combinations of stimulus relationships were embodied by a sensory apparatus capable of selecting stimuli according to certain categories, determined by biological factors. During the development of sense qualities in the course of evolution, the formation of invariants played a key role, for recognition of food or predators under varied conditions of light and the surroundings was essential for survival. Therefore, it was advantageous to have a sensory apparatus capable of identifying stimuli by means of a filter consisting of signals generated by the apparatus itself. This mechanism, in turn, was capable of evolution.
Very early in the course of evolution, we encounter the colorful world of flowers, colors, sounds, shapes, and scents which grew out of the interactions between insects and their environments. The question as to whether bees respond only to certain electromagnetic wavelengths, that is, whether they react to physical stimuli or actually to certain colors, was resolved by von Frisch, whose experiments showed that they really do respond to the same colors, even under changing conditions of light and wavelength.

To be sure, neither color nor light nor other sense qualities really exist in the environment: They are products of the sensory apparatus, which selects them by means of its filter. The sense qualities perceived by insects and other invertebrates are projected by the sensory filter onto the physical stimulus. Thus, the latter serves as vehicle carrying symbolic information to the sensory system. The sensory filter serves both as the projector and the receiver of sense qualities. The sensory apparatus uses its own analyzers to process the stimulus signals in such a way that it responds only to certain colors or sound sequences.

With these filters and analyzers, the sensory systems “invented” an entirely new form of information: Instead of physical properties that cannot be transferred to sensory channels, a representation of them was selected and produced, namely, the filtered sense qualities. Such a representation is also referred to as a “symbol”; therefore, one may refer to sense qualities as elements or signs of symbolic information.
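The idea that a sensory filter produces an invariant sense quality rather than passing on the physical stimulus itself can be illustrated with a toy model. Everything in this sketch is a hypothetical illustration, not part of the biological account above: the function name, the (rough textbook) wavelength bands, and the intensity parameter standing in for changing conditions of light.

```python
# Illustrative sketch of a sensory filter: it does not take in the physical
# stimulus (a wavelength), but maps it to a sense quality (a color category)
# that stays invariant under changes of intensity -- mirroring von Frisch's
# finding that bees respond to colors rather than raw physical magnitudes.

def color_filter(wavelength_nm, intensity):
    """Map a physical stimulus to a symbolic sense quality.

    The returned category depends only on the wavelength band, not on the
    stimulus intensity: the filter produces an invariant.
    """
    if intensity <= 0:
        return None  # no stimulus reaches the filter
    if 380 <= wavelength_nm < 450:
        return "violet"
    if 450 <= wavelength_nm < 495:
        return "blue"
    if 495 <= wavelength_nm < 570:
        return "green"
    if 570 <= wavelength_nm < 590:
        return "yellow"
    if 590 <= wavelength_nm <= 750:
        return "red"
    return None  # outside the receptor's range: filtered out entirely

# The same sense quality is produced under very different lighting conditions:
print(color_filter(470, intensity=0.9))  # blue
print(color_filter(470, intensity=0.1))  # blue
```

The design point mirrors the text: the sense quality exists only as the filter's output, not as a property found ready-made in the environment.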

As implied by the aforementioned insect’s world of colors, sounds, and scents, the sensory filters of sense qualities not only filter, but also project sense qualities onto the environmental physical stimuli, which animals take up only through the “eyeglasses” of sensory qualities. In other words, insects take up their surroundings in a form they develop themselves. The symbolic information requires a material carrier. When a sense quality is projected onto a physical stimulus, the stimulus also becomes a carrier of sense qualities, so that in this guise they may be picked up and processed by the senses. Otherwise, it is difficult to conceive of how the colors, flowers, and scents in an insect’s world might have originated.

The entire visual world is based on this type of projection: The eyes, instead of picking up electromagnetic waves which a physical object has absorbed and assimilated, receive only waves which are reflected or deflected without having penetrated the physical object. Therefore, it is not the object itself which meets the eye, but only a projection of the waves the object failed to absorb.

The sensory filter, too, functions in a way similar to that in which vision is affected by eyeglasses, through which the surroundings may be perceived as distorted or sharp, red or dark. The filter evolved by interaction with the environment and natural selection. Even though stimuli passing the sensory filter take on properties of the latter, the sense qualities still are not states of the organism whose sensory systems interact with the stimulus to produce them. At this level, the symbolic information contained in sense qualities is the product of two material systems or mechanisms, namely, the environmental stimulus and the sensory apparatus. The information achieves an existence separate from that of the filter only in that the filter projects it onto the physical stimulus, which then becomes a carrier of information to the sensory apparatus. The symbolic information exists solely in a material carrier, which thus becomes an indispensable component. If the series of material carriers in the recoding chain, to be described below, is interrupted, the information is lost.

This preconscious origin of symbolic information in the interaction of the sensory system with environmental stimuli, of which the symbolic elements or signs are the sense qualities, is also a critical factor in the development of consciousness and its “language”. The highly developed mammalian brain with its cognitive apparatus or organs is capable of obtaining the information about the external surroundings needed for central control of behavior only in preexisting terms of the symbols of sense qualities. In other words, an organism does not have to reinvent symbolic information about physical properties of environmental stimuli from scratch. “Consciousness” becomes an unsolvable conundrum if its origin is attributed only to the neural network without regard to antecedent developments. The symbols of information, that is, the sense qualities, are not derived from the neural network, which communicates with nervous impulses and neuronal potentials and stores and encodes the information contained in patterns of neuronal excitation.

Neurons and neuronal patterns are not the information itself; rather, they merely convey information. Thus, symbolic information originates outside its carriers. The sources of information for the neuronal network are the sensory systems with their receptors. A neuronal network that is cut off from the sensory system is incapable of creating symbolic information in and of itself; even to obtain information about its own state of excitation, the nervous system requires a sensory apparatus. Without a sensory apparatus, the nervous system receives no symbolic information, either about events within itself or about outside stimuli. Actually, an organism is unaware of processes which transpire subconsciously and automatically. Many neuroscientists ignore this fact and attribute their expertise to the nervous system. Notwithstanding, the nervous system is unsurpassed as a storage unit and processor of signals it obtains from the sensory apparatus and as a carrier of information.

In invertebrates, the sensory apparatus is directly connected to effectors by way of interneurons. The sense qualities of signals elicited by stimuli are analyzed, then signals are transmitted directly to the motoric cells, which react to the signals with genetically determined patterns of motility.

Even invertebrates are capable of reinforcing the connections among heavily used pathways of excitation, and thus of learning within narrow limits, despite their lack of cognition. However, aside from genetically programmed sensory filter and analysis cells, invertebrates lack the ability to store newly acquired information to be recalled for later use. The memory of invertebrates still consists of the variable strength of interneuronal synaptic connections.
The Development of Cortical Information Storage and the Neural Code

Organisms had to develop a cognitive apparatus in order to utilize information about the outer environment to adjust their activities, thus using learning processes to expand the less adaptable behavioral program established by the genes. A long period of development was necessary before organisms were able to store and analyze information in the cortical network and centralize their controls in the reticulo-thalamo-cortical system. Only the organisms equipped with such a system became capable of taking up symbolic information and storing it.
In the course of time and evolution, organisms developed a neural apparatus that enabled them not just to react to symbolic information, but to utilize the sense qualities as elements of an internal language. This internal language opened unlimited possibilities for new symbols designating objects and events, such as those of human language.

This purpose was served by the neocortical network, among others, whose primary and secondary sensory areas represent the peripheral sensory receptor system in the cortex, and continue its functions of analysis and filtering in a more refined way. For example, the visual system in the occipital and temporal brain lobes comprises six different fields, V1 to V6, in which light differences, colors, orientation and movement as well as shape and contours of objects are analyzed separately in specialized fields and neuronal assemblies. This analysis of incoming signals from the receptor fields of sense organs is a continuation of the sensory system’s filtering function, by means of which the manifold sense qualities are selected before the act of seeing can take place. This subconscious analysis of cortical sensory fields, unlike the organization of the invertebrate brain, is not directly connected to motoric functions or effectors. The neural representations or cortical sensory detectors are the neural carrier or code for the sense qualities, which must be decoded into the original symbolic information in order to be invested with semantic meaning.
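The division of labor among specialized sensory fields described above can be sketched as a set of independent analyzers applied to one and the same signal. The field names and feature extractors below are invented for illustration; only the architecture, namely parallel, specialized analysis of a single input that is bound together only later, reflects the text:

```python
# Toy sketch of parallel cortical analysis: specialized "fields" each
# extract one feature of the incoming signal, continuing the filtering
# begun at the receptors. The feature names echo V1-V6 only loosely.

def analyze_luminance(signal):
    return "bright" if signal["luminance"] > 0.5 else "dark"

def analyze_color(signal):
    return signal["dominant_band"]

def analyze_motion(signal):
    return "moving" if signal["frame_shift"] > 0 else "static"

# Each analyzer works on the same signal independently, as the separate
# cortical fields do; the results are only later bound into a percept.
ANALYZERS = {
    "luminance": analyze_luminance,
    "color": analyze_color,
    "motion": analyze_motion,
}

def cortical_analysis(signal):
    return {field: fn(signal) for field, fn in ANALYZERS.items()}

stimulus = {"luminance": 0.8, "dominant_band": "green", "frame_shift": 2}
print(cortical_analysis(stimulus))
# {'luminance': 'bright', 'color': 'green', 'motion': 'moving'}
```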
The Preattentive Phase

Preconscious, preattentive analysis precedes the first storage of information and conscious perception; it has a latency period of about 60 ms. The signals are transmitted to the sensory fields of the cortex by way of the lemniscate tract of the spinal cord, crossing two synapses. This process has been most precisely studied for the visual system.

During the preattentive orientation phase, the organism (more precisely, its central control system) and the stimulus excite primary arousal of the activation system itself and the sensory fields. The body and its senses become aligned with the stimulus via the sensomotoric aminergic and cholinergic paths of the reticular brain stem, which probably releases the neurotransmitters noradrenaline, dopamine, serotonin, and acetylcholine into the extracellular cortical fields, raising the excitation level of certain areas in preparation for uptake and processing of sensory signals. Furthermore, by way of branches of the sensory tracts to the reticular system, the stimulus induces a higher state of excitation in select groups of neurons. In the cerebral cortex, this leads to so-called expectation potentials, which increase gradually until the level of activation of the sensory areas becomes high enough to receive and process sensory signals. With a latency period of 70 to 500 ms, this preattentive preactivation phase then proceeds, via the components N100 to P300 of the endogenous or exogenous event-related potentials, to a state of conscious attention. During the preattentive phase, the subconscious transformation of sensory cells into sensory detectors by the sensory signals sets in, and the sensory neuronal groups must be primed for this function. Only after such preparation can the sensory apparatus be aligned with the stimulus and turned to it centrifugally, so that perception may occur. Experts still disagree about the latency period that elapses between stimulation and conscious perception; in contrast to the 60 ms mentioned above, Libet found a latency of 500 ms. In any case, it is certain that more time elapses between stimulus and conscious perception than the signal needs to travel from the periphery to the cortex, even if it must cross two or three synapses.
The brain needs this time in order to transform the signals into detectors and align them centrifugally with the stimulus.
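The gradual build-up of expectation potentials toward a threshold of conscious attention can be sketched as a simple accumulation model. All quantities here (the threshold, the per-millisecond increment, the discrete time steps) are arbitrary illustrations; only the qualitative point, that the latency is set by the build-up rather than by conduction time alone, reflects the text:

```python
# Minimal accumulation sketch of preattentive preactivation: the activating
# system gradually raises a sensory field's excitation until it crosses the
# threshold at which sensory signals can be consciously processed.

def time_to_conscious_attention(increment_per_ms, threshold=1.0):
    """Return the latency (ms) until accumulated activation crosses threshold."""
    activation, elapsed_ms = 0.0, 0
    while activation < threshold:
        activation += increment_per_ms
        elapsed_ms += 1
    return elapsed_ms

# A faster-rising expectation potential reaches conscious attention sooner;
# either way the latency exceeds pure conduction time, as the text notes.
print(time_to_conscious_attention(0.0143))  # ~70 ms
print(time_to_conscious_attention(0.002))   # ~500 ms
```

The spread of reported latencies (60 ms versus Libet's 500 ms) then corresponds, on this toy picture, to different rates of preactivation rather than different conduction paths.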

During the preconscious sensory impression of the preattentive phase of perception, the sensory stimulus triggers the formation of detectors in the cortex. In other words, a neuron or group of neurons is attuned by signals of the sensory system to a certain sense quality, for which the cell or cell group may then function as a detector. Since this detector function is stored both by facilitation and in a pattern of excitation, it may be referred to as a code for and carrier of sense qualities.

Preattentive orientation proceeds subconsciously at the level of the nervous system. Not until sensory perception is attained can attention focus upon information as an object with which it can operate; only when this level is reached does preattention make the transition to the conscious attention of a cognitive system.
The Reticulo-Thalamo-Cortical System (= Activation System)

The task of the sensory system, which includes the sensory fields of the cortex, in the preattentive phase is to analyze stimuli, so that the sensory system can filter the stimuli and align the filtered sense qualities with the stimulus. Preattentive orientation precedes conscious sensation; it is the focussing, concentration, or strengthening of the excitation or activation of a neuronal field with sensomotoric functions. This activation of attention proceeds from the activating system and the nonspecific excitation which turns sensomotoric fields on and off, and involves activated groups of neurons in its functional unit. The relationship between the activation system and attention is so close that they are referred to as the attention system. Some of its manifold, reciprocal pathways of excitation extend from the brain stem across the limbic system to the prefrontal cortex; another path runs from the reticular system of the brain stem across the intralaminary or nonspecific thalamic nuclei to the upper layers and to layer VI of the cortical columns, which are joined by the lemniscate sensory tracts in layer IV (Newman/Baars 1993).

Since the activation system has been mentioned several times, a brief introduction to this neuroanatomic innovation in vertebrates is necessary. As recently as 1949, G. Moruzzi and H. W. Magoun discovered in the brain stem a structure apparently devoid of specific sensory or motoric function, which was the reason why it had been overlooked for so long. However, the role it plays is a crucial one. Gradually it became evident that this structure serves as a central activating system that both monitors and regulates the level of excitation of the entire organism. It is conjoined with the limbic system, and through it with the autonomic nervous system and the hypothalamus to form a functional unit extending to the nonspecific and intralaminary thalamic nuclei and communicating via two tracts with cortical structures, especially the limbic prefrontal brain. The activating system contains its own nonspecific excitation tracts, by way of which it monitors and regulates not only itself, but also sensory and motoric functions. Because of its preeminence and the control function it exerts, it is a sort of metasystem within the central nervous system.

The attention system is served by neurons in the parietal, temporal, and frontal cortex as well as in the region of the supplementary motoric areas in field 6; the best-known example is the frontal visual field. In the immediate vicinity of these sensory fields with attention functions are the sensory hand-arm field and the like, all of which serve to align the body and sensory systems to the stimulus. There are several visual fields (prefrontal, supplementary, and parietal fields); the same is true of the other sensory systems. There are also several hand-arm fields in the immediate vicinity of the visual fields. This proximity suggests a coupling of eye-hand-arm control by the activation system. The premotoric cells of the hand-arm field (the anterior part of field 6) discharge during intentional hand movements, such as conscious grasping, and when the mouth is used for similar intentional movements. The neurons also fired when the ipsilateral arm or the mouth was used, indicating that they do not reflect mere muscular activity; as further evidence, the neurons remained silent when the same muscles were engaged in other motoric actions. Stimulation of the arm-hand fields elicited coordinated, stereotypic movements of the contralateral arm. These fields of selective attention serve to align the body and senses toward the stimulus (G. M. Edelman et al. 1990). These and the observations described above support the notion that the activation system has a whole roster of secondary sensomotoric fields at its disposal for vision, hearing, etc., distributed all over the cortex, when exercising its function of sensomotoric attention and coordination. The process of sensory perception and awareness begins in such secondary fields, which are subordinated to the metasystem.
By way of these cortical fields, which are connected to the superior colliculi and the reticular nuclei of the brain stem, muscles of the sensory receptors are aligned toward the stimulus and adjusted so as to be able to follow the moving stimulus. This has been studied in detail for visual processes (Ch. J. Bruce 1990). The next question is how visual processes become seeing, and how other senses elicit conscious awareness and perception.

The development of symbolic information was possible only in organisms in which drive and behavior were to some degree centrally concentrated in the reticulo-thalamo-cortical activating system, rendering them capable of activity.

The contention that the activating system truly participates in conscious sensory perception and recognition, memory, and imagination is supported by several uncontroversial findings:

If the nonspecific impulses between the intralaminar thalamic nuclei and the cortical sensory fields are blocked, consciousness is lost; the same happens when the reticular system of the brain stem and the nonspecific thalamic nuclei are completely interrupted.

If the collaterals, i.e., the branches, of the sensory tract to the reticular nuclei of the mammalian brain stem are interrupted, the animal ceases to react to stimuli, although signals still reach the intact cortex, where they can be detected (D. B. Lindsley 1957).

If the reticular system of the midbrain is severed, the decerebrated animals lose the capability of attentive, conscious, centrally regulated behavior (S. Grillner 1990).

The prerequisite for conscious behavior in humans is simultaneous activation of the cortical columns of the sensory fields: of the upper layers or of layer VI by the nonspecific excitation of the activating system, and of layer IV by the specific sensory excitation. If either of these tracts is interrupted, conscious perception ceases (J. Newman and B. J. Baars 1993).
Therefore, conscious behavior evidently results from the synchronous interaction of two systems, namely, the reticulo-thalamo-cortical activating system (also referred to as the metasystem) and the specific sensomotoric system.
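The two-pathway requirement described above can be pictured with a minimal toy model. This sketch is not from the source; the function name, labels, and the simple AND logic are illustrative assumptions: a cortical column registers a stimulus consciously only when specific sensory input (layer IV) and nonspecific activating input (upper layers or layer VI) coincide.

```python
# Toy coincidence model (illustrative only, not the author's formalism):
# a cortical column "perceives consciously" only when specific input to
# layer IV and nonspecific activating input to the upper layers or layer VI
# arrive together; interrupting either tract abolishes conscious perception.

def column_output(specific_layer4: bool, nonspecific_upper: bool) -> str:
    """Return the column's state for one moment of stimulation."""
    if specific_layer4 and nonspecific_upper:
        return "conscious perception"
    if specific_layer4:
        return "signal arrives, no awareness"   # nonspecific tract blocked
    if nonspecific_upper:
        return "aroused, nothing to perceive"   # specific tract blocked
    return "silent"

assert column_output(True, True) == "conscious perception"
assert column_output(True, False) == "signal arrives, no awareness"
```

The point of the sketch is only the conjunction: neither pathway alone suffices, which mirrors the lesion findings cited above.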

Most neurophysiologists concerned with explaining consciousness now recognize the role of the reticular activation system in conscious processes of attention, sensory perception, and memory. However, instead of explaining how the neural network and its processes elicit conscious behavior, Edelman, Crick, and many others offer masterly descriptions of the neural events that accompany conscious behavior. These descriptions remain within the confines of psychophysical parallelism, which lacks the categories needed to assign the role of, for example, the reticulo-thalamo-cortical activation system its place within the more comprehensive system of the organism as a whole. Such descriptions and analyses remain at the level of the neuronal network and its processes, which run in parallel to conscious processes. In other words, it is not enough to verify, within psychophysical parallelism, the existence of a synchronous interaction between the nonspecific activation system and the specific sensory system during conscious behavior. To supersede the level of psychophysical parallelism, it is essential to demonstrate the active regulatory and monitoring functions that the reticulo-thalamo-cortical activation system exerts on the specific sensory apparatus, including the cortical sensory fields involved in conscious processes (e.g., feeling, perception, and memory), since these systemic properties overstep the limitations imposed by the properties of the neuronal network.

Without Interaction with the External Stimulus, the Neural Code Cannot Be Deciphered

The preattentive sensory impression that precedes conscious perception serves in the formation of cortical sensory detectors and neuronal carriers of information by analyzing the input signals in the various sensory fields. Although this event has frequently been studied, documented, and demonstrated by neuropsychologists and neurophysiologists, its significance has largely escaped attention. Nevertheless, the explanatory model for perception presented here stipulates a preattentive analysis of stimuli before the activating system can align the sensory system, with its appropriately attuned filters, centrifugally toward the stimulus, from which it may then decode the sense qualities. Many reputable researchers believe that the sensory fields of the cortex not only represent the indispensable analyzers of the stimulus signals but go beyond that to actually generate the sense qualities, for example, the categories of color in the visual system. In support of this notion, they refer to the observation that malfunction of the sensory fields causes the corresponding sense qualities to disappear. The observation itself is unquestioned, but its interpretation is open to doubt; for although the cortical analyzer may be an indispensable prerequisite for sensory perception, it is not the only one. The sensory system, with its cortical sensory detectors attuned to the stimulus, must still be aligned with the physical stimulus in order to decode the sense qualities. Sense qualities are generated and perceived by the system as a whole only when the physical stimulus meets the detector and information carrier attuned to it in a feedback excitation circuit.

In contrast, S. Zeki, among others, attributes to the sensory fields of the cortex the ability to generate the various sense qualities such as light, color, tonality, and scent (“transforming the signals reaching it to generate constructs that are the property of the brain, not of the world outside, and thus in a sense labeling the unlabeled features of the world in its own code”). This would certainly be the simplest explanation, but it is refuted by the fact that people born blind or deaf cannot be made to see or hear by electrical stimulation of their intact sensory fields. In other words, it is not enough for stimulus signals simply to arrive at the sensory fields of the brain, be analyzed there, and be transformed by the cortical filters into detectors of selected sense qualities. In addition, the sensory detectors and neural carriers of information thus produced must be confronted with the stimulus, which must be present if the sensory system with its adjusted filters is to extract the sense qualities from the physical stimulus. This applies, of course, only to the elementary, nonspatial sense qualities.

When the sensory system and the reticular activation system report a stimulus and simultaneously activate the corresponding cortical sensory detector, the activation system aligns the cortical detector and its sensory system toward the stimulus. Corticofugal influences modulating the afferent impulses from the periphery have been reported in a number of publications (G. D. Dawson 1958; K. E. Hagbarth and D. J. B. Kerr 1954; G. R. Mangun and S. A. Hillyard 1990, pp. 271 ff.). This centrifugal control of excitation involves the following: the sensory system permits the stimulus to appear only through its filter; that is, the sensory system understands only its own projection of the stimulus, namely, the sense qualities it generates itself. These are not, however, arbitrary products of the brain, as some presume. The symbolic information, that is, the sense qualities, can be generated by the sensory system only if the physical stimulus is actually present to interact with it. Symbols invented by the brain would be self-contradictory, for they would represent no other physical reality. As already mentioned, electrical stimulation of cortical sensory cells fails to elicit perception of the respective sense qualities in persons born blind or deaf, even if their cortical sensory fields are intact. If, however, the organism has already had such sensory experience, e.g., has once seen colors or heard sounds, these experiences can be elicited again by electrical stimulation of the cortical stores, as experiments by Penfield, Libet, and others have shown. The initial sensory experience must therefore be gathered in the confrontation and interaction of the sensory system with stimuli from the outside world.
This also applies to the so-called internal stimuli of the limbic system, which must first make a detour through interoceptive tracts of the peripheral or autonomic nervous system before they can be felt and perceived as sense qualities by the cortical detectors.

In addition to this evidence, several other observations contradict the view that stimulus signals are transformed into sense qualities by the brain alone. Finnish researchers found that in blind people the primary visual field of the cortex is utilized by the sense of hearing. “In the deaf, the areas of the temporal lobe in which sounds are normally processed are used instead for processing visual information” (R. Ornstein and R. F. Thompson). In Paris, Michel Imbert and Chr. Matin of Pierre et Marie Curie University interrupted the neural tracts connecting the thalamus (lateral geniculate body) and the visual cortex in newborn hamsters, in which, as mammals, brain development is not yet complete at birth. The visual nerves were then attached to the somatosensory tracts, which had likewise been cut, so that visual signals were sent to the somatosensory fields of the parietal cortex. After the animals recovered, the researchers were able to record visual signals from the parietal field; the visual behavior of the hamsters did not differ from that of normal animals.

These experiments clearly indicate that light, color, sound, and other sense qualities cannot be generated solely by the sensory fields of the cortex. The analyzing and filtering properties of the cortical fields are developed through interaction with the peripheral sensory receptors by way of connections between the receptor fields and their cortical representations. Actual deployment of the filter function of the sensory system is possible only with an external stimulus, and the filter can become a generator of sense qualities only by interacting with this complementary partner.

The Mechanisms of Generating Information

Symbolic information is generated by the interaction of two material systems, namely, the physical stimulus and the sensory system. In the course of evolution they have become assimilated and adapted to each other as two complementary systems: the physical properties of the stimulus that enters the receptor system and the filters of the sensory systems are adjusted to each other. The sense qualities emerge as products of the interaction between the physical stimulus and the sensory system. When the sense qualities are projected onto the physical stimulus, the latter becomes their carrier, for symbolic information needs a material carrier. The sensory system reads or scans the carrier in order to obtain the symbolic information generated within itself.
In mammals, the preconscious generation and transmission of information has been transformed in that the sensory system is now part of an organism capable of self-regulating behavior. After the preconscious adjustment to the stimulus, the central neural governor once again confronts the sensory system with the stimulus, but this time as an organ of attention under the control of the organism’s central regulatory system, i.e., the activating system.

The state of the sense qualities in the carrier of the physical stimulus is also the only decoded state of the sense qualities to which the brain, by way of the senses it controls, has direct access in sensation and perception. Without these sensory events, the brain perceives no decoded sense qualities, and without perception of sense qualities there can be no psychological or mental world; that is, there is no differentiation between subject and object until sense qualities are perceived. The self-generated states of the sense qualities remain hidden from the brain, or kept at an unconscious level, until they confront the sensory system in a physical information carrier as an external object, rendering them accessible. This is made possible, as it were, by a trick of evolution, whose inventiveness is unlimited: the same sensory filters that permit the sensory system to project the sense qualities onto the physical stimulus and to utilize the stimulus as its information carrier also read and perceive the self-generated sense qualities from it, because they fit it like lock and key.

The sensory receptors and sensory filters are not the only systems with a lock-and-key mechanism consisting of self-generated sense qualities projected onto the physical stimulus; the cortical sensory detectors, too, are attuned to the sense qualities projected onto the physical stimulus as a key to a lock. The cortical detectors and the sensory filters are complementary systems and themselves form a functional unit. For the transmission of symbolic information from the outside into the brain, evolutionary processes have led to a chain of complementary systems, along which symbolic information is transmitted and recoded from one level to the next higher one without ever, even temporarily, losing its material carrier. Sensory receptors and cortical sensory detectors are examples of such complementary systems, across which the same symbolic information in the decoded state is transmitted from the physical carrier to its neural code in the cortex. Since the complementarity, or tuning, between the peripheral receptor and the cortical detector systems is established during embryonic development and in the subsequent period of learning, the simplest neural frequency code of all is sufficient: on or off, excited or inhibited. When complementary systems are activated, they are tuned to each other, related to each other, or self-referent.

In principle, sensation is decoded when the central neural metasystem utilizes the nonspecific activation to align the sensory detector and the sensory system to the stimulus. Upon meeting it, the detector “recognizes” the physical information carrier by means of the tuned-in sense qualities, because they fit together. The long-established lock-and-key mechanism lives on in a more advanced form in this process of recognition, which is reminiscent of recognition of a receptor by a ligand. The information is transmitted by its original carrier, the physical stimulus, to the neural carrier, the detector, by way of an activity circuit with manifold feedback between the peripheral sensory receptors and the cortical sensory detectors.
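The lock-and-key recognition just described can be sketched as a toy matcher. This is an illustrative assumption of mine, not the author's formalism: a detector is tuned to a fixed set of sense qualities and signals with the simplest on/off code whether a stimulus carrier fits it.

```python
# Toy lock-and-key sketch (illustrative only): a cortical detector is
# attuned to a fixed pattern of sense qualities; it "recognizes" a stimulus
# carrier only when the carrier's qualities fit its tuning, and the simplest
# on/off frequency code suffices to report the match.

def detector(tuning: frozenset):
    """Build a detector attuned to a fixed set of sense qualities."""
    def recognize(stimulus_qualities: set) -> int:
        # on (1) if the key fits the lock, off (0) otherwise
        return 1 if tuning <= stimulus_qualities else 0
    return recognize

red_vertical = detector(frozenset({"red", "vertical"}))
assert red_vertical({"red", "vertical", "moving"}) == 1  # key fits: detector fires
assert red_vertical({"green", "vertical"}) == 0          # no fit: detector silent
```

Because the tuning is fixed beforehand (as the text says, during development and learning), the recognition step itself needs no richer code than excited or inhibited.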

The stimulus instigates a periodic process. “An optical or acoustical stimulus leads to periodic discharges in the addressed nerve cells”, wrote E. Pöppel. These discharges occur at intervals of about 30 ms, as shown by electroencephalography. Their periodicity enables the cortical structures to analyze the incoming signals while at the same time realigning the sensory organ (e.g., the eye) to the physical stimulus. The centripetal and centrifugal excitation of sensation forms the feedback loop, already referred to several times, between the peripheral and cortical systems, and synchronizes peripheral decoding with its cortical representations.

There is a way to obtain scientific evidence that the neural processes under study actually do involve transmission and processing of sense qualities. It is based not on introspective experiences, but rather on verifiable data, in a sense, meta-data. To mention a few:

  1. Conscious processes of sensation require that both the activation system and the specific sensory systems are simultaneously operative and interacting.
  2. During the preattentive phase preceding conscious sensation, the cortical sensory detector is formed by an unconscious sensory impression. Without a sensory detector, no perception or experience occurs.
  3. Attention structures in the parietal, prefrontal, and temporal association cortices align the sensory systems centrifugally to the sense qualities of the stimulus, which are attuned to the detector.
  4. Sense qualities are not immediately retrievable from the brain without previously having been read or scanned by the sensory organ from the physical stimulus.
  5. On the other hand, sensory perception without intact cortical representation is impossible (cf. “Blind Vision”).
  6. Sensory perception occurs between the periphery and the cortex in a centripetal and centrifugal multiple feedback loop, in which specific and nonspecific impulses are simultaneously dovetailed at different levels.
These and other data give us some knowledge of events of sensation, attention, and other conscious processes. At the same time, they permit us to draw inferences about processes which we cannot observe directly, but which are prerequisites for observable processes. Data of this nature are provided by experimental cognitive psychology.
Evolution developed the solution to a problem on which network theoreticians have been working without success to date. The point of departure for evolution, however, was not a mechanical network but an organism with a central activation system. One must discern, behind the feedback excitation loops of sensation, the activity of an organism capable of self-regulating behavior in order to understand what actually transpires with these feedback signals of the nervous system. The origin of symbolic information in the interaction between physical stimulus and sensory system, as well as the developmental stages leading to perception of these sense qualities by the attention of a mammal, can be traced step by step (Hernegger 1995).

Decoding the Neural Code in Sensation

The neural network is a highly organized, complex system of nerve cells that can be broken down all the way to the level of its molecular components for study. The nerve cells have no “inner life”, either individually or as a group; they are capable neither of sensation nor of feeling. The activating system must first align and prepare the sensory system and the cortical sensory detectors for the environmental stimulus before they can receive and process the sense qualities. Under the guidance and control of the activation system, the sensory apparatus, including the cortical sensory fields, is transformed into its organ of cognition. The transformation is initiated by the prior cortical analysis of signals from the peripheral receptor and the concomitant formation of a cortical sensory detector; the activation system's organ of recognition can perceive external stimuli through its complementary filter only in the form of sense qualities, for the filter is now also the receptor of the sense qualities it generates itself.

But how does a perceived sense quality become an object of attention of the activation system?

Here, too, the importance and irreplaceability of the cortical sensory detectors is evident, if only because of the preattentive sensory impression represented in the neural code, which is later decoded by way of an excitatory feedback circuit with the perceived sense qualities. In this way, the neural carriers of information in the cortex are given their semantic meanings for the organism’s central controlling system, which can now direct its attention, that is, its nonspecific excitation, to the cortical sensory representations, or incorporate the excitation patterns of the decoded sense qualities into its own system. The activation system is in fact capable of including neural structures in its functional unit and releasing them again. The inclusion of the sensory apparatus in such a functional unit transforms it into an organ of perception of the activation system, the representation of the organism as a whole.

Before sensation occurs, the unconscious, preattentive sensory impression involves formation of a cortical representation or sensory detector of sense qualities in the neural code of the nervous system. This code must be decoded for the information to become an object of attention.

Once they have been tuned in to the stimulus, the sensory systems, regulated by the central system of attention, are aimed outward at the stimulus, in order to decode the neural representations or the neural code of the cortex by sensation or perception of sense qualities upon meeting the stimulus. Decoding means transforming one code into another one, or into a “language” which the recipient can “understand”.

The recipient capable of “understanding” the language of sense qualities is not the isolated nervous system, in whose code the information is already stored, but the whole organism. Initially, although the sensory systems were directed toward the external environment, the organism was unable to sense, perceive, or recognize anything, for lack of the corresponding internal conditions; it was only capable of picking up symbolic information from outside the central nervous system. For this purpose, it became necessary to transform the sensory system and the sensory cortex into an organ of recognition.

Decoding occurs via the feedback excitation circuit between the sensory receptor and the cortical detector. While the stimulus signals are sent inward to the brain, the brain directs the eye or ear (the sensory receptors) outward. By way of the reticular excitation pathways, however, the limbic-autonomic and the peripheral nervous systems, i.e., the entire organism, are involved in this process of sensation, perception, and recognition, especially since somatosensory perception is involved in every other sensation. In sensory perception, feedback occurs between the organism and the nervous system by way of these complicated loops, and not only within the neural network, as contended by Edelman and most other neuroscientists seeking an explanation for consciousness. For this reason, the conditions with which the organism responds to sensory perception involve not only the nervous system but the organism in its entirety. The two spheres are, however, integrated by the feedback loops. Thus the organism is the receiver for which the neural code must be decoded.

Sensation is reported to the corresponding cortical sensory fields via two separate pathways. The sensory signals reach the brain by way of a tract from the spinal cord. In the brain stem, collaterals branch off to various reticular nuclei of the activation system. The specific sensory tracts proceed further across specific relay nuclei in the thalamus to the sensory fields of the cortex, while the nonspecific excitation in the reticular system of the brain stem divides into several paths. One such path leads to the part of the forebrain known as the limbic cortex; another runs parallel to it through the nonspecific intralaminar thalamic nuclei to the same columns of the cortical sensory fields as the specific tracts, but to the upper layers (usually I and II) or to layer VI of the columns, whereas the specific tract targets cells in layer IV of the same column. Feedback loops between the periphery and the cortex and between specific and nonspecific excitations synchronize these events.

The feedback excitation circuit of sensation or sensory perception runs as long and as often as necessary until a firm linkage between the peripheral pick-up of the sense qualities and their cortical representations has developed. It is now known that short-term memory enters into long-term linkage by way of the hippocampal system. However, this association must be continually renewed, either by the same sensory experience or by dreaming (the REM phase of sleep). Complete sensory deprivation causes the brain to create hallucinations, during which, as in dreams, stored patterns are endogenously activated in the absence of a corresponding external stimulus.
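The repetition-until-firm-linkage just described can be caricatured with a simple saturating-strengthening sketch. This is my illustrative assumption, not the author's model; the learning rate, decay rate, and firmness threshold are arbitrary choices: each pass of the feedback circuit closes part of the remaining gap, and without renewal the linkage weakens again.

```python
# Illustrative sketch (not from the source) of the repeated feedback circuit:
# each pass strengthens the link between peripheral pick-up and cortical
# representation; without renewal (re-experience or REM sleep) the link
# decays. LEARN, DECAY, and FIRM are arbitrary illustrative parameters.

LEARN, DECAY, FIRM = 0.3, 0.1, 0.9

def run_circuit(strength: float, passes: int) -> float:
    """Strengthen the peripheral-cortical linkage over repeated feedback passes."""
    for _ in range(passes):
        strength += LEARN * (1.0 - strength)   # each pass closes part of the gap
    return strength

def let_decay(strength: float, idle_steps: int) -> float:
    """Without renewal, the linkage weakens again."""
    for _ in range(idle_steps):
        strength *= (1.0 - DECAY)
    return strength

s = run_circuit(0.0, passes=8)
assert s > FIRM                  # repeated passes yield a firm linkage
assert let_decay(s, 15) < FIRM   # unrefreshed, it drops below firmness
```

The two functions only dramatize the text's claim: the circuit repeats "as long and as often as necessary", and the resulting association still requires periodic renewal.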

The nonspecific neural patterns of long-term memory, which are complementary to the specific patterns, store the attention states of the activation system with which the organism perceived the decoded sense qualities. These states must be renewed again and again by practice and linked to the neural code.

With every new experience there is a tendency to dissociate the sense qualities from the environmental stimulus, to make them an autonomous, operant “coin” for the central controlling system. Parallel to this dissociation from the external stimulus, a linkage develops between the decoded sense qualities and their neural code or representations. Every sensation is a transfer of symbolic information from the outside, or from the periphery, to neural representations by way of a pattern of connections, which finally form cortical excitation patterns.

Transformation of the Code of Symbolic Information

Before organisms equipped with sensory systems appeared, the lock-and-key mechanism was the code enabling information to be passed on. In the genes, in the immune system, and in transmission across synapses, this lock-and-key mechanism between ligand and receptor molecule is still to be found.
With the advent of sensory systems in organisms, a completely new kind of information coding emerged, namely, symbolic information as defined at the outset. The transition from an information filter to self-generated, detached information in the form of sense qualities was a fairly complicated process, especially since sense qualities cannot exist without a material carrier. First, for the neural network, the symbolic information contained in the sense qualities was translated into the neural code of nerve impulses and stored as an excitation pattern of neuron groups. Then the central activating or attention system of the organism had to retranslate the neural code into sensory perception and associate the sense qualities thus decoded with their cortical representations or carriers.

In the transformation of sense qualities into an object of an activating or attention system, somatosensory perception plays a critical part; it either precedes all sensation and perception or transpires parallel to it. The body of the organism is itself represented severalfold in the parietal cortex (in areas 1, 2, 3, 5, and 7), which receives stimulus signals from the entire body surface, as well as from joints and muscles, by way of the somatosensory senses; these exteroceptive somatic senses are supplemented by the interoceptive senses of the peripheral and autonomic nervous systems. This somatic sense, which is coupled by feedback with the motor and activation systems, is crucial to the development of consciousness, for the self-reference of the periphery and its cortical equivalents by way of feedback between the somatomotor and somatosensory systems is the framework of all other sensations and perceptions. In other words, once this storing of the experience of one's own body begins in the fashion described, it is continually renewed and elaborated. These somatosensory qualities derived from one’s own body become the first “language elements” of the brain. They are simultaneously a state of the body and an object of attention; i.e., the somatosensory qualities are experiences of bodily conditions. The states of the body itself were able to become objects of attention only by being perceived, in the way we know, as symbolic information about the physical properties of stimuli impinging on the body. These somatosensory sensations are unique because they can take place even without the involvement of other sensations; the condition of one’s own body can be perceived only as symbolic information. In other words, only the symbolic information contained in somatosensory qualities can be an object of attention and be perceived; somatosensory qualities represent physical and energetic events within the body.
In this fashion, an infinite series or infinite regress of conditions is prevented. The initial sensory perception cannot draw upon another condition, sensation, or feeling; it is in fact the initiation of the process from which, and in which, conscious perception originates and happens. The organism perceives its own condition, by way of the symbolic information of somatosensory qualities, as an object of its own attention.

Each sensation and perception can happen only by way of the symbolic information of sense qualities, for there is no other way to become an object of attention or of sensory cognition. It is naive and unreflective to attribute to the nervous system the ability to experience its processes and conditions directly. Only symbolic information can become an object of attention at which the sensory or cognitive systems are aimed. The only properties of physical events or objects that can be perceived are those that can be transformed into sense qualities. Consciousness and cognition have their wellsprings in this object formation.

Somatosensory perception proceeds along reciprocal pathways from the nonspecific mediodorsal thalamic nucleus to the somatic fields of the parietal cortex, among others. The somatosensory perceptions are connected in a special way, directly and inseparably, with the excitation of the activating system. Self-referring somatosensory decoding is the prerequisite for any subjective experience and the states it entails, for in this case the roles of the sense qualities as objects and as states coincide in the decoded sense quality; with somatosensory perception, the organism likewise has an object of its attention, but the object is a condition of its own body. For this reason, we speak in this context of self-reference. The dual nature of decoded sense qualities as an object and as a state of the attention system may be explained by assuming that the activating system regards the decoded sense qualities as an object of attention and incorporates them into its own system by way of nonspecific excitation; alternatively, the activation system may extend to include the cortical structures serving as sensory representations. The basis for this contention is the already mentioned fact that sense qualities do not reach a conscious level until the excitation of the specific sensory systems and of the nonspecific activation system unite to produce a state of common, synchronous excitation.

The perception of sense qualities happens via the previously described excitation loops in various patterns of excitation in the sensory fields and in the prefrontal, parietal, and temporal cortices, as well as in the subcortical, reticular, and limbic-autonomic components of the activating system. The organism, which articulates itself in these patterns of excitation, is both carrier and object of the perception; its activating system is the organ by means of which the cortical structures of attention are steered toward the decoding process or toward reactivating stored representations.

The organism, which distributes its nonspecific excitation to the various cortical regulatory structures, is therefore what senses, perceives, and feels. If the excitation of the activation system is turned off, the organism ceases to perceive anything. In this way, the organism, or its activating system, is in a state influenced by the process of sensation; this state is not consciously perceived as such, for only its products and the object to which it is attuned, i.e., the perceived sense qualities, reach the level of consciousness. Those sense qualities, however, include somatosensory and interoceptive perceptions, including those of bodily states and of the autonomic nervous system. The reference to this state of the organism, which is the foundation of conscious perception, is important for understanding the reactivation of memory, for it has been postulated that the program for the reawakening of consciousness is coded in the nonspecific stores. The same state enables the organism to perceive the decoded sense qualities as the object of its attention.

Before consciousness came into being, there were neither sensations nor feelings, neither perceptions of sense qualities nor imagination. The brain was not able to generate these psychic events all by itself; its only option was to take up information from the outside, from the environment, and convert it into self-generated sense qualities. The road to conscious perception and cognition led from the filter of the sensory systems through the neural code of the brain to its decoding, based on the interaction of several complementary systems. The nonspatial sense qualities themselves are the elements out of which spatial forms, movements, and the orientation of the body are constructed. The information symbol of the nonspatial properties bears no resemblance to the information carrier or the code, which is often a carrier of information as well. However, the brain's code for spatial and temporal properties retains a spatio-temporal similarity, a quasi-isomorphism, with the spatial stimulus properties. Several nerve structures in the peripheral receptor, in the thalamus, and in the sensory fields of the cortex serve to analyze this code. And these spatial secondary sense qualities are the elements for objects, classes of objects, and entire categories.

With this inexhaustible reservoir of symbolic information, the human brain was now able to construct new mental worlds creatively. The combinatorial possibilities of the elements of symbolic information, i.e., the sense qualities, are just as inexhaustible as the sounds of human speech. As a matter of fact, sense qualities and human language share the same line of development.

Let me recapitulate the critical stages in development toward consciousness:

  1. The origin of the development was the sensory system with filters for sense qualities, the elements of symbolic information.
  2. The sensory system changed with the development of the cortical network and the central driving or activation system, and became a centrally regulated organ.
  3. Every new perception is preceded by a preattentive sensory impression, an unconscious analysis of the stimulus signals that results in the formation of a sensory detector before perception. In the second, conscious phase of sensory perception, the sensory system can therefore be aimed outward, selectively and with its filters already tuned, toward the environmental stimulus. The filters match the sense qualities as a key matches its lock or a template its matrix. The sense qualities gathered in this way constitute the decoding of the neural code in the cortex. The peripheral process is connected to the sensory target neurons in the cortex by way of a feedback excitation circuit, forming a unit. The long-term connection between the neural code and its decoded sense qualities is established by learning.
  4. The symbolic information, or sense qualities, thus become an object of central attention. This object formation is the origin of cognition and consciousness.

The mere description of the neurophysiological substrate of sensation and perception, however comprehensive and detailed, can do no more than relate the observable events that accompany the process of conscious perception. The widely held notion of psychophysical parallelism is satisfied to describe the correlation or parallelism between physical (i.e., neurophysiological) and psychic (i.e., conscious, phenomenal) events, without offering an explanation of how conscious behavior came into being from these neurobiological prerequisites. The neobehaviorists tend to consider the description of the physical, neurobiological events sufficient to explain them. In order to understand what goes on in neurophysiological processes, it was necessary to regard them within a more comprehensive framework of relationships and interactions, in which the central nervous system was not treated as if it were an isolated, autonomous entity, separate from the organism.

We have replaced psychophysical parallelism, which for a century has amassed an incalculably rich collection of observations and data, with a different model that attempts to explain the interaction of various components not reducible to each other, i.e., symbolic information and the nervous system. In our model, the observations of psychophysical parallelism take on a new importance and another interpretation; the temporal correlations of inseparable events are now regarded as interactions and interdependencies of systems that generate new products and new systemic properties. The process of sensory perception can be described separately from the standpoints of sensory physiology and of perceptual psychology, and both descriptions are correct. Nevertheless, the same sensory perception can also be described, as it is here, as an information process in a dynamic cybernetic system. All three descriptions are justified, but they answer different questions.

The description presented here does not merely draw upon results of neurophysiological and psychological research; it also integrates them by studying system levels within the organism and how they relate to one another. E. Pöppel formulated this systemic approach as a question: “How do individual system levels in biological systems come into being? How does something higher develop from a lower level?”

Conscious behavior has many facets, and can be defined in quite different ways. On the one hand, it is not an independent being hovering outside the body and transcending the nervous system. On the other hand, contrary to the so-called identity theory, it cannot be identical with the nervous system, for the first thing to become conscious is symbolic information about the external world, impinging from the outside and not generated by the nervous system alone.

The process of conscious behavior thus always involves two irreducible elements: a) the recognizing organism, and b) the recognized information, in which, in turn, information about the physical properties of the external stimulus must be differentiated from the self-generated symbol (i.e., the sense quality) by means of which the information is received by the sensory system. The symbolic information therefore goes beyond the neural process and is not reducible to it. The sensory apparatus and the sensorimotor cortex develop increasingly into organs of transmission, analysis, processing, and storage of this symbolic information, which they translate from one code into another during transmission from the peripheral sensory receptor to the cortical network, where the cortical representations are finally decoded into the original language. The symbolic information is what remains; it must not be confused or identified with the nervous system that transmits, processes, and encodes it.

The sense qualities have not ceased to fascinate modern thinkers since John Locke (1632–1704). Immanuel Kant (1724–1804) regarded them as subjective forms in which we see things, and which rather tend to interfere with seeing “the things themselves”. In that era, the notion of information was hardly important, and Shannon’s concept of information turned out to be unsuitable in all attempts to apply it to consciousness. It was another train of thought in modern times, embodied by E. Cassirer’s “philosophy of symbolic forms”, Karl Bühler’s “theory of speech”, and Susanne K. Langer’s “symbol in thought, rites, and art”, to name but a few, that paved the way for the notion of symbolic information. This line of thought probably had little or no influence on Shannon and Weaver as they developed their theory of information. Regarding sense qualities as elements of symbolic information about the physical properties of environmental stimuli opens entirely new perspectives and possible explanations for consciousness research. In this sense, consciousness research is part of the basic science of language theory, linking the origin of human language to phylogenetic development. Conversely, consciousness research profits from the methods and categories of language research, as long as the common fallacy of coupling consciousness with the origin of human speech, i.e., confusing cause and effect, is avoided. It is not inconceivable that Shannon’s concept of information and the mathematical formalism of information theory that followed may also be applicable to symbolic information, permitting it to be quantified. Nevertheless, such quantification of information should not be confused with a mathematical model explaining consciousness; we are still far away from that.
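Shannon's measure mentioned above can at least be written down concretely. The sketch below (the function name and the toy messages are my own, purely illustrative) computes the average information per symbol of a message, in bits; note that it quantifies statistical information only, not the symbolic information the text distinguishes from it:

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Average information content per symbol, in bits (Shannon 1948)."""
    counts = Counter(message)          # frequency of each distinct symbol
    n = len(message)
    # H = -sum p_i * log2(p_i), with p_i the relative frequency of symbol i
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

# One repeated symbol resolves no uncertainty; four equiprobable
# symbols carry two bits each.
print(shannon_entropy("aaaa"))  # 0.0
print(shannon_entropy("abcd"))  # 2.0
```

Whether such a frequency-based measure could ever be extended to symbolic information, as the text speculates, is an open question.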

 

References:

  1. Bruce, C. J.: Integration of sensory and motor signals in primate frontal eye fields. In: G. M. Edelman et al. (eds.) 1990, pp. 261–313.
  2. Buser, P. A., E. Rougeul-Buser (eds.): Cerebral Correlates of Conscious Experience. North Holland Publ., Amsterdam 1978.
  3. Dawson, G. D.: The central control of sensory inflow. Proc. Roy. Soc. Med., London 51 (5), 531–535 (1958).
  4. Edelman, G. M., W. Einar Gall, W. M. Cowan (eds.): Signal and Sense. Local and Global Order in Perceptual Maps. Wiley, New York 1990.
  5. Grillner, S.: Neurobiology of vertebrate motor behavior. From flexion reflexes and locomotion to manipulative movements. In: G. M. Edelman et al. (eds.) 1990, pp. 187–208.
  6. Hagbarth, K. E., D. J. B. Kerr: Central influences on spinal afferent conduction. J. Neurophysiol. 17 (3), 295–297 (1954).
  7. Hassler, R.: Interaction of reticular activating system for vigilance and the corticothalamic and pallidal systems for directing awareness and attention under striatal control. In: Buser et al. (eds.) 1978.
  8. Hernegger, R.: Wahrnehmung und Bewußtsein. Ein Diskussionsbeitrag zu den Neurowissenschaften. Spektrum Akademischer Verlag, Berlin–Heidelberg–Oxford 1995.
  9. Hobson, J. A., M. Steriade: Neuronal basis of behavioral state control. In: Mountcastle, V. B., F. E. Bloom (eds.): Handbook of Physiology. The Nervous System, Vol. IV, pp. 701–825. American Physiological Society, Bethesda 1986.
  10. LeDoux, J. E.: Emotional networks in the brain. In: Lewis, M., J. M. Haviland (eds.): Handbook of Emotions. Guilford Press, New York 1993.
  11. Lindsley, D. B.: Psychophysiology and motivation. In: Jones, M. R. (ed.): Nebraska Symposium on Motivation, Vol. 5. University of Nebraska Press, Lincoln 1957.
  12. Mangun, G. E., S. A. Hillyard, in: Scheibel, A. B., A. F. Wechsler (eds.): Neurobiology of Higher Cognitive Function. Guilford Press, New York 1990.
  13. Meric, C., L. Collet: Attention and otoacoustic emissions. Neuroscience and Biobehavioral Reviews 18 (2), 215–222 (1994).
  14. Newman, J., B. J. Baars: A neural attentional model for access to consciousness: a global workspace perspective. Concepts in Neuroscience 4 (2), 255–290 (1993).
  15. Ornstein, R., R. F. Thompson: The Amazing Brain. Boston 1984.
  16. Pöppel, E., A. L. Edinghaus: Geheimnisvoller Kosmos Gehirn. München 1994.
  17. Scheibel, A. B.: The brain stem reticular core and sensory function. In: Handbook of Physiology. The Nervous System, Vol. III,1. American Physiological Society, Bethesda 1984.
  18. Scheibel, A. B., A. F. Wechsler (eds.): Neurobiology of Higher Cognitive Function. Guilford Press, New York 1990.
  19. Zeki, S.: Functional specialization in the visual cortex: the generalisation of separate constructs and their multistage integration. In: Edelman, G. M., et al. 1990, pp. 85–130.

R. Hernegger, Change of Paradigms in Consciousness Research: On the Evolution of Consciousness

Consciousness

Explaining the nature of consciousness is one of the most important and perplexing tasks of philosophy, but the concept is notoriously ambiguous. The abstract noun “consciousness” is not frequently used by itself in the contemporary literature; it derives from the Latin con (with) and scire (to know). Perhaps the most commonly used contemporary notion of a conscious mental state is captured by Thomas Nagel’s famous “what it is like” sense (Nagel 1974). When I am in a conscious mental state, there is something it is like for me to be in that state from the subjective or first-person point of view. But how are we to understand this? For instance, how is the conscious mental state related to the body? Can consciousness be explained in terms of brain activity? What makes a mental state a conscious mental state? The problem of consciousness is arguably the most central issue in current philosophy of mind and is also importantly related to major traditional topics in metaphysics, such as the possibility of immortality and the belief in free will. This article focuses on Western theories and conceptions of consciousness, especially as found in contemporary analytic philosophy of mind.

The two broad, traditional and competing theories of mind are dualism and materialism (or physicalism). While there are many versions of each, the former generally holds that the conscious mind or a conscious mental state is non-physical in some sense, whereas the latter holds that, to put it crudely, the mind is the brain, or is caused by neural activity. It is against this general backdrop that many answers to the above questions are formulated and developed. There are also many familiar objections to both materialism and dualism. For example, it is often said that materialism cannot truly explain just how or why some brain states are conscious, and that there is an important “explanatory gap” between mind and matter. On the other hand, dualism faces the problem of explaining how a non-physical substance or mental state can causally interact with the physical body.

Some philosophers attempt to explain consciousness directly in neurophysiological or physical terms, while others offer cognitive theories of consciousness whereby conscious mental states are reduced to some kind of representational relation between mental states and the world. There are a number of such representational theories of consciousness currently on the market, including higher-order theories which hold that what makes a mental state conscious is that the subject is aware of it in some sense. The relationship between consciousness and science is also central in much current theorizing on this topic: How does the brain “bind together” various sensory inputs to produce a unified subjective experience? What are the neural correlates of consciousness? What can be learned from abnormal psychology which might help us to understand normal consciousness? To what extent are animal minds different from human minds? Could an appropriately programmed machine be conscious?

1. Terminological Matters: Various Concepts of Consciousness

The concept of consciousness is notoriously ambiguous. It is important first to make several distinctions and to define related terms. The abstract noun “consciousness” is not often used in the contemporary literature, though it should be noted that it is originally derived from the Latin con (with) and scire (to know). Thus, “consciousness” has etymological ties to one’s ability to know and perceive, and should not be confused with conscience, which has the much more specific moral connotation of knowing when one has done or is doing something wrong. Through consciousness, one can have knowledge of the external world or one’s own mental states. The primary contemporary interest lies more in the use of the expressions “x is conscious” or “x is conscious of y.” Under the former category, perhaps most important is the distinction between state and creature consciousness (Rosenthal 1993a). We sometimes speak of an individual mental state, such as a pain or perception, as conscious. On the other hand, we also often speak of organisms or creatures as conscious, such as when we say “human beings are conscious” or “dogs are conscious.” Creature consciousness is also simply meant to refer to the fact that an organism is awake, as opposed to sleeping or in a coma. However, some kind of state consciousness is often implied by creature consciousness, that is, the organism is having conscious mental states. Due to the lack of a direct object in the expression “x is conscious,” this is usually referred to as intransitive consciousness, in contrast to transitive consciousness where the locution “x is conscious of y” is used (Rosenthal 1993a, 1997). Most contemporary theories of consciousness are aimed at explaining state consciousness; that is, explaining what makes a mental state a conscious mental state.

It might seem that “conscious” is synonymous with, say, “awareness” or “experience” or “attention.” However, it is crucial to recognize that this is not generally accepted today. For example, though perhaps somewhat atypical, one might hold that there are even unconscious experiences, depending of course on how the term “experience” is defined (Carruthers 2000). More common is the belief that we can be aware of external objects in some unconscious sense, for example, during cases of subliminal perception. The expression “conscious awareness” does not therefore seem to be redundant. Finally, it is not clear that consciousness ought to be restricted to attention. It seems plausible to suppose that one is conscious (in some sense) of objects in one’s peripheral visual field even though one is only attending to some narrow (focal) set of objects within that visual field.

Perhaps the most fundamental and commonly used notion of “conscious” is captured by Thomas Nagel’s famous “what it is like” sense (Nagel 1974). When I am in a conscious mental state, there is “something it is like” for me to be in that state from the subjective or first-person point of view. When I am, for example, smelling a rose or having a conscious visual experience, there is something it “seems” or “feels” like from my perspective. An organism, such as a bat, is conscious if it is able to experience the outer world through its (echo-locatory) senses. There is also something it is like to be a conscious creature whereas there is nothing it is like to be, for example, a table or tree. This is primarily the sense of “conscious state” that will be used throughout this entry. There are still, though, a cluster of expressions and terms related to Nagel’s sense, and some authors simply stipulate the way that they use such terms. For example, philosophers sometimes refer to conscious states as phenomenal or qualitative states. More technically, philosophers often view such states as having qualitative properties called “qualia” (singular, quale). There is significant disagreement over the nature, and even the existence, of qualia, but they are perhaps most frequently understood as the felt properties or qualities of conscious states.

Ned Block (1995) makes an often cited distinction between phenomenal consciousness (or “phenomenality”) and access consciousness. The former is very much in line with the Nagelian notion described above. However, Block also defines the quite different notion of access consciousness in terms of a mental state’s relationship with other mental states; for example, a mental state’s “availability for use in reasoning and rationally guiding speech and action” (Block 1995: 227). This would, for example, count a visual perception as (access) conscious not because it has the “what it’s likeness” of phenomenal states, but rather because it carries visual information which is generally available for use by the organism, regardless of whether or not it has any qualitative properties. Access consciousness is therefore more of a functional notion; that is, concerned with what such states do. Although this concept of consciousness is certainly very important in cognitive science and philosophy of mind generally, not everyone agrees that access consciousness deserves to be called “consciousness” in any important sense. Block himself argues that neither sense of consciousness implies the other, while others urge that there is a more intimate connection between the two.

Finally, it is helpful to distinguish between consciousness and self-consciousness, which plausibly involves some kind of awareness or consciousness of one’s own mental states (instead of something out in the world). Self-consciousness arguably comes in degrees of sophistication ranging from minimal bodily self-awareness to the ability to reason and reflect on one’s own mental states, such as one’s beliefs and desires. Some important historical figures have even held that consciousness entails some form of self-consciousness (Kant 1781/1965, Sartre 1956), a view shared by some contemporary philosophers (Gennaro 1996a, Kriegel 2004).

 

2. Some History on the Topic

Interest in the nature of conscious experience has no doubt been around for as long as there have been reflective humans. It would be impossible here to survey the entire history, but a few highlights are in order. In the history of Western philosophy, which is the focus of this entry, important writings on human nature and the soul and mind go back to ancient philosophers, such as Plato. More sophisticated work on the nature of consciousness and perception can be found in the work of Plato’s most famous student Aristotle (see Caston 2002), and then throughout the later Medieval period. It is, however, with the work of René Descartes (1596-1650) and his successors in the early modern period of philosophy that consciousness and the relationship between the mind and body took center stage. As we shall see, Descartes argued that the mind is a non-physical substance distinct from the body. He also did not believe in the existence of unconscious mental states, a view certainly not widely held today. Descartes defined “thinking” very broadly to include virtually every kind of mental state and urged that consciousness is essential to thought. Our mental states are, according to Descartes, infallibly transparent to introspection. John Locke (1689/1975) held a similar position regarding the connection between mentality and consciousness, but was far less committed on the exact metaphysical nature of the mind.

Perhaps the most important philosopher of the period explicitly to endorse the existence of unconscious mental states was G.W. Leibniz (1686/1991, 1720/1925). Although Leibniz also believed in the immaterial nature of mental substances (which he called “monads”), he recognized the existence of what he called “petites perceptions,” which are basically unconscious perceptions. He also importantly distinguished between perception and apperception, roughly the difference between outer-directed consciousness and self-consciousness (see Gennaro 1999 for some discussion). The most important detailed theory of mind in the early modern period was developed by Immanuel Kant. His main work Critique of Pure Reason (1781/1965) is as dense as it is important, and cannot easily be summarized in this context. Although he owes a great debt to his immediate predecessors, Kant is arguably the most important philosopher since Plato and Aristotle and is highly relevant today. Kant basically thought that an adequate account of phenomenal consciousness involved far more than any of his predecessors had considered. There are important mental structures which are “presupposed” in conscious experience, and Kant presented an elaborate theory as to what those structures are, which, in turn, had other important implications. He, like Leibniz, also saw the need to postulate the existence of unconscious mental states and mechanisms in order to provide an adequate theory of mind (Kitcher 1990 and Brook 1994 are two excellent books on Kant’s theory of mind.).

Over the past one hundred years or so, however, research on consciousness has taken off in many important directions. In psychology, with the notable exception of the virtual banishment of consciousness by behaviorist psychologists (e.g., Skinner 1953), there were also those deeply interested in consciousness and various introspective (or “first-person”) methods of investigating the mind. The writings of such figures as Wilhelm Wundt (1897), William James (1890) and Edward Titchener (1901) are good examples of this approach. Franz Brentano (1874/1973) also had a profound effect on some contemporary theories of consciousness. Similar introspectionist approaches were used by those in the so-called “phenomenological” tradition in philosophy, such as in the writings of Edmund Husserl (1913/1931, 1929/1960) and Martin Heidegger (1927/1962).  The work of Sigmund Freud was very important, at minimum, in bringing about the near universal acceptance of the existence of unconscious mental states and processes.

It must, however, be kept in mind that none of the above had very much scientific knowledge about the detailed workings of the brain.  The relatively recent development of neurophysiology is, in part, also responsible for the unprecedented interdisciplinary research interest in consciousness, particularly since the 1980s.  There are now several important journals devoted entirely to the study of consciousness: Consciousness and Cognition, Journal of Consciousness Studies, and Psyche.  There are also major annual conferences sponsored by worldwide professional organizations, such as the Association for the Scientific Study of Consciousness, and an entire book series called “Advances in Consciousness Research” published by John Benjamins.  (For a small sample of introductory texts and important anthologies, see Kim 1996, Gennaro 1996b, Block et al. 1997, Seager 1999, Chalmers 2002, Baars et al. 2003, Blackmore 2004, Campbell 2005.)

3. The Metaphysics of Consciousness: Materialism vs. Dualism

Metaphysics is the branch of philosophy concerned with the ultimate nature of reality. There are two broad traditional and competing metaphysical views concerning the nature of the mind and conscious mental states: dualism and materialism. While there are many versions of each, the former generally holds that the conscious mind or a conscious mental state is non-physical in some sense. On the other hand, materialists hold that the mind is the brain, or, more accurately, that conscious mental activity is identical with neural activity. It is important to recognize that by non-physical, dualists do not merely mean “not visible to the naked eye.” Many physical things fit this description, such as the atoms which make up the air in a typical room. For something to be non-physical, it must literally be outside the realm of physics; that is, not in space at all and undetectable in principle by the instruments of physics. It is equally important to recognize that the category “physical” is broader than the category “material.” Materialists are called such because there is the tendency to view the brain, a material thing, as the most likely physical candidate to identify with the mind. However, something might be physical but not material in this sense, such as an electromagnetic or energy field. One might therefore instead be a “physicalist” in some broader sense and still not a dualist. Thus, to say that the mind is non-physical is to say something much stronger than that it is non-material. Dualists, then, tend to believe that conscious mental states or minds are radically different from anything in the physical world at all.

a. Dualism: General Support and Related Issues

There are a number of reasons why some version of dualism has been held throughout the centuries. For one thing, especially from the introspective or first-person perspective, our conscious mental states just do not seem like physical things or processes. That is, when we reflect on our conscious perceptions, pains, and desires, they do not seem to be physical in any sense. Consciousness seems to be a unique aspect of the world not to be understood in any physical way. Although materialists will urge that this completely ignores the more scientific third-person perspective on the nature of consciousness and mind, this idea continues to have force for many today. Indeed, it is arguably the crucial underlying intuition behind historically significant “conceivability arguments” against materialism and for dualism. Such arguments typically reason from the premise that one can conceive of one’s conscious states existing without one’s body or, conversely, that one can imagine one’s own physical duplicate without consciousness at all (see section 3b.iv). The metaphysical conclusion ultimately drawn is that consciousness cannot be identical with anything physical, partly because there is no essential conceptual connection between the mental and the physical. Arguments such as these go back to Descartes and continue to be used today in various ways (Kripke 1972, Chalmers 1996), but it is highly controversial as to whether they succeed in showing that materialism is false. Materialists have replied in various ways to such arguments and the relevant literature has grown dramatically in recent years.

Historically, there is also the clear link between dualism and a belief in immortality, and hence a more theistic perspective than one tends to find among materialists. Indeed, belief in dualism is often explicitly theologically motivated. If the conscious mind is not physical, it seems more plausible to believe in the possibility of life after bodily death. On the other hand, if conscious mental activity is identical with brain activity, then it would seem that when all brain activity ceases, so do all conscious experiences and thus no immortality. After all, what do many people believe continues after bodily death? Presumably, one’s own conscious thoughts, memories, experiences, beliefs, and so on. There is perhaps a similar historical connection to a belief in free will, which is of course a major topic in its own right. For our purposes, it suffices to say that, on some definitions of what it is to act freely, such ability seems almost “supernatural” in the sense that one’s conscious decisions can alter the otherwise deterministic sequence of events in nature. To put it another way: If we are entirely physical beings as the materialist holds, then mustn’t all of the brain activity and behavior in question be determined by the laws of nature? Although materialism may not logically rule out immortality or free will, materialists will likely often reply that such traditional, perhaps even outdated or pre-scientific beliefs simply ought to be rejected to the extent that they conflict with materialism. After all, if the weight of the evidence points toward materialism and away from dualism, then so much the worse for those related views.

One might wonder “even if the mind is physical, what about the soul?” Maybe it’s the soul, not the mind, which is non-physical, as one might be told in many religious traditions. While it is true that the term “soul” (or “spirit”) is often used instead of “mind” in such religious contexts, the problem is that it is unclear just how the soul is supposed to differ from the mind. The terms are often even used interchangeably in many historical texts and by many philosophers because it is unclear what else the soul could be other than “the mental substance.” It is difficult to describe the soul in any way that doesn’t make it sound like what we mean by the mind. After all, that’s what many believe goes on after bodily death; namely, conscious mental activity. Granted, the term “soul” carries a more theological connotation, but it doesn’t follow that the words “soul” and “mind” refer to entirely different things. Somewhat related to the issue of immortality, the existence of near-death experiences is also used as some evidence for dualism and immortality. Such patients experience a peaceful movement toward a light through a tunnel-like structure, or are able to see doctors working on their bodies while hovering over them in an emergency room (sometimes akin to what is called an “out of body experience”). In response, materialists will point out that such experiences can be artificially induced in various experimental situations, and that starving the brain of oxygen is known to cause hallucinations.

Various paranormal and psychic phenomena, such as clairvoyance, faith healing, and mind-reading, are sometimes also cited as evidence for dualism. However, materialists (and even many dualists) will first likely wish to be skeptical of the alleged phenomena themselves for numerous reasons. There are many modern day charlatans who should make us seriously question whether there really are such phenomena or mental abilities in the first place. Second, it is not quite clear just how dualism follows from such phenomena even if they are genuine. A materialist, or physicalist at least, might insist that though such phenomena are puzzling and perhaps currently difficult to explain in physical terms, they are nonetheless ultimately physical in nature; for example, having to do with very unusual transfers of energy in the physical world. The dualist advantage is perhaps not as obvious as one might think, and we need not jump to supernatural conclusions so quickly.

i. Substance Dualism and Objections

Interactionist Dualism or simply “interactionism” is the most common form of “substance dualism” and its name derives from the widely accepted fact that mental states and bodily states causally interact with each other. For example, my desire to drink something cold causes my body to move to the refrigerator and get something to drink and, conversely, kicking me in the shin will cause me to feel a pain and get angry. Due to Descartes’ influence, it is also sometimes referred to as “Cartesian dualism.” Knowing nothing about just where such causal interaction could take place, Descartes speculated that it was through the pineal gland, a now almost humorous conjecture. But a modern day interactionist would certainly wish to treat various areas of the brain as the location of such interactions.

Three serious objections are briefly worth noting here. The first is simply the issue of just how such radically different substances could causally interact. How could something non-physical causally interact with something physical, such as the brain? No such explanation is forthcoming or is perhaps even possible, according to materialists. Moreover, if causation involves a transfer of energy from cause to effect, then how is that possible if the mind is really non-physical? Gilbert Ryle (1949) mockingly calls the Cartesian view about the nature of mind a belief in the “ghost in the machine.” Second, assuming that some such energy transfer makes any sense at all, it is also then often alleged that interactionism is inconsistent with the scientifically well-established Conservation of Energy principle, which says that the total amount of energy in the universe, or any controlled part of it, remains constant. So any loss of energy in the cause must be passed along as a corresponding gain of energy in the effect, as in standard billiard ball examples. But if interactionism is true, then when mental events cause physical events, energy would literally come into the physical world. On the other hand, when bodily events cause mental events, energy would literally go out of the physical world. At the least, there is a very peculiar and unique notion of energy involved, unless one wished, even more radically, to deny the conservation principle itself. Third, some materialists might also use the well-known fact that brain damage (even to very specific areas of the brain) causes mental defects as a serious objection to interactionism (and thus as support for materialism). This has of course been known for many centuries, but the level of detailed knowledge has increased dramatically in recent years.
Now a dualist might reply that such phenomena do not absolutely refute her metaphysical position, since it could be said that damage to the brain simply causes corresponding damage to the mind. However, this raises a host of other questions: Why not opt for the simpler explanation, i.e., that brain damage causes mental damage because mental processes simply are brain processes? If the non-physical mind is damaged when brain damage occurs, how does that leave one’s mind according to the dualist’s conception of an afterlife? Will the severe amnesic at the end of life on Earth retain such a deficit in the afterlife? If proper mental functioning still depends on proper brain functioning, then is dualism really any better positioned than materialism to offer hope for immortality?

It should be noted that there is also another less popular form of substance dualism called parallelism, which denies the causal interaction between the non-physical mental and physical bodily realms. It seems fair to say that it encounters even more serious objections than interactionism.

ii. Other Forms of Dualism

While a detailed survey of all varieties of dualism is beyond the scope of this entry, it is at least important to note here that the main and most popular form of dualism today is called property dualism. Substance dualism has largely fallen out of favor at least in most philosophical circles, though there are important exceptions (e.g., Swinburne 1986, Foster 1996) and it often continues to be tied to various theological positions. Property dualism, on the other hand, is a more modest version of dualism and it holds that there are mental properties (i.e., characteristics or aspects of things) that are neither identical with nor reducible to physical properties. There are actually several different kinds of property dualism, but what they have in common is the idea that conscious properties, such as the color qualia involved in a conscious experience of a visual perception, cannot be explained in purely physical terms and, thus, are not themselves to be identified with any brain state or process.
Two other views worth mentioning are epiphenomenalism and panpsychism. The latter is the somewhat eccentric view that all things in physical reality, even down to micro-particles, have some mental properties. All substances have a mental aspect, though it is not always clear exactly how to characterize or test such a claim. Epiphenomenalism holds that mental events are caused by brain events but those mental events are mere “epiphenomena” which do not, in turn, cause anything physical at all, despite appearances to the contrary (for a recent defense, see Robinson 2004).

Finally, although not a form of dualism, idealism holds that there are only immaterial mental substances, a view more common in the Eastern tradition. The most prominent Western proponent of idealism was 18th century empiricist George Berkeley. The idealist agrees with the substance dualist, however, that minds are non-physical, but then denies the existence of mind-independent physical substances altogether. Such a view faces a number of serious objections, and it also requires a belief in the existence of God.

b. Materialism: General Support

Some form of materialism is probably much more widely held today than in centuries past. No doubt part of the reason for this has to do with the explosion in scientific knowledge about the workings of the brain and its intimate connection with consciousness, including the close connection between brain damage and various states of consciousness. Brain death is now the main criterion for when someone dies. Stimulation to specific areas of the brain results in modality-specific conscious experiences. Indeed, materialism often seems to be a working assumption in neurophysiology. Imagine saying to a neuroscientist “you are not really studying the conscious mind itself” when she is examining the workings of the brain during an fMRI scan. The idea is that science is showing us that conscious mental states, such as visual perceptions, are simply identical with certain neuro-chemical brain processes; much like the science of chemistry taught us that water just is H2O.

There are also theoretical factors on the side of materialism, such as adherence to the so-called “principle of simplicity,” which says that if two theories can equally explain a given phenomenon, then we should accept the one which posits fewer objects or forces. In this case, even if dualism could equally explain consciousness (which would of course be disputed by materialists), materialism is clearly the simpler theory insofar as it does not posit any objects or processes over and above physical ones. Materialists will wonder why there is a need to believe in the existence of such mysterious non-physical entities. Moreover, in the aftermath of the Darwinian revolution, it would seem that materialism is on even stronger ground provided that one accepts basic evolutionary theory and the notion that most animals are conscious. Given the similarities between the more primitive parts of the human brain and the brains of other animals, it seems most natural to conclude that, through evolution, increasing layers of brain areas correspond to increased mental abilities. For example, having a well-developed prefrontal cortex allows humans to reason and plan in ways not available to dogs and cats. It also seems fairly uncontroversial to hold that we should be materialists about the minds of animals. If so, then it would be odd indeed to hold that non-physical conscious states suddenly appear on the scene with humans.

There are still, however, a number of much discussed and important objections to materialism, most of which question the notion that materialism can adequately explain conscious experience.

i. Objection 1: The Explanatory Gap and The Hard Problem

Joseph Levine (1983) coined the expression “the explanatory gap” to express a difficulty for any materialistic attempt to explain consciousness. Although not concerned to reject the metaphysics of materialism, Levine gives eloquent expression to the idea that there is a key gap in our ability to explain the connection between phenomenal properties and brain properties (see also Levine 1993, 2001). The basic problem is that it is, at least at present, very difficult for us to understand the relationship between brain properties and phenomenal properties in any explanatorily satisfying way, especially given the fact that it seems possible for one to be present without the other. There is an odd kind of arbitrariness involved: Why or how does some particular brain process produce that particular taste or visual sensation? It is difficult to see any real explanatory connection between specific conscious states and brain states in a way that explains just how or why the former are identical with the latter. There is therefore an explanatory gap between the physical and mental. Levine argues that this difficulty in explaining consciousness is unique; that is, we do not have similar worries about other scientific identities, such as that “water is H2O” or that “heat is mean molecular kinetic energy.” There is “an important sense in which we can’t really understand how [materialism] could be true.” (2001: 68)

David Chalmers (1995) has articulated a similar worry by using the catchy phrase “the hard problem of consciousness,” which basically refers to the difficulty of explaining just how physical processes in the brain give rise to subjective conscious experiences. The “really hard problem is the problem of experience…How can we explain why there is something it is like to entertain a mental image, or to experience an emotion?” (1995: 201) Others have made similar points, as Chalmers acknowledges, but reference to the phrase “the hard problem” has now become commonplace in the literature. Unlike Levine, however, Chalmers is much more inclined to draw anti-materialist metaphysical conclusions from these and other considerations. Chalmers usefully distinguishes the hard problem of consciousness from what he calls the (relatively) “easy problems” of consciousness, such as the ability to discriminate and categorize stimuli, the ability of a cognitive system to access its own internal states, and the difference between wakefulness and sleep. The easy problems generally have more to do with the functions of consciousness, but Chalmers urges that solving them does not touch the hard problem of phenomenal consciousness. Most philosophers, according to Chalmers, are really only addressing the easy problems, perhaps merely with something like Block’s “access consciousness” in mind. Their theories ignore phenomenal consciousness.

There are many responses by materialists to the above charges, but it is worth emphasizing that Levine, at least, does not reject the metaphysics of materialism. Instead, he sees the “explanatory gap [as] primarily an epistemological problem” (2001: 10). That is, it is primarily a problem having to do with knowledge or understanding. This concession is still important at least to the extent that one is concerned with the larger related metaphysical issues discussed in section 3a, such as the possibility of immortality.

Perhaps most important for the materialist, however, is recognition of the fact that different concepts can pick out the same property or object in the world (Loar 1990, 1997). Out in the world there is only the one “stuff,” which we can conceptualize either as “water” or as “H2O.” The traditional distinction, made most notably by Gottlob Frege in the late 19th century, between “meaning” (or “sense”) and “reference” is also relevant here. Two or more concepts, which can have different meanings, can refer to the same property or object, much like “Venus” and “The Morning Star.” Materialists, then, explain that it is essential to distinguish between mental properties and our concepts of those properties. By analogy, there are so-called “phenomenal concepts” which use a phenomenal or “first-person” property to refer to some conscious mental state, such as a sensation of red. In contrast, we can also use various concepts couched in physical or neurophysiological terms to refer to that same mental state from the third-person point of view. There is thus but one conscious mental state which can be conceptualized in two different ways: either by employing first-person experiential phenomenal concepts or by employing third-person neurophysiological concepts. It may then just be a “brute fact” about the world that there are such identities, and the appearance of arbitrariness between brain properties and mental properties is just that – merely an appearance, which has led many to worry about an alleged explanatory gap. Qualia would then still be identical to physical properties. Moreover, this response provides a diagnosis for why there even seems to be such a gap; namely, that we use very different concepts to pick out the same property.
Science will be able, in principle, to close the gap and solve the hard problem of consciousness, just as we now have a very good understanding of why “water is H2O” or “heat is mean molecular kinetic energy” – an understanding that was lacking centuries ago. Maybe the hard problem isn’t so hard after all – it will just take some more time. After all, the science of chemistry didn’t develop overnight, and we are relatively early in the history of neurophysiology and our understanding of phenomenal consciousness. (See Shear 1997 for many more specific responses to the hard problem, but also for Chalmers’ counter-replies.)

ii. Objection 2: The Knowledge Argument

There is a pair of very widely discussed, and arguably related, objections to materialism which come from the seminal writings of Thomas Nagel (1974) and Frank Jackson (1982, 1986). These arguments, especially Jackson’s, have come to be known as examples of the “knowledge argument” against materialism, due to their clear emphasis on the epistemological (that is, knowledge-related) limitations of materialism. Like Levine, Nagel does not reject the metaphysics of materialism. Jackson had originally intended for his argument to yield a dualistic conclusion, but he no longer holds that view. The general pattern of each argument is to assume that all the physical facts are known about some conscious mind or conscious experience. Yet, the argument goes, not all is known about the mind or experience. It is then inferred that the missing knowledge is non-physical in some sense, which is surely an anti-materialist conclusion.

Nagel imagines a future where we know everything physical there is to know about some other conscious creature’s mind, such as a bat. However, it seems clear that we would still not know something crucial; namely, “what it is like to be a bat.” It will not do to imagine what it is like for us to be a bat. We would still not know what it is like to be a bat from the bat’s subjective or first-person point of view. The idea, then, is that if we accept the hypothesis that we know all of the physical facts about bat minds, and yet some knowledge about bat minds is left out, then materialism is inherently flawed when it comes to explaining consciousness. Even in an ideal future in which everything physical is known by us, something would still be left out. Jackson’s somewhat similar, but no less influential, argument begins by asking us to imagine a future where a person, Mary, is kept in a black and white room from birth during which time she becomes a brilliant neuroscientist and an expert on color perception. Mary never sees red for example, but she learns all of the physical facts and everything neurophysiologically about human color vision. Eventually she is released from the room and sees red for the first time. Jackson argues that it is clear that Mary comes to learn something new; namely, to use Nagel’s famous phrase, what it is like to experience red. This is a new piece of knowledge and hence she must have come to know some non-physical fact (since, by hypothesis, she already knew all of the physical facts). Thus, not all knowledge about the conscious mind is physical knowledge.

The influence and the quantity of work that these ideas have generated cannot be overstated. Numerous materialist responses to Nagel’s argument have been presented (such as Van Gulick 1985), and there is now a very useful anthology devoted entirely to Jackson’s knowledge argument (Ludlow et al. 2004). Some materialists have wondered whether we should concede up front that Mary would be able to imagine the color red even before leaving the room, so that maybe she wouldn’t even be surprised upon seeing red for the first time. Various suspicions about the nature and effectiveness of such thought experiments also usually accompany this response. More commonly, however, materialists reply by arguing that Mary does not learn a new fact when seeing red for the first time, but rather learns the same fact in a different way. Recalling the distinction made in section 3b.i between concepts and objects or properties, the materialist will urge that there is only the one physical fact about color vision, but there are two ways to come to know it: either by employing neurophysiological concepts or by actually undergoing the relevant experience and so by employing phenomenal concepts. We might say that Mary, upon leaving the black and white room, becomes acquainted with the same neural property as before, but only now from the first-person point of view. The property itself isn’t new; only the perspective, or what philosophers sometimes call the “mode of presentation,” is different. In short, coming to learn or know something new does not entail learning some new fact about the world. Analogies are again given in other less controversial areas: for example, one can come to know about some historical fact or event either by reading a (reliable) third-person historical account or by having observed that event oneself. But there is still only the one objective fact under two different descriptions.
Finally, it is crucial to remember that, according to most, the metaphysics of materialism remains unaffected. Drawing a metaphysical conclusion from such purely epistemological premises is always a questionable practice. Nagel’s argument doesn’t show that bat mental states are not identical with bat brain states. Indeed, a materialist might even expect the conclusion that Nagel draws; after all, given that our brains are so different from bat brains, it almost seems natural for there to be certain aspects of bat experience that we could never fully comprehend. Only the bat actually undergoes the relevant brain processes. Similarly, Jackson’s argument doesn’t show that Mary’s color experience is distinct from her brain processes.

Despite the plethora of materialist responses, vigorous debate continues as there are those who still think that something profound must always be missing from any materialist attempt to explain consciousness; namely, that understanding subjective phenomenal consciousness is an inherently first-person activity which cannot be captured by any objective third-person scientific means, no matter how much scientific knowledge is accumulated. Some knowledge about consciousness is essentially limited to first-person knowledge. Such a sense, no doubt, continues to fuel the related anti-materialist intuitions raised in the previous section. Perhaps consciousness is simply a fundamental or irreducible part of nature in some sense (Chalmers 1996). (For more see Van Gulick 1993.)

iii. Objection 3: Mysterianism

Finally, some go so far as to argue that we are simply not capable of solving the problem of consciousness (McGinn 1989, 1991, 1995). In short, “mysterians” believe that the hard problem can never be solved because of human cognitive limitations; the explanatory gap can never be filled. Once again, however, McGinn does not reject the metaphysics of materialism, but rather argues that we are “cognitively closed” with respect to this problem much like a rat or dog is cognitively incapable of solving, or even understanding, calculus problems. More specifically, McGinn claims that we are cognitively closed as to how the brain produces conscious awareness. McGinn concedes that some brain property produces conscious experience, but we cannot understand how this is so or even know what that brain property is. Our concept forming mechanisms simply will not allow us to grasp the physical and causal basis of consciousness. We are not conceptually suited to be able to do so.

McGinn does not entirely rest his argument on past failed attempts at explaining consciousness in materialist terms; instead, he presents another argument for his admittedly pessimistic conclusion. McGinn observes that we do not have a mental faculty that can access both consciousness and the brain. We access consciousness through introspection or the first-person perspective, but our access to the brain is through the use of outer spatial senses (e.g., vision) or a more third-person perspective. Thus we have no way to access both the brain and consciousness together, and therefore any explanatory link between them is forever beyond our reach.
Materialist responses are numerous. First, one might wonder why we can’t combine the two perspectives within certain experimental contexts. Both first-person and third-person scientific data about the brain and consciousness can be acquired and used to solve the hard problem. Even if a single person cannot grasp consciousness from both perspectives at the same time, why can’t a plausible physicalist theory emerge from such a combined approach? Presumably, McGinn would say that we are not capable of putting such a theory together in any appropriate way. Second, despite McGinn’s protests to the contrary, many will view the problem of explaining consciousness as a merely temporary limit of our theorizing, and not something which is unsolvable in principle (Dennett 1991). Third, it may be that McGinn expects too much; namely, grasping some causal link between the brain and consciousness. After all, if conscious mental states are simply identical to brain states, then there may simply be a “brute fact” that really does not need any further explaining. Indeed, this is sometimes also said in response to the explanatory gap and the hard problem, as we saw earlier. It may even be that some form of dualism is presupposed in McGinn’s argument, to the extent that brain states are said to “cause” or “give rise to” consciousness, instead of using the language of identity. Fourth, McGinn’s analogy to lower animals and mathematics is not quite accurate. Rats, for example, have no concept whatsoever of calculus. It is not as if they can grasp it to some extent but just haven’t figured out the answer to some particular problem within mathematics. Rats are just completely oblivious to calculus problems. On the other hand, we humans obviously do have some grasp on consciousness and on the workings of the brain—just see the references at the end of this entry! 
It is not clear, then, why we should accept the extremely pessimistic and universally negative conclusion that we can never discover the answer to the problem of consciousness, or, more specifically, why we could never understand the link between consciousness and the brain.

iv. Objection 4: Zombies

Unlike many of the above objections to materialism, the appeal to the possibility of zombies is often taken as both a problem for materialism and as a more positive argument for some form of dualism, such as property dualism. The philosophical notion of a “zombie” basically refers to conceivable creatures which are physically indistinguishable from us but lack consciousness entirely (Chalmers 1996). It certainly seems logically possible for there to be such creatures: “the conceivability of zombies seems…obvious to me…While this possibility is probably empirically impossible, it certainly seems that a coherent situation is described; I can discern no contradiction in the description” (Chalmers 1996: 96). Philosophers often contrast what is logically possible (in the sense of “that which is not self-contradictory”) with what is empirically possible given the actual laws of nature. Thus, it is logically possible for me to jump fifty feet in the air, but not empirically possible. Philosophers often use the notion of “possible worlds,” i.e., different ways that the world might have been, in describing such non-actual situations or possibilities. The objection, then, typically proceeds from such a possibility to the conclusion that materialism is false because materialism would seem to rule out that possibility. It has been fairly widely accepted (since Kripke 1972) that all true identity statements are necessarily true (that is, true in all possible worlds), and the same should therefore go for mind-brain identity claims. Since the possibility of zombies shows that mind-brain identity claims are not necessarily true, we should conclude that materialism is false. [See Identity Theory.]

It is impossible to do justice to all of the subtleties here. The literature in response to zombie, and related “conceivability,” arguments is enormous (see, for example, Hill 1997, Hill and McLaughlin 1999, Papineau 1998, 2002, Balog 1999, Block and Stalnaker 1999, Loar 1999, Yablo 1999, Perry 2001, Botterell 2001). A few lines of reply are as follows: First, it is sometimes objected that the conceivability of something does not really entail its possibility. Perhaps we can also conceive of water not being H2O, since there seems to be no logical contradiction in doing so, but, according to received wisdom from Kripke, that is really impossible. Perhaps, then, some things just seem possible but really aren’t. Much of the debate centers on various alleged similarities or dissimilarities between the mind-brain and water-H2O cases (or other such scientific identities). Indeed, the entire issue of the exact relationship between “conceivability” and “possibility” is the subject of an important recently published anthology (Gendler and Hawthorne 2002). Second, even if zombies are conceivable in the sense of logically possible, how can we draw a substantial metaphysical conclusion about the actual world? There is often suspicion on the part of materialists about what, if anything, such philosophers’ “thought experiments” can teach us about the nature of our minds. It seems that one could take virtually any philosophical or scientific theory about almost anything, conceive that it is possibly false, and then conclude that it is actually false. Something, perhaps, is generally wrong with this way of reasoning. Third, as we saw earlier (3b.i), there may be a very good reason why such zombie scenarios seem possible; namely, that we do not (at least, not yet) see what the necessary connection is between neural events and conscious mental events. On the one side, we are dealing with scientific third-person concepts and, on the other, we are employing phenomenal concepts. 
We are, perhaps, simply not yet in a position to understand such a necessary connection completely.
Debate and discussion on all four objections remains very active.

v. Varieties of Materialism

Despite the apparent simplicity of materialism, say, in terms of the identity between mental states and neural states, the fact is that there are many different forms of materialism. While a detailed survey of all varieties is beyond the scope of this entry, it is at least important to acknowledge the commonly drawn distinction between two kinds of “identity theory”: token-token and type-type materialism. Type-type identity theory is the stronger thesis and says that mental properties, such as “having a desire to drink some water” or “being in pain,” are literally identical with a brain property of some kind. Such identities were originally meant to be understood as on a par with, for example, the scientific identity between “being water” and “being composed of H2O” (Place 1956, Smart 1959). However, this view historically came under serious assault due to the fact that it seems to rule out the so-called “multiple realizability” of conscious mental states. The idea is simply that it seems perfectly possible for there to be other conscious beings (e.g., aliens, radically different animals) who can have those same mental states but who also are radically different from us physiologically (Fodor 1974). It seems that commitment to type-type identity theory led to the undesirable result that only organisms with brains like ours can have conscious states. Somewhat more technically, most materialists wish to leave room for the possibility that mental properties can be “instantiated” in different kinds of organisms. (But for more recent defenses of type-type identity theory see Hill and McLaughlin 1999, Papineau 1994, 1995, 1998, Polger 2004.) As a consequence, a more modest “token-token” identity theory has become preferable to many materialists. This view simply holds that each particular conscious mental event in some organism is identical with some particular brain process or event in that organism. 
This seems to preserve much of what the materialist wants but yet allows for the multiple realizability of conscious states, because both the human and the alien can still have a conscious desire for something to drink while each mental event is identical with a (different) physical state in each organism.

Taking the notion of multiple realizability very seriously has also led many to embrace functionalism, which is the view that conscious mental states should really only be identified with the functional role they play within an organism. For example, conscious pains are defined more in terms of input and output, such as causing bodily damage and avoidance behavior, as well as in terms of their relationship to other mental states. It is normally viewed as a form of materialism since virtually all functionalists also believe, like the token-token theorist, that something physical ultimately realizes that functional state in the organism, but functionalism does not, by itself, entail that materialism is true. Critics of functionalism, however, have long argued that such purely functional accounts cannot adequately explain the essential “feel” of conscious states, or that it seems possible to have two functionally equivalent creatures, one of whom lacks qualia entirely (Block 1980a, 1980b, Chalmers 1996; see also Shoemaker 1975, 1981).

Some materialists even deny the very existence of mind and mental states altogether, either on the ground that the concept of consciousness is irretrievably muddled (Wilkes 1984, 1988) or on the ground that the mentalistic notions found in folk psychology, such as desires and beliefs, will eventually be eliminated and replaced by physicalistic terms as neurophysiology matures (Churchland 1983). This is meant as analogous to past eliminations based on deeper scientific understanding; for example, we no longer need to speak of “ether” or “phlogiston.” Other eliminativists, more modestly, argue that there is no such thing as qualia when they are defined in certain problematic ways (Dennett 1988).

Finally, it should also be noted that some materialists deny that conscious mentality can be explained in terms of the physical, at least in the sense that the former cannot be “reduced” to the latter. On this view, materialism is true as an ontological or metaphysical doctrine, but facts about the mind cannot be deduced from facts about the physical world (Boyd 1980, Van Gulick 1992). In some ways, this might be viewed as a relatively harmless variation on materialist themes, but others object to the very coherence of this form of materialism (Kim 1987, 1998). Indeed, the line between such “non-reductive materialism” and property dualism is not always easy to draw, partly because the entire notion of “reduction” is ambiguous and a very complex topic in its own right. On a related front, some materialists are happy enough to talk about a somewhat weaker “supervenience” relation between mind and matter. Although “supervenience” is a highly technical notion with many variations, the idea is basically one of dependence (instead of identity); for example, the mental depends on the physical in the sense that any mental change must be accompanied by some physical change (see Kim 1993).

4. Specific Theories of Consciousness

Most specific theories of consciousness tend to be reductionist in some sense. The classic notion at work is that consciousness or individual conscious mental states can be explained in terms of something else or in some other terms. This section will focus on several prominent contemporary reductionist theories. We should, however, distinguish between those who attempt such a reduction directly in physicalistic, such as neurophysiological, terms and those who do so in mentalistic terms, such as by using unconscious mental states or other cognitive notions.

a. Neural Theories

The more direct reductionist approach can be seen in various, more specific, neural theories of consciousness. Perhaps best known is the theory offered by Francis Crick and Christof Koch (1990; see also Crick 1994, Koch 2004). The basic idea is that mental states become conscious when large numbers of neurons fire in synchrony and all have oscillations within the 35-75 hertz range (that is, 35-75 cycles per second). However, many philosophers and scientists have put forth other candidates for what, specifically, to identify in the brain with consciousness. This vast enterprise has come to be known as the search for the “neural correlates of consciousness” or NCCs (see section 5b below for more). The overall idea is to show how one or more specific kinds of neuro-chemical activity can underlie and explain conscious mental activity (Metzinger 2000). Of course, mere “correlation” is not enough for a fully adequate neural theory, and explaining just what counts as an NCC turns out to be more difficult than one might think (Chalmers 2000). Even Crick and Koch have acknowledged that they, at best, provide a necessary condition for consciousness, and that such firing patterns are not automatically sufficient for having conscious experience.

b. Representational Theories of Consciousness

Many current theories attempt to reduce consciousness in mentalistic terms. One broadly popular approach along these lines is to reduce consciousness to “mental representations” of some kind. The notion of a “representation” is of course very general and can be applied to photographs, signs, and various natural objects, such as the rings inside a tree. Much of what goes on in the brain, however, might also be understood in a representational way; for example, as mental events representing outer objects partly because they are caused by such objects in, say, cases of veridical visual perception. More specifically, philosophers will often call such representational mental states “intentional states,” that is, mental states which have representational content and are “about something” or “directed at something,” as when one has a thought about a house or a perception of a tree. Although intentional states are sometimes contrasted with phenomenal states, such as pains and color experiences, it is clear that many conscious states, such as visual perceptions, have both phenomenal and intentional properties. It should be noted that the relation between intentionality and consciousness is itself a major ongoing area of dispute, with some arguing that genuine intentionality actually presupposes consciousness in some way (Searle 1992, Siewert 1998, Horgan and Tienson 2002) while most representationalists insist that intentionality is prior to consciousness.

The general view that we can explain conscious mental states in terms of representational or intentional states is called “representationalism.” Although not automatically reductionist in spirit, most versions of representationalism do indeed attempt such a reduction; most representationalists, then, believe that there is room for a kind of “second-step” reduction to be filled in later by neuroscience. A related motivation for representational theories of consciousness is the belief that an account of representation or intentionality can more easily be given in naturalistic terms, such as causal theories whereby mental states are understood as representing outer objects in virtue of some reliable causal connection. The idea, then, is that if consciousness can be explained in representational terms and representation can be understood in purely physical terms, then there is the promise of a reductionist and naturalistic theory of consciousness. Most generally, a representationalist will typically hold that the phenomenal properties of experience (that is, the “qualia,” “what it is like of experience,” or “phenomenal character”) can be explained in terms of the experiences’ representational properties. Put another way, conscious mental states have no mental properties other than their representational properties, so two conscious states with all the same representational properties will not differ phenomenally. For example, when I look at the blue sky, what it is like for me to have a conscious experience of the sky is simply identical with my experience’s representation of the blue sky.

i. First-Order Representationalism

A first-order representational (FOR) theory of consciousness attempts to explain conscious experience primarily in terms of world-directed (or first-order) intentional states. Probably the two most cited FOR theories of consciousness are those of Fred Dretske (1995) and Michael Tye (1995, 2000), though there are many others as well (e.g., Harman 1990, Kirk 1994, Byrne 2001, Thau 2002, Droege 2003). Tye’s theory is more fully worked out and so will be the focus of this section. Like other FOR theorists, Tye holds that the representational content of my conscious experience (i.e., what my experience is about or directed at) is identical with the phenomenal properties of experience. Aside from reductionistic motivations, Tye and other FOR representationalists often use the somewhat technical notion of the “transparency of experience” as support for their view (Harman 1990). This is an argument based on the phenomenological first-person observation, which goes back to Moore (1903), that when one turns one’s attention away from, say, the blue sky and onto one’s experience itself, one is still only aware of the blueness of the sky. The experience itself is not blue; rather, one “sees right through” one’s experience to its representational properties, and there is nothing else to one’s experience over and above such properties.

Whatever the merits and exact nature of the argument from transparency (see Kind 2003), it is clear, of course, that not all mental representations are conscious, so the key question eventually becomes: What exactly distinguishes conscious from unconscious mental states (or representations)? What makes a mental state a conscious mental state? Here Tye defends what he calls “PANIC theory.” The acronym “PANIC” stands for poised, abstract, non-conceptual, intentional content. Without probing into every aspect of PANIC theory, Tye holds that at least some of the representational content in question is non-conceptual (N), which is to say that the subject can lack the concept for the properties represented by the experience in question, such as an experience of a certain shade of red that one has never seen before. (Actually, the exact nature or even existence of non-conceptual content of experience is itself a highly debated and difficult issue in philosophy of mind. See Gunther 2003.) Conscious states clearly must also have “intentional content” (IC) for any representationalist. Tye also asserts that such content is “abstract” (A) and not necessarily about particular concrete objects. This condition is needed to handle cases of hallucinations, where there are no concrete objects at all or cases where different objects look phenomenally alike. Perhaps most important for mental states to be conscious, however, is that such content must be “poised” (P), which is an importantly functional notion. The “key idea is that experiences and feelings…stand ready and available to make a direct impact on beliefs and/or desires. For example…feeling hungry… has an immediate cognitive effect, namely, the desire to eat….States with nonconceptual content that are not so poised lack phenomenal character [because]…they arise too early, as it were, in the information processing” (Tye 2000: 62).

One objection to Tye’s theory is that it does not really address the hard problem of phenomenal consciousness (see section 3b.i). This is partly because what really seems to be doing most of the work on Tye’s PANIC account is the very functional sounding “poised” notion, which is perhaps closer to Block’s access consciousness (see section 1) and is therefore not necessarily able to explain phenomenal consciousness (see Kriegel 2002). In short, it is difficult to see just how Tye’s PANIC account might not equally apply to unconscious representations and thus how it really explains phenomenal consciousness.

Other standard objections to Tye’s theory, as well as to other FOR accounts, include the concern that it does not cover all kinds of conscious states. Some conscious states seem not to be “about” anything, such as pains, anxiety, or after-images, and so would be non-representational conscious states. If so, then conscious experience cannot generally be explained in terms of representational properties (Block 1996). Tye responds that pains, itches, and the like do represent, in the sense that they represent parts of the body. And after-images, hallucinations, and the like either misrepresent (which is still a kind of representation) or the conscious subject still takes them to have representational properties from the first-person point of view. Indeed, Tye (2000) admirably goes to great lengths and argues convincingly in response to a whole host of alleged counter-examples to representationalism. Among them historically are various hypothetical cases of inverted qualia (see Shoemaker 1982), the mere possibility of which is sometimes taken as devastating to representationalism. These are cases where behaviorally indistinguishable individuals have inverted color perceptions of objects, such that person A visually experiences a lemon the way that person B experiences a ripe tomato with respect to color, and so on for all yellow and red objects. Isn’t it possible that there are two individuals whose color experiences are inverted with respect to the objects of perception? (For more on the importance of color in philosophy, see Hardin 1986.)
A somewhat different twist on the inverted spectrum is famously put forth in Block’s (1990) Inverted Earth case. On Inverted Earth every object has the complementary color to the one it has here, but we are asked to imagine that a person is equipped with color-inverting lenses and then sent to Inverted Earth completely ignorant of those facts. Since the color inversions cancel out, the phenomenal experiences remain the same, yet there certainly seem to be different representational properties of objects involved. The strategy on the part of critics, in short, is to think of counter-examples (either actual or hypothetical) whereby there is a difference between the phenomenal properties in experience and the relevant representational properties in the world. Such objections can, perhaps, be answered by Tye and others in various ways, but significant debate continues (Macpherson 2005). Intuitions also dramatically differ as to the very plausibility and value of such thought experiments. (For more, see Seager 1999, chapters 6 and 7. See also Chalmers 2004 for an excellent discussion of the dizzying array of possible representationalist positions.)

ii. Higher-Order Representationalism

As we have seen, one question that should be answered by any theory of consciousness is: What makes a mental state a conscious mental state? There is a long tradition that has attempted to understand consciousness in terms of some kind of higher-order awareness. For example, John Locke (1689/1975) once said that “consciousness is the perception of what passes in a man’s own mind.” This intuition has been revived by a number of philosophers (Rosenthal, 1986, 1993b, 1997, 2000, 2004; Gennaro 1996a; Armstrong, 1968, 1981; Lycan, 1996, 2001). In general, the idea is that what makes a mental state conscious is that it is the object of some kind of higher-order representation (HOR). A mental state M becomes conscious when there is a HOR of M. A HOR is a “meta-psychological” state, i.e., a mental state directed at another mental state. So, for example, my desire to write a good encyclopedia entry becomes conscious when I am (non-inferentially) “aware” of the desire. Intuitively, it seems that conscious states, as opposed to unconscious ones, are mental states that I am “aware of” in some sense. Any theory which attempts to explain consciousness in terms of higher-order states is known as a higher-order (HO) theory of consciousness. It is best initially to use the more neutral term “representation” because there are a number of different kinds of higher-order theory, depending upon how one characterizes the HOR in question. HO theories, thus, attempt to explain consciousness in mentalistic terms, that is, by reference to such notions as “thoughts” and “awareness.” Conscious mental states arise when two unconscious mental states are related in a certain specific way; namely, that one of them (the HOR) is directed at the other (M). HO theorists are united in the belief that their approach can better explain consciousness than any purely FOR theory, which has significant difficulty in explaining the difference between unconscious and conscious mental states.

There are various kinds of HO theory with the most common division between higher-order thought (HOT) theories and higher-order perception (HOP) theories. HOT theorists, such as David M. Rosenthal, think it is better to understand the HOR as a thought of some kind. HOTs are treated as cognitive states involving some kind of conceptual component. HOP theorists urge that the HOR is a perceptual or experiential state of some kind (Lycan 1996) which does not require the kind of conceptual content invoked by HOT theorists. Partly due to Kant (1781/1965), HOP theory is sometimes referred to as “inner sense theory” as a way of emphasizing its sensory or perceptual aspect. Although HOT and HOP theorists agree on the need for a HOR theory of consciousness, they do sometimes argue for the superiority of their respective positions (such as in Rosenthal 2004 and Lycan 2004). Some philosophers, however, have argued that the difference between these theories is perhaps not as important or as clear as some think it is (Güzeldere 1995, Gennaro 1996a, Van Gulick 2000).

A common initial objection to HOR theories is that they are circular and lead to an infinite regress. It might seem that the HOT theory results in circularity by defining consciousness in terms of HOTs. It also might seem that an infinite regress results because a conscious mental state must be accompanied by a HOT, which, in turn, must be accompanied by another HOT ad infinitum. However, the standard reply is that when a conscious mental state is a first-order world-directed state, the higher-order thought (HOT) is not itself conscious; otherwise, circularity and an infinite regress would follow. When the HOT is itself conscious, there is a yet higher-order (or third-order) thought directed at the second-order state. In this case, we have introspection, which involves a conscious HOT directed at an inner mental state. When one introspects, one’s attention is directed back into one’s mind. For example, what makes my desire to write a good entry a conscious first-order desire is that there is a (non-conscious) HOT directed at the desire. In this case, my conscious focus is directed at the entry and my computer screen, so I am not consciously aware of having the HOT from the first-person point of view. When I introspect that desire, however, I then have a conscious HOT (accompanied by a yet higher, third-order, HOT) directed at the desire itself (see Rosenthal 1986).

Peter Carruthers (2000) has proposed another possibility within HO theory; namely, that it is better for various reasons to think of the HOTs as dispositional states instead of the standard view that the HOTs are actual, though he also understands his “dispositional HOT theory” to be a form of HOP theory (Carruthers 2004). The basic idea is that the conscious status of an experience is due to its availability to higher-order thought. So “conscious experience occurs when perceptual contents are fed into a special short-term buffer memory store, whose function is to make those contents available to cause HOTs about themselves.” (Carruthers 2000: 228). Some first-order perceptual contents are available to a higher-order “theory of mind mechanism,” which transforms those representational contents into conscious contents. Thus, no actual HOT occurs. Instead, according to Carruthers, some perceptual states acquire a dual intentional content; for example, a conscious experience of red not only has a first-order content of “red,” but also has the higher-order content “seems red” or “experience of red.” Carruthers also makes interesting use of so-called “consumer semantics” in order to fill out his theory of phenomenal consciousness. The content of a mental state depends, in part, on the powers of the organisms which “consume” that state, e.g., the kinds of inferences which the organism can make when it is in that state. Daniel Dennett (1991) is sometimes credited with an earlier version of a dispositional account (see Carruthers 2000, chapter ten). Carruthers’ dispositional theory is often criticized by those who, among other things, do not see how the mere disposition toward a mental state can render it conscious (Rosenthal 2004; see also Gennaro 2004; for more, see Consciousness, Higher Order Theories of.)

It is worth briefly noting a few typical objections to HO theories (many of which can be found in Byrne 1997): First, and perhaps most common, is that various animals (and even infants) are not likely to have the conceptual sophistication required for HOTs, which would render animal (and infant) consciousness very unlikely (Dretske 1995, Seager 2004). Are cats and dogs capable of having complex higher-order thoughts such as “I am in mental state M”? Although most who bring forth this objection are not HO theorists, Peter Carruthers (1989) is one HO theorist who actually embraces the conclusion that (most) animals do not have phenomenal consciousness. Gennaro (1993, 1996) has replied to Carruthers on this point, arguing, for example, that the HOTs need not be as sophisticated as it might initially appear and that there is ample comparative neurophysiological evidence supporting the conclusion that animals have conscious mental states. Most HO theorists do not wish to accept the absence of animal or infant consciousness as a consequence of holding the theory. The debate continues, however, in Carruthers (2000, 2005) and Gennaro (2004).

A second objection has been referred to as the “problem of the rock” (Stubenberg 1998) and the “generality problem” (Van Gulick 2000, 2004), but it is originally due to Alvin Goldman (1993). When I have a thought about a rock, it is certainly not true that the rock becomes conscious. So why should I suppose that a mental state becomes conscious when I think about it? This is puzzling to many, and the objection forces HO theorists to explain just how adding the HO state transforms an unconscious state into a conscious one. There have been, however, a number of responses to this kind of objection (Rosenthal 1997, Lycan 1996, Van Gulick 2000, 2004, Gennaro 2005). A common theme is that there is a principled difference in the objects of the HO states in question. Rocks and the like are not mental states in the first place, and so HO theorists are first and foremost trying to explain how a mental state becomes conscious. The objects of the HO states must be “in the head.”
Third, the above leads somewhat naturally to an objection related to Chalmers’ hard problem (section 3b.i). It might be asked just how exactly any HO theory really explains the subjective or phenomenal aspect of conscious experience. How or why does a mental state come to have a first-person qualitative “what it is like” aspect by virtue of the presence of a HOR directed at it? It is probably fair to say that HO theorists have been slow to address this problem, though a number of overlapping responses have emerged (see also Gennaro 2005 for more extensive treatment). Some argue that this objection misconstrues the main and more modest purpose of (at least, their) HO theories. The claim is that HO theories are theories of consciousness only in the sense that they are attempting to explain what differentiates conscious from unconscious states, i.e., in terms of a higher-order awareness of some kind. A full account of “qualitative properties” or “sensory qualities” (which can themselves be non-conscious) can be found elsewhere in their work, but is independent of their theory of consciousness (Rosenthal 1991, Lycan 1996, 2001). Thus, a full explanation of phenomenal consciousness does require more than a HO theory, but that is no objection to HO theories as such. Another response is that proponents of the hard problem unjustly raise the bar as to what would count as a viable explanation of consciousness so that any such reductivist attempt would inevitably fall short (Carruthers 2000). Part of the problem, then, is a lack of clarity about what would even count as an explanation of consciousness (Van Gulick 1995; see also section 3b). Moreover, anyone familiar with the literature knows that there are significant terminological difficulties in the use of various crucial terms which sometimes inhibits genuine progress (but see Byrne 2004 for some helpful clarification).

A fourth important objection to HO approaches is the question of how such theories can explain cases where the HO state might misrepresent the lower-order (LO) mental state (Byrne 1997, Neander 1998, Levine 2001). After all, if we have a representational relation between two states, it seems possible for misrepresentation or malfunction to occur. If it does, then what explanation can be offered by the HO theorist? If my LO state registers a red percept and my HO state registers a thought about something green due, say, to some neural misfiring, then what happens? It seems that problems loom for any answer given by a HO theorist and the cause of the problem has to do with the very nature of the HO theorist’s belief that there is a representational relation between the LO and HO states. For example, if the HO theorist takes the option that the resulting conscious experience is reddish, then it seems that the HO state plays no role in determining the qualitative character of the experience. This objection forces HO theorists to be clearer about just how to view the relationship between the LO and HO states. (For one reply, see Gennaro 2004.) Debate is ongoing and significant both on varieties of HO theory and in terms of the above objections (see Gennaro 2004a). There is also interdisciplinary interest in how various HO theories might be realized in the brain.

iii. Hybrid Representational Accounts

A related and increasingly popular version of representational theory holds that the meta-psychological state in question should be understood as intrinsic to (or part of) an overall complex conscious state. This stands in contrast to the standard view that the HO state is extrinsic to (i.e., entirely distinct from) its target mental state. The assumption, made by Rosenthal for example, about the extrinsic nature of the meta-thought has increasingly come under attack, and thus various hybrid representational theories can be found in the literature. One motivation for this movement is growing dissatisfaction with standard HO theory’s ability to handle some of the objections addressed in the previous section. Another reason is renewed interest in a view somewhat closer to the one held by Franz Brentano (1874/1973) and various other followers, normally associated with the phenomenological tradition (Husserl 1913/1931, 1929/1960; Sartre 1956; see also Smith 1986, 2004). To varying degrees, these views have in common the idea that conscious mental states, in some sense, represent themselves, which then still involves having a thought about a mental state, just not a distinct or separate state. Thus, when one has a conscious desire for a cold glass of water, one is also aware that one is in that very state. The conscious desire both represents the glass of water and itself. It is this “self-representing” which makes the state conscious.
These theories can go by various names, which sometimes seem in conflict, and have added significantly in recent years to the acronyms which abound in the literature. For example, Gennaro (1996a, 2002, 2004, 2006) has argued that, when one has a first-order conscious state, the HOT is better viewed as intrinsic to the target state, so that we have a complex conscious state with parts. Gennaro calls this the “wide intrinsicality view” (WIV) and he also argues that Jean-Paul Sartre’s theory of consciousness can be understood in this way (Gennaro 2002).

Gennaro holds that conscious mental states should be understood (as Kant might have today) as global brain states which are combinations of passively received perceptual input and presupposed higher-order conceptual activity directed at that input. Higher-order concepts in the meta-psychological thoughts are presupposed in having first-order conscious states. Robert Van Gulick (2000, 2004, 2006) has also explored the alternative that the HO state is part of an overall global conscious state. He calls such states “HOGS” (Higher-Order Global States) whereby a lower-order unconscious state is “recruited” into a larger state, which becomes conscious partly due to the implicit self-awareness that one is in the lower-order state. Both Gennaro and Van Gulick have suggested that conscious states can be understood materialistically as global states of the brain, and it would be better to treat the first-order state as part of the larger complex brain state. This general approach is also forcefully advocated in a series of papers by Uriah Kriegel (such as Kriegel 2003a, 2003b, 2005, 2006) and is even the subject of an entire anthology debating its merits (Kriegel and Williford 2006). Kriegel has used several different names for his “neo-Brentanian theory,” such as the SOMT (Same-Order Monitoring Theory) and, more recently, the “self-representational theory of consciousness.” To be sure, the notion of a mental state representing itself or a mental state with one part representing another part is in need of further development and is perhaps somewhat mysterious. Nonetheless, there is agreement among these authors that conscious mental states are, in some important sense, reflexive or self-directed. And, once again, there is keen interest in developing this model in a way that coheres with the latest neurophysiological research on consciousness. 
One point of emphasis is the concept of global meta-representation within a complex brain state, and attempts are underway to identify just how such an account can be realized in the brain.

It is worth mentioning that this idea was also briefly explored by Thomas Metzinger, who focused on the fact that consciousness “is something that unifies or synthesizes experience” (Metzinger 1995: 454). Metzinger calls this the process of “higher-order binding” and thus uses the acronym HOB. Others who hold some form of the self-representational view include Kobes (1995), Caston (2002), Williford (2006), and Brook and Raymont (2006); even Carruthers’ (2000) theory can be viewed in this light since he contends that conscious states have two representational contents. Thomas Natsoulas also has a series of papers defending a similar view, beginning with Natsoulas 1996. Some authors (such as Gennaro) view this hybrid position as a modified version of HOT theory; indeed, Rosenthal (2004) has called it “intrinsic higher-order theory.” Van Gulick also clearly wishes to preserve the HO element in his HOGS. Others, such as Kriegel, are not inclined to call their views “higher-order” at all. To some extent, this is a terminological dispute, but, despite important similarities, there are also subtle differences between these hybrid alternatives. Like HO theorists, however, those who advocate this general approach all take very seriously the notion that a conscious mental state M is a state that subject S is (non-inferentially) aware that S is in. By contrast, one is obviously not aware of one’s unconscious mental states. Thus, there are various attempts to make sense of and elaborate upon this key intuition in a way that is, as it were, “in-between” standard FO and HO theory. (See also Lurz 2003 and 2004 for yet another interesting hybrid account.)

c. Other Cognitive Theories

Aside from the explicitly representational approaches discussed above, there are also related attempts to explain consciousness in other cognitive terms. The two most prominent such theories are worth describing here:
Daniel Dennett (1991, 2005) has put forth what he calls the Multiple Drafts Model (MDM) of consciousness. Although similar in some ways to representationalism, Dennett is most concerned that materialists avoid falling prey to what he calls the “myth of the Cartesian theater,” the notion that there is some privileged place in the brain where everything comes together to produce conscious experience. Instead, the MDM holds that all kinds of mental activity occur in the brain by parallel processes of interpretation, all of which are under frequent revision. The MDM rejects the idea of some “self” as an inner observer; rather, the self is the product or construction of a narrative which emerges over time. Dennett is also well known for rejecting the very assumption that there is a clear line to be drawn between conscious and unconscious mental states in terms of the problematic notion of “qualia.” He influentially rejects strong emphasis on any phenomenological or first-person approach to investigating consciousness, advocating instead what he calls “heterophenomenology” according to which we should follow a more neutral path “leading from objective physical science and its insistence on the third person point of view, to a method of phenomenological description that can (in principle) do justice to the most private and ineffable subjective experiences.” (1991: 72)

Bernard Baars’ Global Workspace Theory (GWT) model of consciousness is probably the most influential theory proposed among psychologists (Baars 1988, 1997). The basic idea and metaphor is that we should think of the entire cognitive system as built on a “blackboard architecture” which is a kind of global workspace. According to GWT, unconscious processes and mental states compete for the spotlight of attention, from which information is “broadcast globally” throughout the system. Consciousness consists in such global broadcasting and is therefore also, according to Baars, an important functional and biological adaptation. We might say that consciousness is thus created by a kind of global access to select bits of information in the brain and nervous system. Despite Baars’ frequent use of “theater” and “spotlight” metaphors, he argues that his view does not entail the presence of the material Cartesian theater that Dennett is so concerned to avoid. It is, in any case, an empirical matter just how the brain performs the functions he describes, such as detecting mechanisms of attention.

Objections to these cognitive theories include the charge that they do not really address the hard problem of consciousness (as described in section 3b.i), but only the “easy” problems. Dennett is also often accused of explaining away consciousness rather than really explaining it. It is also interesting to think about Baars’ GWT in light of Block’s distinction between access and phenomenal consciousness (see section 1). Does Baars’ theory only address access consciousness rather than the more difficult to explain phenomenal consciousness? (Two other psychological cognitive theories worth noting are the ones proposed by George Mandler 1975 and Tim Shallice 1988.)

d. Quantum Approaches

Finally, there are those who look deep beneath the neural level to the field of quantum mechanics, basically the study of sub-atomic particles, to find the key to unlocking the mysteries of consciousness. The bizarre world of quantum physics is quite different from the deterministic world of classical physics, and a major area of research in its own right. Such authors place the locus of consciousness at a very fundamental physical level. This somewhat radical, though exciting, option is explored most notably by physicist Roger Penrose (1989, 1994) and anesthesiologist Stuart Hameroff (1998). The basic idea is that consciousness arises through quantum effects which occur in subcellular neural structures known as microtubules, which are protein structures forming part of the cell’s cytoskeleton. There are also other quantum approaches which aim to explain the coherence of consciousness (Marshall and Zohar 1990) or use the “holistic” nature of quantum mechanics to explain consciousness (Silberstein 1998, 2001). It is difficult to assess these somewhat exotic approaches at present. Given the puzzling and often very counterintuitive nature of quantum physics, it is unclear whether such approaches will prove genuinely scientifically valuable methods in explaining consciousness. One concern is simply that these authors are trying to explain one puzzling phenomenon (consciousness) in terms of another mysterious natural phenomenon (quantum effects). Thus, the thinking seems to go, perhaps the two are essentially related somehow, and other physicalistic accounts, such as those pitched at the neuro-chemical level, are looking in the wrong place. Although many attempts to explain consciousness rely on conjecture or speculation, quantum approaches may be among the most speculative in this regard. Of course, this doesn’t mean that some such theory isn’t correct.
One exciting aspect of this approach is the resulting interdisciplinary interest it has generated among physicists and other scientists in the problem of consciousness.

5. Consciousness and Science: Key Issues

Over the past two decades there has been an explosion of interdisciplinary work in the science of consciousness. Some of the credit must go to the groundbreaking 1986 book by Patricia Churchland entitled Neurophilosophy. In this section, three of the most important such areas are addressed.

a. The Unity of Consciousness/The Binding Problem

Conscious experience seems to be “unified” in an important sense; this crucial feature of consciousness played an important role in the philosophy of Kant who argued that unified conscious experience must be the product of the (presupposed) synthesizing work of the mind. Getting clear about exactly what is meant by the “unity of consciousness” and explaining how the brain achieves such unity has become a central topic in the study of consciousness. There are, no doubt, many different senses of “unity” (see Tye 2003; Bayne and Chalmers 2003), but perhaps most common is the notion that, from the first-person point of view, we experience the world in an integrated way and as a single phenomenal field of experience. (For an important anthology on the subject, see Cleeremans 2003.) However, when one looks at how the brain processes information, one only sees discrete regions of the cortex processing separate aspects of perceptual objects. Even different aspects of the same object, such as its color and shape, are processed in different parts of the brain. Given that there is no “Cartesian theater” in the brain where all this information comes together, the problem arises as to just how the resulting conscious experience is unified. What mechanisms allow us to experience the world in such a unified way? What happens when this unity breaks down, as in various pathological cases? The “problem of integrating the information processed by different regions of the brain is known as the binding problem” (Cleeremans 2003: 1). Thus, the so-called “binding problem” is inextricably linked to explaining the unity of consciousness. As was seen earlier with neural theories (section 4a) and as will be seen below on the neural correlates of consciousness (5b), some attempts to solve the binding problem have to do with trying to isolate the precise brain mechanisms responsible for consciousness. 
For example, Crick and Koch’s (1990) idea that synchronous neural firings are (at least) necessary for consciousness can also be viewed as an attempt to explain how disparate neural networks bind together separate pieces of information to produce unified subjective conscious experience. Perhaps the binding problem and the hard problem of consciousness (section 3b.i) are very closely connected. If the binding problem can be solved, then we arguably have identified the elusive neural correlate of consciousness and have, therefore, perhaps even solved the hard problem. In addition, perhaps the explanatory gap between third-person scientific knowledge and first-person unified conscious experience can also be bridged. Thus, this exciting area of inquiry is central to some of the deepest questions in the philosophical and scientific exploration of consciousness.

b. The Neural Correlates of Consciousness (NCCs)

As was seen earlier in discussing neural theories of consciousness (section 4a), the search for the so-called “neural correlates of consciousness” (NCCs) is a major preoccupation of philosophers and scientists alike (Metzinger 2000). Narrowing down the precise brain property responsible for consciousness is a different and far more difficult enterprise than merely holding a generic belief in some form of materialism. One leading candidate is offered by Francis Crick and Christof Koch 1990 (see also Crick 1994, Koch 2004). The basic idea is that mental states become conscious when large numbers of neurons all fire in synchrony with one another (oscillations within the 35-75 hertz range, that is, 35-75 cycles per second). Currently, one method used is simply to study some aspect of neural functioning with sophisticated detecting equipment (such as MRIs and PET scans) and then correlate it with first-person reports of conscious experience. Another method is to study the difference in brain activity between those under anesthesia and those not under any such influence. A detailed survey would be impossible to give here, but a number of other candidates for the NCC have emerged over the past two decades, including reentrant cortical feedback loops in the neural circuitry throughout the brain (Edelman 1989, Edelman and Tononi 2000), NMDA-mediated transient neural assemblies (Flohr 1995), and emotive somatosensory homeostatic processes in the frontal lobe (Damasio 1999). To elaborate briefly on Flohr’s theory, the idea is that anesthetics destroy conscious mental activity because they interfere with the functioning of NMDA synapses between neurons, which are those that are dependent on N-methyl-D-aspartate receptors. These and other NCCs are explored at length in Metzinger (2000). The search for NCCs remains a significant and active aspect of current scientific research in the field.

One problem with some of the above candidates is determining exactly how they are related to consciousness. For example, although a case can be made that some of them are necessary for conscious mentality, it is unclear that they are sufficient. That is, some of the above seem to occur unconsciously as well. And pinning down a narrow enough necessary condition is not as easy as it might seem. Another general worry is with the very use of the term “correlate.” As any philosopher, scientist, and even undergraduate student should know, saying that “A is correlated with B” is rather weak (though it is an important first step), especially if one wishes to establish the stronger identity claim between consciousness and neural activity. Even if such a correlation can be established, we cannot automatically conclude that there is an identity relation. Perhaps A causes B or B causes A, and that’s why we find the correlation. Maybe there is some other neural process C which causes both A and B. Even most dualists can accept such interpretations. “Correlation” is not even the same as “cause,” let alone enough to establish “identity.” Finally, some NCCs are not even necessarily put forth as candidates for all conscious states, but rather for certain specific kinds of consciousness (e.g., visual).

c. Philosophical Psychopathology

Philosophers have long been intrigued by disorders of the mind and consciousness. Part of the interest is presumably that if we can understand how consciousness goes wrong, then that can help us to theorize about the normally functioning mind. Going back at least as far as John Locke (1689/1975), there has been some discussion about the philosophical implications of multiple personality disorder (MPD), which is now called “dissociative identity disorder” (DID). Questions abound: Could there be two centers of consciousness in one body? What makes a person the same person over time? What makes a person a person at any given time? These questions are closely linked to the traditional philosophical problem of personal identity, which is also importantly related to some aspects of consciousness research. Much the same can be said for memory disorders, such as various forms of amnesia (see Gennaro 1996a, chapter 9). Does consciousness require some kind of autobiographical memory or psychological continuity? On a related front, there is significant interest in experimental results from patients who have undergone a commissurotomy, which is usually performed to relieve symptoms of severe epilepsy when all else fails. During this procedure, the nerve fibers connecting the two brain hemispheres are cut, resulting in so-called “split-brain” patients.

Philosophical interest is so high that there is now a book series called Philosophical Psychopathology published by MIT Press. Another rich source of information comes from the provocative and accessible writings of neurologists on a whole host of psychopathologies, most notably Oliver Sacks (starting with his 1987 book) and, more recently, V. S. Ramachandran (2004; see also Ramachandran and Blakeslee 1998). Another launching point came from the discovery of the phenomenon known as “blindsight” (Weiskrantz 1986), which is very frequently discussed in the philosophical literature regarding its implications for consciousness. Blindsight patients are blind in a well-defined part of the visual field (due to cortical damage), yet, when forced to guess, can identify, with a higher than expected degree of accuracy, the location or orientation of an object in the blind field.

There is also philosophical interest in many other disorders, such as phantom limb pain (where one feels pain in a missing or amputated limb), various agnosias (such as visual agnosia where one is not capable of visually recognizing everyday objects), and anosognosia (which is denial of illness, such as when one claims that a paralyzed limb is still functioning, or when one denies that one is blind). These phenomena raise a number of important philosophical questions and have forced philosophers to rethink some very basic assumptions about the nature of mind and consciousness. Much has also recently been learned about autism and various forms of schizophrenia. A common view is that these disorders involve some kind of deficit in self-consciousness or in one’s ability to use certain self-concepts. (For a nice review article, see Graham 2002.) Synesthesia is also a fascinating abnormal phenomenon, although not really a “pathological” condition as such (Cytowic 2003). Those with synesthesia literally have taste sensations when seeing certain shapes or have color sensations when hearing certain sounds. It is thus an often bizarre mixing of incoming sensory input via different modalities.
One of the exciting results of this relatively new sub-field is the important interdisciplinary interest that it has generated among philosophers, psychologists, and scientists.

6. Animal and Machine Consciousness

Two final areas of interest involve animal and machine consciousness. In the former case it is clear that we have come a long way from the Cartesian view that animals are mere “automata” and that they do not even have conscious experience (perhaps partly because they do not have immortal souls). In addition to the obviously significant behavioral similarities between humans and many animals, much more is known today about other physiological similarities, such as brain and DNA structures. To be sure, there are important differences as well, and there are, no doubt, some genuinely difficult “grey areas” where one might have legitimate doubts about the consciousness of some animals or organisms, such as small rodents, some birds and fish, and especially various insects.

Nonetheless, it seems fair to say that most philosophers today readily accept the fact that a significant portion of the animal kingdom is capable of having conscious mental states, though there are still notable exceptions to that rule (Carruthers 2000, 2005). Of course, this is not to say that various animals can have all of the same kinds of sophisticated conscious states enjoyed by human beings, such as reflecting on philosophical and mathematical problems, enjoying artworks, thinking about the vast universe or the distant past, and so on. However, it still seems reasonable to believe that animals can have at least some conscious states, from rudimentary pains to various perceptual states and perhaps even to some level of self-consciousness. A number of key areas are under continuing investigation. For example, to what extent can animals recognize themselves, such as in a mirror, in order to demonstrate some level of self-awareness? To what extent can animals deceive or empathize with other animals, either of which would indicate awareness of the minds of others? These and other important questions are at the center of much current theorizing about animal cognition. (See Keenan et al. 2003 and Beckoff et al. 2002.) In some ways, the problem of knowing about animal minds is an interesting sub-area of the traditional epistemological “problem of other minds”: How do we even know that other humans have conscious minds? What justifies such a belief?

The possibility of machine (or robot) consciousness has intrigued philosophers and non-philosophers alike for decades. Could a machine really think or be conscious? Could a robot really subjectively experience the smelling of a rose or the feeling of pain? One important early launching point was a well-known paper by the mathematician Alan Turing (1950) which proposed what has come to be known as the “Turing test” for machine intelligence and thought (and perhaps consciousness as well). The basic idea is that if a machine could fool an interrogator (who could not see the machine) into thinking that it was human, then we should say it thinks or, at least, has intelligence. However, Turing was probably overly optimistic; even today it is doubtful that anything can pass the Turing Test, as most programs are specialized and have very narrow uses. One cannot ask the machine about virtually anything, as Turing had envisioned. Moreover, even if a machine or robot could pass the Turing Test, many remain very skeptical as to whether this demonstrates genuine machine thinking, let alone consciousness. For one thing, many philosophers would not take such purely behavioral (e.g., linguistic) evidence to support the conclusion that machines are capable of having phenomenal first person experiences. Merely using words like “red” doesn’t ensure that there is the corresponding sensation of red or real grasp of the meaning of “red.” Turing himself considered numerous objections and offered his own replies, many of which are still debated today.

Another much discussed argument is John Searle’s (1980) famous Chinese Room Argument, which has spawned an enormous amount of literature since its original publication (see also Searle 1984; Preston and Bishop 2002). Searle is concerned to reject what he calls “strong AI,” the view that suitably programmed computers literally have a mind; that is, they really understand language and actually have other mental capacities similar to humans. This is contrasted with “weak AI,” the view that computers are merely useful tools for studying the mind. Searle’s argument rests on a thought experiment: he imagines himself in a room, following English instructions for manipulating Chinese symbols in order to produce appropriate answers to questions posed in Chinese. Despite the appearance of understanding Chinese (say, from outside the room), Searle argues that he does not understand Chinese at all; he is merely manipulating symbols on the basis of syntax alone. Since this is what computers do, no computer, merely by running a program, genuinely understands anything; therefore, strong AI is false. Searle replies to numerous possible criticisms in his original paper (which also comes with extensive peer commentary), but suffice it to say that not everyone is satisfied with his responses. For example, it might be argued that the entire room or “system” understands Chinese: each part of the room, including Searle himself, does not understand Chinese, but the entire system, which includes the instructions and so on, does.
Searle’s larger argument, however, is that one cannot get semantics (meaning) from syntax (formal symbol manipulation).

Despite heavy criticism of the argument, two central issues are raised by Searle which continue to be of deep interest. First, how and when does one distinguish mere “simulation” of some mental activity from genuine “duplication”? Searle’s view is that computers are, at best, merely simulating understanding and thought, not really duplicating it. Much as a computerized hurricane simulation does not duplicate a real hurricane, Searle insists the same goes for any alleged computer “mental” activity. We do, after all, distinguish between real diamonds or leather and mere simulations, which are just not the real thing. Second, and perhaps even more important, when considering just why computers really can’t think or be conscious, Searle interestingly reverts to a biologically based argument. In essence, he says that computers or robots are just not made of the right stuff with the right kind of “causal powers” to produce genuine thought or consciousness. After all, even a materialist does not have to allow that any kind of physical stuff can produce consciousness, any more than any type of physical substance can, say, conduct electricity. Of course, this raises a whole host of other questions which go to the heart of the metaphysics of consciousness. To what extent must an organism or system be physiologically like us in order to be conscious? Why is having a certain biological or chemical make up necessary for consciousness? Why exactly couldn’t an appropriately built robot be capable of having conscious mental states? How could we even know either way? However one answers these questions, it seems that building a truly conscious Commander Data is, at best, still just science fiction.

In any case, the growing areas of cognitive science and artificial intelligence are major fields within philosophy of mind and can importantly bear on philosophical questions of consciousness. Much of current research focuses on how to program a computer to model the workings of the human brain, such as with so-called “neural (or connectionist) networks.”

7. References and Further Reading

Armstrong, D. A Materialist Theory of Mind. London: Routledge and Kegan Paul, 1968.
Armstrong, D. “What is Consciousness?” In The Nature of Mind. Ithaca, NY: Cornell University Press, 1981.
Baars, B. A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press, 1988.
Baars, B. In The Theater of Consciousness. New York: Oxford University Press, 1997.
Baars, B., Banks, W., and Newman, J. eds. Essential Sources in the Scientific Study of Consciousness. Cambridge, MA: MIT Press, 2003.
Balog, K. “Conceivability, Possibility, and the Mind-Body Problem.” In Philosophical Review 108: 497-528, 1999.
Bayne, T. & Chalmers, D. “What is the Unity of Consciousness?” In Cleeremans, 2003.
Beckoff, M., Allen, C., and Burghardt, G. The Cognitive Animal: Empirical and Theoretical Perspectives on Animal Cognition. Cambridge, MA: MIT Press, 2002.
Blackmore, S. Consciousness: An Introduction. Oxford: Oxford University Press, 2004.
Block, N. “Troubles with Functionalism.” In Readings in the Philosophy of Psychology, Volume 1, Ned Block, ed., Cambridge, MA: Harvard University Press, 1980a.
Block, N. “Are Absent Qualia Impossible?” Philosophical Review 89: 257-74, 1980b.
Block, N. “Inverted Earth.” In Philosophical Perspectives, 4, J. Tomberlin, ed., Atascadero, CA: Ridgeview Publishing Company, 1990.
Block, N. “On a Confusion about the Function of Consciousness.” In Behavioral and Brain Sciences 18: 227-47, 1995.
Block, N. “Mental Paint and Mental Latex.” In E. Villanueva, ed. Perception. Atascadero, CA: Ridgeview, 1996.
Block, N., Flanagan, O. & Guzeledere, G. eds. The Nature of Consciousness. Cambridge, MA: MIT Press, 1997.
Block, N. & Stalnaker, R. “Conceptual Analysis, Dualism, and the Explanatory Gap.” Philosophical Review 108: 1-46, 1999.
Botterell, A. “Conceiving what is not there.” In Journal of Consciousness Studies 8 (8): 21-42, 2001.
Boyd, R. “Materialism without Reductionism: What Physicalism does not entail.” In N. Block, ed. Readings in the Philosophy of Psychology, Vol.1. Cambridge, MA: Harvard University Press, 1980.
Brentano, F. Psychology from an Empirical Standpoint. New York: Humanities, 1874/1973.
Brook, A. Kant and the Mind. New York: Cambridge University Press, 1994.
Brook, A. & Raymont, P. A Unified Theory of Consciousness. Forthcoming, 2006.
Byrne, A. “Some like it HOT: Consciousness and Higher-Order Thoughts.” In Philosophical Studies 86:103-29, 1997.
Byrne, A. “Intentionalism Defended.” In Philosophical Review 110: 199-240, 2001.
Byrne, A. “What Phenomenal Consciousness is like.” In Gennaro 2004a.
Campbell, N. A Brief Introduction to the Philosophy of Mind. Ontario: Broadview, 2004.
Carruthers, P. “Brute Experience.” In Journal of Philosophy 86: 258-269, 1989.
Carruthers, P. Phenomenal Consciousness. Cambridge, MA: Cambridge University Press, 2000.
Carruthers, P. “HOP over FOR, HOT Theory.” In Gennaro 2004a.
Carruthers, P. Consciousness: Essays from a Higher-Order Perspective. New York: Oxford University Press, 2005.
Caston, V. “Aristotle on Consciousness.” Mind 111: 751-815, 2002.
Chalmers, D.J. “Facing up to the Problem of Consciousness.” In Journal of Consciousness Studies 2:200-19, 1995.
Chalmers, D.J. The Conscious Mind. Oxford: Oxford University Press, 1996.
Chalmers, D.J. “What is a Neural Correlate of Consciousness?” In Metzinger 2000.
Chalmers, D.J. Philosophy of Mind: Classical and Contemporary Readings. New York: Oxford University Press, 2002.
Chalmers, D.J. “The Representational Character of Experience.” In B. Leiter ed. The Future for Philosophy. Oxford: Oxford University Press, 2004.
Churchland, P. S. “Consciousness: the Transmutation of a Concept.” In Pacific Philosophical Quarterly 64: 80-95, 1983.
Churchland, P. S. Neurophilosophy. Cambridge, MA: MIT Press, 1986.
Cleeremans, A. The Unity of Consciousness: Binding, Integration and Dissociation. Oxford: Oxford University Press, 2003.
Crick, F. and Koch, C. “Toward a Neurobiological Theory of Consciousness.” In Seminars in Neuroscience 2: 263-75, 1990.
Crick, F. H. The Astonishing Hypothesis: The Scientific Search for the Soul. New York: Scribners, 1994.
Cytowic, R. The Man Who Tasted Shapes. Cambridge, MA: MIT Press, 2003.
Damasio, A. The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt, 1999.
Dennett, D. C. “Quining Qualia.” In A. Marcel & E. Bisiach eds. Consciousness and Contemporary Science. New York: Oxford University Press, 1988.
Dennett, D.C. Consciousness Explained. Boston: Little, Brown, and Co, 1991.
Dennett, D. C. Sweet Dreams. Cambridge, MA: MIT Press, 2005.
Dretske, F. Naturalizing the Mind. Cambridge, MA: MIT Press, 1995.
Droege, P. Caging the Beast. Philadelphia & Amsterdam: John Benjamins Publishers, 2003.
Edelman, G. The Remembered Present: A Biological Theory of Consciousness. New York: Basic Books, 1989.
Edelman, G. & Tononi, G. “Reentry and the Dynamic Core: Neural Correlates of Conscious Experience.” In Metzinger 2000.
Flohr, H. “An Information Processing Theory of Anesthesia.” In Neuropsychologia 33: 9, 1169-80, 1995.
Fodor, J. “Special Sciences.” In Synthese 28, 77-115, 1974.
Foster, J. The Immaterial Self: A Defence of the Cartesian Dualist Conception of Mind. London: Routledge, 1996.
Gendler, T. & Hawthorne, J. eds. Conceivability and Possibility. Oxford: Oxford University Press, 2002.
Gennaro, R.J. “Brute Experience and the Higher-Order Thought Theory of Consciousness.” In Philosophical Papers 22: 51-69, 1993.
Gennaro, R.J. Consciousness and Self-consciousness: A Defense of the Higher-Order Thought Theory of Consciousness. Amsterdam & Philadelphia: John Benjamins, 1996a.
Gennaro, R.J. Mind and Brain: A Dialogue on the Mind-Body Problem. Indianapolis: Hackett Publishing Company, 1996b.
Gennaro, R.J. “Leibniz on Consciousness and Self Consciousness.” In R. Gennaro & C. Huenemann, eds. New Essays on the Rationalists. New York: Oxford University Press, 1999.
Gennaro, R.J. “Jean-Paul Sartre and the HOT Theory of Consciousness.” In Canadian Journal of Philosophy 32: 293-330, 2002.
Gennaro, R.J. “Higher-Order Thoughts, Animal Consciousness, and Misrepresentation: A Reply to Carruthers and Levine.” In Gennaro 2004a.
Gennaro, R.J., ed. Higher-Order Theories of Consciousness: An Anthology. Amsterdam and Philadelphia: John Benjamins, 2004a.
Gennaro, R.J. “The HOT Theory of Consciousness: Between a Rock and a Hard Place?” In Journal of Consciousness Studies 12 (2): 3-21, 2005.
Gennaro, R.J. “Between Pure Self-referentialism and the (extrinsic) HOT Theory of Consciousness.” In Kriegel and Williford 2006.
Goldman, A. “Consciousness, Folk Psychology and Cognitive Science.” In Consciousness and Cognition 2: 264-82, 1993.
Graham, G. “Recent Work in Philosophical Psychopathology.” In American Philosophical Quarterly 39: 109-134, 2002.
Gunther, Y. ed. Essays on Nonconceptual Content. Cambridge, MA: MIT Press, 2003.
Guzeldere, G. “Is Consciousness the Perception of what passes in one’s own Mind?” In Metzinger 1995.
Hameroff, S. “Quantum Computation in Brain Microtubules? The Penrose-Hameroff “Orch OR” Model of Consciousness.” In Philosophical Transactions Royal Society London A 356: 1869-96, 1998.
Hardin, C. Color for Philosophers. Indianapolis: Hackett, 1986.
Harman, G. “The Intrinsic Quality of Experience.” In J. Tomberlin, ed. Philosophical Perspectives, 4. Atascadero, CA: Ridgeview Publishing, 1990.
Heidegger, M. Being and Time (Sein und Zeit). Translated by J. Macquarrie and E. Robinson. New York: Harper and Row, 1927/1962.
Hill, C. S. “Imaginability, Conceivability, Possibility, and the Mind-Body Problem.” In Philosophical Studies 87: 61-85, 1997.
Hill, C. and McLaughlin, B. “There are fewer things in Reality than are dreamt of in Chalmers’ Philosophy.” In Philosophy and Phenomenological Research 59: 445-54, 1998.
Horgan, T. and Tienson, J. “The Intentionality of Phenomenology and the Phenomenology of Intentionality.” In Chalmers 2002.
Husserl, E. Ideas: General Introduction to Pure Phenomenology (Ideen au einer reinen Phänomenologie und phänomenologischen Philosophie). Translated by W. Boyce Gibson. New York: MacMillan, 1913/1931.
Husserl, E. Cartesian Meditations: an Introduction to Phenomenology. Translated by Dorian Cairns. The Hague: M. Nijhoff, 1929/1960.
Jackson, F. “Epiphenomenal Qualia.” In Philosophical Quarterly 32: 127-136, 1982.
Jackson, F. “What Mary didn’t Know.” In Journal of Philosophy 83: 291-5, 1986.
James, W. The Principles of Psychology. New York: Henry Holt & Company, 1890.
Kant, I. Critique of Pure Reason. Translated by N. Kemp Smith. New York: MacMillan, 1965.
Keenan, J., Gallup, G., and Falk, D. The Face in the Mirror. New York: HarperCollins, 2003.
Kim, J. “The Myth of Non-Reductive Physicalism.” In Proceedings and Addresses of the American Philosophical Association, 1987.
Kim, J. Supervenience and Mind. Cambridge, MA: Cambridge University Press, 1993.
Kim, J. Mind in a Physical World. Cambridge, MA: MIT Press, 1998.
Kind, A. “What’s so Transparent about Transparency?” In Philosophical Studies 115: 225-244, 2003.
Kirk, R. Raw Feeling. New York: Oxford University Press, 1994.
Kitcher, P. Kant’s Transcendental Psychology. New York: Oxford University Press, 1990.
Kobes, B. “Telic Higher-Order Thoughts and Moore’s Paradox.” In Philosophical Perspectives 9: 291-312, 1995.
Koch, C. The Quest for Consciousness: A Neurobiological Approach. Englewood, CO: Roberts and Company, 2004.
Kriegel, U. “PANIC Theory and the Prospects for a Representational Theory of Phenomenal Consciousness.” In Philosophical Psychology 15: 55-64, 2002.
Kriegel, U. “Consciousness, Higher-Order Content, and the Individuation of Vehicles.” In Synthese 134: 477-504, 2003a.
Kriegel, U. “Consciousness as Intransitive Self-Consciousness: Two Views and an Argument.” In Canadian Journal of Philosophy 33: 103-132, 2003b.
Kriegel, U. “Consciousness and Self-Consciousness.” In The Monist 87: 182-205, 2004.
Kriegel, U. “Naturalizing Subjective Character.” In Philosophy and Phenomenological Research, forthcoming.
Kriegel, U. “The Same Order Monitoring Theory of Consciousness.” In Kriegel and Williford 2006.
Kriegel, U. & Williford, K. Self-Representational Approaches to Consciousness. Cambridge, MA: MIT Press, 2006.
Kripke, S. Naming and Necessity. Cambridge, MA: Harvard University Press, 1972.
Leibniz, G. W. Discourse on Metaphysics. Translated by D. Garber and R. Ariew. Indianapolis: Hackett, 1686/1991.
Leibniz, G. W. The Monadology. Translated by R. Latta. London: Oxford University Press, 1720/1925.
Levine, J. “Materialism and Qualia: the Explanatory Gap.” In Pacific Philosophical Quarterly 64: 354-361, 1983.
Levine, J. “On Leaving out what it’s like.” In M. Davies and G. Humphreys, eds. Consciousness: Psychological and Philosophical Essays. Oxford: Blackwell, 1993.
Levine, J. Purple Haze: The Puzzle of Conscious Experience. Oxford: Oxford University Press, 2001.
Loar, B. “Phenomenal States.” In Philosophical Perspectives 4, 81-108, 1990.
Loar, B. “Phenomenal States.” In N. Block, O. Flanagan, and G. Guzeldere eds. The Nature of Consciousness. Cambridge, MA: MIT Press, 1997.
Loar, B. “David Chalmers’s The Conscious Mind.” In Philosophy and Phenomenological Research 59: 465-72, 1999.
Locke, J. An Essay Concerning Human Understanding. Ed. P. Nidditch. Oxford: Clarendon, 1689/1975.
Ludlow, P., Nagasawa, Y., & Stoljar, D. eds. There’s Something about Mary. Cambridge, MA: MIT Press, 2004.
Lurz, R. “Neither HOT nor COLD: An Alternative Account of Consciousness.” In Psyche 9, 2003.
Lurz, R. “Either FOR or HOR: A False Dichotomy.” In Gennaro 2004a.
Lycan, W.G. Consciousness and Experience. Cambridge, MA: MIT Press, 1996.
Lycan, W.G. “A Simple Argument for a Higher-Order Representation Theory of Consciousness.” Analysis 61: 3-4, 2001.
Lycan, W.G. “The Superiority of HOP to HOT.” In Gennaro 2004a.
Macpherson, F. “Colour Inversion Problems for Representationalism.” In Philosophy and Phenomenological Research 70: 127-52, 2005.
Mandler, G. Mind and Emotion. New York: Wiley, 1975.
Marshall, J. and Zohar, D. The Quantum Self: Human Nature and Consciousness Defined by the New Physics. New York: Morrow, 1990.
McGinn, C. “Can we solve the Mind-Body Problem?” In Mind 98:349-66, 1989.
McGinn, C. The Problem of Consciousness. Oxford: Blackwell, 1991.
McGinn, C. “Consciousness and Space.” In Metzinger 1995.
Metzinger, T. ed. Conscious Experience. Paderborn: Ferdinand Schöningh, 1995.
Metzinger, T. ed. Neural Correlates of Consciousness: Empirical and Conceptual Questions. Cambridge, MA: MIT Press, 2000.
Moore, G. E. “The Refutation of Idealism.” In G. E. Moore Philosophical Studies. Totowa, NJ: Littlefield, Adams, and Company, 1903.
Nagel, T. “What is it like to be a Bat?” In Philosophical Review 83: 435-456, 1974.
Natsoulas, T. “The Case for Intrinsic Theory I. An Introduction.” In The Journal of Mind and Behavior 17: 267-286, 1996.
Neander, K. “The Division of Phenomenal Labor: A Problem for Representational Theories of Consciousness.” In Philosophical Perspectives 12: 411-434, 1998.
Papineau, D. Philosophical Naturalism. Oxford: Blackwell, 1994.
Papineau, D. “The Antipathetic Fallacy and the Boundaries of Consciousness.” In Metzinger 1995.
Papineau, D. “Mind the Gap.” In J. Tomberlin, ed. Philosophical Perspectives 12. Atascadero, CA: Ridgeview Publishing Company, 1998.
Papineau, D. Thinking about Consciousness. Oxford: Oxford University Press, 2002.
Perry, J. Knowledge, Possibility, and Consciousness. Cambridge, MA: MIT Press, 2001.
Penrose, R. The Emperor’s New Mind: Computers, Minds and the Laws of Physics. Oxford: Oxford University Press, 1989.
Penrose, R. Shadows of the Mind. Oxford: Oxford University Press, 1994.
Place, U. T. “Is Consciousness a Brain Process?” In British Journal of Psychology 47: 44-50, 1956.
Polger, T. Natural Minds. Cambridge, MA: MIT Press, 2004.
Preston, J. and Bishop, M. eds. Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. New York: Oxford University Press, 2002.
Ramachandran, V.S. A Brief Tour of Human Consciousness. New York: PI Press, 2004.
Ramachandran, V.S. and Blakeslee, S. Phantoms in the Brain. New York: Harper Collins, 1998.
Robinson, W.S. Understanding Phenomenal Consciousness. New York: Cambridge University Press, 2004.
Rosenthal, D. M. “Two Concepts of Consciousness.” In Philosophical Studies 49:329-59, 1986.
Rosenthal, D. M. “The Independence of Consciousness and Sensory Quality.” In E. Villanueva, ed. Consciousness. Atascadero, CA: Ridgeview Publishing, 1991.
Rosenthal, D.M. “State Consciousness and Transitive Consciousness.” In Consciousness and Cognition 2: 355-63, 1993a.
Rosenthal, D. M. “Thinking that one thinks.” In M. Davies and G. Humphreys, eds. Consciousness: Psychological and Philosophical Essays. Oxford: Blackwell, 1993b.
Rosenthal, D. M. “A Theory of Consciousness.” In N. Block, O. Flanagan, and G. Guzeldere, eds. The Nature of Consciousness. Cambridge, MA: MIT Press, 1997.
Rosenthal, D. M. “Introspection and Self-Interpretation.” In Philosophical Topics 28: 201-33, 2000.
Rosenthal, D. M. “Varieties of Higher-Order Theory.” In Gennaro 2004a.
Ryle, G. The Concept of Mind. London: Hutchinson and Company, 1949.
Sacks, O. The Man Who Mistook His Wife for a Hat and Other Clinical Tales. New York: Harper and Row, 1987.
Sartre, J.P. Being and Nothingness. Trans. Hazel Barnes. New York: Philosophical Library, 1956.
Seager, W. Theories of Consciousness. London: Routledge, 1999.
Seager, W. “A Cold Look at HOT Theory.” In Gennaro 2004a.
Searle, J. “Minds, Brains, and Programs.” In Behavioral and Brain Sciences 3: 417-57, 1980.
Searle, J. Minds, Brains and Science. Cambridge, MA: Harvard University Press, 1984.
Searle, J. The Rediscovery of the Mind. Cambridge, MA: MIT Press, 1992.
Siewert, C. The Significance of Consciousness. Princeton, NJ: Princeton University Press, 1998.
Shallice, T. From Neuropsychology to Mental Structure. Cambridge: Cambridge University Press, 1988.
Shear, J. Explaining Consciousness: The Hard Problem. Cambridge, MA: MIT Press, 1997.
Shoemaker, S. “Functionalism and Qualia.” In Philosophical Studies, 27, 291-315, 1975.
Shoemaker, S. “Absent Qualia are Impossible.” In Philosophical Review 90, 581-99, 1981.
Shoemaker, S. “The Inverted Spectrum.” In Journal of Philosophy, 79, 357-381, 1982.
Silberstein, M. “Emergence and the Mind-Body Problem.” In Journal of Consciousness Studies 5: 464-82, 1998.
Silberstein, M. “Converging on Emergence: Consciousness, Causation and Explanation.” In Journal of Consciousness Studies 8: 61-98, 2001.
Skinner, B. F. Science and Human Behavior. New York: MacMillan, 1953.
Smart, J.J.C. “Sensations and Brain Processes.” In Philosophical Review 68: 141-56, 1959.
Smith, D.W. “The Structure of (self-)consciousness.” In Topoi 5: 149-56, 1986.
Smith, D.W. Mind World: Essays in Phenomenology and Ontology. Cambridge: Cambridge University Press, 2004.
Stubenberg, L. Consciousness and Qualia. Philadelphia & Amsterdam: John Benjamins Publishers, 1998.
Swinburne, R. The Evolution of the Soul. Oxford: Oxford University Press, 1986.
Thau, M. Consciousness and Cognition. Oxford: Oxford University Press, 2002.
Titchener, E. An Outline of Psychology. New York: Macmillan, 1901.
Turing, A. “Computing Machinery and Intelligence.” In Mind 59: 433-60, 1950.
Tye, M. Ten Problems of Consciousness. Cambridge, MA: MIT Press, 1995.
Tye, M. Consciousness, Color, and Content. Cambridge, MA: MIT Press, 2000.
Tye, M. Consciousness and Persons. Cambridge, MA: MIT Press, 2003.
Van Gulick, R. “Physicalism and the Subjectivity of the Mental.” In Philosophical Topics 13, 51-70, 1985.
Van Gulick, R. “Nonreductive Materialism and Intertheoretical Constraint.” In A. Beckermann, H. Flohr, J. Kim, eds. Emergence and Reduction. Berlin and New York: De Gruyter, 1992.
Van Gulick, R. “Understanding the Phenomenal Mind: Are we all just armadillos?” In M. Davies and G. Humphreys, eds., Consciousness: Psychological and Philosophical Essays. Oxford: Blackwell, 1993.
Van Gulick, R. “What would count as Explaining Consciousness?” In Metzinger 1995.
Van Gulick, R. “Inward and Upward: Reflection, Introspection and Self-Awareness.” In Philosophical Topics 28: 275-305, 2000.
Van Gulick, R. “Higher-Order Global States HOGS: An Alternative Higher-Order Model of Consciousness.” In Gennaro 2004a.
Van Gulick, R. “Mirror Mirror – is that all?” In Kriegel and Williford 2006.
Weiskrantz, L. Blindsight. Oxford: Clarendon, 1986.
Wilkes, K. V. “Is Consciousness Important?” In British Journal for the Philosophy of Science 35: 223-43, 1984.
Wilkes, K. V. “Yishi, Duo, Us and Consciousness.” In A. Marcel & E. Bisiach, eds., Consciousness in Contemporary Science. Oxford: Oxford University Press, 1988.
Williford, K. “The Self-Representational Structure of Consciousness.” In Kriegel and Williford 2006.
Wundt, W. Outlines of Psychology. Leipzig: W. Engelmann, 1897.
Yablo, S. “Concepts and Consciousness.” In Philosophy and Phenomenological Research 59: 455-63, 1999.

Rocco J. Gennaro, Email: rocco@indstate.edu, Indiana State University

Knowing Your Own Mind

What is it to “know your own mind”? In ordinary English, this phrase connotes clear-headed decisiveness and firm resolve, but in the language of contemporary philosophy, the indecisive and the susceptible can know their own minds just as well as anybody else. In the philosopher’s usage, “knowing your own mind” is just a matter of being able to produce a knowledgeable description of your mental state, whether it be a state of indecision, susceptibility or even confusion. What exercises philosophers is the fact that people seem to produce these descriptions of their own mental lives without any pretence of considering evidence or reasons of any kind, and yet these descriptions are treated by the rest of us as authoritative, at least in a wide range of cases. How can this be?

Most of the philosophers exercised by this problem would regard the English phrase “knows his own mind” and its connotations as a mere distraction, as the product of a theoretically unhelpful ambiguity. But to some, the English phrase suggests a fruitful approach to the philosophical problem of self-knowledge. In the twentieth century, Wittgenstein, Sartre and Austin all explored the idea that such mental phenomena as thinking, reasoning and deliberating are, in an important sense, activities which culminate in deeds, in the making of cognitive commitments. Moran sets out to refurbish this tradition, to revive the notion that self-knowledge is special because it is a matter of actively making up your own mind, rather than of passively apprehending it.

Moran’s approach looks most promising when applied to the cognitive parts of our mental lives: our beliefs, judgements, the more intellectual of our desires and emotions. Yet the contemporary discussion of self-knowledge focuses at least as much on experiential states like pain and visual sensations. Moran says at the outset that he thinks these questions about experience require a quite different treatment (p. xxxiii) and focuses his attention on the cognitive. Some of his opponents might regard this as an admission of defeat, but the range of phenomena Moran does cover is sufficiently impressive to make his view worthy of very serious consideration. In this article, I shan’t try to discuss Moran’s account of belief, desire and emotion. Instead I will concentrate on his view of our knowledge of our own intentions and of what we are doing (or shall do) to execute them.

The Epistemology of Agency

People in a normal frame of mind usually know what they are doing and they know this without having to observe their own behaviour. In Anscombe’s example, which Moran adopts, someone pumping water may know that they are operating a pump without observation but not know that they are creating a clicking noise except by hearing the noise (pp. 124-7). In Anscombe’s view, this is because making a noise is no part of what they are trying to do. Anscombe acknowledges that one can be wrong in thinking that one is operating a pump (the pump one means to operate might be an hallucination); her point is that our knowledge of our own actions is groundless, not that it is infallible. We can (and often do) know what we are doing without the evidential backing needed for knowledge of someone else’s pumping.

As both Anscombe and Moran observe, this epistemic authority extends to our own future agency (p. 88). A person in a normal frame of mind can come to know that they will go to London tomorrow simply by deciding to go to London tomorrow. Again this way of knowing the future is hardly foolproof: an unexpected rail strike or weakness of will may intervene, and empirical evidence as to the reliability of the railways, etc. is needed before the decision can be taken. Anscombe’s point is that a normal person can know that they will go to London in this way without amassing inductive evidence of their own resoluteness, of their propensity to stick to and execute their own decisions, whereas if they were asked whether someone else who has decided to go to London will actually go, they would need evidence not only about the state of the railways but also about how resolute the person in question is. For Anscombe, the fact that we (directly) create this action gives us a special, evidence-independent way of knowing of it. We lack groundless knowledge of what other people are doing or will do precisely because we don’t (directly) create their actions; rather, we apprehend them as objects in the empirical world.

In expounding these points, I confined my attention to “people in a normal frame of mind”. An example devised by David Velleman shows why this qualification is needed.[2] I have agreed to meet an old friend in order to patch up a quarrel. As the meeting unfolds, I start to become petulant, raise my voice, provoke my friend into saying some unconscionable things and we part in anger. I later infer that I must have decided, before the meeting, to end the friendship, an objective which I skilfully attained. But at no stage either prior to or during the meeting would I have announced such an intention and even afterwards I unearth it only in an effort to explain what happened. I may end up convinced that this was my intention but only qua observer of myself.

I take it Moran has this sort of case in mind when he says that:

Even within a psychoanalytic explanation, it will normally be the case that the contrary thoughts and attitudes which explain the subject’s blocked awareness of the intention will themselves be reasons for ambivalence in his overall intention; that is the intention itself will not be a wholehearted one. Ignorance in such a case will not be mere ignorance, not only because it will be irresistible to look for a motivation of sorts to explain it, but because the motivation we then impute to the person must qualify the ascription of the original intention (as conflicted or partial). (p. 57) 

Applying this to Velleman’s example, I don’t fully intend to end the friendship because I can’t, to use Moran’s favoured expression, avow this intention and I can’t avow this intention because I am aware of good reasons for not ending the friendship. In general, Moran thinks that a subject is an epistemic authority about the content of his intentions and intentional actions only where the subject’s intentions seem well grounded to the subject himself. To the extent that an intention seems to him misguided, the subject’s privileged epistemic access to it will be compromised, as will his sense of being in control of behaviour guided by this intention. To put it another way, he will start to wonder whether this intention is, in the full sense, his.[3]

Moran’s epistemology of agency takes off from the fact that we create our actions, we don’t just observe them. But why, someone may ask, does the mere fact that we create our actions make us an authority about them? I am unsure whether Moran offers an answer to this question. He might regard it as a brute fact that everyone is an authority about the character of their own creations (including their actions). Alternatively, he might be offering us an explanation for this fact, an explanation which goes as follows: when we act, we act for a reason and the character of our action is determined by the character of our reasons. Subjects (in a normal frame of mind) know the character of their reasons – they know what practical considerations they find convincing and persuasive – and so they know the character of their action. A subject is an authority about what they are doing just in so far as they are an authority about their reasons.

If Moran is telling us the latter story then he must be operating with a highly rationalistic notion of agency. A moderate rationalist would require that truly intentional action be motivated by a reason, by the agent’s awareness of some respect in which the outcome intended would be desirable, without requiring that the agent act on (what he would think of as) his strongest or most powerful reasons. But this moderate rationalism does not suggest a substantial epistemology of agency: we can hardly explain how an agent knows what they are doing by supposing that they know what reasons they are acting on. To avoid this difficulty we are pushed towards a more extreme rationalism which insists that truly intentional agency is agency determined by the strongest reasons which the agent knows they have. Since knowing what reasons you have is not synonymous with knowing what you are doing, there is a substantial epistemology of agency on offer here. Yet such hyper-rationalism runs up against the fact that people intentionally do things they know they have sufficient reason not to do.

A hyper-rationalist can acknowledge that various forms of behaviour approximate to fully intentional and responsible agency but, for him, agency occurs in its pure form only when it accurately reflects the subject’s reasons.  I am unsure whether Moran is invoking this rationalist notion of agency because I am unsure whether he means to offer us a substantial epistemology of agency. This uncertainty is connected with another interpretative issue: how should we understand Moran’s discussion of practical irrationality and, in particular, his notion of akrasia?

Practical Irrationality

On the usual understanding of these terms, akratic or incontinent action is action which is performed even though the agent judges that it would be better for him to be doing something else. Akrasia is a very familiar phenomenon. If I eat more biscuits than I know I’ll be able to digest comfortably or remain slumped in front of the TV when I realise I ought to be making lunch, my eating and my lounging are things I do, things which I do deliberately, freely and intentionally, things for which I am fully responsible. In these respects, akratic action is no different from fully continent agency: it is agency par excellence.

Understanding akrasia in this way, I find some of the things Moran says about it rather puzzling. For example

when I know that I am akratic with respect to the question before me, that compromises the extent to which I can think of my behaviour as intentional action … Nor does a person speak with first-person authority about such conditions. (pp. 127-8)

Moran also says that the akratic does not “identify” with their action and that his knowledge of it is empirical and not “ordinary self-knowledge” (p. 67).[4] The implication is that while the akratic may be able to predict that he is likely to behave akratically and then exercise a sort of self-control by putting obstacles in the way of akratic behaviour (e.g. placing a time-lock on the drinks cabinet), he will lack that first-person knowledge of and control over his behaviour which the continent possess.[5]

Moran’s description fits Velleman’s psychoanalytic example very well but as a commentary on everyday akrasia, it is a bit overstated. My partiality to lunch time TV hardly qualifies as an obsession or a compulsion even though it tempts me to do what I know I should not do. I am not overwhelmed by the desire to watch, I choose to indulge it. If I am honest, I won’t deny that it is me who decides to remain on the sofa. Furthermore, I know perfectly well why I take this decision: the lunchtime soap opera is genuinely entertaining and diverting (though, even in my own eyes, other considerations are more pressing). There is no failure of first-person access either to the motivation for my akratic behaviour or to what it gets me to do.

Reflection on such everyday cases of akrasia puts pressure on Moran’s notion of an avowal. Moran says that there are two elements to avowal: first, an authoritative awareness of the state of mind avowed and secondly an endorsement of it (pp. 91-2). Consider a Catholic woman, described by Jackson, who has become pregnant after rape.[6] She judges that she ought not to have an abortion but akratically decides to have an abortion nevertheless and makes plans to attend the abortion clinic next week. Can she avow this intention? She can do more than report this intention in the way she might report a third party’s intention, but she can’t go as far as to endorse this intention. What she can do is to affirm that she is set on having an abortion, that she has resolved to have an abortion, that she is committed to having an abortion. Moran’s notion of an avowal seems to include the idea of endorsement (p. 67) so perhaps he would maintain that the woman can’t avow her intention. But if that is how the notion of an avowal is to be understood, it looks as if what manifests a distinctive knowledge of (and control over) intention is not our ability to avow these intentions but rather our ability to affirm them.

Until now I have been assuming that Moran is using “akrasia” in its standard sense but this may be a misreading. In several places, he is at pains to draw a distinction between what might be called pure evaluation on the one hand and decision-making on the other (where the notion of “deliberation” applies only to the latter)

the mere appraisal of one’s attitudes, however normative, would apply equally well to past as well as to current attitudes, and indeed may have just the same application to another person as to oneself. In itself such an assessment is not an essentially first-personal affair. Rather “deliberative” reflection as intended here is of the same family of thought as practical reflection, which does not conclude with a normative judgement about what would be best to do but with the formation of an actual intention to do something. (p. 59)

The implication of this paragraph is that Moran is not really interested in the relationship between someone’s judgement of their reasons and their action;  such judgement interests him only in so far as it involves the formation of an intention so to act. What really concerns Moran is the relationship between intention and action. If that is right we should focus our discussion of practical irrationality not on akrasia but rather on what I shall call irresolution.

Philosophers sometimes apply the labels “weakness of will” and “incontinence” to both akrasia and irresolution but they are not the same thing.[7] Akrasia is a matter of failing to intend and act in accordance with your practical judgement, your judgements about what you should do (i.e. what you have most reason to do). Irresolution is a matter of not sticking to intentions once formed, of giving in to the very ‘inclinations’ which the intention was formed (however akratically) to resist. As Jackson develops his example, his Catholic woman is by turns akratic and irresolute. She is akratic when she forms the intention to attend the abortion clinic but later on her scruples get the better of her and she irresolutely refuses to leave the house on the day of her appointment. We need not take a stand on the issue of whether this woman’s akrasia or her irresolution are symptomatic of irrationality. The present point is simply that they are not the same thing.

I suspect that Moran tends to equate akrasia with irresolution (p. 81) and that this goes some way to explaining his view of akrasia. Moran argues convincingly that a person who knows they are irresolute will, as a matter of conceptual necessity, find it very difficult to form intentions (pp. 77-83, pp. 94-8). If past experience of her own behaviour convinces Jackson’s woman that she won’t attend the abortion clinic even if she decides to do so, it is hard to see how she can even decide to attend the clinic.  Saying “I intend to have an abortion but predict that I won’t” is rather like saying “I believe that it will rain but it won’t”. Both sentences express a state of mind that is more than irrational, it is paradoxical.[8]

Of course, an analyst might convince someone that they had a belief which they wished to disavow and the subject might express this by saying “I believe that my brother drove my parents to an early grave even though he didn’t”. But here the subject seems entitled to enter a qualification: this is not his belief in the full sense, he knows about it only via the analyst and he cannot assume full responsibility for it. It is plausible to say the same of intentions that the agent believes they won’t execute. The analyst may convince me that I still intend to proposition my childhood sweetheart someday even if I know perfectly well that I’ll never bring myself to do it. But, I can plausibly insist, this intention is mine only in a qualified sense. Fully self-conscious irresolution is indeed paradoxical. None of this applies to everyday cases of akrasia. There is nothing paradoxical about the statement “I know I ought not to have an abortion but I shall”. And there is little pressure to say that such a statement evinces a division of the person, or a diminishment of the person’s control over or responsibility for the intention.

Reading Moran’s discussion of practical irrationality as concerned with irresolution, he maintains that a person is an authority about what they are doing because they are an authority about the intention with which they act and they are an authority about the intention with which they act because this is something they have created, something which is itself an expression of their agency. Here it is the subject’s ability to affirm their intention, rather than their ability to endorse it, which ensures that they have knowledge of (and control over) what they are doing. But, read in this way, Moran is not offering the substantial epistemology of agency I suggested he might. Rather he is assuming that an agent has direct knowledge of both of what he is doing and of what he intends to do simply because he is choosing to do it.

To sum up, Moran thinks that there is close tie between what he calls the standpoint of practical deliberation and the possession of first-person epistemic authority: one is an authority about what one is doing and why just in so far as one is occupying this standpoint (p. 127). To occupy this standpoint is, by definition, to both make judgements about what you have reason to do and to implement those judgements in decision and action (pp. 63-4, pp. 94-5, p. 131, pp. 145-6).[9] Now it is clear that these two things can come apart: one can make practical judgements which one does not implement and one can take decisions or perform actions which do not reflect one’s practical judgements. The question for Moran is this: when that happens, is first-person authority necessarily compromised? Is a subject who fails to do what he thinks he ought to do ipso facto less well placed to know what he is doing and why? My discussion suggests a negative answer.

Conclusion    

I have focused on what I take to be a central theme of Moran’s book but my treatment has been far from comprehensive. For example, I have had to ignore his interesting discussion of belief, desire and emotion and I have failed to mention his illuminating criticisms of superficially similar positions (like Shoemaker’s). Two other features of the book are particularly worthy of mention. First, Moran offers us an interpretation of the work of the early Sartre which enabled the present writer to read parts of Being and Nothingness with great profit. Secondly, I strongly recommend the fascinating final chapter in which Moran explores some virgin territory: the moral psychology of the first person.

Reading a work of academic philosophy, one often finds oneself slogging through chapter after chapter of critical commentary on the work of others, to be rewarded with only the vaguest sketch of an alternative view. Yet (to paraphrase Feyerabend) no sensible person abandons a theory because of a counterexample, only for a better theory. Moran has taken Feyerabend’s maxim to heart. His prose is elegant and engaging. He lays out his view in enough detail to expose its weaknesses as well as its strengths. Occasionally I wished to know more about why he thinks his position should be preferred to the alternatives. Nevertheless Moran has said enough to encourage others to build up the theory’s defences and highlight its advantages over more familiar approaches to the problem of self-knowledge.

 

[1] Richard Moran, Authority and Estrangement (Princeton, NJ: Princeton University Press, 2001), xc + 201 pp. Page references are to this work.

[2] D. Velleman – The Possibility of Practical Reason (Oxford: Oxford University Press, 2000) pp. 126-7.

[3] See also Moran discussing of emotions which one can’t avow on p. 93.

[4] On p. 131 Moran connects claims about alienation to claims about control and responsibility, saying that (certain forms of) the latter are compromised by such alienation (see also pp. 117-8).

[5] Moran says similar things about the desires which motivate action (pp. 116-20).

[6] This example occurs on p. 4 of F. Jackson – ‘Weakness of Will’ Mind (1984) 93, pp. 1-18.

[7] R. Holton – ‘Intention and Weakness of Will’ Journal of Philosophy (1999) 96: pp. 241-62.

[8] This reading makes good sense of the parallels Moran draws between akrasia and self-deception (p. 67, p. 87)

[9] This may be too strong. Perhaps Moran intends only that the deliberative standpoint be one in which it is believed that one’s decision will reflect one’s practical judgement. I don’t think this will help. Someone who does not believe but merely hopes that they will take what they regard as the right decision in a difficult situation may still know, in the usual first-person way, what they end up doing (or deciding to do).

 

KNOWING YOUR OWN MIND[1], David Owens, University of Sheffield, United Kingdom

Cognition, Consciousness, and Physics

REVIEW OF: Roger Penrose (1994) Shadows of the Mind. New York: Oxford University Press.

1. Introduction

1.1 Physics is surely the most beautiful of the sciences, and it is esthetically tempting to suppose that two of the great scientific mysteries we confront today, observer effects in quantum mechanics and conscious experience, are in fact the same. Roger Penrose is an admirable contributor to modern physics and mathematics, and his new book, Shadows of the Mind (SOTM), offers us some brilliant intellectual fireworks — which, for me at least, faded rapidly on further examination.

1.2 I felt disappointed for several reasons, but one obvious one: Is consciousness really a physics problem? Penrose writes,

A scientific world-view which does not profoundly come to terms with the problem of conscious minds can have no serious pretensions of completeness. Consciousness is part of our universe, so any physical theory which makes no proper place for it falls fundamentally short of providing a genuine description of the world. I would maintain that there is yet no physical, biological, or computational theory that comes very close to explaining our consciousness … (emphasis added)

1.3 Having spent 17 years of my life trying to do precisely what Penrose suggests has not and cannot be done, I found this point a bit disconcerting. But even more surprising was the claim that consciousness is a problem in physics. The conscious beings we see around us are the products of billions of years of biological evolution. We interact with them — with each other — at a level that is best described as psychological. All of our evidence regarding consciousness depends upon reports of personal experiences, and observation of our own perception, memories, attention, imagery, and the like. The evidence therefore would seem to be exclusively psychobiological. We will come back to this question.

1.4 The argument in SOTM comes down to two theses and a statement of faith. The first thesis I will call the “Turing Impossibility Proof,” and the second, the “Quantum Promissory Note”. The statement of faith involves classical Platonism of the mathematical variety, founded in a sense of certainty and wonder at the amazing success of mathematical thought over the last 25 centuries, and the extraordinary ability of mathematical formalisms to yield deep insight into scientific questions (SOTM, p. 413). This view may be captured by Einstein’s well-known saying that “the miraculous thing about the universe is that it is comprehensible.” While I share Penrose’s admiration for mathematics, I do not share his belief in the absolute nature of mathematical thought, a belief that leads him to postulate a realm of special conscious insight requiring no empirical investigation to be understood.

1.5 After considering the argument of SOTM I will briefly sketch the current scientific alternative, the emerging psychobiology of consciousness (see Baars, 1988, 1994; Edelman, 1989; Newman and Baars, 1993; Schacter, 1990; Gazzaniga, 1994). Though the large body of current evidence can be stated in purely objective terms, I will strive to demonstrate the phenomena by appealing to the reader’s personal experience, such as your consciousness of the words on this page, the inner speech that often goes with the act of reading carefully, and so on. Such demonstrations help to establish that we are indeed talking about consciousness as such.

2. Has Science Failed To Understand Consciousness?

2.1 Central to SOTM is Penrose’s contention that contemporary science has failed to understand consciousness. There is more than a little truth to that — if we exclude the last decade — but it rests on a great historical misunderstanding: it assumes that psychologists and biologists have tried to understand human experience with anything like the persistence and talent routinely devoted to memory, language, and perception. The plain fact is that we have not treated the issue seriously until very recently. It may be difficult for physicists to understand this — current physics does not seem to be intimidated by anything — but the subject of conscious experience, the great core question of traditional philosophy, has simply been taboo in psychology and biology for most of this century. I agree with John Searle that this is a scandalous fact, which should be a great source of embarrassment to us in cognitive psychology and neuroscience. But no one familiar with the field could doubt it. As Crick and Koch (1992) have written, “For many years after James penned The Principles of Psychology (1890) … most cognitive scientists ignored consciousness, as did almost all neuroscientists. The problem was felt to be either purely ‘philosophical’ or too elusive to study experimentally. … In our opinion, such timidity is ridiculous.”

2.2 Fortunately the era of avoidance is visibly fading. First-order theories are now available, and have not by any means been disproved (Baars, 1983, 1988, and in press; Crick & Koch, 1992; Edelman, 1989; Gazzaniga, 1994; Schacter, 1990; Kinsbourne, 1993; etc.). In fact, there are significant commonalities among contemporary theories of consciousness, so that one could imagine a single, integrative hybrid theory with relative ease. But Penrose does not deal with this literature at all.

2.3 Has science failed, and do we need a scientific revolution? Given the fact that we have barely begun to apply normal science to the topic, Penrose’s call for a scientific revolution seems premature at best. There is yet nothing to revolt against. Of course we should be ready to challenge our current assumptions. But it has not been established by any means that ordinary garden-variety conscious experience cannot be explained through a diligent pursuit of normal science.

3. A Critique Of The Turing “Impossibility Proof”

3.1 Impossibility arguments have a mixed record in science. On one side is the proof scribbled on the back of an envelope by physicists in the Manhattan Project, showing that the first enriched uranium explosion would not trigger a chain reaction destroying the planet. But notice that this was not a purely mathematical proof; it was a physical-chemical-mathematical reductio, with a very well-established, indispensable empirical basis. On the side of pure mathematics, we have such historical examples as Bishop Berkeley’s critique of Newton’s use of infinitesimals in the calculus. Berkeley was mathematically right but the point was empirically irrelevant; physicists used the flawed calculus for two hundred years with great scientific success, until late in the nineteenth century the paradox was resolved by the rigorous theory of limits and convergent series.

3.2 Even more empirically irrelevant was Zeno’s famous Paradox, which seemed to show that we cannot walk a whole step, since we must first cover half a step, then half of half a step, then half of the remaining distance, and the like, never reaching the whole intended step. Zeno of Elea used this clever argument to prove to the astonishment of the world that motion was impossible. But that did not paralyze commerce. Ships sailed, people walked, and camels trudged calmly on their way doing the formally impossible thing for a couple of thousand years until the formal solution emerged. And of course we have more than a century of mathematical reductios claiming that Darwinian evolution is impossible if you combine all the a priori probabilities of carbon chains evolving into DNA and ending up with thee and me. These reductios on behalf of divine Creation still appear with regularity, but the biological evidence is so strong that they are not even considered.
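The arithmetic behind the eventual formal solution is simple enough to check directly: Zeno’s halves form a geometric series whose partial sums converge to the whole step, the remaining gap shrinking as (1/2)^n. A minimal sketch (the function name is my own, purely illustrative):

```python
# Zeno's series: 1/2 + 1/4 + 1/8 + ...
# After n halving moves the walker has covered 1 - (1/2)**n of the step,
# so the remaining gap shrinks toward zero and the limit is a whole step.
def zeno_partial_sum(n_terms: int) -> float:
    """Fraction of a whole step covered after n_terms halving moves."""
    return sum(0.5 ** k for k in range(1, n_terms + 1))
```

After fifty terms the remaining gap is already below 10^-15: the “formally impossible” whole step is recovered in the limit, which is exactly what the theory of convergent series made precise.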

3.3 The problem is of course that a mathematical model is only as good as its assumptions, and those depend upon the quality of the evidence. The whole Turing Machine debate, with its putative implications for consciousness, is in my opinion a great distraction from the sober scientific job of gathering evidence and developing theory about the psychobiology of consciousness (e.g., Baars, 1988; 1994). The notion that the Turing argument actually tells us something scientifically useful is amazingly vulnerable. After all, the theory assumes an abstract automaton blessed with infinite time, infinite memory, and an environment that imposes no resource constraints. The brain is a massively parallel organ with 100 billion simultaneously active neurons, but the Turing Machine is at the extreme end of serial machines. This is probably why the Turing topic appears nowhere in the psychobiological literature; discussion of it seems primarily limited to philosophy and the general intellectual media.

3.4 Finally, it turns out that all current cognitive and neural models are formal Turing equivalents. That means the mathematical theory is useless in the critical task of choosing between models that are quite different computationally and on the evidence. It does not distinguish between neural nets and symbolic architectures for example, as radically different as they are in practice. But that is exactly the challenge we face today: choosing between theories based on their fit with the evidence. Here the theory of automata is no help at all.

3.5 A small but telling fact about Penrose’s book caught my attention: of its more than 400 references, fewer than forty address the psychology or biology of consciousness. But all our evidence on the subject is psychological and, to a lesser extent, biological! It appears that Penrose’s topic is not consciousness in the ordinary psychoneural sense, like waking up in the morning from a deep sleep or listening to music. How the positive proposals in SOTM relate to normal psychobiological consciousness is only addressed in terms of a technical hypothesis. Stuart Hameroff, an anesthesiologist at the University of Arizona currently working with Penrose, has proposed that general anesthetics interact with neurons via quantum level events in neural microtubules, which transport chemicals down axons and dendrites. It is an interesting idea, but it is by no means accepted, and there are many alternative hypotheses about anesthetics. But it is a real hypothesis: testable, relevant to the issue of consciousness, and directly aimed at the quantum level.

3.6 Penrose calls attention to the inability of Turing Machines to know when to stop a possibly nonterminating computation. This is a form of the Gödel Theorem, from which Penrose draws the following conclusion: “Human mathematicians are not using a knowably sound algorithm in order to ascertain mathematical truth.” That is to say, if humans can propose a Halting Rule which turns out to be demonstrably correct, and if we take Turing Machines as models of mathematicians, then the ability of mathematicians to come up with Halting Rules shows that their mental processes are not Turing-computable.

3.7 I’m troubled by this argument, because all the cognitive studies of human formal reasoning and logic that I know of show that humans will take any shortcut available to find a plausible answer to a formal problem; actually following out formalisms mentally is rare in practice, even among scientists and engineers. Human beings are not algorithmic creatures; they much prefer heuristic, fly-by-the-seat-of-your-pants analogies to situations they know well. Even experts typically use heuristic shortcuts. Furthermore, the apparent reductio of Penrose’s claim has a straightforward alternative explanation, namely that one of the premises is plain wrong. The psychological implication is not that people are fancier than any Turing Machine, but that they are much sloppier than any explicit algorithm, and yet do quite well in many cases.

3.8 The fact that people can walk is an effective counter to Zeno’s Paradox. The fact that people can talk in sentences was Chomsky’s counter to stimulus-response theories of language. Now we know that people can in many cases find Halting Rules. It’s not that human processes are noncomputable by a real computer — numerous mental processes have been simulated with computers, including some formidable ones like playing competitive chess — but rather that the formal straitjacket of Turing Machinery is simply the wrong model to apply. This is the fallacy in trying to attribute rigorous all-or-none logical reasoning to ordinary human beings, who are pragmatic, heuristic, cost-benefit gamblers when it comes to solving formal problems.
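The point that halting questions are perfectly tractable for restricted cases, even though no algorithm decides them for all programs, can be made concrete with a toy example of my own (the loop family and function names below are hypothetical, not from SOTM): for loops of the form “while x != 0: x -= step”, a complete halting rule is a one-liner, and a bounded simulation can cross-check it.

```python
# Halting is undecidable in general (Turing, 1936), but for the toy loop
# family "while x != 0: x -= step" a complete Halting Rule exists:
# the loop terminates iff x is already 0, or step divides x and
# x and step have the same sign (so the countdown actually reaches 0).
def countdown_halts(x: int, step: int) -> bool:
    """Decide whether 'while x != 0: x -= step' terminates."""
    if x == 0:
        return True  # the loop body never runs
    return step != 0 and x % step == 0 and (x > 0) == (step > 0)

def simulate(x: int, step: int, max_steps: int = 10_000) -> bool:
    """Bounded run of the same loop, for cross-checking the rule."""
    for _ in range(max_steps):
        if x == 0:
            return True
        x -= step
    return False  # no halt within the bound
```

Finding such special-case rules by insight and analogy is exactly the kind of heuristic success mathematicians display; it does not require deciding the general, undecidable problem.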

3.9 Penrose proceeds to deduce that consciousness is noncomputable by Turing standards. But even this claim is based only on intuition; the argument has the form, “mathematicians have an astonishingly good record of gaining fundamental insights into a variety of formal systems; this is obviously impossible for a Turing automaton; hence mathematicians themselves cannot be modeled by such automata.” From a psychobiological point of view the success of mathematical intuition is more likely to reflect the nervous system’s excellent heuristics for discovering patterns in the world. The brain appears to have sophisticated knowledge of space, for example, which may in turn allow deep geometrical intuitions to occur with great accuracy in talented individuals. In effect, we may put a billion years of brain evolution of spatial processing to good use if we are fortunate enough to be mathematically talented.

4. The Quantum Promissory Note

4.1 Having proved that Turing machines cannot account for mathematical intuition, Penrose develops the idea that Quantum Mechanics will provide a solution. QM is the crown jewel of modern theoretical physics, an endless source of insight and speculation. It shows extraordinary observer paradoxes. Consciousness is a mysterious something human observers have, and many people leap to the inference that the two observer mysteries must be the same. But this is at best a leap of faith. It is much too facile: observations of quantum events are not made directly by human beings but by such devices as Geiger counters with no consciousness in any reasonable sense of the word. Conscious experience so far as we know is limited to huge biological nervous systems, produced over a billion years of evolution.

4.2 There is no precedent for physicists deriving from QM any macrolevel phenomenon such as a chair or a flower or a wad of chewing gum, much less a nervous system with 100 billion neurons. Why then should we believe that one can derive psychobiological consciousness from QM? QM has not been shown to give any psychological answers. Conscious experience as we know it in humans bears no resemblance to recording the collapse of a quantum wave packet. Let’s not confuse the mysteries of QM with the question of the reader’s perception of this printed phrase, or the inner sound of these words!

4.3 What can we make of Penrose’s Quantum Promissory Note? All scientific programs are promissory notes, making projections about the future and betting on what we may possibly find. The Darwin program was a promissory note, the Human Genome project is, as are particle physics and consciousness research. How do you place your bets? Is there a track record? Is there any evidence?

5. Treating Consciousness As A Variable: The Evidence For Consciousness As Such

5.1 We are barely at the point of agreeing on the real scientific questions, and on the kind of theory that could address them. On the matter of evidence, Baars (1983, 1988, 1994 and in press), Libet (1985) and others have argued that empirical constraints bearing on consciousness involve a close comparison of very similar conscious and unconscious processes. As elsewhere in science, we can only study a phenomenon if we can treat it as a variable. Many scientific breakthroughs result from the realization that some previously assumed constant, like atmospheric pressure, frictionless movement, the uniformity of space, the velocity and mass of the Newtonian universe, and the like, was actually a variable, and that is the aim here. In the case of consciousness we can conduct a contrastive analysis comparing waking to sleep, coma, and general anesthesia; subliminal to supraliminal perception; habituated vs. novel stimuli; attended vs. nonattended streams of information; recalled vs. nonrecalled memories; and the like. In all these cases there is evidence that the conscious and unconscious events are comparable in many respects, so that we can validly probe for the essential differences between otherwise similar conscious and unconscious events (see Greenwald, 1992; Weiskrantz, 1986; Schacter, 1990).

5.2 This “method of contrastive analysis” is much like the experimental method: We can examine closely comparable cases that differ only in respect to consciousness, so that consciousness becomes, in effect, a variable. However, instead of dealing with only one experimental data set, contrastive analysis involves entire categories of well-established phenomena, summarizing numerous experimental studies. In this way we can highlight the variables that constrain consciousness over a very wide range of cases. The resulting robust pattern of evidence places major constraints on theory (Baars, 1988; in press).

6. Can Penrose Deal With Unconscious Information Processing?

6.1 Like many psychologists before 1900, Penrose appears to deny unconscious mental processes altogether. This is apparently because his real criterion is introspective access to the world of formal ideas. But introspection is impossible for unconscious events, and so the tendency for those who rely on introspection alone is to disbelieve the vast domain of unconscious processes.

6.2 Unconscious processing can be inferred from numerous sources of objective evidence. The simplest case is the great multitude of your memories that are currently unconscious. You can now recall this morning’s breakfast — but what happened to that memory before you brought it to mind? There is much evidence that even before recall the memory of breakfast was still represented in the nervous system, though not consciously. For example, we know that unconscious memories can influence other processes without ever coming to mind. If you had orange juice for breakfast today you may switch to milk tomorrow, even without bringing today’s juice to mind. A compelling case can be made for unconscious representation of habituated stimuli, of memories before and after recall, automatic skills, implicit learning, the rules of syntax, unattended speech, presupposed knowledge, preconscious input processing, and many other phenomena. In recent years a growing body of neurophysiological evidence has provided convergent confirmation of these claims. Researchers still argue about some of the particulars, but it is widely agreed that given adequate evidence, unconscious processes may be inferred.

6.3 What is the critical difference then between comparable conscious and unconscious processes? There are several, but perhaps the most significant one is that conscious percepts and images can trigger access to unanticipated knowledge sources. It is as if the conscious event is broadcast to memory, skill control, decision-making functions, anomaly detectors, and the like, allowing us to match the input with related memories, use it as a cue for skilled actions or decisions, and detect problems in the input. At a broad architectural level, conscious representations seem to provide access to multiple knowledge sources in the nervous system, while unconscious ones seem to be relatively isolated. The same conclusion follows from other contrastive analyses. (See Baars, 1988).

6.4 None of this evidence appears to fit in the SOTM framework, because it has no role for unconscious but vitally important information processing. This is a major point on which the great weight of psychobiological evidence and SOTM are fundamentally at odds.

7. The Emerging Psychobiology Of Consciousness

7.1 The really daring idea in contemporary science is that consciousness may be understandable without miracles, just as Darwin’s revolutionary idea was that biological variation could be understood as a purely natural phenomenon. We are beginning to see human conscious experience as a major biological adaptation, with multiple functions. It seems as if a conscious event becomes available throughout the brain to the neural mechanisms of memory, skill control, decision-making, anomaly detection, and the like, allowing us to match our experiences with related memories, use them as cues for skilled actions or decisions, and detect anomalies in them. By comparison, unconscious events seem to be relatively isolated. Thus consciousness is not just any kind of knowledge: It is knowledge that is widely distributed, that triggers off widespread unconscious processing, has multiple integrative and coordinating functions, aids in decision-making, problem-solving and action control, and provides information to a self-system.

8. Conclusion

8.1 I don’t know if consciousness has some profound metaphysical relation to physics. Science is notoriously unpredictable over the long term, and there are tricky mind-body paradoxes that may ultimately demand a radical solution. But at this point in the vexed history of the problem there is little question about the preferable scientific approach. It is not to try to solve the mind-body problem first — that effort has a poor track record — or to pursue lovely but implausible speculations. It is simply to do good science using consciousness as a variable, and investigating its relations to other psychobiological variables.

References

Baars, B.J. (1983). Conscious contents provide the nervous system with coherent, global information. In R. Davidson, G. Schwartz, & D. Shapiro (Eds.), Consciousness and self-regulation, 3, 45-76. New York: Plenum Press.

Baars, B.J. (1988) A cognitive theory of consciousness. Cambridge, UK: Cambridge University Press.

Baars, B.J. (1994) A thoroughly empirical approach to consciousness. PSYCHE 1(6) [80 paragraphs] URL: http://psyche.cs.monash.edu.au/volume1/psyche-94-1-6-contrastive-1-baars.html

Baars, B.J. (in press) Consciousness regained: The new science of human experience. Oxford, UK: Oxford University Press.

Crick, F.H.C. & Koch, C. (1992) The problem of consciousness, Scientific American, 267(3), 153-159.

Edelman, G. (1989) The remembered present: A biological theory of consciousness. NY: Basic Books.

Gazzaniga, M. (1994) Cognitive neuroscience. Cambridge, MA: MIT Press.

Greenwald, A. (1992). New Look 3, Unconscious cognition reclaimed. American Psychologist, 47(6), 766-779.

James, W. (1890/1983). The principles of psychology. Cambridge, MA: Harvard University Press.

Kinsbourne, M. (1993). Integrated field model of consciousness. In G. Marsh & M. J. Brock (Eds.), CIBA symposium on experimental and theoretical studies of consciousness. (pp. 51-60). London: Wiley Interscience.

Libet, B. (1985) Unconscious cerebral initiative and the role of conscious will in voluntary action. Behavioral and Brain Sciences, 8, 529-66.

Newman, J., & Baars, B. J. (1993). A neural attentional model for access to consciousness: A Global Workspace perspective. Concepts In Neuroscience, 2(3), 3-25.

Penrose, R. (1994) Shadows of the mind. Oxford, UK: Oxford University Press.

Schacter, D. L. (1990). Toward a cognitive neuropsychology of awareness: Implicit knowledge and anosognosia. Journal of Clinical and Experimental Neuropsychology, 12(1), 155-178.

Weiskrantz, L. (1986) Blindsight: A single case and its implications. Oxford, UK: Clarendon Press.

Can Physics Provide a Theory of Consciousness? A Review of Shadows of the Mind by Roger Penrose by Bernard J. Baars, baars@cogsci.berkeley.edu, Copyright (c) Bernard J. Baars 1995

Morality & Neuroscience

An Ravelingien reports on the conference ‘Double standards. Towards an integration of evolutionary and neurological perspectives on human morality.’ (Ghent University, 21-22 Oct. 2006)

In Love in the Ruins, Walker Percy tells the story of Tom More, the inventor of the extraordinary ‘ontological lapsometer’1. The lapsometer is a diagnostic tool, a ‘stethoscope of the human soul’. Just as a stethoscope or an EEG can trace certain physical dysfunctions, the lapsometer can measure the frailties of the human mind. The device can measure ‘how deep the soul has fallen’ and allows for early diagnoses of potential suicides, paranoia, depression, or other mood disorders. Bioethicist Carl Elliott refers to this novel to illustrate a well-known debate within psychiatry2. According to Elliott, the image of the physician who uses the lapsometer to unravel the mysteries of the soul is a comically desperate attempt to objectify experiences that cannot accommodate such scientific analysis. His objection harks back to the conflict between a sociological perspective – which would stress the subjective experiences related to the cultural and social context of human psychology – and a biological perspective – which would rather determine the physiological causes of mental and mood dysfunction. It is very likely that debate about the subjective and indefinite nature of some experiences will climax when empirical science is applied to trace and explain the biology of our moral sentiments and convictions. For most of us, I presume, nothing would appear to be more inextricably a part of our personal experience and merit than our moral competence. The conference ‘Double Standards’ questioned this intuition and demonstrated that the concept of ‘morality’ is becoming more and more tangible.

Jan Verplaetse and Johan Braeckman, the organizers of the conference, gathered 13 reputable experts and more than 150 participants to ponder one of the oldest and most fundamental philosophical questions: how did morality come into existence? For this, they drew upon two different scientific approaches: evolutionary psychology and neuroscience. In theory, these disciplines are complementary. Neuroscientists assume that morality is generated by specific neural mechanisms and structures, which they hope to find by way of sophisticated brain imaging techniques. Evolutionary scientists, by contrast, want to figure out what adaptive value morality must have had for it to have evolved. According to them, morality is – like all aspects of our human nature – a product of evolution through selection. Moral and social behavior must have had a selective advantage, from which the relevant cognitive and emotional functions developed. Through an interdisciplinary approach, the alleged functions can direct the neuroscientist in searching for the neurological structures that underlie them. Or, the other way around, the imaging of certain neural circuits should help to discover whether and to what extent our moral intuitions are indeed embedded in our ‘nature.’ During the conference, this double perspective gave rise to several interesting hypotheses.

It appears that neuroscientists have already achieved remarkably uniform results regarding the crucial brain areas involved in fulfilling moral tasks. Jorge Moll was the first to use functional MRI studies to show that three major areas are engaged in moral decision making: the frontal lobes, the temporal lobe, and limbic-paralimbic areas. Other speakers at the conference confirmed this overlapping pattern of neural activity, regardless of differences in the ways in which moral stimuli were presented, and regardless of the specific content of the moral tasks (whether the tasks consisted of complex dilemmas, simple scenarios with an emotional undertone, or references to violence and bodily harm). Since these findings, several researchers have started looking for the biological basis of more specific moral intuitions. Jean Decety, for instance, has found neural correlates that play a role in the cognitive modulation of empathy. fMRI studies are also being used to compare ‘normal’ individuals with people who show deviant (and in particular criminal or immoral) behavior, and thereby to derive new explanations of such atypical behavior. As such, James Blair suggested that individuals with psychopathy have problems with learned emotional responses to negative stimuli. According to him, the common neural circuit activated in moral decision making is in a more general sense involved in a rudimentary form of stimulus reinforcement learning. At least one form of morality is developed by such reinforcement learning: what Blair calls care-based morality. Contrary to psychopathic individuals, even very young children realize that there is an important difference between, for instance, the care-based norm ‘do not hit another child’ and the convention-based norm ‘do not talk during class’. In the absence of a clear rule, ‘normal’ individuals will be more easily inclined to transgress social conventions than care-based norms.
The reason for this, he proposed, is that transgression of care-based norms confronts us with the suffering of our victim(s). The observation of others in pain, sadness, or anger immediately evokes a negative response, an aversion, in the self, from which we learn to avoid situations with similar stimuli. Blair offered brain images of psychopathic individuals showing reduced activity in the parts of the brain involved in stimulus reinforcement (the ventromedial prefrontal cortex and the amygdala). Adrian Raine gave an entirely different perspective on ‘immoral behavior,’ suggesting that certain deviances in the prefrontal cortex point to a predisposition towards antisocial behavior. According to Raine, immoral behavior need not be a dysfunction of normal neural circuits; evolution may just as well have shaped the brain to have a predisposition for immoral rather than moral behavior. Antisocial behavior may have a selective advantage: it can be a very effective means of taking others’ resources. As such, the expression of sham emotions (such as faked shame or remorse) can be interpreted as a strategy to mislead others into thinking that one has corrected one’s behavior. Raine finds support for his hypothesis in indications of a strong genetic basis for antisocial behavior. He also offered brain imaging results that show an 11% reduction in prefrontal grey matter in antisocial individuals and reduced activity in the prefrontal cortex of affective murderers.

Will we one day be able to evaluate ‘how deep someone’s morality has fallen’? Will there be a ‘stethoscope of morality’ that can measure the weaknesses of our moral judgments and behaviors? If so, will we be able to cure immoral behavior? Or, conversely, will we be able to augment the brain processes involved in our moral competence? Perhaps most importantly, what do we do with the notion of moral responsibility when there is evidence of predispositions towards antisocial behavior? Although there is still a long way to go in understanding the neurobiology of human morality, this conference was an important step in introducing some of the moral dilemmas that may confront us as the field of research progresses. More information is available at www.themoralbrain.be.

1. Percy W (1971), Love in the Ruins, Farrar, Straus & Giroux, New York.

2. Elliott C (1999), Bioethics, Culture and Identity. A Philosophical Disease, Routledge, London.


——————————————————————————–
An Ravelingien Ph.D. is a fellow of the IEET, and an assistant researcher in bioethics at the Department of Philosophy, Ghent University.

Charisma, Crowds, Psychology

I came across the following article related to some anthropological work that Charles Lindholm has been involved in, and on reading it I found it quite useful (with some reservations) in understanding some eastern cultures – especially those of the Indian Subcontinent. It is a long article that will require a number of sittings to read and comprehend. If you don’t have the time, at least try to understand the conclusions.

What does it mean to be ‘in one’s right mind’? Ordinary discourse and the technical languages of the social sciences assume that being in one’s right mind essentially means that one has the ability to calculate how to attain valued ends while avoiding injury and opprobrium (See Note 1). The calculating rationality which utilizes appropriate means to achieve desired ends is thought to be known and recognized both by rational subjects themselves and by equally rational observers; irrationality, then, is an incapacity to calculate, and is revealed in a lack of congruence between acts and goals.

Anthropologists, as professional iconoclasts, have often attempted to demonstrate that assumptions about ‘normal’ consciousness vary according to cultural context; what is madness here is sanity there, and vice versa. This approach is especially characteristic of interpretive anthropologists who wish to avoid imposing preconceived Western notions of rationality on what Clifford Geertz calls ‘local knowledge’.

However, although the range of goals and methods for achieving them has been greatly expanded by an awareness of cultural context, the interpretive approach does not really offer any significant challenge to the model of rationality outlined above, but rather remains grounded in standard utilitarian assumptions of rational individual actors calculating means to achieve valued ends. In this paper, I argue that a truly radical challenge to the notion of rationality already exists within the canon of Western social thought in the works of Max Weber and Emile Durkheim, as well as in the now forgotten writings of crowd psychologists Gustave Le Bon and Gabriel Tarde.

In the next few pages, I will outline these oppositional and radically non-calculative aspects of social theory, contrast them with the work of some influential modern scholars, and, by means of a discussion of typical recruitment mechanisms found in some ‘New Age’ movements, suggest a few ways these classic perspectives might help us to rethink our notions of person, agent, and sanity.

Max Weber and the Irrational

It is appropriate to begin with Max Weber, who is the predominant figure in the pantheon of modern American sociology and anthropology. For Weber and his orthodox followers, sociology and anthropology were defined as the effort to reveal sympathetically yet systematically the significance of social action through exposing the cultural values and norms that motivate persons. This is the famous method of verstehen, or, in Geertzian terms, ‘taking the native’s point of view’, and is the foundation of interpretive anthropology. From this perspective, the interpreter reaches ‘understanding’ by realizing the meanings the local actor attaches to his or her actions in pursuit of culturally valued goals. In other words, Weberian and Geertzian actors are reasonable, although their reasons may not be immediately transparent to an uninitiated observer due to cultural and historical differences in value-systems and in the modes of rationality developed as a consequence of these differences.

We can see then that Weberian sociology and its modern interpretive descendants are approaches to social science that fit in well with the model of ‘standard’ consciousness I outlined above: human beings are assumed to be rational agents acting consciously and intelligently to maximize their valued goals; their thought is recognizable as reasonable by the thinker as well as by the culturally knowledgeable observer; furthermore, rationality is highly valued within its particular cultural setting, since only rational action can lead to attainment of culturally desirable ends. The contribution of interpretive social science, in the Weberian and Geertzian sense, is thus to reveal the rationality of apparent irrationality through supplying “the interpretive understanding of social action and thereby… a causal explanation of its course and consequences” (Weber 1978: 4).

For Weber, this approach, in which the point of view of the other is taken in order to display the underlying intent and purpose of social action for that other, is the sole mode of inquiry proper to the social sciences. According to Weber, such a limitation of the possibilities of sociology is necessary because sociologists (and, by extension, anthropologists) are products and purveyors of rational analytic thought and can only practise their craft in this mode. Even more crucial, however, is Weber’s fundamental contention that any action orientation in which the actors’ motives and goals are not self-consciously determined is outside the realm of meaning, therefore unintelligible, and as such must be excluded from the central interpretive task of social theory.

But although Weber specifically excludes all irrational, unconscious, and purely reactive activity from the realm of theory and accordingly devotes himself to explicating the types of rationality that ‘make sense’ of other cultures and historical epochs, he himself was well aware that a great deal of human life – indeed, most of human life – is not experienced by self-conscious agents acting to achieve valued goals within coherent ‘webs of meaning’. Weber therefore breaks action orientations down into four ideal types. Two of these types – value rationality and instrumental rationality – are different forms of calculating consciousness based upon the rationality of the actor (See Note 3), and in most of his major writing Weber elaborates their distinctions and evolution. The other two types of action orientation, however, are deemed by Weber to be without any purpose or meaning whatsoever, and thereby to stand outside the range of social theory. These types are tradition and charisma (See Note 4).

Tradition is defined by Weber as “on the other side” of the borderline between meaningful and irrational action (Weber 1978: 25), since for him tradition ideally implies an automatic and unthinking repetition by the actor enmeshed within the confines of a mindless swarm; it is a state of torpor, lethargy and inertia, predictable and mechanical, reproducing itself in utter indifference and submerging the creative individualities of all persons caught within its coils (See Note 5). Here, Weber gives us a picture of mundane life governed by routine; a world of the passive crowd in which rational self-consciousness and goal-orientation have no part to play.

Yet, although tradition is sociologically unanalyzable in principle, Weber nonetheless notes that action motivated by habit and thoughtless conformity is hardly unusual. Instead, he writes that “in the great majority of cases actual action goes on in a state of inarticulate half-consciousness or actual unconsciousness of its subjective meaning” (Weber 1978: 21) and that the “bulk of all everyday action” is motivated by “an almost automatic reaction to habitual stimuli” (Weber 1978: 25). Weber freely acknowledges that such “merely reactive imitation may well have a degree of sociological importance at least equal to that of the type which can be called social action in the strict sense” (Weber 1978: 24).

Of even greater importance is charisma, which stands in absolute contrast to tradition. In its simplest form, charisma is defined by Weber as “a certain quality of an individual personality by virtue of which he is considered extraordinary and treated as endowed with supernatural, superhuman or at least specifically exceptional powers or qualities” (Weber 1978: 242). Individuals possessing charisma are portrayed by Weber as above all else emotional and vitalizing, in complete opposition both to the enervating authority of the patriarch and the rational efficiency of the technician-bureaucrat. Instead, whatever the charismatic leader says is right not because it makes sense, or because it coincides with what has always been done, but because the leader says it. Orders can therefore be completely whimsical, self-contradictory and even lead to death or destruction for the follower, demonstrating the disciple’s inner emotional compulsion to obey without regard for coherence or consequence.

The extraordinary figures who inspire such unreasoning devotion are imagined by Weber to be, in their typical form, berserk warriors, pirates and demagogues. They reveal their capacities through a highly intensified and emotionally labile state of consciousness that excites and awes the onlookers, and jolts them from the everyday (See Note 6). The primary type, from which the others spring, is the epileptoid magician-shaman who can incorporate the Gods and display divine powers primarily through convulsions, trembling and intense effusions of excitement (Weber 1972: 327, 1978: 401) (See Note 7). Through his capacity for epileptoid states, the shaman served both as an exemplar of ecstasy and as the leader in the rituals of communal intoxication and orgy Weber took as the original sacred experience (Weber 1978: 401, 539).

Why should such manifestations of apparent abnormality appeal to an audience? It is not intuitively obvious that a display of epileptoid behavior would be attractive to anyone; in our society quite the contrary is the case. But Weber postulated that extreme emotional states, such as those generated in seizures and other forms of emotionally heightened altered states of consciousness, had a contagious effect, spreading through the audience and infecting its members with corresponding sensations of enhanced emotionality and vitality; these expansive sensations are felt to be emanating from the stimulating individual, who is then credited with superhuman powers. The charismatic appeal therefore lies precisely in the capacity of a person to display heightened emotionality and in the reciprocal capacity of the audience for imitation and for corresponding sensations of altered awareness.

Thus for Weber, what is essential and compulsive in the charismatic relation is not its meaning, though explanatory meaning systems will certainly be generated after the fact (See Note 9). Rather, it is the participatory communion engendered by the epileptoid performance of the charismatic which experientially and immediately releases the onlookers from their mundane sufferings. “For the devout the sacred value, first and above all, has been a psychological state in the here and now. Primarily this state consists in the emotional attitude per se;” an attitude in which the following could momentarily escape from themselves by dissolving in “the objectless acosmism of love” (Weber 1972: 278, 330 emphasis in original). For Weber, such prophets provided the creative force in history; only through their inspiration could enough energy and commitment be generated to overturn an old social order. They are the heroes and saints who, he feared, could no longer be born in the rationalized world of modern society (See Note 11).

To recapitulate, we have then in Weber two forms of altered or dissociated states of consciousness that, from his point of view, are not amenable to sociological analysis since they stand outside rational goal-orientation, yet are nonetheless of crucial importance in history and culture. In fact, what these states are altered or dissociated from becomes a difficult question to answer, since Weber sees the predominance of the rational ‘standard’ consciousness as a relatively recent development. Perhaps, instead, it is more appropriate to say that rationality itself, especially in its modern instrumental version, is an altered state, vis-à-vis its powerful predecessors of tradition and charisma.

The Rationalization of Irrationality

But these opposites also continually transform one into the other in a continuous dialectic, and they move as well through history toward their own supersession by more rational modes of thought. Charisma occurs, Weber says, when tradition has lost its hold and people no longer feel compelled to repeat the old patterns, obey the old orders. Charismatic revolutions themselves are destined to be short-lived, and necessarily have a new tradition nascent within them; ritualization and bureaucratization inevitably appear as the prophet’s original vitalizing revelation is repeated and institutionalized by his self-interested followers, who wish to cloak themselves with the sacred transformative quality originally imputed to the personal aura of the leader himself. This type of charisma supports the new traditions born of the original prophecy; but now the crown, the throne, the robe, instead of being the accoutrements of the ecstatic prophet, may legitimize a moribund time server. Charisma in this instance becomes coterminous with tradition, justifying and validating the habitual obedience of the masses (See Note 12). From this perspective, tradition too changes in character, losing its irrational somnambulistic component to become a coherent framework within which free agents actively and rationally pursue the given values and goals elaborated by the prophet and his minions. In other words, both charisma and tradition become rationalized as they transform from their ideal-typical state.

Weber’s conceptualization of this process has had great influence upon his American followers. But where Weber placed the primary forms of charisma and tradition outside the boundaries of social thought, while still giving them credit as the precursors of rationality, his successors have tried to make them disappear completely by incorporating them within their systematic meaning-centered theories. Thus the influential sociologist Edward Shils claims that an innate human quest for a coherent and meaningful way of understanding the world is the sacred heart of every viable social formation. Therefore, it follows that “the charismatic propensity is a function of the need for order” (Shils 1965: 203) and that charisma is felt automatically whenever one draws near the entities and institutions thought to embody and emanate that order. Tradition can then be understood as located precisely within the same order-giving central structures in which charisma inheres; structures that, far from being irrational, provide a sacred and coherent model for living a meaningful life. Shils’s paradigm is explicitly followed by Clifford Geertz, who argues for “the inherent sacredness of sovereign power” (1983: 123), and proceeds to analyze the manner in which this supposed sovereign, meaning-giving central power is manifested in various cultural frameworks.

These neo-Weberian perspectives have erased the image of charisma as an irrational emotional convulsion. Instead, all persons in all societies at all times are attempting, with greater or lesser success, to promote and to attain a culturally given sacred central symbolic system of accepted significance, as revealed in concrete institutional forms. The only human problem is not being able to achieve proximity to this holy order. From within this framework, the frenzy of the shaman is transformed into a reasonable search for coherence and significance, and tradition and charisma become equivalent to rationality (See Note 13).

Obviously, this version of society is far from the social and historical concept of irrational action that Weber knew, revealed, and set aside as ineffable and thus outside of sociological discourse. Weber certainly could not have accepted the reduction of charisma and tradition to ‘sacred order’. For him, the primary form of tradition remained imitative and senseless, and the primary form of charisma remained convulsive, revolutionary, and outside of ‘meaning’ entirely. The best that sociology could do, from his perspective, was to recognize the capacity of these irrational impulses to influence a rational course of action, and thereby to “assess the causal significance of irrational factors in accounting for the deviations from this type” (Weber 1978: 6) (See Note 14).

Durkheim and Group Consciousness

Let me turn now to Emile Durkheim, the other great ancestor of contemporary social thought, whose work offers what I believe to be a more theoretically compelling understanding of the irrational than does Weber. However, Durkheim’s concern with grasping irrational states of being is now more or less forgotten or else the object of misunderstanding and derision (See Note 15). Instead, he is known today primarily as he was interpreted by Talcott Parsons, i.e., as a systematic thinker strongly associated with functionalism and with his pioneering use of statistical data to isolate variables for the purposes of demonstrating causal chains in social organizations. Here his great contributions are his dissection of the division of labor and its consequences, and his correlation of suicide rates with alienating social conditions. His other great project, one which strongly influenced later structuralism, was his effort to demonstrate that categories of thought are themselves social products, and thereby to ground Kantian metaphysical imperatives in a structured social reality.

But these are only a part of Durkheim’s sociology. In contrast to the Weberian concern with conscious agents struggling to achieve culturally mediated goals and values, Durkheim founded his sociology on the notion that ordinary consciousness is characterized more by rationalization than by rationality. For him, the reasons people claim to have for what they are doing and the meanings they attribute to their actions are post facto attempts to explain socially generated compulsions which they actually neither understand nor control.

Thus Durkheim, unlike Weber, draws a radical distinction between the goals and character of the group and the goals and character of the individuals within the group, arguing that “social psychology has its own laws that are not those of individual psychology” (1966: 312). Furthermore, “the interests of the whole are not necessarily the interests of the part” (Durkheim 1973: 163); indeed, they may be, and often are, completely at odds. But the group imposes its own will upon the hearts and minds of its members and compels them to act in ways that run against their own subjective interests; these actions are later rationalized to ‘make sense’, and the rationalizations then become the value systems of a particular human society.

Durkheim therefore presents us with the extraordinary proposal that sociology cannot take as its subject the individual person who is manipulating within culture to maximize his or her own ends. Rather, he proposes a continuous conflictual ebb and flow between singularity and community, self and group (See Note 16). As he writes, “our inner life has something like a double center of gravity. On the one hand is our individuality – and, more particularly, our body in which it is based; on the other it is everything in us that expresses something other than ourselves…. (These) mutually contradict and deny each other” (1973: 152) (See Note 17).

Durkheim, like Weber, envisions the individual as rationally calculating and maximizing. But far from assuming this form of consciousness to be the nexus of society or of sociology, Durkheim repudiates egoistic calculation as immoral, solipsistic, depraved, animalistic, and of no sociological interest. Instead, he argues that human beings rise above animality and pure appetite precisely at the point where the ‘normal’ mind of the self-aggrandizing egoistic actor is immersed and subdued within the transformative grip of the social (See Note 18).

Durkheim’s vision of the selfish actor dissolved within the crucible of society appears to parallel Weber’s image of tradition as a state of deindividuated trance. But there is a very significant difference between the two, which derives from Durkheim’s understanding of the experience of group consciousness. Where for Weber the state of unthinking immersion in the group is associated with torpor and lethargy, Durkheim argues instead that people submerge themselves in the collective precisely because participation offers an immediate felt sense of transcendence to its members. It is a sensation of ecstasy, not boredom, that experientially validates self-loss in the community.

Influenced by studies of Mesmerism (See Note 19) and the same notions of emotional excitability that Weber also utilized, Durkheim thought that an extraordinary altered state of consciousness among individuals in a group, which he called ‘collective effervescence’, would occur spontaneously “whenever people are put into closer and more active relations with one another” (Durkheim 1965: 240-1). This experience is one of depersonalization, and of a transcendent sense of participation in something larger and more powerful than oneself (See Note 20). Durkheim, ordinarily a placid writer, paints a potent picture of this state, as the personal ego momentarily disintegrates under the influence of the fevered crowd. “The passions released are of such an impetuosity that they can be restrained by nothing…. Everything is just as though he really were transported into a special world, entirely different from the old one where he ordinarily lives, and into an environment filled with exceptionally intense forces that take hold of him and metamorphose him” (Durkheim 1965: 246, 249).

Durkheim imagines that within the excited mass, sensations of emotional intensification are released in impulsive outbursts that contagiously spread to those around. From this point of view, charisma exists only in the group; the charismatic leader who is Weber’s hero is here a passive symbol serving, in Elias Canetti’s words, as a ‘crowd crystal’ around whom the collective can solidify and resonate (Canetti 1978) (See Note 21). The result of this solidification is immediate imitation, magnified through the lens of the leader and synchronized within the group as a whole. In a feedback loop, this echoing and magnifying serves to further heighten emotion, leading to greater challenges to the ego and more potent feelings of exaltation. After this ecstatic experience “men really are more confident because they feel themselves stronger: and they really are stronger, because forces which were languishing are now reawakened in the consciousness” (Durkheim 1965: 387).

The physical experience of self-loss and intoxication in the crowd’s collective effervescence is, for Durkheim, the “very type of sacred thing” (Durkheim 1965: 140) and is the ultimate and permanent source of social cohesion; all else is secondary. Thus he writes that what is necessary for social life “is that men are assembled, that sentiments are felt in common and expressed in common acts; but the particular nature of these sentiments and acts is something relatively secondary and contingent” (1965: 431-2).

Tradition, from this perspective, is not seen as a torpid counter to the excitement of charisma, as in the Weberian model. Instead, a viable tradition is understood as suffused with the ecstatic experience of regular collective participation. Thus Durkheim conflates charisma and tradition in a manner completely the reverse of Shils and Geertz. For Durkheim, any attribution of meaning to the felt reality of collective effervescence is strictly a posteriori; an attempt by individuals to explain and rationalize what is actually a primal, prelogical, experiential state of transcendent self-loss that provides the felt moral basis for all social configurations, and combats the solipsistic self-interest that would tear society apart.

Crowd Psychology

Durkheim’s positive moral view of group consciousness and Weber’s favorable portrait of charismatic relations were completely overturned in the early 20th century by the crowd psychologists Gustave Le Bon and Gabriel Tarde. These two French theorists, though now largely forgotten by academics, were tremendously influential in their time, and were the founders of the present-day practices of political polling and media consultation as well as the esoteric study of group psychology. For them the collective experience no longer had any redemptive features, and became instead a frightful combination of chaos, credulity and passion as persons within the crowd automatically regress to more primitive, child-like states of being while under the influence of their irrational, emotionally-compelling leader (See Note 22).

In this formulation, the ‘standard’ state of rational consciousness, which Le Bon and Tarde both quite explicitly took to be the consciousness of a masculine, calculating, utilitarian free agent, was fragile indeed. Though lauding rationality as the highest form of thought, the crowd psychologists, like Weber, were suspicious of the extent to which rational consciousness actually prevailed. Tarde, for example, believed that people, though imagining themselves to be free agents acting for understood goals, are in truth “unconscious puppets whose strings were pulled by their ancestors or political leaders or prophets” (1903: 77). From this perspective, men and women, insofar as they are members of a group, are “in a special state, which much resembles the state of fascination in which the hypnotized individual finds himself in the hands of the hypnotiser” (Le Bon 1952: 31).

In this vision, even the most rational individual ran great risk of being quickly and irresistibly reduced to the lowest common denominator when immersed in a crowd, and consequently of acting in a savage, childish, ‘feminine’ and, in short, irrational manner that would never be condoned by ordinary standards of behavior. Rational consciousness, then, is portrayed and appreciated by these thinkers as a feeble refuge from the torrents of passion and destruction that seethe within the collective; a torrent that drowns all who are drawn into its vortex (See Note 23). The Durkheimian view of the power of the collective is here completely accepted, but this power is allowed only a negative moral content, while the good is found solely in the flimsy boat of rationality.

For the crowd psychologists, as for Durkheim, the mechanisms that stimulate the crowd are simple. Once a mass is gathered, any strong action excites immediate imitation and magnification in a cycle of intensification that eventually dies down, much like the ripples that appear after a stone is thrown into a pool. Only through such stimulation can human beings attain “the illusion of will” (Tarde 1903: 77) (See Note 24). So, where Durkheim believed the primal group would coalesce spontaneously without the necessity of any external excitement, crowd psychology argued that someone had to throw the stone and provide the “dream of command” that stimulates the crowd to unite in pursuit of “a dream of action” (Tarde 1903: 77).

In postulating the need for a leader to galvanize the group, Le Bon and Tarde brought together Durkheimian and Weberian imagery. But where Weber had given the charismatic a positive value as the founder of new religions and the healer of the dispirited, Le Bon and Tarde see him in negative guise as a powerful and willful figure; a mesmerist who is capable of expressing in his person the electrifying excitement and volition that awakens the sleeping crowd, providing the masses with an irresistible command that solidifies and motivates them under his thrall (See Note 25). The inner character of this leader remained an enigma; far from a rational calculator, he is “recruited from the ranks of those morbidly nervous, excitable, half-deranged persons who are bordering on madness” (Le Bon 1952: 132). In particular, he had to be “obsessed” by an idea that “has taken possession of him”, in a way exactly parallel to the possession of the shaman by a god or gods (Le Bon 1952: 118). The crowd psychologists argue that it is precisely the leader’s obsessive self-absorption that appeals to the crowd, since only through feeling himself pulled and formed by forces beyond his control does the leader gain the power to act and thereby break the cycle of imitation and passivity that has held the collective in a somnambulistic stupor (See Note 26).

In the paradigm offered by crowd psychology, such persons elicit not only obedience, but also the love and adulation of the followers. By standing apart, completely focused on an inner vision which compels and energizes them, they embody and exemplify the “dream of command” that electrifies the following. So we have the paradox of a leader who, far from wishing to further the ends of his followers, instead “in perfect egotism offered himself to (their) adoration” (Tarde 1903: 203). The crowd psychologists thus come to the pessimistic conclusion that the group’s devotion has “never been bestowed on easy-going masters, but on the tyrants who vigorously oppressed them” in order to serve their own driven obsessions (Le Bon 1952: 54).

Crowd psychology therefore unites Durkheim and Weber by placing an ecstatic and convulsive charismatic at the center of a receptive group. The state of torpor that Weber saw in tradition is here understood as the somnambulistic trance that precedes charismatic involvement in a state of collective effervescence. The moral quality of crowd participation and charismatic excitement is now also reversed. Where Durkheim portrayed the vitality of society arising from communal experiences of unity, and where Weber hoped for the arrival of a transformative new prophet who could break open the iron cage of instrumental rationality, crowd psychology gives us frightening imagery of both groups and leaders; imagery that points not toward the church and the prophet, but toward Nazism and Hitler. As Le Bon prophetically writes, as a consequence of the erosion of traditional bonds of kinship, ethnicity and religion that kept the regression to mass consciousness at bay, “the age we are about to enter will in truth be the ERA OF CROWDS” (1952:14).

The Denial of Charisma

In so demonizing the altered states of charisma and group participation, crowd psychology prefigures the modern attitude, though unlike modern writers, the crowd psychologists retained a fearful appreciation of the potency of group consciousness. But this appreciation has been repressed by the efforts of Shils, Geertz and others of the interpretive school who aim to transform the charismatic appeal of the leader and the convulsive reaction of the group into a rational quest for meaning, order and coherence. In a parallel manner, ‘resource mobilization’ theorists of mass movements have argued that activist groups are made up of purposive and reasonable individual free agents voluntarily gathered together for the sake of commonly held goals of social justice. And, similarly, social constructivist theories of emotion portray emotion as ‘cognitive’, and therefore consider emotions primarily as ‘embodied appraisals’.

I want to be clear here that I do not dispute the salience of a search for meaning, coherence, and justice as causes for commitment to any movement; and certainly emotions are cognized (to be afraid of a cut electrical wire one must know that it is dangerous). But the feeling person, overwhelmed by nameless anxiety, immersed in the vortex of a mob, or irresistibly drawn to a charismatic figure like a moth to a flame, is hardly a rational calculator. The image of free agents making reasonable appraisals of risks, enacting values, construing meaningful systems and pursuing desired outcomes within a coherent cultural context is a vision of humanity that may be appropriate for understanding a great proportion of action and thought; but clearly the apotheosis of rationalization and voluntarism found in these contemporary theories ignores precisely the aspects of social behavior that Weber, Durkheim and the crowd psychologists sought to bring to the fore; i.e., the power of irrational group experience to stimulate men and women into actions that can only be called meaningful, orderly, and goal-oriented if these terms are emptied of all content.

Why has this denial of the irrational psychology of groups and leaders occurred? In part, the assertion of human reasonableness under even the most extraordinary circumstances can be considered an intellectual reaction to the implications of the horrible spectre of Nazism that the crowd psychologists so uncannily prophesied (See Note 27). But it is also clear that the denial of collective deindividuating altered states of consciousness corresponds with our present social formation, which mirrors and ratifies the rationalization processes of the society at large and finds its most powerful philosophical expression in the romantic existentialist apotheosis of the self (See Note 28). Because this model holds sway, a positive moral evaluation of collective charismatic states will be very difficult to achieve, as will the experience of charisma itself.

Charisma Today: est and Scientology

I can illustrate my point (See Note 29) by sketching the trajectory of two apparently pragmatic and “world affirming” (See Note 30) charismatic groups: est, founded and led by Werner Erhard, and Scientology, founded and led by the late L. Ron Hubbard (See Note 31). In their stated purposes, these two groups appear highly instrumental, charging a substantial fee to help people achieve better adjustment at work, new friends, greater happiness, and a more satisfying love life. They have a strong continuity with the ‘healthy-minded’ ‘once-born’ religions that William James (1982) found so characteristic of American culture; religions which typically affirm the goodness of all creation and preach accommodation with the world as it is, attracting middle-class, white collar adherents anxious to better themselves. The est Forum, for instance, stresses that its program is suited to “the already successful… the already healthy… the already committed… the already accomplished… the already knowledgeable” (Forum pamphlet 1986). The purpose of joining is to learn a practice allowing one to manipulate “the levers and controls of personal effectiveness, creativity, vitality and satisfaction” (Forum pamphlet 1986); and testimonials from converts make claims not to higher wisdom, but rather that the discipline “has helped me to handle life better…. I get on better with people…. I can apply myself to work and study more easily than before” (Foster 1971: 119). Successful graduates are “people who know how to make life work” (Erhard quoted in Brewer 1975: 36).

In the pragmatic, cheerful ‘once-born’ ethos, the desire for personal enlightenment is reconciled with practical action, doing well in the office becomes a pathway to self-fulfillment, and accepting hierarchy is understood not only as a useful strategy in business, but also as a spiritual exercise, since “you get power by giving power to a source of power” (Erhard quoted in Tipton 1982: 215). Armed with new perceptions, the trainees can acquiesce to whatever situation they find themselves in, confident that “being with it makes it disappear” (an est trainer, quoted in Tipton 1982: 209); that whatever one is doing is what one wants to do, and that the world is good and just.

“Everyone of us is a god in his own universe, and the creator of the very reality around ourselves” (an est trainer, quoted in Singh 1987: 10). As Ellwood remarks, from this perspective “an individual only gets into traps and circumstances he intends to get into…. the limitations he has must have been invented by himself” (1973: 175).

In keeping with the practical, work-oriented manifest content of this ideology, most participants have little involvement in any particular spiritual technology, judging efficacy, like any good consumer, solely by perceived results. They are, in Bird’s (1979) terminology, apprentices rather than devotees or disciples; persons merely looking for helpful knowledge in a complicated mystic marketplace.

Yet, despite their overtly instrumental character, utilitarian orientation, and constantly shifting peripheral membership, these groups paradoxically appear to have a strong tendency to develop highly committed charismatized inner cores of intensely loyal devotees gathered around a leader taken to be a demigod. As Roy Wallis puts it, “social reality outside the movement may come to seem a pale and worthless reflection of the social reality of the movement…. (as) the self and personal identity… become subordinated to the will and personality of the leader” (Wallis 1984: 122-24).

In Scientology, for instance, there was a “transformation from a loose, almost anarchic group of enthusiasts of a lay psychotherapy, Dianetics, to a tightly controlled and rigorously disciplined following for a quasi-religious movement, Scientology” (Wallis 1977: 5). L. Ron Hubbard, the founder of this group, began as a science fiction writer and entrepreneur, but ended by claiming to be a Messiah “wearing the boots of responsibility for this universe” (Hubbard quoted in Ellwood 1973: 172). His disciples concurred, seeing him as a charismatic superman who could escape space and time, and whose insight into the world would lead to universal salvation.

For the inner cadre of Scientologists the ‘meaning’ of membership did not hinge on a coherent doctrine, since Hubbard “modified the doctrine frequently without precipitating significant opposition” (Wallis 1977: 153). As a result, “even the most doctrinally learned Scientologists may be unsure what palpable qualities a clear (an enlightened person) is supposed to manifest, other than confidence and loyalty to the cult” (Bainbridge and Stark 1980: 133). Participation rested instead on absolute faith in Hubbard himself and on one’s total unreserved commitment to the organization. As a former convert writes, “the extent of one’s faith was the measure of one’s future gains…. Everything depended on one’s own certainty at the moment” (Kaufman 1972: 25, 179). Any questioning showed one was not moving toward ‘clear’, whereas meditation on Hubbard’s often self-contradictory words was considered to be transformative in itself.

In the fully formed Scientology corporation a multi-million dollar enterprise was headed by a small, secretive, highly disciplined and fully committed central cadre, the Sea Org, marked by their esoteric practices, special language, and distinctive uniforms of white, with black boots and belt. Totally dedicated to Hubbard, they formed an inner circle of virtuosi living in seclusion aboard Hubbard’s yacht, proclaiming their devotion by signing ‘billion year contracts’ of spiritual service to their eternal leader.

As the group made claims to have the key, not simply to enhanced awareness, but to all the world’s problems, it also became more rigid and totalitarian; fear of ‘suppressives’ (Scientology language for opponents) heightened, leading to expensive lawsuits and countersuits; meanwhile Hubbard himself withdrew deeper into paranoia, eventually isolating himself so that only three people were actually permitted to see him, and it became a matter of controversy whether he was alive or dead (See Note 32).

Est has followed a similar trajectory. Beginning as the revelation of a former encyclopedia salesman and ex-Scientology convert, est brought together the techniques of Scientology, Buddhist meditation, existential philosophy and group therapy to form a potent self-help organization which soon began to exhibit a charismatic character. Werner Erhard, the founder, was idolized by his committed followers as a “fully realized human being” who “lives in risk and possibility… we catch up with him, then he moves ten steps ahead” (a convert quoted in Singh 1987: 89). An inner circle of devotees controlling the vast est empire was absolutely loyal to Erhard, whom they conceived to be a savior. This inner circle was tightly knit, strictly regulated, and required to have only “those purposes, desires, objectives, and intentions that Werner agreed for you to have” (the president of est, quoted in Martin 1980: 112). Not coincidentally, they began to resemble Erhard closely, down to mannerisms and dress.

The accommodative est message of “perfection as a state in which things are the way they are, and not the way they are not” (Erhard quoted in Martin 1980: 114) was taken by the inner circle to be a message that would transform the world through transforming consciousness, and est began to reorient itself in a more overtly religious, salvationist direction, with Erhard as the prophet of the coming millennium. But the pressure of being a charismatic figure began to tell on Erhard, who showed signs of psychological disintegration, brutalizing members of his family and the inner core while simultaneously demanding greater and more violent tests of loyalty from those closest to him. The ensuing tension led, in recent years, to defections and litigation within the core, and to public attacks on Erhard by some of his closest relatives and associates (See Note 33).

The parallel descents of these groups into paranoia and authoritarianism are instructive, and illustrate the difficulties even the most accommodative charismatic movements and leaders have in adapting to modern social conditions. They also illustrate recurrent patterns of group processes that are not reducible to a quest for meaning or coherence or any other rational end, but that can better be conceptualized within a framework of charisma, collective effervescence, and the psychology of crowds. The same framework can help us to understand the methods of recruitment that drew people deeply into these organizations (See Note 34).

Essentially, recruitment to est and Scientology, in common with recruitment to many other modern cults, relies on techniques that reveal to the prospective clients the degree to which their personal identities are contingent and socially constructed. The stated end is to permit the convert to escape from obligations of should and ought (referred to as ‘garbage’) in order to find the authentic, eternal and vital selves that lie beneath social and familial conditioning.

The notion of a primal unsocialized vital center is taken absolutely literally by Scientology. In its doctrine, human beings are actually concrete emanations of timeless energy forces called Thetans, who manifested themselves in the material world for amusement, but who have been so absorbed in their games that they have forgotten their true transcendent identities. To remedy this unhappy condition, one must ‘clear’ material residues and memories away from Thetan consciousness and allow the Thetan to “relinquish his self-imposed limitations” (Hubbard quoted in Wallis 1977: 104).

The fantastic science fiction ideology would hardly be convincing to many potential converts without its experiential ratification through a long process of training in which the new member’s sense of identity and social context is consistently undermined via a bewildering, repetitious and emotionally charged sequence of ‘deprogramming’ exercises (‘auditing’) which utilize a fallacious instrument (the ‘e-meter’) that students believe registers fluctuations in their emotional responses (see Whitehead 1987 for a detailed account).

In the training, the student, under the eye of an experienced ‘auditor’, may be asked repeatedly to relive and repeat painful or intense experiences of the past. The auditor asks questions such as “tell me something you would be willing to have that person (indicated by the trainer) not know about you”, over and over again. No explanations are given, and the trainee is also constantly obliged to redefine the most common words and phrases he or she uses in response, and is required as well to master the complex Scientology jargon. The ‘runs’ of repeated questions and answers can go for many hours, confusing and exhausting the trainee. The ostensible aim of this ritual is to distance the trainee from emotional reactions to ‘garbage’ so he or she can become ‘at cause’ by getting a ‘clear’ reading on the e-meter. In consequence of this process, the trainee will hypothetically become free to experience unencumbered ecstatic Thetan awareness.

The training process occurs in an atmosphere of high anxiety, as the trainee struggles to control the random fluctuations of the e-meter while feelings of disorientation, remorse, hatred, love, jealousy and so on are simultaneously elicited by the repetitious, probing, highly personal questions and complex demands of the auditor, a powerful authority figure believed to have achieved a more evolved superhuman consciousness. Each auditing session concludes with a cathartic group gathering in which the participants ‘share wins’ and are “warmly welcomed into the group, greeted and applauded” (Wallis 1977: 173). This sequence proved to be remarkably effective in gaining great loyalty from many Scientology ‘preclears’, who would themselves move up the elaborate ladder toward ‘clear’ status and become ‘auditors’ of other initiates (See Note 35).

Est never utilized such a literal image of liberation as Scientology’s Thetan, but very similar techniques were in operation in the recruitment and training process. For est, as in Scientology, history and family are considered to be destructively enmeshing, and the point of training is to be released “from the cultural trance, the systematic self-delusion, to which most of us surrender our aliveness” (Marsh 1975: 38). The process is conceived as awakening to one’s timeless and vital transpersonal essence, thus becoming “truly able and perfect” (an est trainer, quoted in Tipton 1982: 177). As in Scientology, trainees cannot break through into this perfect realm by reason; reason is regarded as a defense against the intrinsic and immediate truth of intuitive feeling states. “If you experience it, it’s the truth. The same thing believed is a lie” (Erhard, quoted in Tipton 1982: 192).

As in Scientology, instruction is geared to break down the students’ reasoning power and ‘conditioning’ through emotionally charged training sessions designed to demonstrate that their beliefs and personalities are programmed by their past, their culture, and their associations. In the classical est seminar, 250 persons or so spend two weekends totalling 60 to 70 emotionally intense (and expensive) hours of lectures, meditation and confrontation. The trainer typically abuses and infantilizes the group, calling them ‘assholes’ whose lives are ‘shit’, and prohibiting them from using the toilet. The students are further bombarded by paradoxes undercutting logic (See Note 36), asked to relive traumatic emotional experiences of the past, incited to act out deep fears, or perhaps insulted and abused by the leader in front of the audience for arrogance or selfishness. Role playing, switching genders, taking on other identities, all are part of the repertoire. The effectiveness of these efforts to decenter the self in the context of the group is evident in one participant’s description: “It seems now that almost the entire roomful of people are crying, moaning, groaning, sobbing, screaming, shouting, writhing. ‘Stop it! Stop it!’ ‘No! No! No!’ ‘I didn’t do it! I didn’t do it!’ ‘Please….’ ‘Help!’ ‘Daddy, daddy, daddy….’ The groans, the crying, the shouts reinforce each other; the emotions pour out of the trainees” (quoted in Martin 1980: 123).

These methods are quite typical, and involve what Harriet Whitehead (1987) has called ‘renunciation,’ that is, a dedifferentiation of cognitive structures coupled with a withdrawal of affect from its previous points of attachment. In this process, the susceptible subject is pressed to become ‘deautomatized’ (Deikman 1969), hyperaware of the role of conditioning and the plasticity of the self, while simultaneously stimulated to emotionally charged abreactions which are mirrored and magnified by the group and the leader, who represents the sacred group founder. These ‘deconditioning’ exercises are obviously aimed not at promoting adaptation to ‘ordinary misery’ (Freud’s claim for psychotherapy), but at the revelation of a deeper, transcendent inner self no longer bound by the chains of culture or context, nor by the stimulus-response mechanisms of the mind. Instead, “you take responsibility…. in effect you have freely chosen to do everything that you have ever done and to be precisely what you are. In that instant you become exactly what you always wanted to be” (Brewer 1975).

For participants (See Note 37), this inner self is not a matter of conjecture or theory. It is really experienced in the effervescence of the collective – just as Durkheim hypothesized. The undermining of personal identity, the systematic devaluation and confusion of ordinary thought, and the stimulation of heightened abreactive emotions detached from their original causes, all within the context of the mirroring group and under the protection of a god-like leader, act together to provide expansive sensations of catharsis for those who are carried away by the techniques of collective ecstasy.

The individual participating in this experience is likely to attribute his or her feelings of expansion to the doctrine and the leader. The ‘perfect self’ that is then revealed when personal identity is stripped away is, more often than not, a self modeled after the charismatic group exemplar. A new identity then replaces that which has been abandoned as inauthentic – an identity legitimated by the intensity of the emotion generated in the altered state of consciousness of the ecstatic group context – but one which, in consequence, can only exist within this extraordinary situation (See Note 38). In other words, despite appearances of pragmatism, the world-affirming group is likely to develop into a node of collective effervescence that stands in opposition to the larger rationalized social organization, which is experienced as ‘dead’ and alienating. The next step is to try to make the world replicate the group; this is the road toward Messianism and paranoia.

Conclusion

Two points are especially worth reiterating here. The first is the repeated use of techniques aimed at demonstrating that the recruit is not an autonomous individual, but rather is ‘programmed’ and ‘conditioned’ by history, culture, and family. This revelation, engendered in a highly charged group context under the direction of an apparently powerful authority figure, is crucial in stimulating the emotional abreaction that helps lead the subject into collective participation. It is, it seems to me, an anthropological fact of considerable importance that persons in this culture can be transformed by discovering that their lives are not totally autonomous and that their identities are not completely self-manufactured. The efficacy of this technique is, quite evidently, closely related to the prevalent American capitalist social organization and its accompanying ideology of possessive individualism and purposive agency.

A connected point is that members of a configuration with such an ideological and social structure are highly susceptible to a covert hunger for the collective experience offered by charismatic immersion. As I have argued elsewhere (1990), when the feeling self is stripped of identity markers and significant emotional ties with others, and simultaneously affirmed as the sole source of action and preference, then the intensity and certainty of charismatic revelation will be extremely attractive, since participation in a charismatic group offers precisely the emotional gratification, self-loss and affirmation of a transcendent identity that the predominant social model of reality precludes.

However, because such movements are in conflict with the ruling order of thought, they must take on extreme forms. Charisma becomes not a moment, but eternal; the god is no longer manifested occasionally in an otherwise ordinary mortal, but the vehicle has to be holy all the time. So, paradoxically, a culture founded on the ‘standard’ consciousness of rationality and individual agency renders even more fervid and impetuous the expression of the altered state of awareness Weber called ‘charisma’.

To summarize, in this essay I have argued that ‘meaning-centered’ interpretive analysis is in fact located within a tradition that assumes as its basic premise the rationality of maximizing individual actors. This perspective is not adequate for understanding forms of social action that are outside the realm of rationality – a point recognized by Weber himself in his discussion of tradition and charisma.

Here I have sketched very lightly, with plenty of room for contradiction and dispute, some alternative views on irrationality, using the works of Weber, Durkheim, Le Bon and Tarde to argue that processes of charismatic involvement, collective effervescence, and crowd psychology may help us grasp the basic pattern of such apparently irrational action and to place it within a framework of theoretical knowledge. Far too rapidly, I’ve applied this framework to the actual trajectories of two new religions, showing how their evolution and their mode of recruitment fit within it.

The final question is perhaps whether this mode of approach is applicable only for understanding cultic groups at the periphery of social life, or whether it might have some relevance for more mainstream medical practitioners and psychiatrists. I contend the latter is the case. For example, if we believe, with Durkheim, that human society is built upon an emotional experience of selflessness within the transcendent group, what then happens when the increasing dominance of the competitive economy and the worship of the individual make such experiences less and less likely to occur, or even to be imagined? One result might be the charisma hunger mentioned above, and the escalating excesses of charismatic groups. But the more prevalent result may be the appalling number of complaints about depression, deadness and detachment among psychiatric patients in the US, coupled with fevered efforts to stimulate some sense of vitality through various forms of addiction and thrill seeking. These may be the prices paid for the absence of any felt sense of connection to the social world.

 

Notes

  1. I am not claiming that Westerners only have positive evaluations of instrumental rationality; ‘sincere’ emotion is also highly valued. However, sincere feelings do not come from the mind, but from the heart.
  2. The ‘ideal type’ is a formal conceptual model to be used as a lens for viewing variations in real social configurations in order to make comparisons. This implies that ‘rational’ social formations are in actual fact never fully rational, but always have ‘traditional’ and ‘charismatic’ elements within them, even though these elements may be suppressed or denied. And, of course, the reverse is also the case. For more on Weber’s methodology, see Weber 1949.
  3. Instrumental rationality – the rationality typical of modernity and capitalism – is characterized by the most efficient use of means to reach an end. Value rationality – the rationality of premodern societies – envisions means as ends, with efficiency taking second place to proper modes of behavior. The complexities and ambiguities of this distinction are many, and the boundaries of the categories are by no means clear, but what is relevant here is simply that both types of social action, whatever their differences and similarities, involve conscious choices and acts aimed at maximizing valued goals.
  4. In a sense, charisma is the non-rational parallel to value-rationality, since charisma is the attachment of the self to another through affect, just as value-rationality involves an affective faith in a value. Tradition, which is cold and routinized, is, in this respect, analogous to the equally cold technical efficiency of instrumental rationality.
  5. Interestingly, Weber foresaw just such a hive-like future for rational man. Utmost rational efficiency will lead, he feared, to a rigid and immobile bureaucratic and technocratic social system.
  6. See Weber 1978: 242, 400-3, 535-6, 554, 1112, 1115; 1972: 279, 287 for the relationship between charismatic revelation and ecstatic states of excitement.
  7. The conjunction between epilepsy and charisma seems odd given our modern medical conception of grand-mal and petit-mal epileptic seizures as electrical storms in the brain that eliminate consciousness while causing gross motor convulsions. But Weber’s model (one common to his era) broadly imagined epileptic – or, more properly, epileptoid – seizures as closely akin to hypnotic states and to hysterical fits (see Thornton 1976, Massey and McHenry 1986 for more on this connection). Our modern counterpart might be the category of dissociation. However, it is also worth noting that Winkelman (1986), among others, has argued for a parallel between shamanic dissociation, temporal lobe epilepsy, and other forms of what Sacks (1985) has called mental superabundances, or disorders of excess, in which sensations of energy and vitality become morbid, and illness presents itself as euphoria. An example is Dostoyevsky, who writes, “You all, healthy people, can’t imagine the happiness which we epileptics feel during the second before our fit… I don’t know if this felicity lasts for seconds, hours or months, but believe me, I would not exchange it for all the joys that life may bring!” (quoted in Sacks 1985: 137). We might also recall that cross-cultural studies of shamanism do in fact show strong incidence of overtly epileptoid manifestations such as trembling and convulsions, especially in the early stages of shamanic initiation. Evidently there may be both a predisposition and an element of imitation and training at work in achieving shamanic trance, and the trance itself may have a considerable overlap with some mild forms of disturbance of the temporal lobe.
  8. “Ecstasy was also produced by the provocation of hysterical or epileptoid seizures among those with predispositions toward such paroxysms, which in turn produced orgiastic states in others” (Weber 1978: 535).
  9. Characteristically, Weber’s own intellectual concern is with typologizing and contextualizing the novel ethical meaning systems provoked by the prophet’s revelations. He notes that the prophet himself may believe the new meaning system is his major contribution. But Weber clearly states that for the masses, and especially for the impoverished, the prophet remains a charismatic with transcendent powers; the commitment of these followers is not to ideas, but to the prophet’s person and his promise of immediate experiential salvation (Weber 1978: 467, 487).
  10. Levi-Strauss (1967) takes a similar position, but with a very different analytical point.
  11. “Under the technical and social conditions of rational culture, an imitation of the life of Buddha, Jesus, or Francis seems condemned to failure for purely external reasons” (Weber 1972:357).
  12. See Greenfeld (1985) for a good statement of the distinction between primary and secondary charisma; though she too assumes as the essential driving force an orientation for building meaning.
  13. As Harriet Whitehead writes, “cultural anthropology has chosen the conservative route of merely noting that religious practices seem to have some intensifying or disordering effect upon experience, and retreating back into the realm of culturally organized meaning manipulation” (1987: 105). In Weberian terms, this ‘retreat’ has an ‘elective affinity’ for intellectuals, because it is founded on an assertion of the absolute value and importance of the scholarly professional faith in the primacy of reason and the possibility of approaching meaning through interpretation.
  14. Weber profoundly regretted his own incapacity to experience the compulsion of charisma, he lamented the decline of the ecstatic, and he longed for the advent of “entirely new prophets” who would bring, through their very presence, an escape from “the iron cage” of rational action without transcendent content that he envisioned as the inevitable and unhappy future of humanity (Weber 1958:181-2).
  15. See, for example, Meeker, who portrays Durkheim as believing “science would eventually prove fully adequate as a replacement for religion” (1990: 62), and who castigates him for his supposed dismissal of “human dreams and wishes” in favor of the apotheosis of an abstract emblem. Meeker here ignores Durkheim’s emphasis on passion and desire in the construction of the elementary forms of religious life.
  16. “We do not admit that there is a precise point at which the individual comes to an end and the social realm commences…. we pass without interval from one order of facts to the other” (Durkheim 1966: 313).
  17. In taking this perspective, Durkheim prefigures Freud, but with an entirely reversed moral viewpoint. And, of course, the influence of Rousseau and the Comtean vision of a revolutionary sociology is very strong indeed in Durkheim’s apotheosis of society.
  18. Durkheim argues in an important footnote that the realm of the economy, where the maximizing rational individual holds sway, is the only arena of social life that is in essence completely opposed to the sacred. The dominance of the economy in modern culture is therefore destructive of the moral bonds of society (1965: 466). Note how different his project is from Weber’s, who aimed to show the ways in which various prophecies favor or oppose the rise of capitalism.
  19. As Moscovici writes, the hypnotic state was envisioned in late 19th century French culture as “that strange drug which… releases the individual from his solitude and carries him off to a world of collective intoxication” (1985: 92). As already noted, hypnotism and epilepsy were thought to be similar in nature. The idea and experience of hypnotism and allied dissociated states was a romantic counter to Utilitarian individualism, and had a strong influence on social and psychological thought, as well as literature and the arts, in the late 19th and early 20th century.
  20. The similarity to Weber’s ‘objectless acosmism of love’ is evident.
  21. For this reason, Durkheim can make the seemingly paradoxical claim that “despotism is nothing more than inverted communism” (1984: 144).
  22. This image continues to prevail in medical theories of ‘mass hysteria’. See Bartholomew (in press) for a compendium of examples. Bartholomew’s paper is also an example of the interpretive attempt to validate all apparently irrational action by demonstrating its meaningfulness and intent within a cultural context.
  23. The tropes of the ‘feminine’, ‘savage’, ‘childish’ crowd are painfully clear indicators of the anxiety felt by these men over a possible loss of control and over the weakness of their masculine, civilized, adult personas. An interesting, if obvious, analysis could be made of these metaphors, which relate to the changing political climate of France and heightened fear of lower-class rebellion. What I wish to stress here, however, is the structure of the argument.
  24. Awareness makes no difference to this existential condition. “If the photographic plate became conscious at a given moment of what was happening to it, would the nature of the phenomenon be essentially changed?” (Tarde).
  25. As Tarde writes, “volition, together with emotion and conviction, is the most contagious of psychological states. An energetic and authoritative man wields an irresistible power over feeble natures. He gives them the direction which they lack. Obedience to him is not a duty, but a need…. Whatever the master willed, they will; whatever the apostle believes or has believed, they believe” (1903: 198).
  26. Although the leader’s appeal is irrational, it has a certain pattern, and Le Bon gained much of his fame as a modern Machiavelli, telling rulers how to hold the reins of power in the new Age of the Crowd through the use of emotionally charged theatricality, large gestures, dramatic illusions and the rhetoric of myth. According to Le Bon, the modern leader’s technique must be “to exaggerate, to affirm, to resort to repetitions, and never attempt to prove anything by reasoning” (Le Bon 1952: 51). Le Bon’s instructions have been taken seriously by many demagogues, including Hitler, who cited him extensively in Mein Kampf.
  27. Those who believe that Nazi devotees and leaders were motivated by either value or instrumental rationality should consider work by Robert Waite (1977) and Ian Kershaw (1987), as well as Joachim Fest’s biography of Hitler (1974), and the numerous biographies of dedicated Nazis. For more on this, see Lindholm 1990: 93-116.
  28. The intellectual debt of much contemporary anthropological theory to existential and phenomenological thought cannot be adequately pursued here, but particularly noteworthy is an emphasis on ‘authenticity’ and a refusal to make comparisons – both derived from premises of the priority of a unique inner self-consciousness struggling to free itself from what Heidegger (1962) called the tyranny of ‘the they.’ The Western character of these premises is, I hope, evident.
  29. See Lindholm (1990) for a theoretical framework, and for analysis of more extreme cases of modern charisma: Nazism, the Manson Family, and Jim Jones’s Peoples Temple.
  30. The term is used by Roy Wallis to distinguish these positive movements from apocalyptic and millennial ‘world rejecting’ movements such as Jonestown (Wallis 1984).
  31. The material is taken from sources which rely both on the testimony of converts and of those who have ‘deconverted’. On the question of the moral stance of the informant, and its influence on the data, see the Appendix in Wallis (1984). Here, I have used material which is corroborated by sources both within and without the movements.
  32. Hubbard was officially reported dead in 1986, but he had not been seen in public for many years, and may have died sometime previously (see Lamont 1986 for an account). The difficulty of maintaining a charismatic organization after the death of the leader is probably one cause of the reluctance to admit his death.
  33. Erhard has subsequently resigned some of his positions of authority in the organization.
  34. These methods have been substantially altered as each organization moves through the cycle of charismatic routinization and then again attempts to restimulate fervor among the disciples. The examples used here date from the most expansive and charismatic phase of this process.
  35. See Bainbridge and Stark (1980), who argue that the lack of any real content in ‘clear’ status and the constantly shifting Scientology doctrine actually enhanced Scientology’s hold over its converts.
  36. Erhard, a postmodernist before his time, has commented that “there are only two things in the world, semantics and nothing” (quoted in Martin 1980: 114).
  37. I should note that of course not all participants prove to be equally susceptible to the lure of the group. Innumerable differences in personal and cultural background and circumstances will make a difference in the degree to which any individual will be likely to participate. But under the right conditions, it is also very possible that even the most resistant individual might be caught up in the compelling dynamic of a charismatic collective.
  38. Bainbridge (1978) has called this process “social implosion,” that is, the development of a tight knot of persons, interacting solely with one another, bound by powerful feelings of loyalty and of separateness from the rest of society.
     

Charisma, Crowd Psychology and Altered States of Consciousness, Charles Lindholm, University Professors Program and Dept. of Anthropology, Boston University