“When all’s said and done, more is said than done.” — Anon.
The main purposes of this review are to set out for neuroscientists one possible approach to the problem of consciousness and to describe the relevant ongoing experimental work. We have not attempted an exhaustive review of other approaches.
Clearing The Ground
We assume that when people talk about “consciousness,” there is something to be explained. While most neuroscientists acknowledge that consciousness exists, and that at present it is something of a mystery, most of them do not attempt to study it, mainly for one of two reasons:
- They consider it to be a philosophical problem, and so best left to philosophers.
- They concede that it is a scientific problem, but think it is premature to study it now.
We have taken exactly the opposite point of view. We think that most of the philosophical aspects of the problem should, for the moment, be left on one side, and that the time to start the scientific attack is now.
We can state bluntly the major question that neuroscience must first answer: It is probable that at any moment some active neuronal processes in your head correlate with consciousness, while others do not; what is the difference between them? In particular, are the neurons involved of any particular neuronal type? What is special (if anything) about their connections? And what is special (if anything) about their way of firing? The neuronal correlates of consciousness are often referred to as the NCC. Whenever some information is represented in the NCC it is represented in consciousness.
In approaching the problem, we made the tentative assumption (Crick and Koch, 1990) that all the different aspects of consciousness (for example, pain, visual awareness, self-consciousness, and so on) employ a basic common mechanism or perhaps a few such mechanisms. If one could understand the mechanism for one aspect, then, we hope, we would have gone most of the way toward understanding them all.
We made the personal decision (Crick and Koch, 1990) that several topics should be set aside or merely stated without further discussion, for experience had shown us that otherwise valuable time can be wasted arguing about them without coming to any conclusion.
(1) Everyone has a rough idea of what is meant by being conscious. For now, it is better to avoid a precise definition of consciousness because of the dangers of premature definition. Until the problem is understood much better, any attempt at a formal definition is likely to be either misleading or overly restrictive, or both. If this seems evasive, try defining the word “gene.” So much is now known about genes that any simple definition is likely to be inadequate. How much more difficult, then, to define a biological term when rather little is known about it.
(2) It is plausible that some species of animals — in particular the higher mammals — possess some of the essential features of consciousness, but not necessarily all. For this reason, appropriate experiments on such animals may be relevant to finding the mechanisms underlying consciousness. It follows that a language system (of the type found in humans) is not essential for consciousness — that is, one can have the key features of consciousness without language. This is not to say that language does not enrich consciousness considerably.
(3) It is not profitable at this stage to argue about whether simpler animals (such as octopus, fruit flies, nematodes) or even plants are conscious (Nagel, 1997). It is probable, however, that consciousness correlates to some extent with the degree of complexity of any nervous system. When one clearly understands, both in detail and in principle, what consciousness involves in humans, then will be the time to consider the problem of consciousness in much simpler animals. For the same reason, we won’t ask whether some parts of our nervous system have a special, isolated, consciousness of their own. If you say, “Of course my spinal cord is conscious but it’s not telling me,” we are not, at this stage, going to spend time arguing with you about it. Nor will we spend time discussing whether a digital computer could be conscious.
(4) There are many forms of consciousness, such as those associated with seeing, thinking, emotion, pain, and so on. Self-consciousness — that is, the self-referential aspect of consciousness — is probably a special case of consciousness. In our view, it is better left to one side for the moment, especially as it would be difficult to study self-consciousness in a monkey. Various rather unusual states, such as the hypnotic state, lucid dreaming, and sleep walking, will not be considered here, since they do not seem to us to have special features that would make them experimentally advantageous.
How can one approach consciousness in a scientific manner? Consciousness takes many forms, but for an initial scientific attack it usually pays to concentrate on the form that appears easiest to study. We chose visual consciousness rather than other forms, because humans are very visual animals and our visual percepts are especially vivid and rich in information. In addition, the visual input is often highly structured yet easy to control.
The visual system has another advantage. There are many experiments that, for ethical reasons, cannot be done on humans but can be done on animals. Fortunately, the visual system of primates appears fairly similar to our own (Tootell et al., 1996), and many experiments on vision have already been done on animals such as the macaque monkey.
This choice of the visual system is a personal one. Other neuroscientists might prefer one of the other sensory systems. It is, of course, important to work on alert animals. Very light anesthesia may not make much difference to the response of neurons in macaque V1, but it certainly does to neurons in cortical areas like V4 or IT (inferotemporal).
Why Are We Conscious?
We have suggested (Crick and Koch, 1995a) that the biological usefulness of visual consciousness in humans is to produce the best current interpretation of the visual scene in the light of past experience, either of ourselves or of our ancestors (embodied in our genes), and to make this interpretation directly available, for a sufficient time, to the parts of the brain that contemplate and plan voluntary motor output, of one sort or another, including speech.
Philosophers, in their carefree way, have invented a creature they call a “zombie,” who is supposed to act just as normal people do but to be completely unconscious (Chalmers, 1995). This seems to us to be an untenable scientific idea, but there is now suggestive evidence that part of the brain does behave like a zombie. That is, in some cases, a person uses the current visual input to produce a relevant motor output, without being able to say what was seen. Milner and Goodale (1995) point out that a frog has at least two independent systems for action, as shown by Ingle (1973). These may well be unconscious. One is used by the frog to snap at small, prey-like objects, and the other for jumping away from large, looming discs. Why, then, does our brain not consist simply of a series of such specialized zombie systems?
We suggest that such an arrangement is inefficient when very many such systems are required. Better to produce a single but complex representation and make it available for a sufficient time to the parts of the brain that make a choice among many different but possible plans for action. This, in our view, is what seeing is about. As pointed out to us by Ramachandran and Hirstein (1997), it is sensible to have a single conscious interpretation of the visual scene, in order to eliminate hesitation.
Milner and Goodale (1995) suggest that in primates there are two systems, which we shall call the on-line system and the seeing system. The latter is conscious, while the former, acting more rapidly, is not. The general characteristics of these two systems and some of the experimental evidence for them are outlined below in the section on the on-line system. There is anecdotal evidence from sports. It is often stated that a trained tennis player reacting to a fast serve has no time to see the ball; the seeing comes afterwards. In a similar way, a sprinter is believed to start to run before he consciously hears the starting pistol.
The Nature of the Visual Representation
We have argued elsewhere (Crick and Koch, 1995a) that to be aware of an object or event, the brain has to construct a multilevel, explicit, symbolic interpretation of part of the visual scene. By multilevel, we mean, in psychological terms, different levels such as those that correspond, for example, to lines or eyes or faces. In neurological terms, we mean, loosely, the different levels in the visual hierarchy (Felleman and Van Essen, 1991).
The important idea is that the representation should be explicit. We have had some difficulty getting this idea across (Crick and Koch, 1995a). By an explicit representation, we mean a smallish group of neurons which employ coarse coding, as it is called (Ballard et al., 1983), to represent some aspect of the visual scene. In the case of a particular face, all of these neurons can fire to somewhat face-like objects (Young and Yamane, 1992). We postulate that one set of such neurons will be all of one type (say, one type of pyramidal cell in one particular layer or sublayer of cortex), will probably be fairly close together, and will all project to roughly the same place. If all such groups of neurons (there may be several of them, stacked one above the other) were destroyed, then the person would not see a face, though he or she might be able to see the parts of a face, such as the eyes, the nose, the mouth, etc. There may be other places in the brain that explicitly represent other aspects of a face, such as the emotion the face is expressing (Adolphs et al., 1994).
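The flavor of coarse coding can be conveyed by a toy sketch. In this hypothetical example (the units, tuning widths, and decoding rule are our illustrative assumptions, not anything specified by Ballard et al.), a smallish group of broadly tuned units jointly represents one aspect of a stimulus, here an orientation, and each unit also fires, more weakly, to somewhat similar stimuli:

```python
import math

# Toy coarse coding: four broadly tuned units jointly represent one
# stimulus feature (an orientation, in degrees). All names, tuning
# widths, and the decoding rule are illustrative assumptions.

PREFERRED = [0.0, 45.0, 90.0, 135.0]  # preferred orientation of each unit
WIDTH = 60.0                          # broad ("coarse") tuning width

def responses(theta):
    """Firing of each unit: a broad Gaussian around its preferred value."""
    return [math.exp(-0.5 * ((theta - p) / WIDTH) ** 2) for p in PREFERRED]

def decode(rates):
    """Read the represented value back out as a firing-weighted average."""
    total = sum(rates)
    return sum(r * p for r, p in zip(rates, PREFERRED)) / total

# A 70-degree stimulus drives several units at once; the population as a
# whole nonetheless represents the orientation fairly accurately.
rates = responses(70.0)
estimate = decode(rates)
```

Note that no single unit signals “70 degrees”; the value is carried only by the joint pattern of firing across the group, which is the sense in which the representation is explicit at the population level.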
Notice that while the information needed to represent a face is contained in the firing of the ganglion cells in the retina, there is, in our terms, no explicit representation of the face there.
How many neurons are there likely to be in such a group? This is not yet known, but we would guess that the number to represent one aspect is likely to be closer to 100-1,000 than to 10,000-1,000,000.
A representation of an object or an event will usually consist of representations of many of the relevant aspects of it, and these are likely to be distributed, to some degree, over different parts of the visual system. How these different representations are bound together is known as the binding problem (von der Malsburg, 1995).
Much neural activity is usually needed for the brain to construct a representation. Most of this is probably unconscious. It may prove useful to consider this unconscious activity as the computations needed to find the best interpretation, while the interpretation itself may be considered to be the results of these computations, only some of which we are then conscious of. To judge from our perception, the results probably have something of a winner-take-all character.
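The distinction between unconscious computation and a winner-take-all result can be sketched in a few lines. In this deliberately minimal illustration (the candidate interpretations and their support scores are invented), many candidates are scored, but only the single winning interpretation is kept:

```python
# Minimal winner-take-all sketch: unconscious "computations" score several
# candidate interpretations of the input, and only the winning
# interpretation (the "result") survives. Candidates and scores are
# invented for illustration.

def winner_take_all(candidates):
    """Return the single best-supported interpretation, suppressing the rest."""
    label, _score = max(candidates, key=lambda c: c[1])
    return label

# Competing interpretations of an ambiguous figure, with support scores.
candidates = [("face", 0.9), ("vase", 0.7), ("noise", 0.1)]
seen = winner_take_all(candidates)  # only "face" remains
```

The point of the sketch is only the shape of the process: the losing interpretations are computed but never reported, which parallels the suggestion that we are conscious of the results, not the computations.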
As a working hypothesis we have assumed that only some types of specific neurons will express the NCC. It is already known (see the discussion under “Bistable Percepts”) that the firing of many cortical cells does not correspond to what the animal is currently seeing. An alternative possibility is that the NCC is necessarily global (Greenfield, 1995). In one extreme form this would mean that, at one time or another, any neuron in cortex and associated structures could express the NCC. At this point, we feel it more fruitful to explore the simpler hypothesis — that only particular types of neurons express the NCC — before pursuing the more global hypothesis. It would be a pity to miss the simpler one if it were true. As a rough analogy, consider a typical mammalian cell. The way its complex behavior is controlled and influenced by its genes could be considered to be largely global, but its genetic instructions are localized, and coded in a relatively straightforward manner.
Where is the Visual Representation?
The conscious visual representation is likely to be distributed over more than one area of the cerebral cortex and possibly over certain subcortical structures as well. We have argued (Crick and Koch, 1995a) that in primates, contrary to most received opinion, it is not located in cortical area V1 (also called the striate cortex or area 17). Some of the experimental evidence in support of this hypothesis is outlined below. This is not to say that what goes on in V1 is not important, and indeed may be crucial, for most forms of vivid visual awareness. What we suggest is that the neural activity there is not directly correlated with what is seen.
We have also wondered (Crick, 1994) whether the visual representation is largely confined to certain neurons in the lower cortical layers (layers 5 and 6). This hypothesis is still very speculative.
What is Essential for Visual Consciousness?
The term “visual consciousness” almost certainly covers a variety of processes. When one is actually looking at a visual scene, the experience is very vivid. This should be contrasted with the much less vivid and less detailed visual images produced by trying to remember the same scene. (A vivid recollection is usually called a hallucination.) We are concerned here mainly with the normal vivid experience. (It is possible that our dimmer visual recollections are mainly due to the back pathways in the visual hierarchy acting on the random activity in the earlier stages of the system.)
Some form of very short-term memory seems almost essential for consciousness, but this memory may be very transient, lasting for only a fraction of a second. Edelman (1989) has used the striking phrase, “the remembered present,” to make this point. The existence of iconic memory, as it is called, is well-established experimentally (Coltheart, 1983; Gegenfurtner and Sperling, 1993).
Psychophysical evidence for short-term memory (Potter, 1976; Subramaniam et al., 1997) suggests that if we do not pay attention to some part or aspect of the visual scene, our memory of it is very transient and can be overwritten (masked) by the following visual stimulus. This probably explains many of our fleeting memories when we drive a car over a familiar route. If we do pay attention (e.g., a child running in front of the car) our recollection of this can be longer lasting.
Our impression that at any moment we see all of a visual scene very clearly and in great detail is illusory, partly due to ever-present eye movements and partly due to our ability to use the scene itself as a readily available form of memory, since in most circumstances the scene usually changes rather little over a short span of time (O’Regan, 1992).
Although working memory (Baddeley, 1992; Goldman-Rakic, 1995) expands the time frame of consciousness, it is not obvious that it is essential for consciousness. It seems to us that working memory is a mechanism for bringing an item, or a small sequence of items, into vivid consciousness, by speech, or silent speech, for example. In a similar way, the episodic memory enabled by the hippocampal system (Zola-Morgan and Squire, 1993) is not essential for consciousness, though a person without it is severely handicapped.
Consciousness, then, is enriched by visual attention, though attention is not essential for visual consciousness to occur (Rock et al., 1992; Braun and Julesz, 1997). Attention is broadly of two types: bottom-up, caused by the sensory input; and top-down, produced by the planning parts of the brain. This is a complicated subject, and we will not try to summarize here all the experimental and theoretical work that has been done on it.
Visual attention can be directed to either a location in the visual field or to one or more (moving) objects (Kanwisher and Driver, 1992). The exact neural mechanisms that achieve this are still being debated. In order to interpret the visual input, the brain must arrive at a coalition of neurons whose firing represents the best interpretation of the visual scene, often in competition with other possible but less likely interpretations; and there is evidence that attentional mechanisms appear to bias this competition (Luck et al., 1997).
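The idea that attention biases a competition, rather than creating the percept outright, can be made concrete with a small sketch in the spirit of biased competition (the coalition names, support values, and additive bias are our assumptions, not a model from Luck et al.):

```python
# Sketch of attention biasing a competition between coalitions of neurons.
# Attention adds support to one coalition before the competition is
# resolved, so an attended but intrinsically weaker interpretation can
# win. All numbers and names are illustrative assumptions.

def resolve(support, attention_bias):
    """Pick the interpretation with the highest biased support."""
    biased = {k: v + attention_bias.get(k, 0.0) for k, v in support.items()}
    return max(biased, key=biased.get)

support = {"red item": 0.6, "green item": 0.8}

# Without attention, the intrinsically stronger coalition wins.
unattended = resolve(support, {})

# Top-down attention to the red item tips the competition in its favor.
attended = resolve(support, {"red item": 0.5})
```

The design choice to make the bias additive, rather than a hard selection, reflects the evidence that attention modulates an ongoing competition instead of simply gating one input through.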
Consciousness and Neuroscience, Francis Crick (The Salk Institute) & Christof Koch (Computation and Neural Systems Program, California Institute of Technology)
Has appeared in: Cerebral Cortex, 8:97-107, 1998