Values, Science and Religion

It seems to me that the obligation to expose religious beliefs as nonsensical is an ethical one incumbent upon every anthropological scientist, for the simple reason that the essential ethos of science lies in an unwavering dedication to truth. As Frankel and Trend (1991:182) put it, “the basic demand of science is that we seek and tell the honest truth, insofar as we know it, without fear or favor.” In the pursuit of scientific knowledge, the evidence is the only thing that matters. Emotional, aesthetic, or political considerations are never germane to the truth or falsity of any propositional claim. (There are moons around Jupiter, just as Galileo claimed, even though the Catholic Church and most Christians at the time did not like him for saying it.) In science, there is no room for compromise in the commitment to candor. Scientists cannot allow themselves to be propagandists or apologists touting convenient or comforting myths.

It is not simply our desires for intellectual honesty and disciplinary integrity that compel us to face the truth about religious beliefs; as anthropologists, we are specifically enjoined to do so by our code of ethics. According to the Revised Principles of Professional Responsibility adopted by the American Anthropological Association in 1990, anthropologists have an explicit obligation “to contribute to the formation of informational grounds upon which public policy may be founded” (Fluehr-Lobban 1991:276). When anthropologists fail to publicly proclaim the falsity of religious beliefs, they fail to live up to their ethical responsibilities in this regard. In a debate concerning public policy on population control, for example, anthropologists have an ethical obligation to explain that God does not disapprove of the use of contraceptives because there is no such thing as God.

We also have an obligation not to pick and choose which truths we are willing to tell publicly. I think, for example, that the political threat from the oxymoronic “scientific creationists” would be better met if anthropologists were to debunk the entire range of creationist claims (including the belief that God exists as well as the belief that humans and dinosaurs were contemporaneous); otherwise the creationists will continue to criticize us, with considerable justification, for our arbitrariness and inconsistency in choosing which paranormal claims we will accept or tolerate and which we will attack (see Toumey 1994).

I am convinced that our collective failure to stake out a firm anthropological position on paranormal phenomena has compromised our intellectual integrity, weakened our public credibility, and hampered our political effectiveness. Carlos Castaneda was able to use his anthropological credentials to buttress the credibility (and the sales) of his paranormal fantasies, partly because, as far as the general public knew, the discipline of anthropology accepted the reality of hundred-foot gnats and astral projection (de Mille 1990). While it is true that most individual anthropologists rejected Castaneda’s paranormal claims, few did so publicly or effectively (Murray 1990). In fact, our discipline as a whole has a lamentable record when it comes to public responses to paranormal claims. There have been notable exceptions in archeology and biological anthropology, where a number of scholars have responded forcefully and well to the ancient astronaut and creationist myths (e.g., White 1974; Cole 1978; Rathje 1978; Cazeau and Scott 1979; Godfrey 1983; Stiebing 1984; Cole and Godfrey 1985; Harrold and Eve 1987; Feder 1980, 1984, 1990), but cultural anthropologists have been remarkably remiss in responding to the myriad paranormal claims that fall within their domain (see Lett 1991).

Margaret Mead, for example, maintained a lifelong interest in paranormal phenomena and was an ardent champion of irrational beliefs (Gardner 1988). She was apparently persuaded that “some individuals have capacities for certain kinds of communications which we label telepathy and clairvoyance” (Mead 1977:48), even though the most casual scholarship would have revealed that that proposition has been decisively falsified (the evidence comes from more than a century of intensive research that has been thoroughly documented and widely disseminated–see Kurtz 1985; Druckman and Swets 1988; Hansel 1989; Alcock 1990). In 1969, Mead was influential in persuading the American Association for the Advancement of Science to accept the habitually pseudoscientific Parapsychological Association as a constituent member. In all of this, Mead used her considerable talents for popularization to promulgate nonsensical beliefs among the general public. However sincere and well-intentioned, her efforts were irresponsible, unprofessional, and unethical; worse still, they were not atypical of cultural anthropology. (See Note 6)

Even those anthropologists who do not share Mead’s gullibility have been notably reluctant to confront the truth about paranormal beliefs. Anthony Wallace, for example, in all likelihood thought he was being purely objective when he decided to avoid the “extremes of piety and iconoclasm” and to regard religion as “neither a path of truth nor a thicket of superstition” (Wallace 1966:5). In science, however, being objective does not entail being fair to everyone involved; instead, being objective entails being fair to the truth. The simple truth of the matter is that religion is a thicket of superstition, and if we have an ethical obligation to tell the truth, we have an ethical obligation to say so.

I find Wallace’s equivocation on the truth or falsity of religious beliefs to be particularly regrettable, because his Religion: An Anthropological View is one of the justly celebrated classics in the anthropology of religion. Wallace, of course, would not agree that his stance is anything less than fair and appropriate; indeed, he is very forthright in declaring and defending his value position. In the opening pages of his book, for example, he states that “although my own confidence has been given to science rather than to religion, I retain a sympathetic respect and even admiration for religious people and religious behavior” (Wallace 1966:vi).

I suspect that most anthropologists would be inclined to agree with Wallace. Eric Gans (1990:1), who has urged anthropologists to “demonstrate a far greater concern and respect for the form and content of religious experience,” is one who clearly shares Wallace’s sympathy for the religious temperament. Whether Wallace and Gans are justified in according religious people respect and admiration is a debatable question, however. No reasonable person would deny that religious people are entitled to their convictions, but an important distinction must be made between an individual’s right to his or her own opinion (which is always inalienable) and the rightness of that opinion (which is never unchallengeable). With that in mind, it could be argued that individuals who are led by ignorance or timidity to embrace incorrect opinions might deserve empathy and compassion, but they would hardly deserve respect and admiration. Respect and admiration, instead, should be reserved for individuals who exhibit dignity, courage, or nobility in response to the universal challenges of human life.

The philosopher Paul Kurtz (1983) articulates just such a position in a lengthy rebuttal to religious values entitled In Defense of Secular Humanism. From Kurtz’s point of view, religious people live in a world of illusion, unwilling to accept and face reality as it is. In order to maintain their beliefs, they must prostitute their intellectual integrity, denying the abundant contradictory evidence that constantly surrounds them. They exhibit an “immature and unhealthy attitude” that is “out of touch with cognitive reality” and that “has all the hallmarks of pathology” (Kurtz 1983:173). Religious people fail to exhibit the moral courage that is the foundation of a responsible approach to life.

The physicist Victor Stenger (1990) shares Kurtz’s disdain for religious commitment, and he is one of many skeptical rationalists in a variety of fields who do so. Religious people, Stenger argues, fail to accept responsibility for defining the meaning and conduct of their own lives; instead, they lazily and thoughtlessly embrace an inherited set of illogical wish-fulfillment fantasies. By refusing to fully utilize their quintessentially human attributes–the abilities to think, to wonder, to discover, to learn–religious people deny themselves the possibility of human dignity or nobility. It is only those with the courage to reject religious commitment, Stenger (1990:31-32) suggests, who deserve admiration; in his words, “those who have no need to deny the reality they see with their own eyes willingly trade an eternity of slavery to supernatural forces for a lifetime of freedom to think, to create, to be themselves.”

It would be disingenuous of me not to admit that I concur completely with Kurtz and Stenger. Nevertheless, my personal values regarding religion are entirely beside the point; I mention them only to point out the irony of our discipline’s frequent sympathy for religious commitment. In Western culture, the concept of religious “faith” has a generally positive connotation, but there is nothing positive about the reality masked by that obfuscatory term. “Faith” is nothing more than the willingness to reach an unreasonable conclusion, i.e., a conclusion that either lacks confirming evidence or is contradicted by disconfirming evidence. Willful ignorance, deliberate self-deception, and delusionary thinking are not admirable human attributes. Religion prejudicially regards faith as an exceptional virtue, but science properly recognizes it as a dangerous vice.

In the final analysis, however, it is irrelevant whether religious conviction deserves respect and admiration, as Wallace and Gans propose, or contempt and disdain, as I believe. My point instead is a very basic one: as scientists, we all have an ethical obligation to tell the truth, regardless of whether that truth is attractive or unattractive, diplomatic or undiplomatic, polite or impolite. As anthropologists, we have not been telling the truth about religion, and we should. The issue is just that simple.


Faith & Irrationality

I attend some religious gatherings for a number of social reasons. At such gatherings, sermons, and discourses, I have often looked at the twinkle in the eyes of the adherents and wondered how people can be so deluded, or how reasoning can be so flawed. I have studied the egotistical and selfish nature of the preachers, which takes many forms, from the overt ‘I know what you should think because I have thought it out for you’ to ‘I am your humble servant, and I think that these are the solutions to all your searches’. I have looked into the eyes of adherents who are trying to give meaning to their lives for reasons that are primarily psychological. I have studied why people convert from one religion to another (primarily from Western to Eastern religions), and the selfish reasons for doing so. Psychological escapism is nothing new, and converts will justify their reasons without clarity or candor. The burden of a disturbed mind manipulates irrationality into reason.

Increasingly, many of us want to rid the world of dogmatically held beliefs that are vapid, barbarous, anachronistic and wrong. Many religious people offer their take on how they and their system of beliefs would combat such beliefs, and that take is often scientifically baseless, psychologically uninformed, politically naïve, and counterproductive for goals we share. We have stated, here on The European Rationalist, that silence in the face of dangerous lunacy, or even in the face of moderate unreasonableness, can be just as culpable as lying.

I was recently reading ‘In Gods We Trust: The Evolutionary Landscape of Religion’ by Scott Atran (anthropologist, University of Michigan) and an article at Edge (“An Edge Discussion of BEYOND BELIEF: Science, Religion, Reason and Survival,” Salk Institute, La Jolla, November 5-7, 2006), and I find that most of the points I wanted to make as a critique have already been made there.

Most religious people are irrational, as most of us are in many situations in our lives, as when we fall in love or hope beyond reason. Of course, you could be uncompromisingly rational and try whispering in your honey’s ear: “Darling, you’re the best combination of secondary sexual characteristics and mental processing that my fitness calculator has come up with so far.” After you perform this pilot experiment and see how far you get, you may reconsider your approach. If you think that approach absurd to begin with, it is probably because you sincerely feel, and believe in, love.

Empirical research on the cognitive basis of religion over the last two decades has focused on a growing number of converging cross-cultural experiments on “domain-specific cognition” emanating from developmental psychology, cognitive psychology and anthropology. Such experiments indicate that virtually all (non brain-damaged) human minds are endowed by evolution with core cognitive faculties for understanding the everyday world of readily perceptible substances and events. The core faculties are activated by stimuli that fall into a few intuitive knowledge domains, including: folkmechanics (object boundaries and movements), folkbiology (biological species configurations and relationships), and folkpsychology (interactive agents and goal-directed behavior). Sometimes the operation of the structural principles that govern the ordinary and “automatic” cognitive construction of these core domains is pointedly interrupted or violated, as in poetry and religion. In these instances, counterintuitions result that form the basis for construction of special sorts of counterfactual worlds, including the supernatural, for example, a world that includes self-propelled, perceiving or thinking mineral substances (e.g., Maya sastun, crystal ball, Arab tilsam [talisman]) or beings that can pass through solid objects (angels, ghosts, ancestral spirits).

Religious beliefs are counterintuitive, then, because they violate innate and universal expectations about the world’s everyday structure, including such basic categories of “intuitive ontology” (i.e., the ordinary ontology of the everyday world that is built into any language learner’s semantic system) as person, animal, plant and substance. They are generally inconsistent with fact-based knowledge, though not randomly. As Dan Sperber and Scott Atran pointed out a quarter of a century ago, beliefs about invisible creatures who transform themselves at will, or who perceive events that are distant in time or space, flatly contradict factual assumptions about physical, biological and psychological phenomena. Consequently, these beliefs are more likely to be retained and transmitted in a population than random departures from common sense, and thus become part of the group’s culture. Insofar as category violations shake basic notions of ontology, they are attention-arresting, hence memorable.
But only if the resultant impossible worlds remain bridged to the everyday world can information be readily stored, evoked and transmitted. For example, you don’t have to learn in Bible class that God could pick up a basketball if you’ve already been taught that He can topple a chariot. And you don’t have to be told that God can become angry if you worship other Gods or do things He doesn’t like once you’ve already learned that He’s a jealous God. This is because such further pieces of knowledge are “automatically” inferable from our everyday commonsense understanding of folkphysics and folkbiology (e.g., relative effort and strength required to displace different sized objects) and folkpsychology (e.g., how emotions are related to one another and to beliefs). Miracles usually involve a single ontological violation, like a talking bush or a horse riding into the sky, but leave the rest of the everyday commonsense world entirely intact. Experiments show that if ideas are too bizarre, like a talking tea kettle that has leaves and roots like a tree, then they are not likely to be retained in memory over the long run.

Religious worlds with supernaturals who manage our existential anxieties — such as sudden catastrophe, loneliness, injustice and misery — are minimally counterintuitive worlds. An experimental setup for this idea is to consider a 3 × 4 matrix of core domains (folkphysics, folkbiology, folkpsychology) by ontological categories (person, animal, plant, substance). By changing one and only one intuitive relationship among the 12 cells, you generate what Pascal Boyer calls a “minimal counterintuition.” For example, switching the cell (−folkpsychology, substance) to (+folkpsychology, substance) yields a thinking talisman, whereas switching (+folkpsychology, person) to (−folkpsychology, person) yields an unthinking zombie. But changing two or more cells simultaneously usually leads only to confusion. Our experiments show that minimally counterintuitive beliefs are optimal for retaining stories in human memory (the main results have been replicated by teams of independent researchers; see, for example, articles in the most recent issue of the Journal of Cognition and Culture).
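As a toy illustration of that matrix, the following Python sketch (my own construction; the default expectations assigned to each cell are simplifying assumptions, not Boyer’s or Atran’s actual stimuli) enumerates all twelve single-cell flips, each one a candidate “minimal counterintuition”:

```python
from itertools import product

# Core cognitive domains and ontological categories from the text.
domains = ["folkphysics", "folkbiology", "folkpsychology"]
categories = ["person", "animal", "plant", "substance"]

# Hypothetical default expectations for the 12 cells: which domains
# intuitively apply to which categories (a person thinks, a plant
# lives but does not think, a substance merely occupies space, etc.).
defaults = {("folkphysics", c): True for c in categories}
defaults.update({("folkbiology", c): c in ("person", "animal", "plant")
                 for c in categories})
defaults.update({("folkpsychology", c): c in ("person", "animal")
                 for c in categories})

def minimal_counterintuitions():
    """Flip exactly one of the 12 cells to generate a minimal violation."""
    for d, c in product(domains, categories):
        flipped = dict(defaults)
        flipped[(d, c)] = not flipped[(d, c)]
        sign = "+" if flipped[(d, c)] else "-"
        yield f"({sign}{d}, {c})"

for item in minimal_counterintuitions():
    print(item)
```

Flipping folkpsychology on for a substance gives the thinking-talisman case from the text, while flipping it off for a person gives the unthinking zombie; flipping two or more cells at once would leave the minimal-violation regime entirely.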

In sum, the conceptual foundations of religion are intuitively given by task-specific panhuman cognitive domains, including folkmechanics, folkbiology, and folkpsychology. Core religious beliefs minimally violate ordinary ontological intuitions about how the world is, with its inescapable problems. This enables people to imagine minimally impossible supernatural worlds that solve existential problems which have no rational solution, including avoiding death or deception. Because religious beliefs cannot be deductively or inductively validated, validation occurs only by ritually addressing the very emotions motivating religion, usually through chant and music, dance and sway, prostration and prayer, all somewhat derivative of primate expressions of social bonding and submission. Cross-cultural experimental evidence encourages these claims.

Are Religions composed of Memes?

Memes are supposed to be cultural artifacts — prototypically ideas — that invade and restructure minds to reproduce themselves (without necessarily benefiting host minds beyond their capacity to service memes), much as genes dispose of physical individuals to gain serial immortality. Derived from the Greek root mimeme, with allusions to memory and mime (and the French word même, “same”), a meme supposedly replicates from mind to mind in ways analogous to how genes replicate from body to body. There is little theoretical analysis or experimental study of memes, though this isn’t surprising, because there is no consensual – or even coherent – notion of what a meme is or could be. Candidate memes include a word, sentence, belief, thought, melody, scientific theory, equation, philosophical puzzle, fashion, religious ritual, political ideology, agricultural practice, dance, poem, or recipe for a meal; or a set of instructions for origami, table manners, court etiquette, a car, a building, a computer, or a cellphone.

For genes, there is an operational definition: DNA-encoded units of information that dependably survive reproductive division, that is, meiosis (although crossover can occur anywhere along a strand of DNA, whether at the divisions of functionally defined genes or within them). In genetic propagation, information is transmitted with an extremely high degree of fidelity. In cultural propagation, imitation is the exception, not the rule; the typical pattern is one of recurrent, guided transformation. Modular and innate mental structures (like those responsible for folkphysics, folkbiology and folkpsychology) thus play a central role in stabilizing and directing the transmission of beliefs toward points of convergence, or cultural attractors.

Minds structure certain communicable aspects of the ideas produced, and these communicable aspects generally trigger or elicit ideas in other minds through inference (to relatively rich structures generated from often low-fidelity input) and not by high-fidelity replication or imitation. For example, if a mother shows a child an abstract cartoon drawing of an animal that the child has never seen or heard of, and says to her child the equivalent of “this platypus swims” in whatever human language, then any child whose linguistic faculty has matured enough to understand complete sentences, anywhere in the world, will almost immediately infer that mom is talking about: (a) something that belongs to the ontological category animal (because the lexical item “swims,” or its equivalent in another language, is cognitively processed under +animate, which is implicitly represented in every human’s semantic system); (b) an animal that belongs to one and only one folk species (because an innately determined and universal assumption of folkbiology is that animals divide into mutually exclusive folk species); and (c) an animal that is probably aquatic (because part of the ordinary meaning of “swims” is moves through water).
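The contrast between high-fidelity replication and inference toward cultural attractors lends itself to a toy simulation. The sketch below is entirely my own construction (not an experiment from the text, and the parameters are arbitrary): each “mind” in a chain receives a noisy version of a one-dimensional trait and reconstructs it by pulling it partway toward a shared attractor. With pull = 0 the chain is pure imitation and drifts as a random walk; with any positive pull, the chain converges near the attractor regardless of its starting point, which is the signature of guided transformation rather than copying.

```python
import random

def transmit_chain(start, attractor, steps=50, noise=1.0, pull=0.5, seed=0):
    """Serial cultural transmission: each mind hears a noisy (low-fidelity)
    version of the current value, then reconstructs it by inference,
    moving it a fraction `pull` toward the shared cognitive attractor."""
    rng = random.Random(seed)
    value = start
    for _ in range(steps):
        heard = value + rng.gauss(0, noise)          # low-fidelity communication
        value = heard + pull * (attractor - heard)   # guided reconstruction
    return value

# Wildly different starting ideas end up in the same neighborhood.
for start in (-10.0, 0.0, 10.0):
    print(round(transmit_chain(start, attractor=3.0), 2))
```

The design choice worth noticing is that fidelity lives in the noise term while convergence lives in the reconstruction step: even very lossy transmission stabilizes culture once minds share the attractor.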

Inference in the communication of many religious beliefs, however, is cognitively designed never to come to closure, but to remain open-textured. For example, in a set of classroom experiments, we asked students to write down the meanings of three of the Ten Commandments: (1) Thou Shalt Not Bow Down Before False Idols; (2) Remember the Sabbath; (3) Honor Thy Father and Thy Mother. Despite the students’ own expectations of consensus, interpretations of the commandments showed wide variation, with little evidence of consensus.

In a serial attempt at replication, a student in a closed room was given one of the Ten Commandments to paraphrase; afterwards the student would call in another student from the hallway and repeat the paraphrase; then the second student would paraphrase the paraphrase and call in a third student, and so on. After 10 iterations, the whole set of ten paraphrases was presented to another group of students, who were asked to choose one phrase from a new list of phrases (including the original Ten Commandments) that “best describes the whole set of phrases before you.” Only “Thou shalt not kill” was reliably preferred as a descriptor of the set representing the chain of paraphrases initiated by a Commandment. (By contrast, control phrases such as “two plus two equals four” or “the grass is green” did replicate.)

A follow-up study explored whether members of the same church have some normative notion of the Ten Commandments, that is, some minimal stability of content that could serve for memetic selection. Twenty-three members of a Bible class at a local Pentecostal Church, including the church pastor, were asked to define the three Commandments above, as well as “Thou shalt not kill,” “The Golden Rule,” “Lamb of God,” and “Why did Jesus die?” Only the first two produced anything close to consensus. In prior questioning all subjects agreed that the meanings of the Ten Commandments were fixed and had not changed substantially since Biblical times (so much for intuition).

In another project, students compared interpretations of ideological and religious sayings (e.g., “Let a thousand flowers bloom,” “To everything there is a season”) among 26 control subjects and 32 autistic subjects from Michigan. Autistics were significantly more likely to closely paraphrase and repeat content from the original statement (e.g., “Don’t cut flowers before they bloom”). Controls were more likely to infer a wider range of cultural meanings with little replicated content (e.g., “Go with the flow,” “Everyone should have equal opportunity”), a finding consistent with previous results from East Asians (who were familiar with “Let a thousand flowers bloom” as Mao’s credo). Only the autistic subjects, who lack the inferential capacity normally associated with aspects of folkpsychology, came close to being “meme machines.” They may be excellent replicators of literal meaning, but they are poor transmitters of cultural meaning.

With some exceptions, ideas do not reproduce or replicate in minds in the same way that genes replicate in DNA. They do not generally spread from mind to mind by imitation. It is biologically prepared, culturally enhanced, richly structured minds that generate and transform recurrent convergent ideas from often fragmentary and highly variable input. Core religious ideas serve as conceptual signposts that help to socially coordinate other beliefs and behaviors in given contexts. Although they have no more fixed or stable propositional content than do poetic metaphors, they are not processed figuratively in the sense of an optional and endless search for meaning. Rather they are thought to be right, whatever they may mean, and to require those who share such beliefs to commune and converge on an appropriate interpretation for the context at hand. To claim that one knows what Judaism or Christianity is truly about because one has read the Bible, or what Islam is truly about because one has read the Qur’an and Hadith, is to believe that there is an essence to religion and religious beliefs. But science (and the history of exegesis) demonstrates that this claim is false.

Humankind does not naturally divide into competing camps of reason and tolerance, on one side, and religion and intolerance, on the other. It is true that “scientists spend an extraordinary amount of time worrying about being wrong and take great pains to prove others so.” The best of our scientists make even greater efforts to prove themselves wrong. But it is historical nonsense to say that “pretending to know things you do not know… is the sine qua non of faith-based religion,” that doubt and attempts to “minimize the public effects of personal bias and self-deception” are alien to religion, or that religion but not scientific reason allows “thuggish lunacy.”

Is Augustine’s doubt really on a different plane than Descartes’? Are Gandhi’s and Martin Luther King’s religious appeals to faith and hope in the face of overwhelming material adversity truly beside the point? Did not the narrow focus of science on the evidence and argument of the task at hand allow the production of tens of thousands of nuclear weapons, and are not teams of very able and dedicated scientists today directly involved in constructing plausible scenarios for apocalyptic lunacy? Were not Nazi apologists Martin Heidegger and Werner Heisenberg among Germany’s preeminent men of reason and science (who used their reason and critical thought to apologize for Nazism)? Did not Bertrand Russell, almost everyone’s Hero of Reason (including mine), argue on the basis of clear and concise thought, and with full understanding and acknowledgement of opposing views and criticism, that the United States should nuke Soviet Russia before it got the bomb in order to save humankind from a worse evil? And Newton may have been the greatest genius that ever walked the face of the earth, as Neil deGrasse Tyson tells us, but if you read Newton’s letters at St. John’s College library in Cambridge, you’ll see he was one mean and petty son of a bitch.

The point is not that some scientists do bad things and some religious believers do good things. The issue is whether or not there are reliable data to support the claim that religion engages more people who do bad than good, whereas science engages more people who do good than bad. One study might compare, say, standards of reason or tolerance or compassion among British scientists versus British clergy. My own intuition says it is a wash, but even I wouldn’t trust my own intuitions, and neither should you.


Religion and Neurobiology

Religion is a social institution that has persisted since the earliest records of human existence. There are a multitude of religions, as well as varying degrees of faith. Many religious convictions are based on spiritual knowledge or simple belief, whereas science searches for physical and mechanistic explanations. There are many issues on which science and religion clash, ranging from the beginning of life and evolution versus creationism to the idea of existence after death. As science advances, physical explanations for life’s occurrences are presented. Do these explanations disprove religious accounts? Will science eventually disprove religion and render it useless? Here, this question is examined through the phenomenon of Near Death Experiences (NDEs).

An NDE is defined as “a lucid experience associated with perceived consciousness apart from the body occurring at the time of actual or threatened imminent death” (1). Death is the final, irreversible end (2): the permanent termination of all vital functions. The occurrence of an NDE is not a rarity. Throughout history and across the globe, NDEs have been described by many people, and these accounts share several similarities. The commonalities of an NDE include a feeling of peace and connection with the universe, a sense of release from the body (often called an Out of Body Experience, or OBE), movement down a dark tunnel, the vision of a bright light, and visions of deities or of other people from the experiencer’s life (2). Not every NDE contains each of these events; these are merely the most commonly described, and an NDE can range in magnitude from having all of these events occur to having none of them occur (2). There are two theories explaining the similarities among NDEs. The scientific explanation describes a situation in which a mixture of effects due to expectation, administered drugs, endorphins, anoxia, hypercarbia, and temporal lobe stimulation creates a unified core experience (3). The religious explanation claims that NDEs are a glimpse of existence after death: the unified core experience arises because there is a destination after the body dies, with a similar path for all. These two theories debate whether an NDE is simply neural activity preparing the body for death or a preview of the beyond. To further understand NDEs, neurobiological research has attempted to map the neural activity underlying them.

The most common feature of NDEs is the feeling of peace, tranquility, spirituality, and oneness with all (3). This has been found to be associated with the release of endorphins and with interactions between the right and left superior parietal lobes (4) (5). The right portion of this region is responsible for the sense of physical space and body awareness; it orients the body. The left portion is responsible for awareness of the self. During an NDE, neural activity in these areas shuts down, leaving the mind unable to distinguish between self and non-self: all of space, time, and self become one (4) (5). Essentially, one feels that one is the infinite rather than part of the infinite, because there is no realization of self. Other parts of the brain are still functioning, however, and thoughts still occur; these are believed to be associated with the visions perceived (4). If a person's thoughts are focused on a deity or a personal relation while the ability to comprehend self, time, and space is suspended, the person may in fact see an image of that focused thought, because the visual neurons remain intact. It is the combination of neural inactivity in the parietal lobe with continuing activity elsewhere in the brain that accounts for most aspects of an NDE (2) (3) (4).

The understanding of these neural relationships has culminated in the ability to reproduce each phenomenon in a controlled setting. Intravenous administration of 50-100 mg of ketamine can safely reproduce all features of an NDE (2), and electrical stimulation of the right angular gyrus can safely reproduce an out-of-body experience (6). Scientific research has even explained why religion is emphasized during an NDE: activation of a temporal lobe region known as the “God Spot” (7) during an NDE is reported to stimulate religiously themed thoughts (8). This research has major implications for the battle of science versus religion, since it provides evidence that specific brain activity can create the perception of religion and divinity. If that is true, then this brain activity could be turned off, in effect removing religion from our lives; many wars would stop, borders would open, and life as we know it would change completely. But this reasoning has a major fault: the assumption that the experience exists only within the brain. Begley (5) uses the example of an apple pie to illustrate the point. Upon the sight of a pie, the neural activity linking sight, smell, memory, and emotion can be mapped quite clearly, yet that mapping does not disprove the existence of the pie. For precisely the same reason, the existence of God, or of any other religious deity or belief, cannot be disproved this way. It is just as easy to believe that viewing the mechanics of the brain during an NDE or religious experience is a glimpse of the tool, the hardware, used to experience religion (9). Of course, this does not prove the existence of a God, or any other belief, either.
It is this principle, that understanding the neurobiological mechanics of religion can neither disprove nor prove the existence of God, religion, or spirituality, that makes it improbable that science will eliminate religion.

Believing that science will eventually do away with religion wrongly assumes that knowledge of the mechanics of the brain and the universe can eradicate the importance of religion to humankind. Religion is present in society for a plethora of reasons reaching far beyond mere belief in the existence of a God; the multitude of religions, deities, and even atheism is evidence of this. Among many others, the reasons for religion include fear, comfort, stability, and tradition. The NDE provides an excellent example of one of religion's important functions: belief in existence after death. Existence after death refutes the idea that we are simply organic material organized in a certain fashion with a certain span of functionality. The religious belief that an NDE is a glimpse of our existence beyond life matters for how people behave in life, not just as evidence for a theory. Very few NDEs involve negative feelings; people typically describe a “heavenly” light rather than a hell (1) (10). This may reflect the power of suggestion (3): it is a common societal belief that when people die they are supposed to see a tunnel, a light, an angel, and heaven, so when an NDE occurs that is what they see, because it follows their thought process. Not many people believe that when they die they are going to hell. The idea of a better existence after death comforts and eases the pain of many who suffer in life, providing hope through troubling times whether they believe in Jesus, Buddha, Elijah, or no God at all. Religion is a tool of humankind for sustaining a belief. The reasons for that belief vary among people and religions, but the importance lies in believing; a belief can instill pride, confidence, comfort, strength, and much more, and a single belief can provide a purpose for life. The particular beliefs of each religion matter only to the individual.
However, belief itself is important to the foundations of religion, and the importance of religion to humankind makes it improbable that society will ever allow scientific understanding to overrule it. Science may disprove religious stories such as Moses' parting of the Red Sea, but the importance of religion goes beyond the stories. Religion is indispensable because it is a belief. For this reason, science is incapable of eliminating religion.

1) Near-Death Experience, Religion, and Spirituality (a religion and spirituality article related to NDEs)
2) Ketamine Model of the NDE (drug-induced replication of the NDE)
3) Blackmore, Susan. “Near Death Experiences.” Journal of the Royal Society of Medicine, Vol. 89, February 1996, pp. 73-76.
4) Why God Won't Go Away: Brain Science and the Biology of Belief (excerpts from the author)
5) Begley, Sharon. “Religion and the Brain.” Newsweek, May 7, 2001, p. 50.
6) Blanke, O., Ortigue, S., Landis, T., and Seeck, M. “Stimulating Illusory Own-Body Perceptions.” Nature, Vol. 419, September 19, 2002, pp. 269-270.
7) God on the Brain (an article on the intersection of neurobiology and faith)
8) Meridian Institute (transformational experiences)
9) Tracing the Synapses of our Spirituality (an examination of brain and religion)
10) Susan Blackmore Home Page (experiences of anoxia)


Can Science Replace Religion? Analyzing the Neurobiology and Neurotheology of the Near Death Experience, Bradley Corr


Boston, MA—Prominent neuroscientists, theologians and bioethicists gathered at MIT on Sunday for a three-day conference, Our Brains and Us: Neuroethics, Responsibility, and the Self, sponsored by the Dialogue on Science, Ethics, and Religion at the American Association for the Advancement of Science.

To a certain extent the title of the conference seems a bit strange to those who think that our brains pretty much are “us.” Brains are the organs in which our desires, memories, hopes, plans, and character all reside. We recognize the centrality of the brain to our personhood when we consider the question: would you prefer to be the donor or the recipient of a brain transplant?

The conferees are considering such issues as: If a brain scanning technology could reliably predict that someone will commit violence, should they be subject to prior restraint, or required to take medications that would moderate that tendency? Do people who have suffered painful abuse have an obligation to retain that memory or do they have the right to blunt it? Perhaps perpetrators of violence should be required to retain the memory of their evil, while victims would be allowed to moderate their recollections?

They are also debating questions of what constitutes neural normalcy: When can outsiders legitimately intervene to correct another person’s eccentricities? Religious scholar David Hogue suggests that modern neuroscience is encouraging unjustified notions of “perfectibility” and that we “run the risk of becoming gods.”

Besides these large questions, neuroscientists are displaying some of the findings of their field. Floyd Bloom from the Scripps Research Institute showed a brain scan of two players engaged in a kind of tit-for-tat game in which one player learns to trust another. The interesting aspect of the brain scan was that areas of the basal ganglia associated with feelings of reward “light up” as the player comes to trust the other player. Positive social interaction elicits the same internal reward system that food, water and sex do. Have neuroscientists identified “trust” in the brain? University of Pennsylvania brain researcher Martha Farah reviewed the latest brain scanning literature which has tried to prove the hypothesis that there is a “self module” in the brain—that is, a network of brain cells that would respond predictably when a brain considers itself and its body. Farah’s review found that current brain imaging studies could not in fact confirm such a claim. There does seem to be a module (network) devoted to identifying “persons” that helps us predict the behavior of others in terms of reasons; assumes a continuity of identity of other persons; and enables us to assign blame and punish others.
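Purely as an illustration of the dynamic Bloom's scan captured, a tit-for-tat trust exchange can be sketched in a few lines of Python. Everything here is hypothetical (the function name, the 10-unit endowment, the tripling rule, the update step); it is a toy model of trust learning, not the experimental protocol used in the study:

```python
# Toy model of an iterated "trust game": an investor sends some fraction of
# an endowment, the amount is tripled in transit, and the trustee returns a
# fraction of it. The investor raises or lowers trust depending on whether
# the exchange paid off. All parameters are hypothetical.

def play_trust_game(rounds, return_fraction, step=0.1):
    """Simulate repeated exchanges; return the investor's trust after each round.

    trust            -- fraction of a 10-unit endowment the investor sends
    return_fraction  -- fraction of the tripled amount the trustee sends back
    """
    trust = 0.1  # start with minimal trust
    history = []
    for _ in range(rounds):
        sent = 10 * trust
        received = 3 * sent * return_fraction
        # Tit-for-tat-style update: reciprocation builds trust,
        # exploitation erodes it; if nothing was sent, trust is unchanged.
        if sent > 0 and received >= sent:
            trust = min(1.0, trust + step)
        elif sent > 0:
            trust = max(0.0, trust - step)
        history.append(round(trust, 2))
    return history

if __name__ == "__main__":
    print(play_trust_game(5, return_fraction=0.5))  # cooperative trustee
    print(play_trust_game(5, return_fraction=0.2))  # exploitative trustee
```

With a cooperative trustee, trust climbs round by round, mirroring the growing reward-system activity Bloom described; with an exploitative trustee, it collapses and the exchange dies out.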

Author Andrew Solomon’s struggle with depression led him to extensive study of neuroscience research. Solomon noted that in the past psychiatrists would argue that depression caused by psychological trauma (say child abuse or surviving the Holocaust) would be better treated by psychological means, such as talk therapy, whereas depression that doesn’t seem to come from any specific incident but seems to arise from a neurochemical shift is more amenable to drug treatments. Solomon pointed out that brain researcher Eric Kandel has found that talk therapy and anti-depressant drugs induce the same set of physical changes in the brain.

On the religious front, theologian Nancey Murphy from Fuller Theological Seminary described some remarkably interesting scholarly research that suggests that the early Christians did not subscribe to the idea of an immaterial soul separate from the body. Murphy argued that the idea of an immaterial soul was smuggled in when Hebrew scriptures were translated into Greek around 250 BCE. For example, the Hebrew word nefesh, which referred to the whole living person, was translated as psyche, or soul. In Hebrew thought, the concept of spirit stands the whole person in relation to God, not some separable part of a person. Murphy argued that New Testament authors were not teaching about the metaphysical condition of human beings or asking whether there is a period of conscious existence between death and bodily resurrection. “The Christian hope for eternal life is staked on bodily resurrection, not on the existence of an immaterial soul,” concluded Murphy. “Thus contemporary believers can formulate their views in conformance with science. There is no conflict between science and religion.”

Finally, David Hogue asked, “Is there anything neuroscience will not be asked to explain? I suspect that the answer is ‘no.’” He seemed rather glum about the prospect.


Minds on Brains, Hobnobbing with neuroscientists and theologians, Matt Welch, April 18, 2005



Charisma, Crowds, Psychology

I came across the following article related to some anthropological work that Charles Lindholm has been involved in, and I found it quite useful (with some reservations) for understanding some Eastern cultures, especially those of the Indian Subcontinent. It is a long article that will require a number of sittings to read and comprehend. If you don’t have the time, at least try to understand the conclusions.

What does it mean to be ‘in one’s right mind’? Ordinary discourse and the technical languages of the social sciences assume that being in one’s right mind essentially means that one has the ability to calculate how to attain valued ends while avoiding injury and opprobrium (See Note 1). The calculating rationality which utilizes appropriate means to achieve desired ends is thought to be known and recognized both by rational subjects themselves and by equally rational observers; irrationality, then, is an incapacity to calculate, and is revealed in a lack of congruence between acts and goals.

Anthropologists, as professional iconoclasts, have often attempted to demonstrate that assumptions about ‘normal’ consciousness vary according to cultural context; what is madness here is sanity there, and vice versa. This approach is especially characteristic of interpretive anthropologists who wish to avoid imposing preconceived Western notions of rationality on what Clifford Geertz calls ‘local knowledge’.

However, although the range of goals and methods for achieving them has been greatly expanded by an awareness of cultural context, the interpretive approach does not really offer any significant challenge to the model of rationality outlined above, but rather remains grounded in standard utilitarian assumptions of rational individual actors calculating means to achieve valued ends. In this paper, I argue that a truly radical challenge to the notion of rationality already exists within the canon of Western social thought in the works of Max Weber and Emile Durkheim, as well as in the now forgotten writings of crowd psychologists Gustave Le Bon and Gabriel Tarde.

In the next few pages, I will outline these oppositional and radically non-calculative aspects of social theory, contrast them with the work of some influential modern scholars, and, by means of a discussion of typical recruitment mechanisms found in some ‘New Age’ movements, suggest a few ways these classic perspectives might help us to rethink our notions of person, agent, and sanity.

Max Weber and the Irrational

It is appropriate to begin with Max Weber, who is the predominant figure in the pantheon of modern American sociology and anthropology. For Weber and his orthodox followers, sociology and anthropology were defined as the effort to reveal sympathetically yet systematically the significance of social action through exposing the cultural values and norms that motivate persons. This is the famous method of verstehen, or, in Geertzian terms, ‘taking the native’s point of view’, and is the foundation of interpretive anthropology. From this perspective, the interpreter reaches ‘understanding’ by realizing the meanings the local actor attaches to his or her actions in pursuit of culturally valued goals. In other words, Weberian and Geertzian actors are reasonable, although their reasons may not be immediately transparent to an uninitiated observer due to cultural and historical differences in value-systems and in the modes of rationality developed as a consequence of these differences.

We can see then that Weberian sociology and its modern interpretive descendants are approaches to social science that fit in well with the model of ‘standard’ consciousness I outlined above: human beings are assumed to be rational agents acting consciously and intelligently to maximize their valued goals; their thought is recognizable as reasonable by the thinker as well as by the culturally knowledgeable observer; furthermore, rationality is highly valued within its particular cultural setting, since only rational action can lead to attainment of culturally desirable ends. The contribution of interpretive social science, in the Weberian and Geertzian sense, is thus to reveal the rationality of apparent irrationality through supplying “the interpretive understanding of social action and thereby… a causal explanation of its course and consequences” (Weber 1978: 4).

For Weber, this approach, in which the point of view of the other is taken in order to display the underlying intent and purpose of social action for that other, is the sole mode of inquiry proper to the social sciences. According to Weber, such a limitation of the possibilities of sociology is necessary because sociologists (and, by extension, anthropologists) are products and purveyors of rational analytic thought and can only practise their craft in this mode. Even more crucial, however, is Weber’s fundamental contention that any action orientation in which the actors’ motives and goals are not self-consciously determined is outside the realm of meaning, therefore unintelligible, and as such must be excluded from the central interpretive task of social theory.

But although Weber specifically excludes all irrational, unconscious, and purely reactive activity from the realm of theory and accordingly devotes himself to explicating the types of rationality that ‘make sense’ of other cultures and historical epochs, he himself was well aware that a great deal of human life – indeed, most of human life – is not experienced by self-conscious agents acting for achieving valued goals within coherent ‘webs of meaning’. Weber therefore breaks action orientations down into four ideal types. Two of these types – value rationality and instrumental rationality – are different forms of calculating consciousness based upon the rationality of the actor (See Note 3), and in most of his major writing Weber elaborates their distinctions and evolution. The other two types of action orientation, however, are deemed by Weber to be without any purpose or meaning whatsoever, and thereby to stand outside the range of social theory. These types are tradition and charisma (See Note 4).

Tradition is defined by Weber as “on the other side” of the borderline between meaningful and irrational action (Weber 1978: 25), since for him tradition ideally implies an automatic and unthinking repetition by the actor enmeshed within the confines of a mindless swarm; it is a state of torpor, lethargy and inertia, predictable and mechanical, reproducing itself in utter indifference and submerging the creative individualities of all persons caught within its coils (See Note 5). Here, Weber gives us a picture of mundane life governed by routine; a world of the passive crowd in which rational self-consciousness and goal-orientation has no part to play.

Yet, although tradition is sociologically unanalyzable in principle, Weber nonetheless notes that action motivated by habit and thoughtless conformity is hardly unusual. Instead, he writes that “in the great majority of cases actual action goes on in a state of inarticulate half-consciousness or actual unconsciousness of its subjective meaning” (Weber 1978: 21) and that the “bulk of all everyday action” is motivated by “an almost automatic reaction to habitual stimuli” (Weber 1978: 25). Weber freely acknowledges that such “merely reactive imitation may well have a degree of sociological importance at least equal to that of the type which can be called social action in the strict sense” (Weber 1978: 24).

Of even greater importance is charisma, which stands in absolute contrast to tradition. In its simplest form, charisma is defined by Weber as “a certain quality of an individual personality by virtue of which he is considered extraordinary and treated as endowed with supernatural, superhuman or at least specifically exceptional powers or qualities” (Weber 1978: 242). Individuals possessing charisma are portrayed by Weber as above all else emotional and vitalizing, in complete opposition both to the enervating authority of the patriarch and the rational efficiency of the technician-bureaucrat. Instead, whatever the charismatic leader says is right not because it makes sense, or because it coincides with what has always been done, but because the leader says it. Orders can therefore be completely whimsical, self-contradictory and even lead to death or destruction for the follower, demonstrating the disciple’s inner emotional compulsion to obey without regard for coherence or consequence.

The extraordinary figures who inspire such unreasoning devotion are imagined by Weber to be, in their typical form, berserk warriors, pirates and demagogues. They reveal their capacities through a highly intensified and emotionally labile state of consciousness that excites and awes the onlookers, and jolts them from the everyday (See Note 6). The primary type, from which the others spring, is the epileptoid magician-shaman who can incorporate the Gods and display divine powers primarily through convulsions, trembling and intense effusions of excitement (Weber 1972: 327, 1978: 401) (See Note 7). Through his capacity for epileptoid states, the shaman served both as an exemplar of ecstasy and as the leader in the rituals of communal intoxication and orgy that Weber took as the original sacred experience (Weber 1978: 401, 539).

Why should such manifestations of apparent abnormality appeal to an audience? It is not intuitively obvious that a display of epileptoid behavior would be attractive to anyone; in our society quite the contrary is the case. But Weber postulated that extreme emotional states, such as those generated in seizures and other forms of emotionally heightened altered states of consciousness, had a contagious effect, spreading through the audience and infecting its members with corresponding sensations of enhanced emotionality and vitality; these expansive sensations are felt to be emanating from the stimulating individual, who is then attributed with superhuman powers. The charismatic appeal therefore lies precisely in the capacity of a person to display heightened emotionality and in the reciprocal capacity of the audience for imitation and for corresponding sensations of altered awareness.

Thus for Weber, what is essential and compulsive in the charismatic relation is not its meaning, though explanatory meaning systems will certainly be generated after the fact (See Note 9). Rather, it is the participatory communion engendered by the epileptoid performance of the charismatic which experientially and immediately releases the onlookers from their mundane sufferings. “For the devout the sacred value, first and above all, has been a psychological state in the here and now. Primarily this state consists in the emotional attitude per se;” an attitude in which the followers could momentarily escape from themselves by dissolving in “the objectless acosmism of love” (Weber 1972: 278, 330 emphasis in original). For Weber, such prophets provided the creative force in history; only through their inspiration could enough energy and commitment be generated to overturn an old social order. They are the heroes and saints who, he feared, could no longer be born in the rationalized world of modern society (See Note 11).

To recapitulate, we have then in Weber two forms of altered or dissociated states of consciousness that, from his point of view, are not amenable to sociological analysis since they stand outside rational goal-orientation, yet are nonetheless of crucial importance in history and culture. In fact, the question of what these states are altered or dissociated from becomes difficult to answer, since Weber sees the predominance of the rational ‘standard’ consciousness to be a relatively recent development. Perhaps, instead, it is more appropriate to say that rationality itself, especially in its modern instrumental version, is an altered state, vis-a-vis its powerful predecessors of tradition and charisma.

The Rationalization of Irrationality

But these opposites also continually transform one into the other in a continuous dialectic, and they move as well through history toward their own supersession by more rational modes of thought. Charisma occurs, Weber says, when tradition has lost its hold and people no longer feel compelled to repeat the old patterns, obey the old orders. Charismatic revolutions themselves are destined to be short-lived, and necessarily have a new tradition nascent within them; ritualization and bureaucratization inevitably appear as the prophet’s original vitalizing revelation is repeated and institutionalized by his self-interested followers, who wish to cloak themselves with the sacred transformative quality originally imputed to the personal aura of the leader himself. This type of charisma supports the new traditions born of the original prophecy; but now the crown, the throne, the robe, instead of being the accoutrements of the ecstatic prophet, may legitimize a moribund time server. Charisma in this instance becomes coterminous with tradition, justifying and validating the habitual obedience of the masses (See Note 12). From this perspective, tradition too changes in character, losing its irrational somnambulistic component to become a coherent framework within which free agents actively and rationally pursue the given values and goals elaborated by the prophet and his minions. In other words, both charisma and tradition become rationalized as they transform from their ideal-typical state.

Weber’s conceptualization of this process has had great influence upon his American followers. But where Weber placed the primary forms of charisma and tradition outside the boundaries of social thought, while still giving them credit as the precursors of rationality, his successors have tried to make them disappear completely by incorporating them within their systematic meaning-centered theories. Thus the influential sociologist Edward Shils claims that an innate human quest for a coherent and meaningful way of understanding the world is the sacred heart of every viable social formation. Therefore, it follows that “the charismatic propensity is a function of the need for order” (Shils 1965:203) and that charisma is felt automatically whenever one draws near the entities and institutions thought to embody and emanate that order. Tradition can then be understood as located precisely within the same order-giving central structures in which charisma inheres; structures that, far from being irrational, provide a sacred and coherent model for living a meaningful life. Shils’s paradigm is explicitly followed by Clifford Geertz, who argues for “the inherent sacredness of sovereign power” (1983: 123), and proceeds to analyze the manner in which this supposed sovereign, meaning-giving central power is manifested in various cultural frameworks.

These neo-Weberian perspectives have erased the image of charisma as an irrational emotional convulsion. Instead, all persons in all societies at all times are attempting, with greater or lesser success, to promote and to attain a culturally given sacred central symbolic system of accepted significance, as revealed in concrete institutional forms. The only human problem is not being able to achieve proximity to this holy order. From within this framework, the frenzy of the shaman is transformed into a reasonable search for coherence and significance, and tradition and charisma become equivalent to rationality (See Note 13).

Obviously, this version of society is far from the social and historical concept of irrational action that Weber knew, revealed, and set aside as ineffable and thus outside of sociological discourse. Weber certainly could not have accepted the reduction of charisma and tradition to ‘sacred order’. For him, the primary form of tradition remained imitative and senseless, and the primary form of charisma remained convulsive, revolutionary, and outside of ‘meaning’ entirely. The best that sociology could do, from his perspective, was to recognize the capacity of these irrational impulses to influence a rational course of action, and thereby to “assess the causal significance of irrational factors in accounting for the deviations from this type” (Weber 1978: 6) (See Note 14).

Durkheim and Group Consciousness

Let me turn now to Emile Durkheim, the other great ancestor of contemporary social thought, whose work offers what I believe to be a more theoretically compelling understanding of the irrational than does Weber. However, Durkheim’s concern with grasping irrational states of being is now more or less forgotten or else the object of misunderstanding and derision (See Note 15). Instead, he is known today primarily as he was interpreted by Talcott Parsons, i.e., as a systematic thinker strongly associated with functionalism and with his pioneering use of statistical data to isolate variables for the purposes of demonstrating causal chains in social organizations. Here his great contributions are his dissection of the division of labor and its consequences, and his correlation of suicide rates with alienating social conditions. His other great project, one which strongly influenced later structuralism, was his effort to demonstrate that categories of thought are themselves social products, and thereby to ground Kantian metaphysical imperatives in a structured social reality.

But these are only a part of Durkheim’s sociology. In contrast to the Weberian concern with conscious agents struggling to achieve culturally mediated goals and values, Durkheim founded his sociology on the notion that ordinary consciousness is characterized more by rationalization than by rationality. For him, the reasons people claim to have for what they are doing and the meanings they attribute to their actions are post facto attempts to explain socially generated compulsions which they actually neither understand nor control.

Thus Durkheim, unlike Weber, draws a radical distinction between the goals and character of the group and the goals and character of the individuals within it, arguing that “social psychology has its own laws that are not those of individual psychology” (1966: 312). Furthermore, “the interests of the whole are not necessarily the interests of the part” (Durkheim 1973: 163); indeed, they may be, and often are, completely at odds. But the group imposes its own will upon the hearts and minds of its members and compels them to act in ways that run against their own subjective interests; these actions are later rationalized to ‘make sense’, and the rationalizations then become the value systems of a particular human society.

Durkheim therefore presents us with the extraordinary proposal that sociology cannot take as its subject the individual person who is manipulating within culture to maximize his or her own ends. Rather, he proposes a continuous conflictual ebb and flow between singularity and community, self and group (See Note 16). As he writes, “our inner life has something like a double center of gravity. On the one hand is our individuality – and, more particularly, our body in which it is based; on the other it is everything in us that expresses something other than ourselves…. (These) mutually contradict and deny each other” (1973: 152) (See Note 17).

Durkheim, like Weber, envisions the individual to be rationally calculating and maximizing. But far from assuming this form of consciousness to be the nexus of society or of sociology, Durkheim repudiates egoistic calculation as immoral, solipsistic, depraved, animalistic, and of no sociological interest. Instead, he argues that human beings rise above animality and pure appetite precisely at the point where the ‘normal’ mind of the self-aggrandizing egoistic actor is immersed and subdued within the transformative grip of the social (See Note 18).

Durkheim’s vision of the selfish actor dissolved within the crucible of society appears to parallel Weber’s image of tradition as a state of deindividuated trance. But there is a very significant difference between the two, which derives from Durkheim’s understanding of the experience of group consciousness. Where for Weber the state of unthinking immersion in the group is associated with torpor and lethargy, Durkheim argues instead that people submerge themselves in the collective precisely because participation offers an immediate felt sense of transcendence to its members. It is a sensation of ecstasy, not boredom, that experientially validates self-loss in the community.

Influenced by studies of Mesmerism (See Note 19) and the same notions of emotional excitability that Weber also utilized, Durkheim thought that an extraordinary altered state of consciousness among individuals in a group, which he called ‘collective effervescence’, would occur spontaneously “whenever people are put into closer and more active relations with one another” (Durkheim 1965: 240-1). This experience is one of depersonalization, and of a transcendent sense of participation in something larger and more powerful than oneself (See Note 20). Durkheim, ordinarily a placid writer, paints a potent picture of this state, as the personal ego momentarily disintegrates under the influence of the fevered crowd. “The passions released are of such an impetuosity that they can be restrained by nothing…. Everything is just as though he really were transported into a special world, entirely different from the old one where he ordinarily lives, and into an environment filled with exceptionally intense forces that take hold of him and metamorphose him” (Durkheim 1965: 246, 249).

Durkheim imagines that within the excited mass, sensations of emotional intensification are released in impulsive outbursts that contagiously spread to those around. From this point of view, charisma exists only in the group; the charismatic leader who is Weber’s hero is here a passive symbol serving, in Elias Canetti’s words, as a ‘crowd crystal’ around whom the collective can solidify and resonate (Canetti 1978) (See Note 21). The result of this solidification is immediate imitation, magnified through the lens of the leader and synchronized within the group as a whole. In a feedback loop, this echoing and magnifying serves to further heighten emotion, leading to greater challenges to the ego and more potent feelings of exaltation. After this ecstatic experience “men really are more confident because they feel themselves stronger: and they really are stronger, because forces which were languishing are now reawakened in the consciousness”(Durkheim 1965: 387).

The physical experience of self-loss and intoxication in the crowd’s collective effervescence is, for Durkheim, the “very type of sacred thing” (Durkheim 1965: 140) and is the ultimate and permanent source of social cohesion; all else is secondary. Thus he writes that what is necessary for social life “is that men are assembled, that sentiments are felt in common and expressed in common acts; but the particular nature of these sentiments and acts is something relatively secondary and contingent” (1965: 431-2).

Tradition, from this perspective, is not seen as a torpid counter to the excitement of charisma, as in the Weberian model. Instead, a viable tradition is understood as suffused with the ecstatic experience of regular collective participation. Thus Durkheim conflates charisma and tradition in a manner completely the reverse of Shils and Geertz. For Durkheim, any attribution of meaning to the felt reality of collective effervescence is strictly a posteriori; an attempt by individuals to explain and rationalize what is actually a primal, prelogical, experiential state of transcendent self-loss that provides the felt moral basis for all social configurations, and combats the solipsistic self-interest that would tear society apart.

Crowd Psychology

Durkheim’s positive moral view of group consciousness and Weber’s favorable portrait of charismatic relations were completely overturned in the early 20th century by the crowd psychologists Gustave Le Bon and Gabriel Tarde. These two French theorists, though now largely forgotten by academics, were tremendously influential in their time, and were the founders of the present-day practices of political polling and media consultation as well as the esoteric study of group psychology. For them the collective experience no longer had any redemptive features, and became instead a frightful combination of chaos, credulity and passion, as persons within the crowd automatically regressed to more primitive, child-like states of being while under the influence of their irrational, emotionally compelling leader (See Note 22).

In this formulation, the ‘standard’ state of rational consciousness, which Le Bon and Tarde both quite explicitly took to be the consciousness of a masculine, calculating, utilitarian free agent, was fragile indeed. Though lauding rationality as the highest form of thought, the crowd psychologists, like Weber, were suspicious of the extent to which rational consciousness actually prevailed. Tarde, for example, believed that people, though imagining themselves to be free agents acting for understood goals, are in truth “unconscious puppets whose strings were pulled by their ancestors or political leaders or prophets” (1903:77). From this perspective, men and women, insofar as they are members of a group, are “in a special state, which much resembles the state of fascination in which the hypnotized individual finds himself in the hands of the hypnotiser” (Le Bon 1952: 31).

In this vision, even the most rational individual ran great risk of being quickly and irresistibly reduced to the lowest common denominator when immersed in a crowd, and consequently of acting in a savage, childish, ‘feminine’ and, in short, irrational manner that would never be condoned by ordinary standards of behavior. Rational consciousness, then, is portrayed and appreciated by these thinkers as a feeble refuge from the torrent of passion and destruction that seethes within the collective; a torrent that drowns all who are drawn into its vortex (See Note 23). The Durkheimian view of the power of the collective is here completely accepted, but this power is allowed only a negative moral content, while the good is found solely in the flimsy boat of rationality.

For the crowd psychologists, as for Durkheim, the mechanisms that stimulate the crowd are simple. Once a mass is gathered, any strong action excites immediate imitation and magnification in a cycle of intensification that eventually dies down, much like the ripples that appear after a stone is thrown into a pool. Only through such stimulation can human beings attain “the illusion of will” (Tarde 1903:77) (See Note 24). So, where Durkheim believed the primal group would coalesce spontaneously without the necessity of any external excitement, crowd psychology argued that someone had to throw the stone and provide the “dream of command” that stimulates the crowd to unite in pursuit of “a dream of action”(Tarde 1903: 77).

In postulating the need for a leader to galvanize the group, Le Bon and Tarde brought together Durkheimian and Weberian imagery. But where Weber had given the charismatic a positive value as the founder of new religions and the healer of the dispirited, Le Bon and Tarde see him in negative guise as a powerful and willful figure; a mesmerist who is capable of expressing in his person the electrifying excitement and volition that awakens the sleeping crowd, providing the masses with an irresistible command that solidifies and motivates them under his thrall (See Note 25). The inner character of this leader remained an enigma; far from a rational calculator, he is “recruited from the ranks of those morbidly nervous, excitable, half-deranged persons who are bordering on madness” (Le Bon 1952: 132). In particular, he had to be “obsessed” by an idea that “has taken possession of him”, in a way exactly parallel to the possession of the shaman by a god or gods (Le Bon 1952: 118). The crowd psychologists argue that it is precisely the leader’s obsessive self-absorption that appeals to the crowd, since only through feeling himself pulled and formed by forces beyond his control does the leader gain the power to act and thereby break the cycle of imitation and passivity that has held the collective in a somnambulistic stupor (See Note 26).

In the paradigm offered by crowd psychology, such persons elicit not only obedience, but also the love and adulation of the followers. By standing apart, completely focused on an inner vision which compels and energizes them, they embody and exemplify the “dream of command” that electrifies the following. So we have the paradox of a leader who, far from wishing to further the ends of his followers, instead “in perfect egotism offered himself to (their) adoration” (Tarde 1903: 203). The crowd psychologists thus come to the pessimistic conclusion that the group’s devotion has “never been bestowed on easy-going masters, but on the tyrants who vigorously oppressed them” in order to serve their own driven obsessions (Le Bon 1952: 54).

Crowd psychology therefore unites Durkheim and Weber by placing an ecstatic and convulsive charismatic at the center of a receptive group. The state of torpor that Weber saw in tradition is here understood as the somnambulistic trance that precedes charismatic involvement in a state of collective effervescence. The moral quality of crowd participation and charismatic excitement is now also reversed. Where Durkheim portrayed the vitality of society arising from communal experiences of unity, and where Weber hoped for the arrival of a transformative new prophet who could break open the iron cage of instrumental rationality, crowd psychology gives us frightening imagery of both groups and leaders; imagery that points not toward the church and the prophet, but toward Nazism and Hitler. As Le Bon prophetically writes, as a consequence of the erosion of traditional bonds of kinship, ethnicity and religion that kept the regression to mass consciousness at bay, “the age we are about to enter will in truth be the ERA OF CROWDS” (1952:14).

The Denial of Charisma

In so demonizing the altered states of charisma and group participation, crowd psychology prefigures the modern attitude, though unlike modern writers, the crowd psychologists retained a fearful appreciation of the potency of group consciousness. But this appreciation has been repressed by the efforts of Shils, Geertz and others of the interpretive school, who aim to transform the charismatic appeal of the leader and the convulsive reaction of the group into a rational quest for meaning, order and coherence. In a parallel manner, ‘resource mobilization’ theorists of mass movements have argued that activist groups are made up of purposive and reasonable individual free agents voluntarily gathered together for the sake of commonly held goals of social justice. And, similarly, social constructivist theories of emotion portray emotion as ‘cognitive’, and therefore consider emotions primarily as ‘embodied appraisals’.

I want to be clear here that I do not dispute the salience of a search for meaning, coherence, and justice as causes for commitment to any movement; and certainly emotions are cognized (to be afraid of a cut electrical wire one must know that it is dangerous). But the feeling person, overwhelmed by nameless anxiety, immersed in the vortex of a mob, or irresistibly drawn to a charismatic figure like a moth to a flame, is hardly a rational calculator. The image of free agents making reasonable appraisals of risks, enacting values, construing meaningful systems and pursuing desired outcomes within a coherent cultural context is a vision of humanity that may be appropriate for understanding a great proportion of action and thought; but clearly the apotheosis of rationalization and voluntarism found in these contemporary theories ignores precisely the aspects of social behavior that Weber, Durkheim and the crowd psychologists sought to bring to the fore; i.e., the power of irrational group experience to stimulate men and women into actions that can only be called meaningful, orderly, and goal-oriented if these terms are emptied of all content.

Why has this denial of the irrational psychology of groups and leaders occurred? In part, the assertion of human reasonableness under even the most extraordinary circumstances can be considered an intellectual reaction to the implications of the horrible spectre of Nazism that the crowd psychologists so uncannily prophesied (See Note 27). But it is also clear that the denial of collective deindividuating altered states of consciousness corresponds with our present social formation, which mirrors and ratifies the rationalization processes of the society at large and finds its most powerful philosophical expression in the romantic existentialist apotheosis of the self (See Note 28). Because this model holds sway, a positive moral evaluation of collective charismatic states will be very difficult to achieve, as will the experience of charisma itself.

Charisma Today: est and Scientology

I can illustrate my point (See Note 29) by sketching the trajectory of two apparently pragmatic and “world affirming” (See Note 30) charismatic groups: est, founded and led by Werner Erhard, and Scientology, founded and led by the late L. Ron Hubbard (See Note 31). In their stated purposes, these two groups appear highly instrumental, charging a substantial fee to help people achieve better adjustment at work, new friends, greater happiness, and a more satisfying love life. They have a strong continuity with the ‘healthy-minded’ ‘once-born’ religions that William James (1982) found so characteristic of American culture; religions which typically affirm the goodness of all creation and preach accommodation with the world as it is, attracting middle-class, white collar adherents anxious to better themselves. The est Forum, for instance, stresses that its program is suited to “the already successful… the already healthy…the already committed…the already accomplished…the already knowledgeable” (Forum pamphlet 1986). The purpose of joining is to learn a practice allowing one to manipulate “the levers and controls of personal effectiveness, creativity, vitality and satisfaction” (Forum pamphlet 1986); and testimonials from converts make claims not to higher wisdom, but rather that the discipline “has helped me to handle life better…. I get on better with people…. I can apply myself to work and study more easily than before” (Foster 1971: 119). Successful graduates are “people who know how to make life work” (Erhard quoted in Brewer 1975: 36).

In the pragmatic, cheerful ‘once born’ ethos, the desire for personal enlightenment is reconciled with practical action, doing well in the office becomes a pathway to self-fulfillment, and accepting hierarchy is understood not only as a useful strategy in business, but also as a spiritual exercise, since “you get power by giving power to a source of power” (Erhard quoted in Tipton 1982: 215). Armed with new perceptions, the trainees can acquiesce to whatever situation they find themselves in, confident that “being with it makes it disappear” (an est trainer, quoted in Tipton 1982: 209); that whatever one is doing is what one wants to do, and that the world is good and just.

“Everyone of us is a god in his own universe, and the creator of the very reality around ourselves” (an est trainer, quoted in Singh 1987: 10). As Ellwood remarks, from this perspective “an individual only gets into traps and circumstances he intends to get into…. the limitations he has must have been invented by himself” (1973: 175).

In keeping with the practical, work-oriented manifest content of this ideology, most participants have little involvement in any particular spiritual technology, judging efficacy, like any good consumer, solely by perceived results. They are, in Bird’s (1979) terminology, apprentices rather than devotees or disciples; persons merely looking for helpful knowledge in a complicated mystic marketplace.

Yet, despite their overtly instrumental character, utilitarian orientation, and constantly shifting peripheral membership, these groups paradoxically appear to have a strong tendency to develop highly committed charismatized inner cores of intensely loyal devotees gathered around a leader taken to be a demigod. As Roy Wallis puts it, “social reality outside the movement may come to seem a pale and worthless reflection of the social reality of the movement…. (as) the self and personal identity… become subordinated to the will and personality of the leader” (Wallis 1984: 122-24).

In Scientology, for instance, there was a “transformation from a loose, almost anarchic group of enthusiasts of a lay psychotherapy, Dianetics, to a tightly controlled and rigorously disciplined following for a quasi-religious movement, Scientology” (Wallis 1977:5). L. Ron Hubbard, the founder of this group, began as a science fiction writer and entrepreneur, but ended by claiming to be a Messiah “wearing the boots of responsibility for this universe” (Hubbard quoted in Ellwood 1973: 172). His disciples concurred, seeing him as a charismatic superman who could escape space and time, and whose insight into the world would lead to universal salvation.

For the inner cadre of Scientologists the ‘meaning’ of membership did not hinge on a coherent doctrine, since Hubbard “modified the doctrine frequently without precipitating significant opposition” (Wallis 1977: 153). As a result, “even the most doctrinally learned Scientologists may be unsure what palpable qualities a clear (an enlightened person) is supposed to manifest, other than confidence and loyalty to the cult” (Bainbridge and Stark 1980: 133). Participation rested instead on absolute faith in Hubbard himself and on one’s total unreserved commitment to the organization. As a former convert writes, “the extent of one’s faith was the measure of one’s future gains…. Everything depended on one’s own certainty at the moment” (Kaufman 1972: 25, 179). Any questioning showed one was not moving toward ‘clear’, whereas meditation on Hubbard’s often self-contradictory words was considered to be transformative in itself.

In the fully formed Scientology corporation a multi-million dollar enterprise was headed by a small, secretive, highly disciplined and fully committed central cadre, the Sea Org, marked by their esoteric practices, special language, and distinctive uniforms of white, with black boots and belt. Totally dedicated to Hubbard, they formed an inner circle of virtuosi living in seclusion aboard Hubbard’s yacht, proclaiming their devotion by signing ‘billion year contracts’ of spiritual service to their eternal leader.

As the group claimed to hold the key, not simply to enhanced awareness, but to all the world’s problems, it also became more rigid and totalitarian; fear of ‘suppressives’ (Scientology language for opponents) heightened, leading to expensive lawsuits and countersuits; meanwhile Hubbard himself withdrew deeper into paranoia, eventually isolating himself so that only three people were actually permitted to see him, and it became a matter of controversy whether he was alive or dead (See Note 32).

Est has followed a similar trajectory. Beginning as the revelation of a former encyclopedia salesman and ex-Scientology convert, est brought together the techniques of Scientology, Buddhist meditation, existential philosophy and group therapy to form a potent self-help organization which soon began to exhibit a charismatic character. Werner Erhard, the founder, was idolized by his committed followers as a “fully realized human being” who “lives in risk and possibility… we catch up with him, then he moves ten steps ahead” (a convert quoted in Singh 1987: 89). An inner circle of devotees controlling the vast est empire was absolutely loyal to Erhard, whom they conceived to be a savior. This inner circle was tightly knit, strictly regulated, and required to have only “those purposes, desires, objectives, and intentions that Werner agreed for you to have” (the president of est, quoted in Martin 1980: 112). Not coincidentally, they began to resemble Erhard closely, down to mannerisms and dress.

The accommodative est message of “perfection as a state in which things are the way they are, and not the way they are not” (Erhard quoted in Martin 1980: 114) was taken by the inner circle to be a message that would transform the world through transforming consciousness, and est began to reorient itself in a more overtly religious, salvationist direction, with Erhard as the prophet of the coming millennium. But the pressure of being a charismatic figure began to tell on Erhard, who showed signs of psychological disintegration, brutalizing members of his family and the inner core while simultaneously demanding greater and more violent tests of loyalty from those closest to him. The ensuing tension led, in recent years, to defections and litigation within the core, and to public attacks on Erhard by some of his closest relatives and associates (See Note 33).

The parallel descents of these groups into paranoia and authoritarianism are instructive, and illustrate the difficulties even the most accommodative charismatic movements and leaders have in adapting to modern social conditions. They also illustrate recurrent patterns of group processes that are not reducible to a quest for meaning or coherence or any other rational end, but that can better be conceptualized within a framework of charisma, collective effervescence, and the psychology of crowds. The same framework can help us to understand the methods of recruitment that drew people deeply into these organizations (See Note 34).

Essentially, recruitment to est and Scientology, in common with recruitment to many other modern cults, relies on techniques that reveal to the prospective clients the degree to which their personal identities are contingent and socially constructed. The stated end is to permit the convert to escape from obligations of should and ought (referred to as ‘garbage’) in order to find the authentic, eternal and vital selves that lie beneath social and familial conditioning.

The notion of a primal unsocialized vital center is taken absolutely literally by Scientology. In its doctrine, human beings are actually concrete emanations of timeless energy forces called Thetans, who manifested themselves in the material world for amusement, but who have been so absorbed in their games that they have forgotten their true transcendent identities. To remedy this unhappy condition, one must ‘clear’ material residues and memories away from Thetan consciousness and allow the Thetan to “relinquish his self-imposed limitations” (Hubbard quoted in Wallis 1977: 104).

This fantastic science-fiction ideology would hardly be convincing to many potential converts without its experiential ratification through a long process of training in which the new member’s sense of identity and social context is consistently undermined via a bewildering, repetitious and emotionally charged sequence of ‘deprogramming’ exercises (‘auditing’) which utilize a fallacious instrument (the ‘e-meter’) that students believe registers fluctuations in their emotional responses (see Whitehead 1987 for a detailed account).

In the training, the student, under the eye of an experienced ‘auditor’, may be asked repeatedly to relive and repeat painful or intense experiences of the past. The auditor asks questions such as “tell me something you would be willing to have that person (indicated by the trainer) not know about you”, over and over again. No explanations are given, and the trainee is also constantly obliged to redefine the most common words and phrases he or she uses in response, and is required as well to master the complex Scientology jargon. The ‘runs’ of repeated questions and answers can go for many hours, confusing and exhausting the trainee. The ostensible aim of this ritual is to distance the trainee from emotional reactions to ‘garbage’ so he or she can become ‘at cause’ by getting a ‘clear’ reading on the e-meter. In consequence of this process, the trainee will hypothetically become free to experience unencumbered ecstatic Thetan awareness.

The training process occurs in an atmosphere of high anxiety, as the trainee struggles to control the random fluctuations of the e-meter while feelings of disorientation, remorse, hatred, love, jealousy and so on are simultaneously elicited by the repetitious, probing, highly personal questions and complex demands of the auditor, a powerful authority figure believed to have achieved a more evolved, superhuman consciousness. Each auditing session concludes with a cathartic group gathering in which the participants ‘share wins’ and “were warmly welcomed into the group, greeted and applauded” (Wallis 1977: 173). This sequence proved to be remarkably effective in gaining great loyalty from many Scientology ‘preclears’, who would themselves move up the elaborate ladder toward ‘clear’ status and become ‘auditors’ of other initiates (See Note 35).

Est never utilized such a literal image of liberation as Scientology’s Thetan, but very similar techniques were in operation in the recruitment and training process. For est, as in Scientology, history and family are considered to be destructively enmeshing, and the point of training is to be released “from the cultural trance, the systematic self-delusion, to which most of us surrender our aliveness” (Marsh 1975: 38). The process is conceived as awakening to one’s timeless and vital transpersonal essence, thus becoming “truly able and perfect” (an est trainer, quoted in Tipton 1982: 177). As in Scientology, trainees cannot break through into this perfect realm by reason; reason is regarded as a defense against the intrinsic and immediate truth of intuitive feeling states. “If you experience it, it’s the truth. The same thing believed is a lie” (Erhard, quoted in Tipton 1982: 192).

As in Scientology, instruction is geared to break down the students’ reasoning power and ‘conditioning’ through emotionally charged training sessions designed to demonstrate that their beliefs and personalities are programmed by their past, their culture, and their associations. In the classical est seminar, 250 persons or so spend two weekends totalling 60 to 70 emotionally intense (and expensive) hours of lectures, meditation and confrontation. The trainer typically abuses and infantilizes the group, calling them ‘assholes’ whose lives are ‘shit’, and prohibiting them from using the toilet. The students are further bombarded by paradoxes undercutting logic (See Note 36), asked to relive traumatic emotional experiences of the past, incited to act out deep fears, or perhaps insulted and abused by the leader in front of the audience for arrogance or selfishness. Role playing, switching genders, taking on other identities, all are part of the repertoire. The effectiveness of these efforts to decenter the self in the context of the group is evident in one participant’s description: “It seems now that almost the entire roomful of people are crying, moaning, groaning, sobbing, screaming, shouting, writhing. ‘Stop it! Stop it!’ ‘No! No! No!’ ‘I didn’t do it! I didn’t do it!’ ‘Please….’ ‘Help!’ ‘Daddy, daddy, daddy….’ The groans, the crying, the shouts reinforce each other; the emotions pour out of the trainees” (quoted in Martin 1980: 123).

These methods are quite typical, and involve what Harriet Whitehead (1987) has called ‘renunciation’, that is, a dedifferentiation of cognitive structures coupled with a withdrawal of affect from its previous points of attachment. In this process, the susceptible subject is pressed to become ‘deautomatized’ (Deikman 1969), hyperaware of the role of conditioning and the plasticity of the self, while simultaneously stimulated to emotionally charged abreactions which are mirrored and magnified by the group and the leader, who represents the sacred group founder. These ‘deconditioning’ exercises are obviously not aimed at promoting adaptation to ‘ordinary misery’ (Freud’s claim for psychotherapy), but rather at the revelation of a deeper, transcendent inner self no longer bound by the chains of culture or context, nor by the stimulus-response mechanisms of the mind. Instead, “you take responsibility…. in effect you have freely chosen to do everything that you have ever done and to be precisely what you are. In that instant you become exactly what you always wanted to be” (Brewer 1975).

For participants (See Note 37), this inner self is not a matter of conjecture or theory. It is really experienced in the effervescence of the collective – just as Durkheim hypothesized. The undermining of personal identity, the systematic devaluation and confusion of ordinary thought, and the stimulation of heightened abreactive emotions detached from their original causes – all within the context of the mirroring group and under the protection of a god-like leader – act together to provide expansive sensations of catharsis for those who are carried away by the techniques of collective ecstasy.

The individual participating in this experience is likely to attribute his or her feelings of expansion to the doctrine and the leader. The ‘perfect self’ that is then revealed when personal identity is stripped away is, more often than not, a self modeled after the charismatic group exemplar. A new identity then replaces that which has been abandoned as inauthentic – an identity legitimated by the intensity of the emotion generated in the altered state of consciousness of the ecstatic group context – but one which, in consequence, can only exist within this extraordinary situation (See Note 38). In other words, despite appearances of pragmatism, the world-affirming group is likely to develop into a node of collective effervescence that stands in opposition to the larger rationalized social organization, which is experienced as ‘dead’ and alienating. The next step is to try to make the world replicate the group; this is the road toward Messianism and paranoia.


Two points are especially worth reiterating here. The first is the repeated use of techniques aimed at demonstrating that the recruit is not an autonomous individual, but rather is ‘programmed’ and ‘conditioned’ by history, culture, and family. This revelation, engendered in a highly charged group context under the sway of an apparently powerful authority figure, is crucial in stimulating the emotional abreaction that helps lead the subject into collective participation. It is, it seems to me, an anthropological fact of considerable importance that persons in this culture can be transformed by discovering that their lives are not totally autonomous and that their identities are not completely self-manufactured. The efficacy of this technique is, quite evidently, closely related to the prevalent American capitalist social organization and its accompanying ideology of possessive individualism and purposive agency.

A connected point is that members of a configuration with such an ideological and social structure are highly susceptible to a covert hunger for the collective experience offered by charismatic immersion. As I have argued elsewhere (1990), when the feeling self is stripped of identity markers and significant emotional ties with others, and simultaneously affirmed as the sole source of action and preference, then the intensity and certainty of charismatic revelation will be extremely attractive, since participation in a charismatic group offers precisely the emotional gratification, self-loss and affirmation of a transcendent identity that the predominant social model of reality precludes.

However, because such movements are in conflict with the ruling order of thought, they must take on extreme forms. Charisma becomes not a moment, but eternal; the god is no longer manifested occasionally in an otherwise ordinary mortal, but the vehicle has to be holy all the time. So, paradoxically, a culture founded on the ‘standard’ consciousness of rationality and individual agency renders even more fervid and impetuous the expression of the altered state of awareness Weber called ‘charisma’.

To summarize, in this essay I have argued that ‘meaning-centered’ interpretive analysis is in fact located within a tradition that assumes as its basic premise the rationality of maximizing individual actors. This perspective is not adequate for understanding forms of social action that are outside the realm of rationality – a point recognized by Weber himself in his discussion of tradition and charisma.

Here I have sketched very lightly, with plenty of room for contradiction and dispute, some alternative views on irrationality, using the works of Weber, Durkheim, Le Bon and Tarde to argue that processes of charismatic involvement, collective effervescence, and crowd psychology may help us grasp the basic pattern of such apparently irrational action and place it in a framework of theoretical knowledge. Far too rapidly, I have applied this framework to the actual trajectories of two new religions, showing how their evolution and their mode of recruitment fit within it.

The final question is perhaps whether this mode of approach is applicable only for understanding cultic groups at the periphery of social life, or whether it might have some relevance for more mainstream medical practitioners and psychiatrists. I contend the latter is the case. For example, if we believe, with Durkheim, that human society is built upon an emotional experience of selflessness within the transcendent group, what then happens when the increasing dominance of the competitive economy and the worship of the individual make such experiences less and less likely to occur, or even to be imagined? One result might be the charisma hunger mentioned above, and the escalating excesses of charismatic groups. But the more prevalent result may be the appalling number of complaints about depression, deadness and detachment among psychiatric patients in the US, coupled with fevered efforts to stimulate some sense of vitality through various forms of addiction and thrill seeking. These may be the prices paid for the absence of any felt sense of connection to the social world.



  1. I am not claiming that Westerners only have positive evaluations of instrumental rationality; ‘sincere’ emotion is also highly valued. However, sincere feelings do not come from the mind, but from the heart.
  2. The ‘ideal type’ is a formal conceptual model to be used as a lens for viewing variations in real social configurations in order to make comparisons. This implies that ‘rational’ social formations are in actual fact never fully rational, but always have ‘traditional’ and ‘charismatic’ elements within them, even though these elements may be suppressed or denied. And, of course, the reverse is also the case. For more on Weber’s methodology, see Weber 1949.
  3. Instrumental rationality – the rationality typical of modernity and capitalism – is characterized by the most efficient use of means to reach an end. Value rationality – the rationality of premodern societies – envisions means as ends, with efficiency taking second place to proper modes of behavior. The complexities and ambiguities of this distinction are many, and the boundaries of the categories are by no means clear, but what is relevant here is simply that both types of social action, whatever their differences and similarities, involve conscious choices and acts aimed at maximizing valued goals.
  4. In a sense, charisma is the non-rational parallel to value-rationality, since charisma is the attachment of the self to another through affect, just as value-rationality involves an affective faith in a value. Tradition, which is cold and routinized, is, in this respect, analogous to the equally cold technical efficiency of instrumental rationality.
  5. Interestingly, Weber foresaw just such a hive-like future for rational man. Utmost rational efficiency will lead, he feared, to a rigid and immobile bureaucratic and technocratic social system.
  6. See Weber 1978: 242, 400-3, 535-6, 554, 1112, 1115; 1972: 279, 287 for the relationship between charismatic revelation and ecstatic states of excitement.
  7. The conjunction between epilepsy and charisma seems odd given our modern medical conception of grand-mal and petit-mal epileptic seizures as electrical storms in the brain that eliminate consciousness while causing gross motor convulsions. But Weber’s model (one common to his era) broadly imagined epileptic – or, more properly, epileptoid – seizures as closely akin to hypnotic states and to hysterical fits (see Thornton 1976, Massey and McHenry 1986 for more on this connection). Our modern counterpart might be the category of dissociation. However, it is also worth noting that Winkelman (1986), among others, has argued for a parallel between shamanic dissociation, temporal lobe epilepsy, and other forms of what Sacks (1985) has called mental superabundances, or disorders of excess, in which sensations of energy and vitality become morbid, and illness presents itself as euphoria. An example is Dostoyevsky, who writes, “You all, healthy people, can’t imagine the happiness which we epileptics feel during the second before our fit… I don’t know if this felicity lasts for seconds, hours or months, but believe me, I would not exchange it for all the joys that life may bring!” (quoted in Sacks 1985: 137). We might also recall that cross-cultural studies of shamanism do in fact show strong incidence of overtly epileptoid manifestations such as trembling and convulsions, especially in the early stages of shamanic initiation. Evidently there may be both a predisposition and an element of imitation and training at work in achieving shamanic trance, and the trance itself may have a considerable overlap with some mild forms of disturbance of the temporal lobe.
  8. “Ecstasy was also produced by the provocation of hysterical or epileptoid seizures among those with predispositions toward such paroxysms, which in turn produced orgiastic states in others” (Weber 1978: 535).
  9. Characteristically, Weber’s own intellectual concern is with typologizing and contextualizing the novel ethical meaning systems provoked by the prophet’s revelations. He notes that the prophet himself may believe the new meaning system is his major contribution. But Weber clearly states that for the masses, and especially for the impoverished, the prophet remains a charismatic with transcendent powers; the commitment of these followers is not to ideas, but to the prophet’s person and his promise of immediate experiential salvation (Weber 1978: 467, 487).
  10. Levi-Strauss (1967) takes a similar position, but with a very different analytical point.
  11. “Under the technical and social conditions of rational culture, an imitation of the life of Buddha, Jesus, or Francis seems condemned to failure for purely external reasons” (Weber 1972:357).
  12. See Greenfeld (1985) for a good statement of the distinction between primary and secondary charisma; though she too assumes as the essential driving force an orientation for building meaning.
  13. As Harriet Whitehead writes, “cultural anthropology has chosen the conservative route of merely noting that religious practices seem to have some intensifying or disordering effect upon experience, and retreating back into the realm of culturally organized meaning manipulation” (1987: 105). In Weberian terms, this ‘retreat’ has an ‘elective affinity’ for intellectuals, because it is founded on an assertion of the absolute value and importance of the scholarly professional faith in the primacy of reason and the possibility of approaching meaning through interpretation.
  14. Weber profoundly regretted his own incapacity to experience the compulsion of charisma, he lamented the decline of the ecstatic, and he longed for the advent of “entirely new prophets” who would bring, through their very presence, an escape from “the iron cage” of rational action without transcendent content that he envisioned as the inevitable and unhappy future of humanity (Weber 1958:181-2).
  15. See, for example, Meeker, who portrays Durkheim as believing “science would eventually prove fully adequate as a replacement for religion” (1990: 62), and who castigates him for his supposed dismissal of “human dreams and wishes” in favor of the apotheosis of an abstract emblem. Meeker here ignores Durkheim’s emphasis on passion and desire in the construction of the elementary forms of religious life.
  16. “We do not admit that there is a precise point at which the individual comes to an end and the social realm commences…. we pass without interval from one order of facts to the other” (Durkheim 1966: 313).
  17. In taking this perspective, Durkheim prefigures Freud, but with an entirely reversed moral viewpoint. And, of course, the influence of Rousseau and the Comtean vision of a revolutionary sociology are very strong indeed in Durkheim’s apotheosis of society.
  18. Durkheim argues in an important footnote that the realm of the economy, where the maximizing rational individual holds sway, is the only arena of social life that is in essence completely opposed to the sacred. The dominance of the economy in modern culture is therefore destructive of the moral bonds of society (1965: 466). Note how different his project is from Weber’s, who aimed to show the ways in which various prophecies favor or oppose the rise of capitalism.
  19. As Moscovici writes, the hypnotic state was envisioned in late 19th century French culture as “that strange drug which… releases the individual from his solitude and carries him off to a world of collective intoxication” (1985: 92). As already noted, hypnotism and epilepsy were thought to be similar in nature. The idea and experience of hypnotism and allied dissociated states was a romantic counter to Utilitarian individualism, and had a strong influence on social and psychological thought, as well as literature and the arts, in the late 19th and early 20th century.
  20. The similarity to Weber’s ‘objectless acosmism of love’ is evident.
  21. For this reason, Durkheim can make the seemingly paradoxical claim that “despotism is nothing more than inverted communism” (1984: 144).
  22. This image continues to prevail in medical theories of ‘mass hysteria’. See Bartholomew (in press) for a compendium of examples. Bartholomew’s paper is also an example of the interpretive attempt to validate all apparently irrational action by demonstrating its meaningfulness and intent within a cultural context.
  23. The tropes of the ‘feminine’, ‘savage’, ‘childish’ crowd are painfully clear indicators of the anxiety felt by these men over a possible loss of control and over the weakness of their masculine, civilized, adult personas. An interesting, if obvious, analysis could be made of these metaphors, which relate to the changing political climate of France and heightened fear of lower class rebellion. What I wish to stress here, however, is the structure of the argument.
  24. Awareness makes no difference to this existential condition. “If the photographic plate became conscious at a given moment of what was happening to it, would the nature of the phenomenon be essentially changed?” (Tarde).
  25. As Tarde writes, “volition, together with emotion and conviction, is the most contagious of psychological states. An energetic and authoritative man wields an irresistible power over feeble natures. He gives them the direction which they lack. Obedience to him is not a duty, but a need…. Whatever the master willed, they will; whatever the apostle believes or has believed, they believe” (1903: 198).
  26. Although the leader’s appeal is irrational, it has a certain pattern, and Le Bon gained much of his fame as a modern Machiavelli, telling rulers how to hold the reins of power in the new Age of the Crowd through the use of emotionally charged theatricality, large gestures, dramatic illusions and the rhetoric of myth. According to Le Bon, the modern leader’s technique must be “to exaggerate, to affirm, to resort to repetitions, and never attempt to prove anything by reasoning” (Le Bon 1952: 51). Le Bon’s instructions have been taken seriously by many demagogues, including Hitler, who cited him extensively in Mein Kampf.
  27. Those who believe that Nazi devotees and leaders were motivated by either value or instrumental rationality should consider work by Robert Waite (1977) and Ian Kershaw (1987), as well as Joachim Fest’s biography of Hitler (1974), and the numerous biographies of dedicated Nazis. For more on this, see Lindholm 1990: 93-116.
  28. The intellectual debt of much contemporary anthropological theory to existential and phenomenological thought cannot be adequately pursued here, but particularly noteworthy is an emphasis on ‘authenticity’ and a refusal to make comparisons – both derived from premises of the priority of a unique inner self-consciousness struggling to free itself from what Heidegger (1962) called the tyranny of ‘the they.’ The Western character of these premises is, I hope, evident.
  29. See Lindholm (1990) for a theoretical framework, and for analysis of more extreme cases of modern charisma: Nazism, the Manson Family, and Jim Jones’s Peoples Temple.
  30. The term is used by Roy Wallis to distinguish these positive movements from apocalyptic and millennial ‘world rejecting’ movements such as Jonestown (Wallis 1984).
  31. The material is taken from sources which rely both on the testimony of converts and of those who have ‘deconverted’. On the question of the moral stance of the informant, and its influence on the data, see the Appendix in Wallis (1984). Here, I have used material which is corroborated by sources both within and without the movements.
  32. Hubbard was officially reported dead in 1986, but he had not been seen in public for many years, and may have died sometime previously (see Lamont 1986 for an account). The difficulty of maintaining a charismatic organization after the death of the leader is probably one cause of the reluctance to admit his death.
  33. Erhard has subsequently resigned some of his positions of authority in the organization.
  34. These methods have been substantially altered as each organization moves through the cycle of charismatic routinization and then again attempts to restimulate fervor among the disciples. The examples used here date from the most expansive and charismatic phase of this process.
  35. See Bainbridge and Stark (1980), who argue that the lack of any real content in ‘clear’ status and the constantly shifting Scientology doctrine actually enhanced Scientology’s hold over its converts.
  36. Erhard, a postmodernist before his time, has commented that “there are only two things in the world, semantics and nothing” (quoted in Martin 1980: 114).
  37. I should note that of course not all participants prove to be equally susceptible to the lure of the group. Innumerable differences in personal and cultural background and circumstances will make a difference in the degree to which any individual will be likely to participate. But under the right conditions, it is also very possible that even the most resistant individual might be caught up in the compelling dynamic of a charismatic collective.
  38. Bainbridge (1978) has called this process “social implosion,” that is, the development of a tight knot of persons, interacting solely with one another, bound by powerful feelings of loyalty and of separateness from the rest of society.

Charisma, Crowd Psychology and Altered States of Consciousness, Charles Lindholm, University Professors Program and Dept. of Anthropology, Boston University

Delusions, Beliefs

I spotted the following article in The Psychologist Vol 16, and it made me realise how many people of different faiths, beliefs, and mindsets should really be considered deluded. Indeed, I know the case of a Swedish convert to an Eastern religion to whom I could relate this article (in fact I can relate it to many followers of Western and Eastern religions). The capacity to stand back and look at ourselves in a reasonably objective way is crucial for a rationalist. Unfortunately, it is a luxury that eludes many people. Being critical of everything and everyone demands an intellectual honesty that not all people are capable of handling. Of course, intellectual honesty is a casualty for people who want certainty in their lives at the expense of truth, and it is intellectual terrorism for those who center their lives around the zealous propagation of religion, faith, and unquestioning culture.

Early in his third month in office, President Reagan was on his way to address a conference when John Hinckley fired six gunshots at point-blank range, wounding the president and three of his entourage. In the controversial trial that followed, three defence psychiatrists successfully argued that Hinckley was not guilty by reason of insanity, on the grounds that he was suffering from the delusion that the assassination would cause Jodie Foster, the actress from Taxi Driver (a film with which Hinckley was obsessed), to fall in love with him. In the same year the award-winning author Philip K. Dick, whose books have been turned into major Hollywood films, such as Blade Runner, Total Recall and Minority Report, published one of his last books. The sprawling and eccentric VALIS is a novel based on delusions resulting from his own psychotic breakdown, which he drew on for much of his prolific career (see box 1).

From these and many other examples, it would appear that unusual or unlikely beliefs have significant consequences and continue to captivate the interest of many of us. But to examine such claims we need to know what is meant by a delusion. How do delusions differ from other abnormal beliefs? Does the study of delusions provide a productive way of understanding beliefs?

Box 1: Philip K. Dick
Many novels and short stories by Philip K. Dick contain elements from the delusions he suffered regarding identity and the nature of reality. Dick described many bizarre experiences and came to believe that human development was controlled by an entity called VALIS (Vast Active Living Intelligence System) and that his perception of Orange County, California was an illusion disguising the fact that he was really living in first-century Rome. There were multiple reasons for Dick’s bizarre beliefs, given his share of trauma, phobias and drug abuse, but it is likely that many of the delusions he wrote about stemmed from psychotic episodes he experienced as a sufferer and as an observer of others. This alone makes his work of great psychological interest. However, Dick also seems to have had some knowledge of contemporary psychology himself, incorporating as he did the work of Penfield, Vygotsky and Luria (among others) into his stories.

Defining issues

Delusions are one of the most important constructs used by psychiatrists to diagnose patients who are considered to have lost touch with reality (Maher, 1988). For Jaspers (1963), one of the founders of modern psychiatry, delusions constituted the ‘basic characteristic of madness’ despite being ‘psychologically irreducible’.

More significantly, the detection of delusions has ‘enormous implications for diagnosis and treatment, as well as complex notions concerning responsibility, prediction of behaviour, etc.’ (David, 1999). Yet, as pointed out by many commentators (see Jones, 1999), the clinical usage of the term delusion and its distinction from other abnormal beliefs involve a host of semantic and epistemological difficulties. Predominant amongst these is our belief that delusions are (to a large extent) self-evident; that is, that they constitute a type of belief that (almost) everyone else would recognise as pathological. This, however, is more apparent than real, as is evident from the many different opinions that surround the definition of the construct (Berrios, 1991; Garety & Hemsley, 1994; Spitzer, 1990). Indeed, David (1999) has suggested ‘there is no acceptable (rather than accepted) definition of a delusion’ (p.17).

For most of us, however, these thorny issues of definition can be sidestepped by choosing to adopt the descriptive and widespread characterisation offered by the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM-IV). This established psychiatric nosology text considers a delusion to be, first and foremost, a form of belief: a belief whose acceptance and subsequent behaviour can constitute the grounds for insanity. But no justification is offered and the statement itself amounts to a belief in delusions. More explicitly, the standard definition characterises delusions as false, based on an incorrect inference about external reality and different from what almost everyone else believes (APA, 1994). Other features such as degree of conviction and imperviousness to persuasion do not set delusions apart from other beliefs (Garety & Hemsley, 1994).

Delusions – An abnormal belief by any other name

Despite differences in emphasis, most definitions consider two criteria to be significant when establishing a delusion: falsifiability and bizarreness. Simply described, ‘bizarre delusions are generally impossible, whereas non-bizarre delusions are generally improbable’ (Sedler, 1995, p.256). The DSM-IV distinguishes these as follows: a non-bizarre delusion may involve situations that in principle could occur in real life but are thought (by the psychiatrist) to be highly improbable and therefore potentially falsifiable; a bizarre or fantastic belief, however, is considered impossible and therefore assumed to be one not normally held by others in the culture or society. The problem with each of these definitions lies not with the differential distinction, but with the absence of agreed operational definitions as to how these criteria are arrived at clinically.

The DSM definition does not specify how one might set about establishing the falseness or bizarreness of the belief; nor how one could know whether the belief was the product of an impaired inference, such as occurs in paranoid patients, who show a tendency to jump to conclusions in situations requiring probabilistic reasoning (Bentall, 1994). Here we turn to some specific problems.

Falsifiability Non-bizarre delusions involve situations and events that could occur in real life, such as believing that one is being followed, infected, poisoned or deceived by another. Therefore the ‘falsifiability’ criterion can mean that psychiatrists are often required to make judgements on claims of marital infidelity, persecution or conspiracy in the workplace (Jones, 1999), where the available relevant evidence is either limited, cannot be ascertained within the confines of the consulting room, or lies beyond the forensic capabilities of the clinician. As pointed out by Young (2000), ‘many of the beliefs considered to be delusions do not meet these criteria (or are not tested against them) in practice’ (p.47). This can have some curious consequences (see ‘The Martha Mitchell effect’, box 2).

Accordingly, this falsity criterion has been rightly questioned (Spitzer, 1990). Moreover, it is unclear what level of evidence would be required to consider a belief ‘incontrovertibly false’ and whether judgements should be based on the ‘balance of probabilities’ or the more stringent test of ‘beyond reasonable doubt’. ‘Delusional’ beliefs, consequently, may not be false (Heise, 1988) or even firmly sustained (Myin-Germeys et al., 2001).

Bizarre beliefs The attribution that a delusion is bizarre is typically defined in terms of beliefs considered not normally held by other members of a person’s culture or society. This, however, often first involves the psychiatrist’s own evaluation as regards the plausibility of the belief; after which the psychiatrist considers whether it is one typically sustained by the others in the person’s culture. Although both evaluations may be related, they need not be. If, based on his or her own beliefs and experience, the psychiatrist considers the belief sufficiently bizarre, then presumably a diagnosis of delusion can be made independent of ascertaining the actual prevalence of the belief in the patient’s culture. 

Box 2: The Martha Mitchell Effect
Sometimes improbable patient reports are erroneously assumed to be symptoms of mental illness (Maher, 1988). The ‘Martha Mitchell effect’ refers to the tendency of mental health practitioners not to believe the experience of the wife of the American attorney general, whose persistent reports of corruption in the Nixon White House were initially dismissed as evidence of delusional thinking, until later proved correct by the Watergate investigation. Such examples demonstrate that delusional pathology can often lie in the failure or inability to verify whether the events have actually taken place, no matter how improbable they might appear intuitively to the busy clinician. Clearly, there are instances ‘where people are pursued by the Mafia’ or are ‘kept under surveillance by the police’, and where they rightly suspect ‘that their spouse is unfaithful’ (Sedler, 1995). As Joseph H. Berke (1998) wrote, even paranoids have enemies! For understandable and obvious reasons, however, little effort is invested by clinicians into checking the validity of claims of persecution or harassment, and without such evidence the patient could be labelled delusional.

The DSM definition, however, clearly assumes that the criterion of abnormality or bizarreness should be obvious, given that the belief is one not ordinarily accepted by other members of a person’s culture or subculture. This is not necessarily a reliable strategy: many studies of psychiatrists show poor interrater reliability for ratings of bizarre beliefs (Flaum et al., 1991; Junginger et al., 1992). Moreover, most clinicians are not in a position to know or find out whether such beliefs comprise those normally accepted, except by direct comparison with those of his or her own peer group. One method of comparison is the use of large-scale surveys, but most clinical judgements on the prevalence of beliefs in society are not typically informed by empirical evidence.

In fact, beliefs in unscientific or parapsychological phenomena are not statistically uncommon (see Della Salla, 1999), and were this criterion alone employed as a sufficient condition, then many of us at times might be classified as delusional (Moor & Tucker, 1979). Large-scale marketing research polls carried out in the UK and North America consistently reveal that significant numbers of people within society hold strong beliefs about the paranormal. For example, a 1998 UK survey found that 41 per cent of respondents believed in communication with the dead, and 49 per cent believed in heaven – but only 28 per cent in hell (‘Survey of paranormal beliefs’, 1998). Such surveys also reveal important cultural differences in held beliefs. In many Western countries opinion polls confirm that large numbers believe in god(s) and hold other paranormal beliefs (Taylor, 2003). Consequently, religious beliefs, including praying to a deity, are not typically considered delusional, while believing and claiming that one is a deity (see ‘The Three Christs of Ypsilanti’, box 3) or that one’s spouse has been replaced (see ‘Capgras delusion’, box 4) typically are.

The existence of high levels of conviction in what might be considered abnormal, unscientific or paranormal beliefs raises important questions for mental health workers when justifying the notion of bizarre beliefs on purely conceptual or statistical grounds. As pointed out by French (1992), most beliefs are based upon ‘personal experiences perhaps supported by reports of trusted others, and the general cultural acceptance that such phenomena are indeed genuine’.

Although clinically important, the conceptual basis for the criteria of falsification or impossibility clearly breaks down under scrutiny. It is also problematic because psychotic symptoms such as delusions and hallucinations are not inevitably associated with the presence of a psychiatric disorder (Johns & van Os, 2001). Consequently, patients with DSM-IV-type delusions do not constitute a homogeneous group.

Box 3: The Three Christs Of Ypsilanti
In 1959 social psychologist Milton Rokeach brought together three schizophrenic patients on the same psychiatric ward in Ypsilanti, Michigan, all of whom suffered from the Messiah complex – each believed he was Jesus Christ. Rokeach was interested in seeing whether these mutually exclusive delusions would interact and affect the extent of conviction and the content of each patient’s delusional beliefs. In his book Rokeach (1964/1981) records how each patient dealt with this conflict: one by avoidance, one by relinquishing his delusion and the other by attributing the identity claims of his compatriots to mental illness. Whilst this study would be considered ethically dubious today, it was one of the most original forays into the study of psychopathology with the explicit aim of informing our understanding of normal belief processes.

More often than not, the decision about whether or not a belief is delusional is made on pragmatic grounds – namely, the consequences of the belief, including the extent of personal distress and the potential or actual injury or social danger it generates. Sometimes the decision may be simple – Cotard’s delusion, a person’s belief that they are dead, may be assessed differently from a grandiose delusion such as believing that you are dating a famous TV star.

Can delusions tell us about ‘normal’ beliefs?

Notwithstanding difficulties with the standard psychiatric definitions, most people accept that normal beliefs play an essential and fundamental role in establishing mental reference points from which to explain and interact with the world. It is impossible to understand racism, prejudice, and political and religious conflict without considering discrepancies between fundamental belief systems. Fodor (1983) indicated that beliefs comprise a ‘central’ cognitive process and should be regarded as qualitatively different from the modular processes that have been well exploited by cognitive neuropsychologists (Coltheart, 1999). The proposition, however, is not matched by any clear consensus in neuropsychological accounts of what constitutes the cognitive or neural mechanisms involved, the evolutionary functions, or how such beliefs can be maintained and changed.

Jones (1999) describes beliefs as mental forms that incorporate the capacity to influence behaviour and cognition and govern the way people think and what they do. But the debate as to what defines a belief or belief state rumbles on, and some researchers have instead opted to examine the ways in which damage or change to known cognitive processes can affect belief formation, as communicated or acted upon by patients diagnosed as suffering from delusions.

Bryant (1997) observed that over the past 20 years a variety of cognitive models of belief formation have drawn ‘empirical support from evidence that delusions can be elicited in normal individuals undergoing anomalous experiences (Zimbardo et al., 1981), the prevalence of delusions in neuropathological disturbances of sensory experience (Ellis & Young, 1990), reasoning deficits in deluded patients (Garety et al., 1991) and the tendency for deluded patients to make external attributions following negative life events (Kaney & Bentall, 1989)’ (p.44). Recent developments from cognitive neuropsychiatry have shown how detailed investigations of monodelusional conditions (e.g. Capgras) can help to generate testable theories of delusion, face recognition and normal belief formation (Ellis & Lewis, 2001). But this potentially rich vein of research for cognitive neuropsychiatry (see Coltheart & Davis, 2000; Halligan & David, 2001) does not necessarily imply that delusions are the primary source of psychopathology in patients diagnosed as psychotic.

Since most patients requiring psychiatric help have fully formed delusions by the time they are clinically diagnosed, establishing the causal factors responsible for the delusion is difficult. The neuropsychological or neurophysiological abnormalities observed could just as easily be interpreted as the product rather than the cause of these mental disorders.

However, if the formation of delusions as abnormal beliefs is the product of selective but as yet unspecified cognitive disturbance (e.g. in reasoning, thinking, attribution) then studying delusions may inform our understanding of how this psychopathology impacts on normal belief systems. Either way, they provide a platform for elucidating the cognitive architecture of belief formation itself.

Box 4: Capgras Delusion
Following a car crash in September 1995, Alan Davies became convinced that his wife of 31 years had died in the accident and had been replaced by someone with whom he did not want to share his life. Diagnosed as suffering from Capgras syndrome, Mr Davies was awarded £130,000 in damages after it was claimed that his rare psychiatric syndrome was caused by the crash that he and his wife, Christine, had survived. Despite suffering only minor physical injury, he came to regard his wife, whom he now called Christine II, as an imposter and became stressed by any show of affection (de Bruxelles, 1999).

Future directions from a useful past

Despite the concept of delusion being common parlance in psychiatry and society, it is only in the last 20 years that serious attempts have been made to define and understand the construct in formal cognitive terms (Bentall et al., 2001; Coltheart & Davis, 2000; Garety & Hemsley, 1994).

One area that has been either ignored or relegated to a mysterious box in belief formation diagrams is the influence of our current ‘web of beliefs’ on the adoption or rejection of new beliefs. Stone and Young (1997) strongly argued that belief formation may involve weighing up explanations that are observationally adequate against those that fit within a person’s current belief set. However, no plausible account of the process by which new beliefs are integrated into such a belief set, or by which a pre-existing set influences how we generate beliefs about our perceptual world, has been widely adopted.

Philosophers and social psychologists have attempted to piece together some of this network – and with some success. Quine and Ullian (1978) set out some philosophical principles by which a web of belief should operate. Of particular interest is their principle that beliefs are more easily shed, adopted or altered when the resulting network disruption is minimal, and that beliefs are validated by their relationships with existing beliefs. Moreover, they claim that any belief ‘can be held unrefuted no matter what, by making enough adjustments in other beliefs’ (p.79) – though sometimes this results in madness. Based on the idea that not all beliefs (or links) are created equal, empirical work has shown that particular beliefs can be differentiated by the number and strength of the other beliefs relied on for their justification (Maio, 2002).

One theoretical framework that we are exploring in Cardiff is that provided by coherence theory (Thagard, 2000) when considering dynamic models of belief processes in action. Our working model describes how active beliefs can be evaluated for their acceptability by how well they cohere into existing belief sets. Beliefs and the constraints between them (for example, believing that Elvis is alive would constrain you to reject the belief that he is buried at Graceland) can be given values or weights. These allow an overall measure of coherence to be calculated and also permit a quantitative measure of disruption when beliefs are added, discarded or revised.
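The working model sketched above can be made concrete with a toy implementation. The following Python sketch is illustrative only: it brute-forces the most coherent acceptance pattern over a tiny hand-built network in the spirit of Thagard-style constraint satisfaction. Every belief name, weight and the ‘evidence’ bonus here is an assumption for illustration, not part of the Cardiff model.

```python
from itertools import product

# Toy sketch of coherence-as-constraint-satisfaction (after Thagard, 2000).
# All belief names, weights and the evidence bonus are invented.
beliefs = ["elvis_alive", "buried_at_graceland", "saw_elvis_yesterday"]

# Positive weights link mutually supporting beliefs; negative weights
# link beliefs that constrain each other to opposite sides.
constraints = {
    ("elvis_alive", "buried_at_graceland"): -1.0,  # mutually exclusive
    ("elvis_alive", "saw_elvis_yesterday"): 0.8,   # mutually supporting
}

# Direct perceptual support: first-hand experience gets extra weight.
evidence = {"saw_elvis_yesterday": 0.5}

def coherence(accepted):
    """Total weight of satisfied constraints for a set of accepted beliefs.
    A positive constraint is satisfied when both beliefs fall on the same
    side (both accepted or both rejected); a negative one when they fall
    on opposite sides."""
    total = 0.0
    for (a, b), w in constraints.items():
        same_side = (a in accepted) == (b in accepted)
        if w > 0 and same_side:
            total += w
        elif w < 0 and not same_side:
            total += -w
    # Accepted beliefs with direct perceptual support add their bonus.
    total += sum(bonus for b, bonus in evidence.items() if b in accepted)
    return total

def best_assignment():
    """Brute-force the acceptance pattern with maximal overall coherence."""
    best, best_score = set(), float("-inf")
    for bits in product([False, True], repeat=len(beliefs)):
        accepted = {b for b, keep in zip(beliefs, bits) if keep}
        score = coherence(accepted)
        if score > best_score:
            best, best_score = accepted, score
    return best, best_score
```

Disruption can then be quantified as the drop in coherence when a belief is forcibly clamped into or out of the accepted set – the quantitative measure of revision the model calls for.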

Sensory input may be a constraint in itself, with the threshold for believing things obtained from your own senses (‘I believe it was raining this morning’) considered higher than for those taken on authority alone (‘I believe it was raining during the Battle of Waterloo’). This hierarchy may partly explain why in some cases delusional beliefs can be adopted over very short periods and with such conviction, and involve the sufferer dramatically revising other beliefs to cohere with their new-found preoccupation. Unusual experiences, which may accompany brain injury or mental illness, may also provide direct perceptual evidence for unlikely or bizarre beliefs that cause a radical reorganisation of a previously conservative belief network.

However, there must be more to pathological beliefs than simply reacting to unusual experiences, otherwise our belief systems would be in a constant state of flux. Influences on the ways in which individuals establish links between beliefs and their subsequent relevance for the individual also need to be taken into account when trying to explain why delusions are often considered bizarre.

A coherence theory account can address some of these problems by allowing reasoning biases to be modelled via damage to the constraints between beliefs. A particular advantage of this approach is that coherence models can be implemented as artificial neural networks, which means the model can address predictions from neuropsychiatry. For example, Spitzer (1995) has argued for the role of dopamine modulation in perceiving significance. He likens the role of dopamine to a perceptual ‘signal-to-noise ratio’ contrast control, where too little modulation could mean we make no useful distinction between meaningful and nonmeaningful information.

Too much, however, could lead us to see significance and meaning in perceptual information that we might otherwise ignore, causing, according to Spitzer, a range of unusual and unlikely beliefs. Given the heterogeneity and complexity of the factors involved, not least the difficulty of agreeing on a common language with which to describe and assess the construct of abnormal beliefs in question, it would seem sensible to adopt an eclectic approach to delusions – one that links understanding from neuroscience, cognitive psychology and social psychology. This would allow ‘abnormal’ and delusional beliefs to be understood as arising not simply from damaged biological mechanisms or information-processing modules, but from cognitive beings firmly situated within their social milieu. Such an approach might also better allow us to treat patients with distressing beliefs, as well as provide a clearer insight into how each of us comes to hold our own beliefs, be they viewed by others as mundane, profound or peculiar.
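Spitzer’s contrast-control analogy lends itself to a small numerical sketch. The code below is hypothetical, not Spitzer’s actual model: a softmax whose gain parameter stands in for dopaminergic modulation, applied to invented input values.

```python
import math

# Hypothetical illustration of a gain-modulation ("signal-to-noise") idea:
# a single gain parameter controls how sharply differences in input
# strength translate into differences in judged significance.
def significance(inputs, gain):
    """Softmax over input strengths; gain acts as an inverse temperature."""
    exps = [math.exp(gain * x) for x in inputs]
    total = sum(exps)
    return [e / total for e in exps]

signal_among_noise = [1.0, 0.1, 0.1, 0.1]  # one genuinely meaningful item
pure_noise = [0.12, 0.10, 0.11, 0.10]      # small random fluctuations only

# Too little modulation: even the genuine signal barely stands out.
flat = significance(signal_among_noise, gain=0.1)

# Moderate modulation: the signal is clearly separated from the noise.
tuned = significance(signal_among_noise, gain=3.0)

# Excessive modulation: a meaningless fluctuation acquires apparent
# significance, loosely analogous to seeing meaning everywhere.
overdriven = significance(pure_noise, gain=50.0)
```

With low gain the output distribution is nearly flat; with moderate gain the signal dominates; with excessive gain even pure noise yields a sharply peaked output, the analogue of attributing significance to the meaningless.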


American Psychiatric Association (1994). Diagnostic and statistical manual of mental disorders (4th edn). Washington, DC: Author.
Bentall, R.P., Corcoran, R., Howard, R., Blackwood, N. & Kinderman, P. (2001). Persecutory delusions: A review and theoretical integration. Clinical Psychology Review, 21, 1143–1192.
Berke, J.H. (1998). Even paranoids have enemies: New perspectives on paranoia and persecution. London and New York: Routledge.
Berrios, G.E. (1991). Delusions as ‘wrong beliefs’: A conceptual history. British Journal of Psychiatry, 14, 6–13.
Bryant, R.A. (1997). Folie à famille: A cognitive study of delusional beliefs. Psychiatry, 60, 44–50.
Coltheart, M. (1999). Modularity and cognition. Trends in Cognitive Sciences, 3(3), 115–120.
Coltheart, M. & Davis, M. (Eds.) (2000). Pathologies of belief. Oxford: Blackwell.
David, A.S. (1999). On the impossibility of defining delusions. Philosophy, Psychiatry and Psychology, 6, 17–20.
de Bruxelles, S. (1999, 5 March). Crash victim thinks wife is an imposter. The Times, p.7.
Della Salla, S. (Ed.) (1999). Mind myths. New York: Wiley.
Ellis, H.D. & Lewis, M.B. (2001). Capgras delusion: A window on face recognition. Trends in Cognitive Sciences, 5(4), 149–156.
Flaum, M., Arndt, S. & Andreasen, N.C. (1991). The reliability of ‘bizarre’ delusions. Comprehensive Psychiatry, 32, 59–65.
Fodor, J. (1983). The modularity of mind. Cambridge, MA: MIT Press.
French, C.C. (1992). Factors underlying belief in the paranormal: Do sheep and goats think differently? The Psychologist, 5, 295–299.
Garety, P.A. & Hemsley, D.R. (1994). Delusions: Investigations into the psychology of delusional reasoning. Oxford: Oxford University Press.
Halligan, P.W. & David, A.S. (2001). Cognitive neuropsychiatry: Towards a scientific psychopathology. Nature Reviews Neuroscience, 2, 209–215.
Heise, D.R. (1988). Delusions and the construction of reality. In T. Oltmanns & B. Maher (Eds.) Delusional beliefs (pp.259–272). Chichester: Wiley.
Jaspers, K. (1963). General psychopathology (7th edn; J. Hoenig & M. Hamilton, Trans.). Manchester: Manchester University Press.
Johns, L.C. & van Os, J. (2001). The continuity of psychotic experiences in the general population. Clinical Psychology Review, 21, 1125–1141.
Jones, E. (1999). The phenomenology of abnormal belief. Philosophy, Psychiatry and Psychology, 6, 1–16.
Junginger, J., Barker, S. & Coe, D. (1992). Mood theme and bizarreness of delusions in schizophrenia and mood psychosis. Journal of Abnormal Psychology, 101, 287–292.
Maher, B. (1988). Anomalous experience and delusional thinking: The logic of explanations. In T.F. Oltmanns & B.A. Maher (Eds.) Delusional beliefs (pp.15–33). Chichester: Wiley.
Maio, G.R. (2002). Values – Truth and meaning. The Psychologist, 15, 296–299.
Moor, J.H. & Tucker, G.J. (1979). Delusions: Analysis and criteria. Comprehensive Psychiatry, 20, 388–393.
Myin-Germeys, I., Nicolson, N.A. & Delespaul, P.A.E.G. (2001). The context of delusional experiences in the daily life of patients with schizophrenia. Psychological Medicine, 31, 489–498.
Quine, W.V. & Ullian, J.S. (1978). The web of belief (2nd edn). Toronto: Random House.
Rokeach, M. (1981). The three Christs of Ypsilanti. New York: Columbia University Press. (Original work published 1964)
Sedler, M.J. (1995). Understanding delusions. The Psychiatric Clinics of North America, 18, 251–262.
Spitzer, M. (1990). On defining delusion. Comprehensive Psychiatry, 31, 377–397.
Spitzer, M. (1995). A neurocomputational approach to delusions. Comprehensive Psychiatry, 36, 83–105.
Stone, T. & Young, A.W. (1997). Delusions and brain injury: The philosophy and psychology of belief. Mind and Language, 12, 327–364.
Survey of paranormal beliefs. (1998, 2 February). Daily Mail.
Thagard, P. (2000). Coherence in thought and action. Cambridge, MA: MIT Press.

Beliefs About Delusions, Vaughan Bell, Peter Halligan and Hadyn Ellis.
Published in The Psychologist Vol 16 No 8, pages 418-423.

Globalization and Fundamentalism

The 56th Pugwash Conference on Science and World Affairs: A Region in Transition: Peace and Reform in the Middle East, 11-15 November 2006, Cairo, Egypt. (Working Group 4: Globalization and Fundamentalist Terrorism; two sides of a coin in the modern world)


The religious traditions of Judaism, Christianity and Islam are all today undergoing a transformation known generically as ‘fundamentalist’. Although this term is no longer possible to define precisely, and although there are obvious differences between the movements to which the label is attached, numerous common features remain, including the original defining feature of fundamentalism, namely the idea of the inerrancy of a sacred text. Together, these considerations justify interpreting contemporary religious transformations within a common framework of analysis, especially when account is taken of their global character.

This paper develops such an interpretation by focusing on two aspects of the globalism of fundamentalist movements: their transnational reach, and the role played by globalism in their imaginary projections across time and space (Lehmann 1998).

It also describes how globalization not only influences transnational fundamentalist terrorism as an extreme expression of protest against modernity and globalization, but could itself simultaneously be considered a product of that very process (modernity and globalization).

Finally, the core idea of the paper is that the rise of fundamentalist movements, especially in their Islamic form, should first be seen as an attempt at a unifying interpretation of social values, one whose appeal to blind violence must be understood in the shadow of the waves of globalization.

The connections between globalization and fundamentalist terrorism drawn in the academic literature since the attacks of September 11, 2001, with their emphasis on technological advances, loosening barriers and the growing vulnerability of an integrated world, make it possible to see terrorists as “global actors” who, along with other actors in the global economy and politics, shape future developments in the world.

The terrorist form of protest exhibits an extreme form of self-described marginality. Terrorism seems to be the only available expression of protest when the enemy is considered overwhelmingly powerful; the struggle, as Hoffmann said, must nevertheless not be lost.

Fundamentalist terrorists view themselves as being engaged in a cosmic war forced on them by the enemy. Terrorist assaults are, therefore, symbolic acts of violence against symbols of the enemy’s power, intended to demonstrate, if only temporarily, the enemy’s weakness (Halfmann 2003).

This essay seeks not only to identify the most important studies in the field but to show what a critical role globalization plays in the rise of fundamentalist terrorism as a reaction to the crisis of meaning and integration in the world.


At the end of the Cold War, commentators were full of optimistic pronouncements about a global order based on liberal capitalism and democracy (e.g. Friedman 1999). In 1989 Francis Fukuyama famously announced the end of history: “…not just the end of the Cold War, or the passing of a particular period of history, but the end of history as such: that is, the end point of mankind’s ideological evolution and the universalization of Western liberal democracy as the final form of human government” (Fukuyama 1989, p.2).

What happened?  In the view of many, the answer is simple:  the world changed on 9/11. But where did 9/11 come from?  And what does it represent? 

Many observers see the struggle between the United States and al-Qaeda as “World War IV,” while other experts caution that the September 11 attacks may be an anomaly.

Policy makers and consultants close to the current US government interpret fundamentalist terrorism as one among several responses to the military and economic superiority of the United States and its strong support for economic and political globalization.

As Jost Halfmann puts it in his essay “Fundamentalist terrorism: the assault on the symbols of secular power”, terrorism is considered a militant practice using asymmetric means of violence against American power, choosing strategies “designed to ‘exhaust American will, circumvent or minimize US strengths, and exploit perceived US weaknesses’” (CIA/NIC 2000, quoted in Prados 2002: 24) rather than engaging in direct military confrontation.

This assessment obviously rests on a stark reduction of the complexity of the issues involved in terrorism.

But let us return once again to the main question: where does fundamentalist terrorism come from?

The probable answers can be put in the form of four propositions:

  1. Fundamentalism is, in a sense, always with us, and the particular manifestation of it that we see today in the form of radical Islam was itself a response to the convergence observed at the end of the Cold War, a response that could have been foreseen.
  2. Fundamentalism can sometimes come to power even in a democracy.
  3. The end of (most) Soviet-style totalitarian systems did not mean the end of dictatorship, and in fact other forms of dictatorship, such as tyranny, are more likely to engender terrorism. In a globalized world, terrorist acts are also increasingly likely to be directed against countries in the West to the extent that these countries are seen as supporting tyrannies.
  4. Globalization itself carries with it the seeds of fundamentalism.

The paper elaborates on each of these; the fourth is the most complex and is its main subject.

Fundamentalism from different views

One of the most influential studies of the political ramifications of Islam and fundamentalist terrorism, Samuel P. Huntington’s book “The Clash of Civilizations”, sets the tone for many social-scientific and political analyses that place terrorism in the context of a cultural conflict. Apparently, this analysis has gained some recognition among members and consultants of the current US government. Huntington, in foreseeing conflicts between the Western (Christian) civilization and other civilizations such as the Islamic one, constructs a direct causal relationship between Islam as a religion and Islamic fundamentalism as a political movement: “The underlying problem for the West is not Islamic fundamentalism. It is Islam, a different civilization whose people are convinced of the superiority of their culture, and are obsessed with the inferiority of their power” (Huntington 1997: 217).

Huntington believes that conflicts between civilizations are replacing conflicts between nations and ideologies; Huntington describes civilizations as clusters of nations, ordered according to shared religious beliefs and cultural values.

He defines civilization as a “cultural entity … the highest cultural grouping … defined by both common objective elements, such as language, history, religion, customs, institutions, and by the subjective self-identification of people … A civilization may include several nation states or only one” (Huntington 1993: 24).

He anticipates a major fault line between the Islamic civilization and the Western civilization opening up due to Islamic hostility to “Western ideas of individualism, liberalism, constitutionalism, human rights, equality, liberty, the rule of law, democracy, free markets, the separation of church and state” (Huntington 1993: 40).

Huntington’s analysis implies a division of the world into different civilizations, which at any given point in time are at different stages in their life cycles. While Western culture, which has spread its values and rules of conduct all over the world, has reached its historical pinnacle, other cultures, such as the Islamic but also the Chinese (Confucian), are on the rise. The aggressive stance of these cultures prompts measures by the West to defend its culture of “industrialization, urbanization, increasing levels of literacy, education, wealth, and social mobilization, and more complex and diversified occupational structures” (Huntington 1997: 68). Huntington’s concept of civilizations as clusters of states has contributed to the belief that fundamentalist and terrorist threats can be attributed not only to organizations and groups like al-Qaeda, but also to states which embrace or further fundamentalist views and terrorist activities. This analysis, then, seems to support a politics of military measures against so-called “rogue states”.

In contrast to Huntington’s point of view, Jean Baudrillard, in describing terrorism as a virus of modern society, has pointed at a major weakness of this kind of analysis. Baudrillard stipulates that it would be wrong to see the problem of terrorism as resulting from the confrontation of cultures with differing levels of modernity.

To him, it is rather a conflict within modern global society itself; it is, in Baudrillard’s words, “triumphant globalization battling against itself” (Baudrillard 2002: 11).

Globalization, or, in Huntington’s terms, the spread of Western concepts of free markets and democracy, has a self-destructive reverse side. According to Baudrillard’s analysis, the fundamentalist terrorists are not pre-modern, but use all the ingredients of modernity (“money and stock-market speculation, computer technology and aeronautics, spectacle, and the media”, Baudrillard 2002: 19) for one singular purpose: to turn their (often suicidal) terrorist attacks into symbolic weapons for which their opponents have no appropriate answer.

Engaging themselves in a “culture of death” (with the expectation of entering paradise after death), fundamentalist terrorists see the weakness of their enemies in their adherence to a “culture of life”. In this vein, Baudrillard comes to the conclusion that, even though religious terrorists apply the notion of (holy) war to their actions, to counter terrorism by military means is the “continuation of the absence of politics by other means” (Baudrillard 2002: 34). Based on these introductory remarks, I will propose a preliminary definition of terrorism and particularly of fundamentalist terrorism.

Terrorism can be defined as acts of violence against symbols of power of a state to demonstrate the enemy’s weakness and to mobilize a potential constituency.

Fundamentalist terrorism deploys these acts of violence as part of what its adherents see as an inevitable reaction in a cosmic war to protect their existence.

Baudrillard’s essay, which does not pretend to be a fully-fledged sociological analysis of terrorism, points at two issues that are important for any analysis of fundamentalist terrorism.

First, that terrorism is part and parcel of modern society and not in any way of traditional society; and that modern society has to be viewed as global society. Second, that the aim of terrorism is not to challenge state sovereignty, but its symbols of power.

The first claim is corroborated by the research of Mark Juergensmeyer, who found striking similarities in the worldviews of religious terrorists from very different countries and denominations, be they American adherents of right-wing religious organizations such as Christian Identity, Israeli followers of Kahane’s Kach party or members of the Japanese Aum Shinrikyo sect. Fundamentalist terrorism is not simply a threat emerging from pre-modern or modernizing societies; it is generated in modern societies themselves (Juergensmeyer 2001).

A similar argument is made by Olivier Roy, who describes in his book “Globalized Islam: The Search for a New Ummah” the emergence of radical Islamic organizations in Western states with no ideological or organizational ties to Islamic countries. He portrays fundamentalist movements, in particular al-Qaeda, as a direct response to globalization pressures exerted by Western cultural and economic values. In his view, “Islamic radicalization is a pathological consequence of Westernization”.

“A significant number of fundamentalists are found in countries that are not predominantly Muslim”, Roy said. The movement tends to draw people who feel cut off from what they see as a traditionally Islamic lifestyle, including converts to Islam in European countries such as Britain, France and Germany. At the same time, fundamentalists find themselves unable to fit into the societies in which they are living. The fundamentalists have “recast religion outside of cultural contexts” (Roy 2003).

As such, fundamentalists tend to hold idealized notions about Islam. Roy added that the lack of a firm grounding in traditional Islamic cultural values leaves fundamentalists vulnerable to propaganda calling on them to express their faith in violent forms.

Roy drew a sharp distinction between Islamists and fundamentalists. Islamists, he maintained, may once have held radical ideas, but those views have largely been channeled into mainstream political activity.

He cited Iran as an example. The religious fervor that buffeted Tehran at the time of the 1979 revolution mellowed over time, as the ruling elite found it had to moderate its policies in order to maintain power.

In Roy’s words, fundamentalists are disinclined to work toward the establishment of an Islamic nation state. Many Western policymakers mistakenly assume that the key to solving the problem of Islamic radicalism is a Middle East settlement. But Roy’s theories on the nature of the fundamentalist movement challenge some of the notions supporting the Western strategic response to radical Islamic-inspired terrorism. Many experts in the West consider the democratization of the Islamic world the key to containing terrorism. Roy calls this notion only “half true”: democratization would help “isolate” Islamic fundamentalists, but it would not likely dispel the sense of economic and cultural alienation that drives the movement (Roy 2003).

The second claim is shared by many analyses of terrorism (see e.g. Juergensmeyer 2001: 123; Crenshaw 1995; Hoffman 1998). In 1998 Osama bin Laden, together with other leaders of the “World Islamic Front”, proclaimed that the American intervention in the Middle East is a “declaration of war on God, his messenger, and Muslims”; the fatwa issued in response to this war calls for jihad, a holy war against America (Bin-Ladin et al. 2002 [1998]).

Bruce Lawrence, in his book “Defenders of God: The Fundamentalist Revolt Against the Modern Age”, defines fundamentalism as “the affirmation of religious authority as holistic and absolute, admitting of neither criticism nor reduction; it is expressed through the collective demand that specific creedal and ethical dictates derived from scripture be publicly recognized and legally enforced”. Lawrence argues that fundamentalism is a specific kind of religious ideology. It is antimodern, but not antimodernist. In other words, it rejects the philosophical rationalism and individualism that accompany modernity, but it takes full advantage of certain technological advances that also characterize the modern age. The most consistent denominator is opposition to Enlightenment values. It means, as Marshall Berman explicates in his significant study “All That Is Solid Melts Into Air: The Experience of Modernity”, that fundamentalist terrorists are modern but they are not modernist. Like Roy, Lawrence also treats fundamentalist terrorists as non-state economic and political actors.

Lawrence believes that fundamentalism is a worldwide phenomenon and that it must be compared across various contexts before it can be understood or explained with any clarity.

Lawrence lists five “family resemblances” common to fundamentalism.

  1. Fundamentalists are advocates of a minority viewpoint. They see themselves as a righteous remnant. Even when they are numerically a majority, they perceive themselves as a minority.
  2. They are oppositional and confrontational towards both secularists and “wayward” religious followers.
  3. They are secondary-level male elites, led invariably by charismatic males.
  4. Fundamentalists generate their own technical vocabulary.
  5. Fundamentalism has historical antecedents, but no ideological precursor.

Even granting the assumption that terrorism is a modern phenomenon, like other features of modern society such as democracy or free markets, the question yet to be answered is on what grounds terrorists strike against symbols of power and what the meaning of such assaults is.

The answer should again be sought within modernity and the globalized order itself. As Jost Halfmann declares, the modernity of modern society consists precisely in the lack (or loss) of an instance of integration and unification of the diverse social systems in society (Halfmann, Globalization and Fundamentalism, 2001).

This feature distinguishes modern from earlier forms of society, which were organized around some integrating instance, such as religious or political peak institutions, and where inclusion in society was provided to individuals by birth and divine order. For the individual actors, the differentiation of social systems and the diversity of inclusions in these systems mean first and foremost the experience of risk and contingency. The loose coupling of the social systems in modern society and the merely procedural character of their operations have been noted for their consequences for individual actors as “Sinnverlust” (the loss of meaning, Weber), the “Verlust der Mitte” (the loss of the center, Sedlmayr 1948), or as the existentialist deprivation of any transcendental reassurance (Sartre, Camus).

The Problematic of the Unifying interpretations of modern society

Compared to pre-modern society, modern society is a society without a center, in both an institutional and an interpretative sense. It is a pervasive feature of modern society that the experience of differentiation and contingency has been dealt with by a variety of unifying interpretations of society: concepts that try to make overarching sense of the confusing diversity of modern society. This is what will be called a unifying semantic throughout this presentation. Unifying semantics are interpretations of a right and good order of society, typically proposed from a particular social system perspective, which aim at reducing the implications of the pluralist, heterogeneous, and contingent character of modern society. Often, unifying views of society emerge from social movements and their adherents in literary circles, academia, and the mass media.

Nationalism is a case in point: it describes society as a territorially circumscribed community of citizens. Nationalism is a unifying semantic, proposed from the point of view of the political system.

In his book Nations and Nationalism (Gellner 1983), Ernest Gellner has made this argument convincingly. Nationalism is, for Gellner, a semantic device that accompanies the societal transformation from agricultural to industrial society, from a society based on personal relationships to one built on impersonal ones. The social meaning of nationalism is to make the strain of alienation during that transformation process palpable. Similarly, pan-national movements such as Pan-Slavism (Kohn 1960) or Pan-Arabism define community in ethnic rather than territorial terms.

In a similar vein, fundamentalism can be interpreted as a unifying worldview; but in contrast to nationalism, society is viewed from a religious perspective, and its constituency is defined not territorially but universally (Hoffmann 2002). Its specific target is the separation of politics from religion.

This has been noted by some scholars of fundamentalism. Bassam Tibi states that the secular nation-state is the “prime target of fundamentalism” (Tibi 1998: 6). But it is not the (cultural) fragmentation of modernity as such, as Tibi claims (Tibi 1998: 6), which is the direct cause of modern fundamentalism. It is the interpretation of fragmentation as a sign of cultural decay that is at issue.

In Hoffmann’s words: cultural fragmentation – that is, different views of society and, particularly, different views of the “good society” – is the norm in modern society, given the plurality of perspectives that follows from the functional differentiation of society: one can choose to view society from an economic, a political, or a health perspective, and each time one looks at a different kind of society. Fragmented outlooks and fragmented identities are the norm in modern society.

There are different ways of dealing with fragmentation (or functional differentiation): one might acknowledge and perhaps praise it, as cultural pluralism or postmodernism does, or one can search for interpretations that make sense of society as a whole. These I will call unifying interpretations, and fundamentalism is one variant of them. Unifying semantics view society from one particular perspective, using values and symbols that aim at improving the chances for consensus on specific issues across diverging social groups and across the system borders of functional differentiation. Unifying worldviews try to counter the contingency of outcomes by offering compensatory rewards and outlooks.

All unifying semantics view society from a vantage point, as it were from the outside, in order to look at society as a whole. From the social-scientific point of view, however, all unifying semantics are views constructed inside society, emerging from some particular social system, be it politics, religion, or a social movement. The pretense of taking an outside look at society introduces a potential for excluding evidence that contradicts this view and that speaks for the plurality and contingency of the world. Cultural theory argues that the content and the degree of rigidity of a unifying semantic depend on whether the unifying worldview belongs to the center or the periphery of society, that is, whether its potential for soliciting consensus is high or low (Douglas 1973; Douglas/Wildavsky 1983; Thompson/Ellis/Wildavsky 1990).

Rationalism is a unifying set of values and symbols, which assumes that roughly the same means/ends calculi direct behavior in politics, economics, or the family (Hoffmann 2002).

Rationalism, but also solidarity and community, are sets of unifying values and meanings that are prevalent in segments of society with a high capacity for coping with functional differentiation, such as the professional milieus of urban areas in advanced countries.

Such unifying worldviews found in the center of modern society contrast with unifying worldviews at the periphery of modern society (for this distinction see Shils 1961).

The distinction between central and peripheral positions is a cultural one, denoting the degree of resistance a unifying view – and the intention of changing society – would encounter in society; it signifies a high versus a low acceptance of unifying semantics across the spectrum of differentiated social systems. Rationalism – the concept of applying cause-and-effect explanations to events in the world – is a unifying concept of the central segments of modern society, which finds comparatively easy acceptance in the economy, politics, law, or sports thanks to a range of technologies that embody the principles of rationalism. (Think of Max Weber’s theory of rationalism as the leitmotif of modernization, cutting across the diverse “value spheres,” as he called the diverse social contexts.) Central segments of society have also developed a high acceptance of the ambiguity involved in unifying concepts: a tolerance for breakdowns of rationalism, which become probable given the diverse modes of operation and the diverse meanings of rationalism in the respective social systems.

Thus, unifying semantics in the center enjoy far-reaching acceptance and exert relatively little exclusionary power over those who disagree; at the same time they allow for paradoxes and contradictions, that is, for acknowledging breakdowns of rationalism, which become starting points for critiques of this kind of unifying worldview (see the cultural critiques of modern “instrumental” views of social relations in the Critical Theory of the Frankfurt School, Habermas).

Peripheral segments of society are characterized by the experience that their unifying semantic meets much resistance. Political Islam in “modernizing” countries like Turkey or Egypt is stiffly resisted by the secular elites.

These modernizing regimes tend to exclude Islamist groups from political representation in order to keep religion and politics apart.

Often these regimes are politically rigid and resistant to democratic government, as the example of Algeria in the 1990s shows.

Central and peripheral unifying semantics can exist in one country at the same time, as the co-presence of fundamentalist movements and professional elites in the US demonstrates.

In a world that exhibits a plurality of value principles, and in which no institution has evolved that could establish a hierarchy of values, only religion can offer absolute values. Other sources of quasi-absolute values, such as Marxism, which have also been used as justifications for terrorism (the Red Army Faction in Germany or the Red Brigades in Italy), are much less well suited as a basis for fundamentalism because of their close association with scientific reasoning and its intimate relationship to doubt and revision.

As Hoffmann argued, the thrust of unifying worldviews depends on how the experience of difference (in social systems, social values, and lifestyles) is negotiated against the drive for unity as a medium of sense-making.

Unifying worldviews with a high regard for the diversity of incarnations of unifying semantics (depending on the social context in which they are used) and for the other side of unity (difference) could be called post-modern (example: rationalism). Unifying worldviews that encounter high resistance and articulate little tolerance for alternatives (difference) represent the other end of the spectrum (fundamentalist terrorism). Unifying semantics that are posited in view of other, competing unifying worldviews might be called modernizing unifying semantics (examples: Kemalism, Nasser’s Pan-Arabism in Egypt). Inward-oriented unifying semantics with little regard for competing semantics might be labeled traditional (example: Sufi religion).

Therefore, what turns adherents of political Islam as a unifying interpretation into terrorists is neither poverty or deprivation nor alienation (the rejection of Western lifestyles, the culture of individualism, or sexual liberty – although all these motives may play a role in terrorists’ accounts of making sense of their decisions; see Juergensmeyer 2001), but the belief that every other option is foreclosed, and that they must defend themselves in a war imposed by an overwhelmingly powerful enemy bent on destroying the Islamic unifying interpretation by globalizing the world under its own, different modern values. This is the moment at which the other side of the modernity coin shows itself. Violence is an answer to the experience of powerlessness, to the fact that opposition to functional differentiation (in this case, between politics and religion) is treated as opposition to Western values. Violence is the expression of protest where no other form of expression (such as reform) allows one to mark a difference from the opponent or enemy.

The ultimate expression of terrorism is the assault on symbols of the enemy’s power: the vacuous attack, void of any strategic significance. The assault on the Twin Towers had a primarily symbolic character; it was meant to demonstrate “the vulnerability of governmental power” (Juergensmeyer 2001: 132). Because of the uncompromising character of terrorism, the possible death of innocent victims is not an issue. The sharp division between the holy mission and the evil to be fought excludes any recognition of the idea that there can be innocence on the part of the enemy.

Fighting terrorism by military means, to counter rogue states that provide terrorists with safe havens, or by police means, to counter terrorist organizations and their members, is certainly the “gut reaction” of states whose prime task is to provide security for the constituency within their territorial realm. One should note, however, that terrorism is as much a feature of the modernity of modern society as are markets and democracy. This means that fighting terrorism by military and police means might successfully weaken terrorists and their organizations, but not necessarily fundamentalism. As Hassan II, the King of Morocco, correctly observed: “… if fundamentalism has to be engaged in battle, it would not be done with tanks. Fundamentalists don’t have armored divisions, they have no Scud missiles, and not an atomic weapon” (interview in the International Herald Tribune, March 14, 1995; quotation taken from Tibi 1998: 4). The very character of modern society as a plurality of social systems, each promoting different sets of values and worldviews, permanently invites attempts at finding unifying views of society. The separation of politics and religion can become the object of demands for a unity between the two, to control the contingency of outcomes in a society operating on the basis of procedures rather than values.

The Manichaean worldview that goes along with any form of fundamentalist politics knows only sharp differences between friend and foe, us and them, the powerless periphery and the overwhelming center. This view lends itself to uncompromising attitudes. It is therefore critical not to counter the terrorists’ Manichaeanism with an equivalent view on the side of the state.

Similarly, installing secular “democracies” in defeated rogue states such as Afghanistan or Iraq might replace governments that have provided safe havens for terrorists, but it might also reinvigorate or create fundamentalism, and possibly fundamentalist terrorism, because secular statehood is precisely what their protest is about. Military intervention is obviously no longer a viable and prudent option in such a constellation.


Baudrillard, Jean, 2002, The Spirit of Terrorism, London: Verso

Bellah, Robert et al., 1991, The Good Society, New York: Knopf

Bin-Ladin, Usamah, 2002, Jihad against Jews and crusaders. Statement of the World Islamic Front, February 23, 1998, in: John Prados (ed.), America confronts terrorism. Understanding the danger and how to think about it, Chicago: Ivan R. Dee: 176-178

CIA/NIC (Central Intelligence Agency/National Intelligence Council), 2002, Global Trends 2015: A Dialogue about the Future with Nongovernmental Experts (December 2000), in: John Prados (ed.), America confronts terrorism: Understanding the danger and how to think about it, Chicago: Ivan R. Dee, pp. 23-4

Crenshaw, Martha (ed.), 1995, Terrorism in Context, University Park, PA: Pennsylvania State University Press

Douglas, Mary, 1973, Natural Symbols, New York: Vintage Books

Douglas, Mary, 1992, Muffled Ears, in: Mary Douglas, Risk and Blame, London: Routledge

Douglas, Mary/Wildavsky, Aaron, 1983, Risk and Culture, Berkeley, CA: University of California Press

Esposito, John, 1999, The Islamic Threat: Myth or Reality? New York: Oxford University Press

Gellner, Ernest, 1983, Nations and Nationalism, Oxford: Blackwell

Hoffman, Bruce, 1998, Inside Terrorism, New York: Columbia University Press

Huntington, Samuel P., 1993, Clash of Civilizations? In: Foreign Affairs, Vol. 72, No. 3

Huntington, Samuel P., 1997, The Clash of Civilizations and the Remaking of World Order, New York: Simon & Schuster

Juergensmeyer, Mark, 1992, Sacrifice and cosmic war, in: Mark Juergensmeyer (ed.), Violence and the Sacred in the World, London: Frank Cass, pp. 101-117

Juergensmeyer, Mark, 2001, Terror in the Mind of God: The Global Rise of Religious Violence, Berkeley: University of California Press

Kohn, Hans, 1960, Pan-Slavism: its History and Ideology, New York: Vintage Books

Lawrence, Bruce, 1998, Defenders of God: The Fundamentalist Revolt Against the Modern Age, New York

Lewis, Bernard, 1968, The Emergence of Modern Turkey, Oxford: Oxford University Press

Luhmann, Niklas, 1997, Die Gesellschaft der Gesellschaft, 2 vols., Frankfurt a.M.: Suhrkamp

Prados, John (ed.), 2002, America confronts terrorism. Understanding the danger and how to think about it, Chicago: Ivan R. Dee

Reinhard, Wolfgang, 1999, Geschichte der Staatsgewalt. Eine vergleichende Verfassungsgeschichte Europas von den Anfängen bis zur Gegenwart, München: C.H. Beck

Roy, Olivier, 2003, EuroIslam: the Jihad within?, in: The National Interest, Spring, pp. 63-73

Roy, Olivier, 2004, Globalized Islam: The Search for a New Ummah, New York: Columbia University Press

Sedlmayr, Hans, 1948, Verlust der Mitte: die bildende Kunst des 19. und 20. Jahrhunderts als Symptom und Symbol der Zeit, Salzburg: O. Müller

Shils, Edward, 1961: Centre and Periphery, in: Logic of Personal Knowledge: Essays Presented to Michael Polanyi on his Seventieth Birthday, London: Routledge & Kegan Paul

Thompson, Michael/Ellis, Richard/Wildavsky, Aaron, 1990, Cultural Theory, Boulder, Col.: Westview Press

Tibi, Bassam, 1998, The Challenge of Fundamentalism: Political Islam and the New World Disorder, Berkeley: University of California Press

Van Crefeld, Martin L., 1991, The transformation of war, New York: Free Press

Wuthnow, Robert, 1998, After heaven: spirituality in America since the 1950s, Berkeley, CA: University of California Press


Maryam Javanshahraki, M.A. Political Science, University of Tehran, Iran
November 2006

Anthropology – Conclusions

As a diverse, multifunctional cultural universal, religion is unavoidably a phenomenon of surpassing anthropological interest. What the anthropology of religion has long ignored, however, is the fact that religion and anthropology are competitors in the attempt to fulfill many of the same functions. Much of the domain of inquiry that anthropology has recently claimed for itself is one that religion has long considered its own, including the fundamental questions of human origins, human nature, and human destiny. Elman Service (1985:319) makes this point very tellingly in A Century of Controversy:

People, in the union of society, already know the answers to all of the questions they consider basic…Unlike the natural sciences, which at first were called on simply to fill the dark void of ignorance with increasingly sure, or testable, knowledge (and which were likely to be the ones asking the question), the behavioral sciences faced questions that had already been asked and answered by the culture itself.

The conflict between religion and anthropology comes about because the answers that the two offer to the “basic questions” concerning humanity are in most cases fundamentally opposed. Religious and scientific perspectives on such questions are rarely complementary, as is popularly supposed. More often, religious and scientific perspectives are mutually contradictory and ultimately incompatible. Anthropological science reveals, in addition, that the contradictory answers offered by religion are clearly, demonstrably, and unequivocally wrong. When it comes to the questions of human origins and human nature, for example, it is evident that the world’s religions are mistaken. Consider the Judeo-Christian tradition as a single instance: the human species is not less than 10,000 years old, the present geographical distribution of human populations is not attributable to survivor dispersion following a universal flood, the origins of Homo sapiens are not distinct from the rest of the animal kingdom, the linguistic diversity of the human species is not the result of an historic event in southwest Asia 4,000 years ago, illness is not caused by the Devil, and women are not intellectually inferior to men.

In my view, the goal of anthropology should be to give us the right answers to the questions that human beings have always asked. The exceptional value of our discipline does not lie in our subject matter, which is neither unique nor original. Instead, it is the anthropological approach (specifically, the scientific perspective) which makes our discipline worthwhile. No rational person can doubt the unequaled value of scientific investigation. “Since the eighteenth century,” as Bernard (1988:25) aptly observes, “every phenomenon, including human thought and behavior, to which the scientific method has been systematically applied over a sustained period of time, by a large number of researchers, has yielded its secrets, and the knowledge has been turned into more effective human control of events.”

The unfortunate truth is, however, that the scientific study of human thought and behavior has lagged behind the scientific study of the natural world, in part because social scientists, out of deference to the emotional sensitivities of their fellow humans, have been especially reticent about applying the scientific method to the entire range of anthropological phenomena. The study of religion is only the most obvious instance of that reticence. If we would like to achieve something comparable to the success that our colleagues in physics, chemistry, and biology have achieved, we will have to be equally consistent in our application of the scientific method.

To summarize briefly, we know that no religious belief is true, because we know that all religious beliefs are either nonfalsifiable or falsified. In the interests of scientific integrity, we have an obligation to declare that knowledge. Doing so, of course, would not preclude other anthropological analyses of religion, and I would not want to be understood as having suggested that we should abandon the study of the social, psychological, ecological, symbolic, aesthetic, and ethical functions and dimensions of religion. It is precisely those areas where the anthropology of religion has made and continues to make its greatest contributions. Nevertheless, the scientific study of religion will never be fully legitimate until scientists recognize and proclaim the reality of religion.



1 There have been exceptions, of course. Murdock (1980:54), for example, makes this unambiguous observation: “There are no such things as souls, or demons, and such mental constructs as Jehovah are as fictitious as those of Superman or Santa Claus.” Similarly, Schneider (1965:85) offers this forthright declaration: “There is no supernatural. Ghosts do not exist.” But these are the exceptions that prove the rule.

2 Scientific objectivity is, admittedly, founded upon a pair of ultimately unprovable assumptions: first, the assumption that “reality is ‘out there’ to be discovered,” as Bernard (1988:12) says (or that “there are things outside of the observer which no amount of merely logical manipulation can create or destroy,” as Harris [1964:169] puts it), and second, the assumption that reality is amenable to human inquiry (or that reliable knowledge is attainable, in other words). However, while it may not be possible to conclusively prove the truth of either assumption, neither is it possible to reasonably doubt the validity of either. Both assumptions are decisively validated by the overwhelming weight of human experience. Our lives are not mere illusions, and we have succeeded in understanding and predicting much of the world. To deny the first assumption is to engage in the worst sort of solipsism; “it is quite true that facts do not speak for themselves,” as Spaulding (1988:264) astutely observes, “but a conclusion that therefore there are no facts is a crashing non sequitur.” To deny the second assumption is to claim to know that no knowledge is possible, and that, obviously, is self-contradictory.

3 It is a mistake that I myself have made. In the first edition of my textbook on anthropological theory (Lett 1987:26), I suggested that science could be defined as “a systematic method of inquiry based upon empirical observation that seeks to provide coherent, reliable, and testable explanations of empirical phenomena and that rejects all accounts, descriptions, and analyses that are either not falsifiable or that have been decisively falsified.” Of course, I was following some well-established anthropological precedents. Pelto and Pelto (1978:22), for example, define science as “the structure and the processes of discovery and verification of systematic and reliable knowledge about any relatively enduring aspect of the universe, carried out by means of empirical observations, and the development of concepts and propositions for interrelating and explaining such observations.” Harris (1979:27) maintains that science “seeks to restrict fields of inquiry to events, entities, and relationships that are knowable by means of explicit, logico-empirical, inductive-deductive, quantifiable public procedures or ‘operations’ subject to replication by independent observers.” I now recognize, however, that objectivity is the defining quality of science, and that science is empirical as a consequence of objectivity, not as a condition of objectivity.

4 The fact that scientific knowledge is not absolutely certain knowledge in no way diminishes the unique value and demonstrable superiority of the scientific approach. As Watson (1991:276) notes, “public, objective knowledge of the world including human beings is not certain, but neither is it merely one interpretation out of many, each of which is no better than any other.” When it comes to the acquisition of factual knowledge, the scientific method has a record of success that far outshines any other epistemological approach. The reliability, predictability, generalizability, and usefulness of scientific knowledge are simply unparalleled; the vindication of the scientific method on pragmatic grounds is decisive.

5 The term “paranormal” was first popularized by parapsychologists, but is likely to be most familiar to anthropologists through the efforts of The Committee for the Scientific Investigation of Claims of the Paranormal. CSICOP, which was founded in 1976 by the philosopher Paul Kurtz, is a national organization of philosophers, natural scientists, social scientists, physicians, engineers, attorneys, journalists, magicians, and other skeptical people committed to the rational analysis of paranormal claims. The organization includes a number of anthropologists among its Fellows and contributors to its quarterly journal, The Skeptical Inquirer.

6 Joseph K. Long’s (1977) edited volume Extrasensory Ecology: Parapsychology and Anthropology is perhaps the most regrettable example of the irrational approach to the paranormal within cultural anthropology. The collection can be described, somewhat charitably, as one of the saddest and silliest books ever published under an anthropological aegis. Long’s gullibility and flagrant disregard for rational principles of evidential reasoning are egregious. He baldly states, for example, that “ghosts, astral projections, and poltergeists are real” (1977:viii), he describes levitation as “probable” (1977:384-385), he claims that at least some so-called “psychic surgeons” (who are really sleight-of-hand artists) have successfully performed barehanded operations on human patients that involve “deep and random cutting, extraction of parts, and immediate healing of the wound leaving virtually no scar” (1977:375), and he endorses the transparently fraudulent “psychokinetic” stunts of the Israeli showman Uri Geller as genuine (1977:248).

Reproduced from Professor James Lett’s Faculty WebPage

The Nature of Religion

In Religion in Human Life, Edward Norbeck (1974:6) observes that “religion is characteristically seen by anthropologists as a distinctive symbolic expression of human life that interprets man himself and his universe, providing motives for human action, and also a group of associated acts which have survival value for the human species.” Various formulations could be subsumed under that general description, such as Lessa and Vogt’s (1972:1) notion that “religion may be described as a system of beliefs and practices directed toward the ‘ultimate concern’ of a society,” or Geertz’s (1973:90) concept of religion as “a system of symbols” that integrates a culture’s world view and ethos. Those definitions, however, could logically embrace existentialism, communism, secular humanism, or other philosophies which most anthropologists would be reluctant to call religion. How then is religion distinguished from comparable sets of beliefs and behaviors that fulfill similar functions?

As Norbeck (1974:6) explains, “the distinguishing trait commonly used is supernaturalism, ideas and acts centered on views of supernatural power.” The concept of the supernatural has been firmly tied to the anthropological definition of religion since the origins of the discipline. Edward Tylor (1958:8), for example, argued that “it seems best…to claim, as a minimum definition of Religion, the belief in Spiritual Beings.” Frazer (1963:58) maintained that “religion involves, first, a belief in superhuman beings who rule the world, and, second, an attempt to win their favour.” Malinowski (1954:17) observed that sacred “acts and observances are always associated with beliefs in supernatural forces, especially those of magic, or with ideas about beings, spirits, ghosts, dead ancestors, or gods.” The concept of the supernatural continues to dominate anthropological conceptions of religion today. Marvin Harris (1989:399), for example, declares that “the basis of all that is distinctly religious in human thought is animism, the belief that humans share the world with a population of extraordinary, extracorporeal, and mostly invisible beings.”

There is a fundamental problem with the term “supernatural,” however: it is so varyingly conceived in the different cultures of the world that it lacks a common, unambiguous definition. The Yanomamo, Roman Catholic, !Kung San, and Buddhist conceptions of the “supernatural” realm, for example, are widely divergent and even contradictory in some aspects. The problem is that the term “supernatural” is an emic concept, meaning that it is defined in terms of the categories and concepts regarded as meaningful and appropriate by the members of particular cultures; it is not an etic concept, one defined in terms of the categories and concepts regarded as meaningful and appropriate by the community of scientific observers (Lett 1990). As an emic concept, the term “supernatural” has as many definitions as there are cultures; as an etic concept, it has no recognized, agreed-upon definition.

Nor could any such objective, scientific definition be offered for the term “supernatural,” for the simple reason that the word is propositionally meaningless. The term “supernatural” is purportedly used to designate a reality that somehow transcends the natural universe of empirical reality, but what does it mean to “transcend empirical reality?” If such a thing as “nonempirical reality” exists, how could we, as empirical beings, even know about it? (Revelation and intuition, after all, are demonstrably unreliable–witness the mutually exclusive claims to knowledge made by different people on revelatory grounds.) If such a thing as “nonempirical reality” exists, by what mechanism is it connected to empirical reality? (How, in other words, do supernatural beings and forces have an impact on the natural world?) Further, if such a thing as “nonempirical reality” exists, why is there not a single shred of objective evidence to indicate its existence? As the physicist Victor Stenger (1990:33) points out, there is no rational reason whatsoever to even hypothesize the existence of the “supernatural:”

At this writing, neither the data gathered by our external senses, the instruments we have built to enhance those senses, nor our innermost thoughts require that we introduce a nonmaterial component to the universe. No human experience, measurement, or observation forces us to adopt fundamental hypotheses or explanatory principles beyond those of the Standard Model of physics and the chance processes of evolution.

The term “supernatural” thus purports to describe a reality that we could not know or recognize, one that could not have any impact on the reality we do know and recognize, and one for which we have no evidence whatsoever; it is, in short, unintelligible. The philosopher William Gray (1991:39) eschews the term “supernatural” and suggests instead that religious statements can be described as “metaphysical,” by which he means statements that refer to facts that could not possibly be observed. But what would an “unobservable fact” be? To substitute “metaphysical” for “supernatural” is simply to play a semantic game. Terms such as “supernatural,” “metaphysical,” and “nonempirical reality” are, in fact, oxymorons. It would make just as much sense to talk about the “unreal real.”

Connotatively, the term “supernatural” presents additional problems: it is not sufficiently comprehensive to embrace beliefs and behaviors that are virtually identical in form and function to so-called “religious” beliefs and behaviors, but which would not commonly be called “supernatural.” Gods, demons, angels, and souls, for example, could easily be called “supernatural,” and so too, perhaps, could incubi, succubi, ghosts, goblins, fairies, sprites, trolls, and leprechauns. But what about witches, clairvoyants, telepathists, psychokinetics, extraterrestrials, psychic surgeons, vampires, werewolves, spirit channelers, fire-walkers, astrologers, the Loch Ness Monster, and Sasquatch? Would those too be called “supernatural?” Would anthropologists call beliefs in such beings and forces “religious?”

At least one recent anthropological text on religion recognizes this problem. In Magic, Witchcraft, and Religion, Lehmann and Myers (1989:3) argue that it is time for anthropologists to abandon the restrictive connotations of the term “supernatural”:
Expanding the definition of religion beyond spiritual and superhuman beings to include the extraordinary, the mysterious, and unexplainable allows a more comprehensive view of religious behaviors among the peoples of the world and permits the anthropological investigation of phenomena such as magic, sorcery, curses, and other practices that hold meaning for both pre-literate and literate societies.

Lehmann and Myers fail, however, to suggest an alternative term to replace the word “supernatural.” Fortunately, there is an obvious alternative available, one that is winning increasing acceptance both inside and outside anthropology, namely the word “paranormal.” (Note 5) The term refers ostensibly to phenomena that lie beyond the normal range of human perception and experience, although in practice it does not denote simply anomalous phenomena. Instead, it describes putative phenomena whose existence would in fact violate the rules of reality revealed by science and common sense. From an etic point of view, therefore, the notion of the “paranormal,” like the notion of the “supernatural,” is propositionally meaningless. Unlike the term “supernatural,” however, the term “paranormal” is not restrictive in its connotations, and that is its principal advantage. “Paranormal” is a useful umbrella label for the complete set of emic beliefs concerning the unreal real. The term embraces the entire range of transcendental beliefs, covering at once everything that would otherwise be called magical, religious, supernatural, metaphysical, occult, or parapsychological.

Therein lies the real common denominator in all paranormal beliefs: not that they are all “supernatural,” but that they are all irrational, by which I mean that every single paranormal belief in the world, whether labeled “religious,” “magical,” “spiritual,” “metaphysical,” “occult,” or “parapsychological,” is either nonfalsifiable or has been falsified. (The vast majority of all paranormal propositions–such as the Judeo-Christian proposition that “God” exists–are nonfalsifiable and hence propositionally meaningless; a smaller percentage–such as the Judeo-Christian proposition that a universal flood covered the earth sometime within the past 10,000 years–are falsifiable but have invariably been falsified by objective evidence.)

The simple fact of the matter is that every religious belief in every culture in the world is demonstrably untrue. Regardless of whether the religious practices are organized communally or ecclesiastically, regardless of whether they are mediated by shamans or priests, regardless of whether the intent is manipulative or supplicative, the one constant that runs through all religious practices all over the world is that all such practices are founded upon nonfalsifiable or falsified beliefs concerning the paranormal.

Irrationality is thus the defining element in religion. Religion and science are not at odds because religion wants to be “supernatural” while science wants to be “empirical”; instead, religion and science are at odds because religion wants to be irrational (relying ultimately upon beliefs that are either nonfalsifiable or falsified), while science wants to be rational (relying exclusively upon beliefs that are both falsifiable and unfalsified).

I am aware that many anthropologists are likely to react negatively to the pejorative connotations of the word “irrational.” The term, however, is simply descriptive and therefore entirely appropriate. It is unarguably irrational to maintain a belief in an allegedly propositional claim when that claim is either propositionally meaningless or has been decisively repudiated by objective evidence. Whether it is laudable or forgivable to do so is another question: it is not, of course, a factual question, but neither is it a question that scientists can entirely avoid.


The Nature of Science


In the most fundamental sense, science can be defined as a systematic and self-correcting method for acquiring reliable factual knowledge. “It is the desire for explanations which are at once systematic and controllable by factual evidence that generates science,” the philosopher Ernest Nagel (1961:4) observes, “and it is the organization and classification of knowledge on the basis of explanatory principles that is the distinctive goal of the sciences.” The rules of the scientific method (which include testability, observer-independence, replicability, and logical consistency) do not restrict science to the pursuit of empirical knowledge, however. Instead, they restrict science to the pursuit of propositional knowledge.

A proposition is an assertion of fact: a statement that is either true or false depending on the evidence. The scientific method is simply a set of procedures for evaluating the evidence offered in support of any proposition. No proposition is ever rejected by science on an a priori basis (unless the proposition is self-contradictory); science is predicated upon the assumption that any factual assertion could be true. Nor does science demand that the evidence offered in support of any claim be empirical; science demands only that the evidence be objective.

As a set of guidelines for the acquisition of knowledge, scientific objectivity implies two things: first, that the truth or falsity of a given factual claim is independent of the claimant’s hopes, fears, desires, or goals; and second, that no two conflicting accounts of a given phenomenon can both be correct (Cunningham 1973:4). Critics of the scientific method commonly protest that objectivity in the first sense is unrealistic, because no individual scientist can ever be completely unbiased, and that objectivity in the second sense is unrealizable, because absolute certainty is unattainable. Both of those subordinate premises are correct (it is true that no individual can ever be completely unbiased, and it is true that absolute certainty about evidential questions can never be achieved), but neither point is relevant to the claim that science is objective, as Charles Frankel (1955:138-139) explains:

There are two principal reasons why scientific ideas are objective, and neither has anything to do with the personal merits or social status of individual scientists. The first is that these ideas are the result of a cooperative process in which the individual has to submit his results to the test of public observations which others can perform. The second is that these ideas are the result of a process in which no ideas or assumptions are regarded as sacrosanct, and all inherited ideas are subject to the continuing correction of experience.

To be objective, then, in the scientific sense of the term, a statement must fulfill two criteria: first, it must be publicly verifiable, and second, it must be testable. In the words of the philosopher Carl Hempel (1965:534), an “objective” statement is one that is “capable of test by reference to publicly ascertainable evidence.” The scientific claim to objectivity is thus not a dogmatically positivistic claim to absolute certainty (See Note 2). Scientific objectivity does not deny that perception is a process of active interpretation rather than passive reception, nor does it deny that the acquisition of reliable knowledge is a highly problematic undertaking. Instead, scientific objectivity merely denies that all claims to knowledge are equally valid, and it provides a set of standards by which to evaluate competing claims. To assert that science is objective, as Siegel (1987:161) does, is to assert simply that all claims to knowledge should be “assessed in accordance with presently accepted criteria (e.g. of evidential warrant, explanatory power, perceptual reliability, etc.), which can in turn be critically assessed.”

As a technique for acquiring reliable propositional knowledge, science necessarily demands objective evidence, which is to say evidence that is both publicly verifiable and testable. Evidence that was not publicly verifiable would not be reliable, and evidence that was not testable would not be propositional (since a proposition is, by definition, a statement that can be tested against the evidence). Objectivity, however, is all that science demands. As long as a propositional claim is both publicly verifiable and testable, it is scientific. There is nothing in the essential defining features of science which says that propositional claims must necessarily be empirical.

In practice, it is true, science has so far been restricted exclusively to empirical data and empirical data-collection procedures, but that restriction is neither prejudicial nor arbitrary. Instead, it is a result of the fact that the empirical approach is the only approach to propositional knowledge that has ever passed the test of public verifiability. If publicly verifiable evidence of non-empirical reality were presented, the recognition of such reality would be incorporated into the scientific world view. If non-empirical data-collection procedures (e.g., faith, revelation, intuition) were publicly verifiable, they would be incorporated into the scientific method (Lett 1987:18-22). It is not the fact that science is empirical that makes science objective; instead, it is the fact that science is objective that makes science empirical.

Thus it is a mistake (although a common one; see Note 3) to define science in terms of empiricism, as Bernard (1988:12) does when he says that the scientific method is based on the assumption that “material explanations for observable phenomena are always sufficient, and that metaphysical explanations are never needed.” Science, however, does not assume that material explanations are always sufficient; instead, science concludes, as an inductive generalization, that material explanations are always sufficient. (Further, under the epistemological principles of science, that conclusion would be subject to revision in the light of new evidence.) Bernard (1988:11-12) offers a better definition of science when he quotes Lastrucci (1963:6) to the effect that science is “an objective, logical, and systematic method of analysis of phenomena, devised to permit the accumulation of reliable knowledge.” The term “empirical” is appropriately missing from that definition.

“Scientific knowledge,” then, means “objective knowledge,” which means propositional knowledge that is both publicly verifiable and testable. In order to ensure the public verifiability of propositional claims, science relies upon the provisionally necessary rule of empiricism (while recognizing that empiricism is only a convenient means to an end–namely intersubjectivity–and leaving open the possibility that some as-yet-unidentified non-empirical approach might satisfy the criterion of public verifiability). In order to ensure the testability of propositional claims, science relies upon the logically necessary rule of falsifiability, Karl Popper’s (1959) indisputable sine qua non of the scientific approach to knowledge.

According to the rule of falsifiability, a claim or statement is to be considered propositional if and only if it is possible to conceive of evidence that would prove the claim false. The rule of falsifiability is simply a means of distinguishing propositional claims from non-propositional ones. If the claim were to fail the test of falsifiability (if it were not possible, in other words, to even imagine falsifying evidence) then all possible evidence would be irrelevant, and the claim would be propositionally meaningless (it might, of course, be emotively meaningful, but it would be entirely devoid of any factual content whatsoever). If the claim were to pass the test of falsifiability, on the other hand (if it were possible to conceive of data that would disprove the assertion) then the evidence would be relevant, the claim would be propositionally meaningful, and the truth or falsity of the proposition could be tested against the evidence (in which case, of course, science would demand that the evidence be publicly verifiable).

The rule of falsifiability is the single most important rule of science. It is the one standard that guarantees that all genuine scientific statements are propositional (rather than emotive or tautological or nonsensical), and it is the salient feature that sharply distinguishes science from other ways of knowing. It is, further, the one standard by which all scientific explanations are judged, as Cohen (1970:32) correctly observes: “Whether or not the theory is scientific depends ultimately on whether the ideas involved in the theory can be submitted to a test of their validity.”

Thus science is a technique for acquiring propositional knowledge that relies exclusively upon the publicly verifiable investigation of falsifiable claims, whatever those claims might be. In the insightful words of Richard Watson (1991:276), “science in the most general sense is an attempt to learn as much as possible about the world in as many ways as possible with the sole restriction that what is claimed as knowledge be both testable and attainable by everyone” (emphasis added). There is then no reason not to apply science to nonempirical claims. If the claim were a factual one, then it would be falsifiable, whatever the nature of its supporting evidence, and it would be the claimant’s responsibility to identify reliable (i.e., publicly verifiable) evidence that would falsify the claim. As Lakatos (1970:92) insists, “intellectual honesty consists…in specifying precisely the conditions under which one is willing to give up one’s position.”

Those who see empiricism as the defining element of science fail to recognize that the scientific method is a combination of both deduction and induction. Science, in other words, relies upon both logic and experience, both reason and observation, in the pursuit of knowledge. It would in fact be prejudicial to call science empirical; science demands only that the evidence collected through observation and experience be objective (i.e., publicly verifiable and testable), and it is at least logically possible that nonempirical evidence could be objective.

In sum, the essence of science lies in the exclusive commitment to rational beliefs, by which I mean beliefs that are both falsifiable and unfalsified. If a belief satisfies both criteria (if it is, in the first place, propositional, and it has, in the second place, survived unrelenting attempts at falsification in the light of publicly verifiable evidence), then it deserves to be called scientific knowledge. Scientific knowledge is thus provisional knowledge (it is always logically possible that evidence could be uncovered tomorrow that would falsify a previously unfalsified claim), but the scientific approach to propositional knowledge is nevertheless the only rational approach. (Note 4) It would obviously be irrational to give factual credence to a purportedly propositional claim that was either nonfalsifiable (i.e., propositionally meaningless) or falsified (i.e., evidentially wrong). That brings us to religion.