Human Rights and Sentimentality

In a report from Bosnia some fifteen years ago1, David Rieff said “To the Serbs, the Muslims are no longer human… Muslim prisoners, lying on the ground in rows, awaiting interrogation, were driven over by a Serb guard in a small delivery van”. This theme of dehumanization recurs when Rieff says:

A Muslim man in Bosanski Petrovac… [was] forced to bite off the penis of a fellow-Muslim… If you say that a man is not human, but the man looks like you and the only way to identify this devil is to make him drop his trousers – Muslim men are circumcised and Serb men are not – it is probably only a short step, psychologically, to cutting off his prick… There has never been a campaign of ethnic cleansing from which sexual sadism has gone missing.

The moral to be drawn from Rieff’s stories is that Serbian murderers and rapists do not think of themselves as violating human rights. For they are not doing these things to fellow human beings, but to Muslims. They are not being inhuman, but rather are discriminating between the true humans and the pseudohumans. They are making the same sort of distinction as the Crusaders made between humans and infidel dogs, and the Black Muslims make between humans and blue-eyed devils. The founder of my university was able both to own slaves and to think it self-evident that all men were endowed by their creator with certain inalienable rights. He had convinced himself that the consciousness of Blacks, like that of animals, “participate[s] more of sensation than reflection”2. Like the Serbs, Mr. Jefferson did not think of himself as violating human rights.

The Serbs take themselves to be acting in the interests of true humanity by purifying the world of pseudohumanity. In this respect, their self-image resembles that of moral philosophers who hope to cleanse the world of prejudice and superstition. This cleansing will permit us to rise above our animality by becoming, for the first time, wholly rational and thus wholly human. The Serbs, the moralists, Jefferson, and the Black Muslims all use the term “men” to mean “people like us”. They think the line between humans and animals is not simply the line between featherless bipeds and all others. They think the line divides some featherless bipeds from others: There are animals walking about in humanoid form. We and those like us are paradigm cases of humanity, but those too different from us in behavior or custom are, at best, borderline cases. As Clifford Geertz puts it, “Men’s most importunate claims to humanity are cast in the accents of group pride”3.

We in the safe, rich democracies feel about the Serbian torturers and rapists as they feel about their Muslim victims: They are more like animals than like us. But we are not doing anything to help the Muslim women who are being gang raped or the Muslim men who are being castrated, any more than we did anything in the thirties when the Nazis were amusing themselves by torturing Jews. Here in the safe countries we find ourselves saying things like “That’s how things have always been in the Balkans”, suggesting that, unlike us, those people are used to being raped and castrated. The contempt we always feel for losers – Jews in the thirties, Muslims now – combines with our disgust at the winners’ behavior to produce the semiconscious attitude: “a plague on both your houses”. We think of the Serbs or the Nazis as animals, because ravenous beasts of prey are animals. We think of the Muslims or the Jews being herded into concentration camps as animals, because cattle are animals. Neither sort of animal is very much like us, and there seems no point in human beings getting involved in quarrels between animals.

The human-animal distinction, however, is only one of the three main ways in which we paradigmatic humans distinguish ourselves from borderline cases. A second is by invoking the distinction between adults and children. Ignorant and superstitious people, we say, are like children; they will attain true humanity only if raised up by proper education. If they seem incapable of absorbing such education, that shows they are not really the same kind of being as we educable people are. Blacks, the whites in the United States and in South Africa used to say, are like children. That is why it is appropriate to address Black males, of whatever age, as “boy”. Women, men used to say, are permanently childlike; it is therefore appropriate to spend no money on their education, and to refuse them access to power.

When it comes to women, however, there are simpler ways of excluding them from true humanity: for example, using “man” as a synonym of “human being”. As feminists have pointed out, such usages reinforce the average male’s thankfulness that he was not born a woman, as well as his fear of the ultimate degradation: feminization. The extent of the latter fear is evidenced by the particular sort of sexual sadism Rieff describes. His point that such sadism is never absent from attempts to purify the species or cleanse the territory confirms Catharine MacKinnon’s claim that, for most men, being a woman does not count as a way of being human. Being a nonmale is the third main way of being nonhuman. There are several ways of being nonmale. One is to be born without a penis; another is to have one’s penis cut or bitten off; a third is to have been penetrated by a penis. Many men who have been raped are convinced that their manhood, and thus their humanity, has been taken away. Like racists who discover they have Jewish or Black ancestry, they may commit suicide out of sheer shame, shame at no longer being the kind of featherless biped that counts as human.

Philosophers have tried to clear this mess up by spelling out what all and only the featherless bipeds have in common, thereby explaining what is essential to being human. Plato argued that there is a big difference between us and the animals, a difference worthy of respect and cultivation. He thought that human beings have a special added ingredient which puts them in a different ontological category than the brutes. Respect for this ingredient provides a reason for people to be nice to each other. Anti-Platonists like Nietzsche reply that attempts to get people to stop murdering, raping, and castrating each other are, in the long run, doomed to fail – for the real truth about human nature is that we are a uniquely nasty and dangerous kind of animal. When contemporary admirers of Plato claim that all featherless bipeds – even the stupid and childlike, even the women, even the sodomized – have the same inalienable rights, admirers of Nietzsche reply that the very idea of “inalienable human rights” is, like the idea of a special added ingredient, a laughably feeble attempt by the weaker members of the species to fend off the stronger.

As I see it, one important intellectual advance made in our century is the steady decline in interest in the quarrel between Plato and Nietzsche. There is a growing willingness to neglect the question “What is our nature?” and to substitute the question “What can we make of ourselves?”. We are much less inclined than our ancestors were to take “theories of human nature” seriously, much less inclined to take ontology or history as a guide to life. We have come to see that the only lesson of either history or anthropology is our extraordinary malleability. We are coming to think of ourselves as the flexible, protean, self-shaping animal rather than as the rational animal or the cruel animal.

One of the shapes we have recently assumed is that of a human rights culture. I borrow the term “human rights culture” from the Argentinian jurist and philosopher Eduardo Rabossi. In an article called “Human Rights Naturalized”, Rabossi argues that philosophers should think of this culture as a new, welcome fact of the post-Holocaust world. They should stop trying to get behind or beneath this fact, stop trying to detect and defend its so-called “philosophical presuppositions”. On Rabossi’s view, philosophers like Alan Gewirth are wrong to argue that human rights cannot depend on historical facts. “My basic point”, Rabossi says, is that “the world has changed, that the human rights phenomenon renders human rights foundationalism outmoded and irrelevant”4.

Rabossi’s claim that human rights foundationalism is outmoded seems to me both true and important; it will be my principal topic in this lecture. I shall be enlarging on, and defending, Rabossi’s claim that the question whether human beings really have the rights enumerated in the Helsinki Declaration is not worth raising. In particular, I shall be defending the claim that nothing relevant to moral choice separates human beings from animals except historically contingent facts of the world, cultural facts.

This claim is sometimes called “cultural relativism” by those who indignantly reject it. One reason they reject it is that such relativism seems to them incompatible with the fact that our human rights culture, the culture with which we in this democracy identify ourselves, is morally superior to other cultures. I quite agree that ours is morally superior, but I do not think this superiority counts in favor of the existence of a universal human nature. It would only do so if we assumed that a moral claim is ill-founded if not backed up by knowledge of a distinctively human attribute. But it is not clear why “respect for human dignity” – our sense that the differences between Serb and Muslim, Christian and infidel, gay and straight, male and female should not matter – must presuppose the existence of any such attribute.

Traditionally, the name of the shared human attribute which supposedly “grounds” morality is “rationality”. Cultural relativism is associated with irrationalism because it denies the existence of morally relevant transcultural facts. To agree with Rabossi one must, indeed, be irrationalist in that sense. But one need not be irrationalist in the sense of ceasing to make one’s web of belief as coherent, and as perspicuously structured, as possible. Philosophers like myself, who think of rationality as simply the attempt at such coherence, agree with Rabossi that foundationalist projects are outmoded. We see our task as a matter of making our own culture – the human rights culture – more self-conscious and more powerful, rather than of demonstrating its superiority to other cultures by an appeal to something transcultural.

We think that the most philosophy can hope to do is summarize our culturally influenced intuitions about the right thing to do in various situations. The summary is effected by formulating a generalization from which these intuitions can be deduced, with the help of noncontroversial lemmas. That generalization is not supposed to ground our intuitions, but rather to summarize them. John Rawls’s “Difference Principle” and the U.S. Supreme Court’s construction, in recent decades, of a constitutional “right to privacy” are examples of this kind of summary. We see the formulation of such summarizing generalizations as increasing the predictability, and thus the power and efficiency, of our institutions, thereby heightening the sense of shared moral identity which brings us together in a moral community.

Foundationalist philosophers, such as Plato, Aquinas, and Kant, have hoped to provide independent support for such summarizing generalizations. They would like to infer these generalizations from further premises, premises capable of being known to be true independently of the truth of the moral intuitions which have been summarized. Such premises are supposed to justify our intuitions, by providing premises from which the content of those intuitions can be deduced. I shall lump all such premises together under the label “claims to knowledge about the nature of human beings”. In this broad sense, claims to know that our moral intuitions are recollections of the Form of the Good, or that we are the disobedient children of a loving God, or that human beings differ from other kinds of animals by having dignity rather than mere value, are all claims about human nature. So are such counterclaims as that human beings are merely vehicles for selfish genes, or merely eruptions of the will to power.

To claim such knowledge is to claim to know something which, though not itself a moral intuition, can correct moral intuitions. It is essential to this idea of moral knowledge that a whole community might come to know that most of their most salient intuitions about the right thing to do were wrong. But now suppose we ask: Is there this sort of knowledge? What kind of question is that? On the traditional view, it is a philosophical question, belonging to a branch of epistemology known as “metaethics”. But on the pragmatist view which I favor, it is a question of efficiency, of how best to grab hold of history – how best to bring about the utopia sketched by the Enlightenment. If the activities of those who attempt to achieve this sort of knowledge seem of little use in actualizing this utopia, that is a reason to think there is no such knowledge. If it seems that most of the work of changing moral intuitions is being done by manipulating our feelings rather than increasing our knowledge, that will be a reason to think that there is no knowledge of the sort which philosophers like Plato, Aquinas, and Kant hoped to acquire.

This pragmatist argument against the Platonist has the same form as an argument for cutting off payment to the priests who are performing purportedly war-winning sacrifices – an argument which says that all the real work of winning the war seems to be getting done by the generals and admirals, not to mention the foot soldiers. The argument does not say: Since there seem to be no gods, there is probably no need to support the priests. It says instead: Since there is apparently no need to support the priests, there probably are no gods. We pragmatists argue from the fact that the emergence of the human rights culture seems to owe nothing to increased moral knowledge, and everything to hearing sad and sentimental stories, to the conclusion that there is probably no knowledge of the sort Plato envisaged. We go on to argue: Since no useful work seems to be done by insisting on a purportedly ahistorical human nature, there probably is no such nature, or at least nothing in that nature that is relevant to our moral choices.

In short, my doubts about the effectiveness of appeals to moral knowledge are doubts about causal efficacy, not about epistemic status. My doubts have nothing to do with any of the theoretical questions discussed under the heading of “metaethics”, questions about the relation between facts and values, or between reason and passion, or between the cognitive and the noncognitive, or between descriptive statements and action-guiding statements. Nor do they have anything to do with questions about realism and antirealism. The difference between the moral realist and the moral antirealist seems to pragmatists to be a difference which makes no practical difference. Further, such metaethical questions presuppose the Platonic distinction between inquiry which aims at efficient problem-solving and inquiry which aims at a goal called “truth for its own sake”. That distinction collapses if one follows Dewey in thinking of all inquiry – in physics as well as in ethics – as practical problem-solving, or if one follows Peirce in seeing every belief as action-guiding5.

Even after the priests have been pensioned off, however, the memories of certain priests may still be cherished by the community – especially the memories of their prophecies. We remain profoundly grateful to philosophers like Plato and Kant, not because they discovered truths but because they prophesied cosmopolitan utopias – utopias most of whose details they may have got wrong, but utopias we might never have struggled to reach had we not heard their prophecies. As long as our ability to know, and in particular to discuss the question “What is man?” seemed the most important thing about us human beings, people like Plato and Kant accompanied utopian prophecies with claims to know something deep and important – something about the parts of the soul, or the transcendental status of the common moral consciousness. But this ability, and those questions, have, in the course of the last two hundred years, come to seem much less important. Rabossi summarizes this cultural sea change in his claim that human rights foundationalism is outmoded. In the remainder of this lecture, I shall take up the questions: Why has knowledge become much less important to our self-image than it was two hundred years ago? Why does the attempt to found culture on nature, and moral obligation on knowledge of transcultural universals, seem so much less important to us than it seemed in the Enlightenment? Why is there so little resonance, and so little point, in asking whether human beings in fact have the rights listed in the Helsinki Declaration? Why, in short, has moral philosophy become such an inconspicuous part of our culture?

A simple answer is that between Kant’s time and ours Darwin argued most of the intellectuals out of the view that human beings contain a special added ingredient. He convinced most of us that we were exceptionally talented animals, animals clever enough to take charge of our own future evolution. I think this answer is right as far as it goes, but it leads to a further question: Why did Darwin succeed, relatively speaking, so very easily? Why did he not cause the creative philosophical ferment caused by Galileo and Newton?

The revival by the New Science of the seventeenth century of a Democritean-Lucretian corpuscularian picture of nature scared Kant into inventing transcendental philosophy, inventing a brand-new kind of knowledge, which could demote the corpuscularian world picture to the status of “appearance”. Kant’s example encouraged the idea that the philosopher, as an expert on the nature and limits of knowledge, can serve as supreme cultural arbiter6. By the time of Darwin, however, this idea was already beginning to seem quaint. The historicism which dominated the intellectual world of the early nineteenth century had created an antiessentialist mood. So when Darwin came along, he fitted into the evolutionary niche which Herder and Hegel had begun to colonize. Intellectuals who populate this niche look to the future rather than to eternity. They prefer new ideas about how change can be effected to stable criteria for determining the desirability of change. They are the ones who think both Plato and Nietzsche outmoded.

The best explanation of both Darwin’s relatively easy triumph, and our own increasing willingness to substitute hope for knowledge, is that the nineteenth and twentieth centuries saw, among the Europeans and Americans, an extraordinary increase in wealth, literacy, and leisure. This increase made possible an unprecedented acceleration in the rate of moral progress. Such events as the French Revolution and the ending of the trans-Atlantic slave trade prompted nineteenth-century intellectuals in the rich democracies to say: It is enough for us to know that we live in an age in which human beings can make things much better for ourselves7. We do not need to dig behind this historical fact to nonhistorical facts about what we really are.

In the two centuries since the French Revolution, we have learned that human beings are far more malleable than Plato or Kant had dreamed. The more we are impressed by this malleability, the less interested we become in questions about our ahistorical nature. The more we see a chance to recreate ourselves, the more we read Darwin not as offering one more theory about what we really are but as providing reasons why we need not ask what we really are. Nowadays, to say that we are clever animals is not to say something philosophical and pessimistic but something political and hopeful, namely: If we can work together, we can make ourselves into whatever we are clever and courageous enough to imagine ourselves becoming. This sets aside Kant’s question “What is Man?” and substitutes the question “What sort of world can we prepare for our great-grandchildren?”.

The question “What is Man?” in the sense of “What is the deep ahistorical nature of human beings?” owed its popularity to the standard answer to that question: We are the rational animal, the one which can know as well as merely feel. The residual popularity of this answer accounts for the residual popularity of Kant’s astonishing claim that sentimentality has nothing to do with morality, that there is something distinctively and transculturally human called “the sense of moral obligation” which has nothing to do with love, friendship, trust, or social solidarity. As long as we believe that, people like Rabossi are going to have a tough time convincing us that human rights foundationalism is an outmoded project.

To overcome this idea of a sui generis sense of moral obligation, it would help to stop answering the question “What makes us different from the other animals?” by saying “We can know, and they can merely feel”. We should substitute “We can feel for each other to a much greater extent than they can”. This substitution would let us disentangle Christ’s suggestion that love matters more than knowledge from the neo-Platonic suggestion that knowledge of the truth will make us free. For as long as we think that there is an ahistorical power which makes for righteousness – a power called truth, or rationality – we shall not be able to put foundationalism behind us.

The best, and probably the only, argument for putting foundationalism behind us is the one I have already suggested: It would be more efficient to do so, because it would let us concentrate our energies on manipulating sentiments, on sentimental education. That sort of education sufficiently acquaints people of different kinds with one another so that they are less tempted to think of those different from themselves as only quasi-human. The goal of this manipulation of sentiment is to expand the reference of the terms “our kind of people” and “people like us”.

All I can do to supplement this argument from increased efficiency is to offer a suggestion about how Plato managed to convince us that knowledge of universal truths mattered as much as he thought it did. Plato thought that the philosopher’s task was to answer questions like “Why should I be moral? Why is it rational to be moral? Why is it in my interest to be moral? Why is it in the interest of human beings as such to be moral?”. He thought this because he believed the best way to deal with people like Thrasymachus and Callicles was to demonstrate to them that they had an interest of which they were unaware, an interest in being rational, in acquiring self-knowledge. Plato thereby saddled us with a distinction between the true and the false self. That distinction was, by the time of Kant, transmuted into a distinction between categorical, rigid, moral obligation and flexible, empirically determinable, self-interest. Contemporary moral philosophy is still lumbered with this opposition between self-interest and morality, an opposition which makes it hard to realize that my pride in being a part of the human rights culture is no more external to my self than my desire for financial success.

It would have been better if Plato had decided, as Aristotle was to decide, that there was nothing much to be done with people like Thrasymachus and Callicles, and that the problem was how to avoid having children who would be like Thrasymachus and Callicles. By insisting that he could reeducate people who had matured without acquiring appropriate moral sentiments by invoking a higher power than sentiment, the power of reason, Plato got moral philosophy off on the wrong foot. He led moral philosophers to concentrate on the rather rare figure of the psychopath, the person who has no concern for any human being other than himself. Moral philosophy has systematically neglected the much more common case: the person whose treatment of a rather narrow range of featherless bipeds is morally impeccable, but who remains indifferent to the suffering of those outside this range, the ones he or she thinks of as pseudohumans8.

Plato set things up so that moral philosophers think they have failed unless they convince the rational egotist that he should not be an egotist – convince him by telling him about his true, unfortunately neglected, self. But the rational egotist is not the problem. The problem is the gallant and honorable Serb who sees Muslims as circumcised dogs. It is the brave soldier and good comrade who loves and is loved by his mates, but who thinks of women as dangerous, malevolent whores and bitches.

Plato thought that the way to get people to be nicer to each other was to point out what they all had in common – rationality. But it does little good to point out, to the people I have just described, that many Muslims and women are good at mathematics or engineering or jurisprudence. Resentful young Nazi toughs were quite aware that many Jews were clever and learned, but this only added to the pleasure they took in beating them up. Nor does it do much good to get such people to read Kant, and agree that one should not treat rational agents simply as means. For everything turns on who counts as a fellow human being, as a rational agent in the only relevant sense – the sense in which rational agency is synonymous with membership in our moral community.

For most white people, until very recently, most Black people did not so count. For most Christians, up until the seventeenth century or so, most heathen did not so count. For the Nazis, Jews did not so count. For most males in countries in which the average annual income is under four thousand dollars, most females still do not so count. Whenever tribal and national rivalries become important, members of rival tribes and nations will not so count. Kant’s account of the respect due to rational agents tells you that you should extend the respect you feel for people like yourself to all featherless bipeds. This is an excellent suggestion, a good formula for secularizing the Christian doctrine of the brotherhood of man. But it has never been backed up by an argument based on neutral premises, and it never will be. Outside the circle of post-Enlightenment European culture, the circle of relatively safe and secure people who have been manipulating each others’ sentiments for two hundred years, most people are simply unable to understand why membership in a biological species is supposed to suffice for membership in a moral community. This is not because they are insufficiently rational. It is, typically, because they live in a world in which it would be just too risky – indeed, would often be insanely dangerous – to let one’s sense of moral community stretch beyond one’s family, clan, or tribe.

To get whites to be nicer to Blacks, males to females, Serbs to Muslims, or straights to gays, to help our species link up into what Rabossi calls a “planetary community” dominated by a culture of human rights, it is of no use whatever to say, with Kant: Notice that what you have in common, your humanity, is more important than these trivial differences. For the people we are trying to convince will rejoin that they notice nothing of the sort. Such people are morally offended by the suggestion that they should treat someone who is not kin as if he were a brother, or a nigger as if he were white, or a queer as if he were normal, or an infidel as if she were a believer. They are offended by the suggestion that they treat people whom they do not think of as human as if they were human. When utilitarians tell them that all pleasures and pains felt by members of our biological species are equally relevant to moral deliberation, or when Kantians tell them that the ability to engage in such deliberation is sufficient for membership in the moral community, they are incredulous. They rejoin that these philosophers seem oblivious to blatantly obvious moral distinctions, distinctions any decent person will draw.

This rejoinder is not just a rhetorical device, nor is it in any way irrational. It is heartfelt. The identity of these people, the people whom we should like to convince to join our Eurocentric human rights culture, is bound up with their sense of who they are not. Most people – especially people relatively untouched by the European Enlightenment – simply do not think of themselves as, first and foremost, a human being. Instead, they think of themselves as being a certain good sort of human being – a sort defined by explicit opposition to a particularly bad sort. It is crucial for their sense of who they are that they are not an infidel, not a queer, not a woman, not an untouchable. Just insofar as they are impoverished, and as their lives are perpetually at risk, they have little else than pride in not being what they are not to sustain their self-respect. Starting with the days when the term “human being” was synonymous with “member of our tribe”, we have always thought of human beings in terms of paradigm members of the species. We have contrasted us, the real humans, with rudimentary, or perverted, or deformed examples of humanity.

We Eurocentric intellectuals like to suggest that we, the paradigm humans, have overcome this primitive parochialism by using that paradigmatic human faculty, reason. So we say that failure to concur with us is due to “prejudice”. Our use of these terms in this way may make us nod in agreement when Colin McGinn tells us, in the introduction to his recent book9, that learning to tell right from wrong is not as hard as learning French. The only obstacles to agreeing with his moral views, McGinn explains, are “prejudice, vested interest and laziness”.

One can see what McGinn means: If, like many of us, you teach students who have been brought up in the shadow of the Holocaust, brought up believing that prejudice against racial or religious groups is a terrible thing, it is not very hard to convert them to standard liberal views about abortion, gay rights, and the like. You may even get them to stop eating animals. All you have to do is convince them that all the arguments on the other side appeal to “morally irrelevant” considerations. You do this by manipulating their sentiments in such a way that they imagine themselves in the shoes of the despised and oppressed. Such students are already so nice that they are eager to define their identity in nonexclusionary terms. The only people they have trouble being nice to are the ones they consider irrational – the religious fundamentalist, the smirking rapist, or the swaggering skinhead.

Producing generations of nice, tolerant, well-off, secure, other-respecting students of this sort in all parts of the world is just what is needed – indeed all that is needed – to achieve an Enlightenment utopia. The more youngsters like this we can raise, the stronger and more global our human rights culture will become. But it is not a good idea to encourage these students to label “irrational” the intolerant people they have trouble tolerating. For that Platonic-Kantian epithet suggests that, with only a little more effort, the good and rational part of these other people’s souls could have triumphed over the bad and irrational part. It suggests that we good people know something these bad people do not know, and that it is probably their own silly fault that they do not know it. All they have to do, after all, is to think a little harder, be a little more self-conscious, a little more rational.

But the bad people’s beliefs are not more or less “irrational” than the belief that race, religion, gender, and sexual preference are all morally irrelevant – that these are all trumped by membership in the biological species. As used by moral philosophers like McGinn, the term “irrational behavior” means no more than “behavior of which we disapprove so strongly that our spade is turned when asked why we disapprove of it”. It would be better to teach our students that these bad people are no less rational, no less clearheaded, no more prejudiced, than we good people who respect otherness. The bad people’s problem is that they were not so lucky in the circumstances of their upbringing as we were. Instead of treating as irrational all those people out there who are trying to find and kill Salman Rushdie, we should treat them as deprived.

Foundationalists think of these people as deprived of truth, of moral knowledge. But it would be better – more specific, more suggestive of possible remedies – to think of them as deprived of two more concrete things: security and sympathy. By “security” I mean conditions of life sufficiently risk-free as to make one’s difference from others inessential to one’s self-respect, one’s sense of worth. These conditions have been enjoyed by Americans and Europeans – the people who dreamed up the human rights culture – much more than they have been enjoyed by anyone else. By “sympathy” I mean the sort of reaction that the Athenians had more of after seeing Aeschylus’ The Persians than before, the sort that white Americans had more of after reading Uncle Tom’s Cabin than before, the sort that we have more of after watching TV programs about the genocide in Bosnia. Security and sympathy go together, for the same reasons that peace and economic productivity go together. The tougher things are, the more you have to be afraid of, the more dangerous your situation, the less you can afford the time or effort to think about what things might be like for people with whom you do not immediately identify. Sentimental education only works on people who can relax long enough to listen.

If Rabossi and I are right in thinking human rights foundationalism outmoded, then Hume is a better advisor than Kant about how we intellectuals can hasten the coming of the Enlightenment utopia for which both men yearned. Among contemporary philosophers, the best advisor seems to me to be Annette Baier. Baier describes Hume as “the woman’s moral philosopher” because Hume held that “corrected (sometimes rule-corrected) sympathy, not law-discerning reason, is the fundamental moral capacity”10. Baier would like us to get rid of both the Platonic idea that we have a true self, and the Kantian idea that it is rational to be moral. In aid of this project, she suggests that we think of “trust” rather than “obligation” as the fundamental moral notion. This substitution would mean thinking of the spread of the human rights culture not as a matter of our becoming more aware of the requirements of the moral law, but rather as what Baier calls “a progress of sentiments”11. This progress consists in an increasing ability to see the similarities between ourselves and people very unlike us as outweighing the differences. It is the result of what I have been calling “sentimental education”. The relevant similarities are not a matter of sharing a deep true self which instantiates true humanity, but are such little, superficial, similarities as cherishing our parents and our children – similarities that do not interestingly distinguish us from many nonhuman animals.

To accept Baier’s suggestions, however, we should have to overcome our sense that sentiment is too weak a force, and that something stronger is required. This idea that reason is “stronger” than sentiment, that only an insistence on the unconditionality of moral obligation has the power to change human beings for the better, is very persistent. I think that this persistence is due mainly to a semiconscious realization that, if we hand our hopes for moral progress over to sentiment, we are in effect handing them over to condescension. For we shall be relying on those who have the power to change things – people like the rich New England abolitionists, or rich bleeding hearts like Robert Owen and Friedrich Engels – rather than on something that has power over them. We shall have to accept the fact that the fate of the women of Bosnia depends on whether TV journalists manage to do for them what Harriet Beecher Stowe did for black slaves, whether these journalists can make us, the audience back in the safe countries, feel that these women are more like us, more like real human beings, than we had realized.

To rely on the suggestions of sentiment rather than on the commands of reason is to think of powerful people gradually ceasing to oppress others, or ceasing to countenance the oppression of others, out of mere niceness, rather than out of obedience to the moral law. But it is revolting to think that our only hope for a decent society consists in softening the self-satisfied hearts of a leisure class. We want moral progress to burst up from below, rather than waiting patiently upon condescension from the top. The residual popularity of Kantian ideas of “unconditional moral obligation” – obligation imposed by deep ahistorical noncontingent forces – seems to me almost entirely due to our abhorrence for the idea that the people on top hold the future in their hands, that everything depends on them, that there is nothing more powerful to which we can appeal against them.

Like everyone else, I too should prefer a bottom-up way of achieving utopia, a quick reversal of fortune which will make the last first. But I do not think this is how utopia will in fact come into being. Nor do I think that our preference for this way lends any support to the idea that the Enlightenment project lies in the depths of every human soul. So why does this preference make us resist the thought that sentimentality may be the best weapon we have? I think Nietzsche gave the right answer to this question: We resist out of resentment. We resent the idea that we shall have to wait for the strong to turn their piggy little eyes to the suffering of the weak. We desperately hope that there is something stronger and more powerful that will hurt the strong if they do not – if not a vengeful God, then a vengeful aroused proletariat, or, at least, a vengeful superego, or, at the very least, the offended majesty of Kant’s tribunal of pure practical reason. The desperate hope for a noncontingent and powerful ally is, according to Nietzsche, the common core of Platonism, of religious insistence on divine omnipotence, and of Kantian moral philosophy12.

Nietzsche was, I think, right on the button when he offered this diagnosis. What Santayana called “supernaturalism”, the confusion of ideals and power, is all that lies behind the Kantian claim that it is not only nicer, but more rational, to include strangers within our moral community than to exclude them from it. If we agree with Nietzsche and Santayana on this point, however, we do not thereby acquire any reason to turn our backs on the Enlightenment project, as Nietzsche did. Nor do we acquire any reason to be sardonically pessimistic about the chances of this project, in the manner of admirers of Nietzsche like Santayana, Ortega, Heidegger, Strauss, and Foucault.

For even though Nietzsche was absolutely right to see Kant’s insistence on unconditionality as an expression of resentment, he was absolutely wrong to treat Christianity, and the age of the democratic revolutions, as signs of human degeneration. He and Kant, alas, shared something with each other which neither shared with Harriet Beecher Stowe – something which Iris Murdoch has called “dryness” and which Jacques Derrida has called “phallogocentrism”. The common element in the thought of both men was a desire for purity. This sort of purity consists in being not only autonomous, in command of oneself, but also in having the kind of self-conscious self-sufficiency which Sartre describes as the perfect synthesis of the in-itself and the for-itself. This synthesis could only be attained, Sartre pointed out, if one could rid oneself of everything sticky, slimy, wet, sentimental, and womanish.

Although this desire for virile purity links Plato to Kant, the desire to bring as many different kinds of people as possible into a cosmopolis links Kant to Stowe. Kant is, in the history of moral thinking, a transitional stage between the hopeless attempt to convict Thrasymachus of irrationality and the hopeful attempt to see every new featherless biped who comes along as one of us. Kant’s mistake was to think that the only way to have a modest, damped-down, nonfanatical version of Christian brotherhood after letting go of the Christian faith was to revive the themes of pre-Christian philosophical thought. He wanted to make knowledge of a core self do what can be done only by the continual refreshment and re-creation of the self, through interaction with selves as unlike itself as possible.

Kant performed the sort of awkward balancing act required in transitional periods. His project mediated between a dying rationalist tradition and a vision of a new, democratic world, the world of what Rabossi calls “the human rights phenomenon”. With the advent of this phenomenon, Kant’s balancing act has become outmoded and irrelevant. We are now in a good position to put aside the last vestiges of the ideas that human beings are distinguished by the capacity to know rather than by the capacities for friendship and intermarriage, distinguished by rigorous rationality rather than by flexible sentimentality. If we do so, we shall have dropped the idea that assured knowledge of a truth about what we have in common is a prerequisite for moral education, as well as the idea of a specifically moral motivation. If we do all these things, we shall see Kant’s Foundations of the Metaphysics of Morals as a placeholder for Uncle Tom’s Cabin – a concession to the expectations of an intellectual epoch in which the quest for quasi-scientific knowledge seemed the only possible response to religious exclusionism13.

Unfortunately, many philosophers, especially in the English-speaking world, are still trying to hold on to the Platonic insistence that the principal duty of human beings is to know. That insistence was the lifeline to which Kant and Hegel thought we had to cling14. Just as German philosophers in the period between Kant and Hegel saw themselves as saving “reason” from Hume, many English-speaking philosophers now see themselves as saving reason from Derrida. But with the wisdom of hindsight, and with Baier’s help, we have learned to read Hume not as a dangerously frivolous iconoclast but as the wettest, most flexible, least phallogocentric thinker of the Enlightenment. Someday, I suspect, our descendants may wish that Derrida’s contemporaries had been able to read him not as a frivolous iconoclast, but rather as a sentimental educator, another of “the women’s moral philosophers”15.

If one follows Baier’s advice one will not see it as the moral educator’s task to answer the rational egotist’s question “Why should I be moral?” but rather to answer the much more frequently posed question “Why should I care about a stranger, a person who is no kin to me, a person whose habits I find disgusting?”. The traditional answer to the latter question is “Because kinship and custom are morally irrelevant, irrelevant to the obligations imposed by the recognition of membership in the same species”. This has never been very convincing, since it begs the question at issue: whether mere species membership is, in fact, a sufficient surrogate for closer kinship. Furthermore, that answer leaves one wide open to Nietzsche’s discomfiting rejoinder: That universalistic notion, Nietzsche will sneer, would only have crossed the mind of a slave – or, perhaps, the mind of an intellectual, a priest whose self-esteem and livelihood both depend on getting the rest of us to accept a sacred, unarguable, unchallengeable paradox.

A better sort of answer is the sort of long, sad, sentimental story which begins “Because this is what it is like to be in her situation – to be far from home, among strangers”, or “Because she might become your daughter-in-law”, or “Because her mother would grieve for her”. Such stories, repeated and varied over the centuries, have induced us, the rich, safe, powerful people, to tolerate, and even to cherish, powerless people – people whose appearance or habits or beliefs at first seemed an insult to our own moral identity, our sense of the limits of permissible human variation.

To people who, like Plato and Kant, believe in a philosophically ascertainable truth about what it is to be a human being, the good work remains incomplete as long as we have not answered the question “Yes, but am I under a moral obligation to her?”. To people like Hume and Baier, it is a mark of intellectual immaturity to raise that question. But we shall go on asking that question as long as we agree with Plato that it is our ability to know that makes us human.

Plato wrote quite a long time ago, in a time when we intellectuals had to pretend to be successors to the priests, had to pretend to know something rather esoteric. Hume did his best to josh us out of that pretense. Baier, who seems to me both the most original and the most useful of contemporary moral philosophers, is still trying to josh us out of it. I think Baier may eventually succeed, for she has the history of the last two hundred years of moral progress on her side. These two centuries are most easily understood not as a period of deepening understanding of the nature of rationality or of morality, but rather as one in which there occurred an astonishingly rapid progress of sentiments, in which it has become much easier for us to be moved to action by sad and sentimental stories.

This progress has brought us to a moment in human history in which it is plausible for Rabossi to say that the human rights phenomenon is a “fact of the world”. This phenomenon may be just a blip. But it may mark the beginning of a time in which gang rape brings forth as strong a response when it happens to women as when it happens to men, or when it happens to foreigners as when it happens to people like us.

1. “Letter from Bosnia”, New Yorker, November 23, 1992, 82-95.

2. “Their griefs are transient. Those numberless afflictions, which render it doubtful whether heaven has given life to us in mercy or in wrath, are less felt, and sooner forgotten with them. In general, their existence appears to participate more of sensation than reflection. To this must be ascribed their disposition to sleep when abstracted from their diversions, and unemployed in labor. An animal whose body is at rest, and who does not reflect must be disposed to sleep of course”. Thomas Jefferson, “Notes on Virginia”, Writings, ed. Lipscomb and Bergh (Washington, D.C.: 1905), 1:194.

3. Geertz, “Thick Description”, in his The Interpretation of Cultures (New York: Basic Books, 1973), 22.

4. Rabossi also says that he does not wish to question “the idea of a rational foundation of morality”. I am not sure why he does not. Rabossi may perhaps mean that in the past – for example, at the time of Kant – this idea still made a kind of sense, but it makes sense no longer. That, at any rate, is my own view. Kant wrote in a period when the only alternative to religion seemed to be something like science. In such a period, inventing a pseudoscience called “the system of transcendental philosophy” – setting the stage for the show-stopping climax in which one pulls moral obligation out of a transcendental hat – might plausibly seem the only way of saving morality from the hedonists on one side and the priests on the other.

5. The present state of metaethical discussion is admirably summarized in Stephen Darwall, Allan Gibbard, and Peter Railton, “Toward Fin de Siècle Ethics: Some Trends”, The Philosophical Review 101 (1992): 115-89. This comprehensive and judicious article takes for granted that there is a problem about “vindicating the objectivity of morality” (127), that there is an interesting question as to whether morals is “cognitive” or “non-cognitive”, that we need to figure out whether we have a “cognitive capacity” to detect moral properties (148), and that these matters can be dealt with ahistorically.

When these authors consider historicist writers such as Alasdair MacIntyre and Bernard Williams, they conclude that they are “[meta]théoriciens malgré eux” (metatheorists in spite of themselves) who share the authors’ own “desire to understand morality, its preconditions and its prospects” (183). They make little effort to come to terms with suggestions that there may be no ahistorical entity called “morality” to be understood. The final paragraph of the paper does suggest that it might be helpful if moral philosophers knew more anthropology, or psychology, or history. But the penultimate paragraph makes clear that, with or without such assists, “contemporary metaethics moves ahead, and positions gain in complexity and sophistication”.

It is instructive, I think, to compare this article with Annette Baier’s “Some Thoughts On How We Moral Philosophers Live Now”, The Monist 67 (1984): 490. Baier suggests that moral philosophers should “at least occasionally, like Socrates, consider why the rest of society should not merely tolerate but subsidize our activity”. She goes on to ask, “Is the large proportional increase of professional philosophers and moral philosophers a good thing, morally speaking? Even if it scarcely amounts to a plague of gadflies, it may amount to a nuisance of owls”. The kind of metaphilosophical and historical self-consciousness and self-doubt displayed by Baier seems to me badly needed, but it is conspicuously absent in Philosophy in Review (the centennial issue of The Philosophical Review in which “Toward Fin de Siècle Ethics” appears). The contributors to this issue are convinced that the increasing sophistication of a philosophical subdiscipline is enough to demonstrate its social utility, and are entirely unimpressed by murmurs of “decadent scholasticism”.

6. Fichte’s Vocation of Man is a useful reminder of the need that was felt, circa 1800, for a cognitive discipline called philosophy that would rescue utopian hope from natural science. It is hard to think of an analogous book written in reaction to Darwin. Those who couldn’t stand what Darwin was saying tended to go straight back past the Enlightenment to traditional religious faith. The unsubtle, unphilosophical opposition, in nineteenth-century Britain and France, between science and faith suggests that most intellectuals had become unable to believe that philosophy might produce some sort of superknowledge, knowledge that might trump the results of physical and biological inquiry.

7. Some contemporary intellectuals, especially in France and Germany, take it as obvious that the Holocaust made it clear that the hopes for human freedom which arose in the nineteenth century are obsolete – that at the end of the twentieth century we postmodernists know that the Enlightenment project is doomed. But even these intellectuals, in their less preachy and sententious moments, do their best to further that project. So they should, for nobody has come up with a better one. It does not diminish the memory of the Holocaust to say that our response to it should not be a claim to have gained a new understanding of human nature or of human history, but rather a willingness to pick ourselves up and try again.

8. Nietzsche was right to remind us that “these same men who, amongst themselves, are so strictly constrained by custom, worship, ritual gratitude and by mutual surveillance and jealousy, who are so resourceful in consideration, tenderness, loyalty, pride and friendship, when once they step outside their circle become little better than uncaged beasts of prey”. The Genealogy of Morals, trans. Golffing (Garden City, N.Y.: Doubleday, 1956), 174.

9. Colin McGinn, Moral Literacy: or, How to Do the Right Thing (London: Duckworth, 1992), 16.

10. Baier, “Hume, the Women’s Moral Theorist?”, in Eva Kittay and Diana Meyers, eds., Women and Moral Theory (Totowa, N.J.: Rowman and Littlefield, 1987), 40.

11. Baier’s book on Hume is entitled A Progress of Sentiments: Reflections on Hume’s Treatise (Cambridge, Mass.: Harvard University Press, 1991). Baier’s view of the inadequacy of most attempts by contemporary moral philosophers to break with Kant comes out most clearly when she characterizes Allan Gibbard (in his book Wise Choices, Apt Feelings) as focusing “on the feelings that a patriarchal religion has bequeathed to us”, and says that “Hume would judge Gibbard to be, as a moral philosopher, basically a divine disguised as a fellow expressivist” (312).

12. Nietzsche’s diagnosis is reinforced by Elizabeth Anscombe’s famous argument that atheists are not entitled to the term “moral obligation”.

13. See Jane Tompkins, Sensational Designs: The Cultural Work of American Fiction, 1790-1860 (New York: Oxford University Press, 1985), for a treatment of the sentimental novel that chimes with the point I am trying to make here. In her chapter on Stowe, Tompkins says that she is asking the reader “to set aside some familiar categories for evaluating fiction – stylistic intricacy, psychological subtlety, epistemological complexity – and to see the sentimental novel not as an artifice of eternity answerable to certain formal criteria and to certain psychological and philosophical concerns, but as a political enterprise, halfway between sermon and social theory, that both codifies and attempts to mold the values of its time” (126).

The contrast that Tompkins draws between authors like Stowe and “male authors such as Thoreau, Whitman and Melville, who are celebrated as models of intellectual daring and honesty” (124), parallels the contrast I tried to draw between public utility and private perfection in my Contingency, Irony and Solidarity (Cambridge, England: Cambridge University Press, 1989). I see Uncle Tom’s Cabin and Moby Dick as equally brilliant achievements, achievements that we should not attempt to rank hierarchically, because they serve such different purposes. Arguing about which is the better novel is like arguing about which is the superior philosophical treatise: Mill’s On Liberty or Kierkegaard’s Philosophical Fragments.

14. Technically, of course, Kant denied knowledge in order to make room for moral faith. But what is transcendental moral philosophy if not the assurance that the noncognitive imperative delivered via the common moral consciousness shows the existence of a “fact of reason” – a fact about what it is to be a human being, a rational agent, a being that is something more than a bundle of spatio-temporal determinations? Kant was never able to explain how transcendental knowledge could be knowledge, but he was never able to give up the attempt to claim such knowledge.

On the German project of defending reason against Hume, see Fred Beiser, The Fate of Reason: German Philosophy From Kant to Fichte (Cambridge, Mass.: Harvard University Press, 1987).

15. I have discussed the relation between Derrida and feminism in “Deconstruction, Ideology and Feminism: A Pragmatist View”, forthcoming in Hypatia, and also in my reply to Alexander Nehamas in Lire Rorty (Paris: éclat, 1992). Richard Bernstein is, I think, basically right in reading Derrida as a moralist, even though Thomas McCarthy is also right in saying that “deconstruction” is of no political use.

Richard Rorty, Belgrade Circle Journal.

Conspiracies & Rationality

A conspiracy theory usually attributes the ultimate cause of an event or chain of events (usually political, social, pop-cultural or historical events), or the concealment of such causes from public knowledge, to a secret, and often deceptive, plot by a group of powerful or influential people or organizations. Many conspiracy theories imply that major events in history have been dominated by conspirators who manipulate political happenings from behind the scenes. Historians often take conspiracy theories as actual theory, i.e., the viewpoint with the greatest explanatory value and the greatest utility as a starting point for further investigation, explanation and problem solving.

There are no fewer than 10,000 sites on the internet that explore or further conspiracy theories. Amongst the leading theories are the following (Wired Magazine, Issue 15.11):

NASA Faked the Moon Landings
And Arthur C. Clarke wrote the script, at least in one version of the story. Space skeptics point to holes in the Apollo archive (like missing transcripts and blueprints) or oddities in the mission photos (misplaced crosshairs, funny shadows). A third of respondents to a 1970 poll thought something was fishy about mankind’s giant leap. Today, 94 percent accept the official version… Saps!

The US Government Was Behind 9/11
Or Jews. Or Jews in the US government. The documentary Loose Change claimed to find major flaws in the official story — like the dearth of plane debris at the site of the Pentagon blast and the claim that jet fuel alone could never vaporize a whole 757. Judge for yourself: After Popular Mechanics debunked the theory, the magazine’s editors faced off with proponents in a debate, available on YouTube.

Princess Diana Was Murdered
Rumors ran wild after Princess Diana’s fatal 1997 car crash, and they haven’t stopped yet. Reigning theories: She faked her death to escape the media’s glare, or the royals snuffed her out (via MI6) to keep her from marrying her Muslim boyfriend. For the latest scenarios, check out www.alfayed.com, the Web site of her boyfriend’s dad, Mohamed Al Fayed.

The Jews Run Hollywood and Wall Street
A forged 19th-century Russian manuscript called “The Protocols of the Elders of Zion” (virtually required reading in Nazi Germany) purports to lay out a Jewish plot to control media and finance, and thus the world. Several studies have exposed the text as a hoax, but it’s still available in numerous languages and editions.

The Scientologists Run Hollywood
The long list of celebrities who have had Dianetics on their nightstands fuels rumors that the Church of Scientology pulls the strings in Tinseltown — vetting deals, arranging marriages, and spying on stars. The much older theory is that Jews run Hollywood, and the Scientologists have to settle for running Tom Cruise.

Paul Is Dead
Maybe you’re amazed, but in 1969 major news outlets reported on rumors of the cute Beatle’s death and replacement by a look-alike. True believers pointed to a series of clues buried in the Fab Four’s songs and album covers. Even for skeptics, McCartney’s later solo career lent credibility to the theory.

AIDS Is a Man-Made Disease
A number of scientists have argued that HIV was cooked up in a lab, either for bioweapons research or in a genocidal plot to wipe out gays and/or minorities. Who supposedly did the cooking? US Army scientists, Russian scientists, or the CIA. Mainstream researchers point to substantial evidence that HIV jumped species from African monkeys to humans.

Lizard-People Run the World
If a science fiction-based religion isn’t exotic enough, followers of onetime BBC reporter David Icke believe that certain powerful people — like George W. Bush and the British royals — actually belong to an alien race of shape-shifting lizard-people. Icke claims Princess Diana confirmed this to one of her close friends; other lizard theories (there are several) point to reptilian themes in ancient mythology. And let’s not forget the ’80s TV show V.

The Illuminati Run the World
The ur-conspiracy theory holds that the world’s corporate and political leaders are all members of an ancient cabal: Illuminati, Rosicrucians, Freemasons — take your pick. It doesn’t help that those secret societies really existed (George Washington was a Mason). Newer variations implicate the Trilateral Commission, the New World Order, and Yale’s Skull and Bones society.

 

The expression “conspiracy theory” has strongly negative connotations; it is almost invariably used in a way which implies that the theory in question is not to be taken seriously. However, careful consideration of what a conspiracy theory is reveals that this dismissive attitude is not justified.

A “conspiracy” is simply a secret plan on the part of a group of people to bring about some shared goal, and a “conspiracy theory” is simply a theory according to which such a plan has occurred or is occurring. Most people can cite numerous examples of conspiracies from history, current affairs, or their own personal experience. Hence most people are conspiracy theorists. The problem is that when people think of particular examples of conspiracy theories they tend to think of theories that are clearly irrational.

When asked to cite examples of typical conspiracy theories, many people will refer to theories involving conspirators who are virtually all-powerful or virtually omniscient.

Others will mention theories involving alleged conspiracies that have been going on for so long, or which involve so many people, that it is implausible to suppose they could have remained undetected (by anyone other than the conspiracy theorists).

Still others refer to theories involving conspirators who appear to have no motive to conspire (unless perhaps the desire to do evil for its own sake can be thought of as a motive).

Such theories are conspiracy theories and they are irrational, but it does not follow, nor is it true, that they are irrational because they are conspiracy theories. Thinking of such irrational conspiracy theories as paradigms of conspiracy theories is like thinking of numerology as a paradigm of number theory, or astrology as a paradigm of a theory of planetary motion. The subject matter of a theory does not in general determine whether belief in it is rational or not.

People do conspire. Indeed almost everyone conspires some of the time (think of surprise birthday parties) and some people conspire almost all the time (think of CIA agents). Many things (for example, September 11) cannot be explained without reference to a conspiracy. The only question in such cases is “Which conspiracy theory is true?”.

The official version of events (which in this case I accept) is that the conspirators were members of al-Qaida. This explanation is, however, unlikely to attract the label “conspiracy theory”. Why not? Because it is also the “official story”.

Although it is common to contrast conspiracy theories with the official non-conspiratorial version of events, quite often the official version of events is just as conspiratorial as its rivals. When this is the case, it is the rivals to the official version of events that will inevitably be labelled “conspiracy theories” with all the associated negative connotations. So, “conspiracy theory” has become, in effect, a synonym for a belief which conflicts with an official story.

This should make it clear how dangerous the expressions “conspiracy theory” and “conspiracy theorist” have become. These expressions are regularly used by politicians and other officials, and more generally by defenders of officialdom in the media, as terms of abuse and ridicule.

Yet it is vital to any open society that there are respected sources of information which are independent of official sources of information, and which can contradict them without fear. The widespread view that conspiracy theories are always, or even typically, irrational is not only wrongheaded, it is a threat to our freedom.

Of course, no one should deny that there are people who have an irrational tendency to see conspiracies everywhere, and it would, of course, be possible to restrict the expression “conspiracy theorist” in such a way that it only referred to such people. But if we do this, we should also remember that there is another form of irrationality, namely the failure to see conspiracy, even when one is confronted with clear evidence of it, which is at least as widespread, and which is far more insidious.

We need a name for people who irrationally reject evidence of conspiracy, to give our political discourse some much needed balance.

I think the expression “coincidence theorist”, which has gained a certain currency on the Internet, is a suitable candidate. A coincidence theorist fails to connect the dots, no matter how suggestive of an underlying pattern they are.

A hardened coincidence theorist may watch a plane crash into the second tower of the World Trade Centre without thinking that there is any connection between this event and the plane which crashed into the other tower of the World Trade Centre less than an hour earlier.

Similarly, a coincidence theorist can observe the current American administration’s policies in oil-rich countries from Iraq and Iran to Venezuela, and see no connection between those policies and oil.

A coincidence theorist is just as irrational as a conspiracy theorist (in the sense of someone excessively prone to conspiracy theorising). They are equally prone to error, though their errors are of different and opposing kinds. The errors of the conspiracy theorist, however, are much less dangerous than the errors of the coincidence theorist. The conspiracy theorist usually only harms himself. The coincidence theorist may harm us all by making it easier for conspirators to get away with it.

Also see: Conspiracy Theories: The Philosophical Debate, David Coady, Ashgate, 2006.

Recommended Reading: Pinker’s ‘How the Mind Works’

Steven Pinker’s How the Mind Works is an ambitious attempt to bring recent developments in cognitive science to a non-specialist audience. Philosophers’ quibbles be damned, Pinker reaches right for the brass ring: his title refers to the mind, not just to gray matters like the brain. Pinker means to do for mentality what Stephen Jay Gould does for life or Carl Sagan did for the universe.

He’s got a lot of company. There’s been a stampede lately of would-be “re-discoverers” and “rethinkers” and “explainers” of the mind and consciousness, including John Searle, the Churchlands, John Eccles, Alwyn Scott, David Chalmers, Daniel Dennett, et al. Indeed, this field is becoming so crowded it may well take Pinkeresque cheek to, as the advertisers dream, “cut through the clutter.”

How the Mind Works reads like a more broadly focused sequel to Pinker’s fast-selling The Language Instinct (1994). That book attempted a synthesis of Chomskian generative linguistics and Darwinian natural selection – a shotgun marriage if there ever was one, since Noam Chomsky is renowned for his evolutionary agnosticism. The new book seems to have been written in a spirit of “mopping up” remaining pockets of resistance to Pinker’s view of the mind as a bunch of specialized processing “organs,” like Chomsky’s language module (add a vision module, a physics module, a sex-getting module, etc.). Two other leading proponents of this model, evolutionary psychologists Leda Cosmides and John Tooby, have famously advanced the simile of the mind as a Swiss Army knife: an all-in-one collection of purpose-built, content-rich devices, albeit in the mind’s case designed not by the Swiss, but by natural selection. Writes Pinker:

The mind is a system of organs of computation, designed by natural selection to solve the kinds of problems our ancestors faced in their foraging way of life, in particular, understanding and outmaneuvering objects, animals, plants, and other people…The mind is organized into modules or mental organs, each with a specialized design that makes it an expert in one arena of interaction with the world. The modules’ basic logic is specified by our genetic program.

He develops these ideas into a smooth but selective confection of experimental results, reasonable-sounding argument, and trenchant criticism, leavened frequently with humor and counter-intuitive but demonstrable observations of how the mind “really” works:

When Hamlet says, “What a piece of work is man! how noble in reason! how infinite in faculty! in form and moving how express and admirable!” we should direct our awe not at Shakespeare or Mozart or Einstein or Kareem Abdul-Jabbar but at a four-year-old carrying out a request to put a toy on a shelf…I want to convince you that our minds are not animated by some godly vapor or single wonder principle. The mind, like the Apollo spacecraft, is designed to solve many engineering problems, and thus is packed with high-tech systems each contrived to overcome its own obstacles.

The dish goes down surprisingly easily – Pinker makes hundreds of pages of technospeak go by about as quickly as Tom Clancy. Perhaps this is because, like any airport thriller, Pinker’s book has clear villains. These are perpetrators of what Cosmides and Tooby call the “Standard Social Science Model”: those folks (once based in philosophical behaviorism and psychology, with certain elements inherited by current cultural anthropology and literary studies) who insist on believing that the mind is primarily a social construction, entering the world as a blank slate that gets written upon by the environment and by culture.

As Pinker everywhere argues, “learning” as commonly conceived is simply too underpowered a process to explain the complex abilities the mind acquires and performs, all largely beneath our conscious awareness. Hence those “high tech” features our minds possess as “standard equipment;” hence the special scorn Pinker pours on those who continue to press the “folklore” that language, perception, etc. are the fruits of general learning mechanisms. “…the contents of the world are not just there for the knowing,” he asserts, “but have to be grasped with suitable mental machinery.”

High-tech and mechanistic imagery aside, a clear intellectual pedigree can be traced from Pinker to Chomsky to Descartes and the decidedly unmechanistic Plato. Chomsky himself has been more forthcoming about his debt to these earlier thinkers, on occasion allowing himself to be called “neo-Cartesian.”

Indeed, so enamored is Chomsky of his “law and order” view of mental life, he has denied the legitimacy of studying real-life utterances in a “properly” scientific linguistics. Instead, the Chomskian view of language study verges on medieval scholasticism, with colleges of closeted linguists hunched over their manuscripts, musing over rules like how many prepositions can dance on the head of a noun phrase. Actual language, meanwhile, rages on unstudied outside the monastery walls.

Pinker can never quite bring himself to go this far. Sometimes, he comes close:

Systems of [mental] rules are idealizations that abstract away from complicating aspects of reality. They are never visible in pure form, but are no less real for all that…[the idealizations] are masked by the complexity and finiteness of the world and by many layers of noise…Just as friction does not refute Newton, exotic disruptions of the idealized alignment of genetics, physiology, and law do not make “mother” any fuzzier within each of these systems.

That is, in Pinker’s mind the rules are just as real as the reality, though they are abstractions. Of course, what counts as a natural “law” and what counts as distracting “noise” is never so easily resolvable: while it is true that friction doesn’t refute Newtonian notions of gravitation, the “laws” of friction are also handy for keeping airplanes in the air and braking your car. What counts as law and what counts as “noise” therefore depends on context. Unfortunately, in the Chomskian case – Pinker included – these are suspiciously often matters of authority and/or selective attention.

The Dawn of the Chuck

As in The Language Instinct, Pinker has an unfortunate habit of making issues seem resolved that aren’t. As Cosmides and Tooby themselves note, there’s a problem with using “learning” as an explanation:

Advocates of the Standard Social Science Model have believed for nearly a century that they have a solid explanation for how the social world inserts organization into the psychology of the developing individual. They maintain that structure enters from the social (and physical) world by a process of “learning” – individuals “learn” their language, they “learn” their culture, they “learn” to walk, and so on…Of course, as most cognitive scientists know (and all should), “learning”…is not an explanation for anything, but is rather a phenomenon that itself requires explanation.

Cosmides and Tooby use their critique of learning to support their nativist views. However, it also suggests that it is not learning itself that is lacking, but our conception of learning. More to the point, there’s some evidence that Skinnerian stimulus-response, rats-pressing-levers-and-running-mazes-type learning is not the only kind of learning there is, especially in infants and children. This has led some fans of general intelligence to tell the evolutionary psychologists to put away their Swiss Army knives.

From outside academe, the differences between camps of cognitive scientists look positively trifling. Most within the field agree that the mind/brain does not come “out of the box” totally unstructured, a blank slate mostly “filled” by culture. Most agree that this innate structure is mediated to some degree by natural selection. There’s even some broad agreement that cultural factors (language, for instance), if they operate long enough and consistently enough (a few thousand generations, give or take), can also act as selective pressures, helping to reshape both the mind and body of what Jared Diamond has called “the third chimpanzee.”

From within the field, the remaining arguments look bitter. People like Pinker, Cosmides and Tooby, Dan Sperber, Nicholas Humphrey, and Elizabeth Spelke see modules, modules everywhere, each as innate and superbly adapted for its function as the pancreas or an elephant’s trunk. Meanwhile, “domain-generalists” like Jeffrey Elman, Elizabeth Bates and Annette Karmiloff-Smith turn this reasoning on its head. That is, they don’t deny modularity per se (modules are, after all, a good way to package complex neural structures in the limited volume inside the skull) but maintain that our specialized abilities emerge from our predisposition to attend to certain regularities in the world. What’s innate is not the knowledge, but the capacity to observe the regularities and learn them quickly.

For example, Pinker invites us to marvel at the “software driver” that controls the human hand:

A still more remarkable feat is controlling the hand… It is a single tool that manipulates objects of an astonishing range of sizes, shapes, and weights, from a log to a millet seed… “A common man marvels at uncommon things; a wise man marvels at the commonplace.” Keeping Confucius’ dictum in mind, let’s continue to look at commonplace human acts with the fresh eye of a robot designer seeking to duplicate them…

Typically, Pinker notes a complex adult capacity and wonders how to “reverse engineer” what natural selection hath wrought. He hardly considers the alternative, however: that the mind/brain is disposed to learn quickly and efficiently how to operate whatever appendage it happens to find at the end of its arm, whether it be a hand, a paw, or a flipper.

Neuroscientists have long known that when neurons fire, they not only can make muscles move and glands secrete, they also reinforce their own tendency to fire the same way in the future. (The principle is called Hebbian learning; its dictum is “Fire together, wire together.”) Conceivably, the act of using the hand reinforces the pattern of synaptic connections that control the thumb, the fingers, etc. The end result in the adult looks so well-designed and appropriate it might seem like an innate “program” for moving the hand was genetically “wired in” – but it wasn’t. We began only with a proper neural connection between hand and motor cortex, and a need to manipulate.
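
To make the “fire together, wire together” idea concrete, here is a minimal Python sketch of a toy rate-based Hebbian update. It is my own illustration, not anything from Pinker or the connectionist literature discussed in this review; the layer sizes, learning rate, and “practice pattern” are arbitrary assumptions. It shows only how repeated co-activation strengthens the very connections that serve a practiced pattern, with no pre-wired “program” for that pattern.

import numpy as np

# Toy Hebbian learning sketch (illustrative assumptions throughout):
# a connection grows whenever its presynaptic and postsynaptic units
# are active together, so a repeatedly practiced activity pattern
# reinforces the connections that produce it.

rng = np.random.default_rng(0)

n_inputs = 8      # hypothetical sensory units (say, from the hand)
n_outputs = 4     # hypothetical motor units
weights = np.zeros((n_outputs, n_inputs))
learning_rate = 0.1

# A fixed pattern of co-activation, standing in for repeated use of the hand.
practice_pattern = rng.integers(0, 2, size=n_inputs).astype(float)

for _ in range(100):
    pre = practice_pattern                            # presynaptic activity
    post = np.tanh(weights @ pre + 0.1)               # postsynaptic response (small constant drive)
    weights += learning_rate * np.outer(post, pre)    # "fire together, wire together"
    weights = np.clip(weights, 0.0, 1.0)              # keep the toy weights bounded

# After practice, the connections carrying the rehearsed pattern dominate,
# even though no program for that pattern was specified in advance.
print(weights.round(2))

Whether anything like this is sufficient for real motor control is exactly the open question raised below; the sketch only illustrates the direction of the argument.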

The point is not whether a Hebbian model fully explains motor learning. The point is that “gee whiz” explanations of complex adult abilities don’t necessarily prove full-blown innateness. (Most obviously because nobody starts off as an adult!) Furthermore, in evolutionary terms, the weaker model of predisposition, the “disposed to learn” model, is a more parsimonious explanation than inborn mental modules. This is because, in the case of the hand, it doesn’t require genes to somehow code the “hook grip,” the “five-jaw chuck,” the “two-jaw pad-to-side chuck,” the “scissors grip,” and so on. It only obliges our genes to motivate us to learn.

Indeed, exactly how genes might code things like language or “social intelligence” or “natural history intelligence” has never been too clear. DNA, after all, actually regulates nothing more than protein production and other DNA. In this sense, Cosmides and Tooby’s critique of learning might also be leveled at the catch-all notion of innateness. Like learning, innateness “…is not an explanation for anything, but is rather a phenomenon that itself requires explanation.”

Toy Neurons

Perhaps the most fascinating aspect of How the Mind Works is watching Pinker wrestle with the problem of connectionism. On the one hand, the fact that experimenters have succeeded in teaching artificial neural nets to do some pretty human-like things, like recognize written letters and put English verbs into the past tense, is a vindication of one of the pillars of his model: the computational theory of mind. On the other hand, the uncanny way neural nets have of learning the regularities of input data without set rules being programmed in is a challenge to Chomsky’s “poverty of the stimulus” arguments for innate knowledge. Some connectionist nets have shown modularity of function, and even human-like cognitive deficits when experimenters simulate “injuries” by removing parts of the system. Importantly, these mind-ish qualities have emerged with learning, and were not introduced pre-formed.

Pinker navigates this quandary by pure elan. After devoting 14 pages to the nature and advantages of “toy neurons” for understanding the mind, and thereby establishing his connectionist credentials, he suddenly takes to calling nets “connectoplasm” (a term clearly meant to evoke that other discredited substance, protoplasm), and asserts “neural networks alone cannot do the job” of accounting for human intelligence:

I do think that connectionism is oversold. Because networks are advertised as soft, parallel, analogical, biological and continuous, they have acquired a cuddly connotation and a diverse fan club. But neural networks don’t perform miracles, only some logical and statistical operations.

Of course, nobody thinks networks or neurons “perform miracles.” As a supporter of the computational theory of mind, Pinker must also believe that the mind/brain itself, at a certain basic level, performs “only some logical and statistical operations.” So what is he talking about?

The real sin of the “strong” version of connectionism – the argument that language, creativity, consciousness itself are all ultimately explicable along connectionist lines – is that it resurrects the associative model of learning. Connectionist nets, after all, learn by doing, and by crudely “associating” certain patterns of inputs and outputs. Following Chomsky, Pinker prefers to imagine that a structure of cognitive rules and regulations is doing the real work of mindedness. These rules are not epiphenomenal artifacts of the learning process, or post hoc abstractions from regularities of behavior. Rather, they are ontologically “real.”

To debunk associationism, Pinker makes a list of human-like things that nets can’t do (yet). At least one of these is downright silly: Pinker claims that nets can’t distinguish individual examples of a class of things from each other. Within the connectionist paradigm, “there is no longer a way to tell apart individuals with identical properties. They are represented in one and the same way, and the system is blind to the fact that they are not the same hunk of matter.” People make such distinctions all the time; for instance, identical twins are different people, regardless of how much they look and seem alike: “The spouse of one identical twin feels no romantic attraction toward the other twin. Love locks our feelings in to another person as that person, not as a kind of person, no matter how narrow the kind.”

The fallacy here resides in the assumption that any two examples of any real-world class are really identical with each other. Distinguishing individuals has to do with noticing subtler and subtler kinds of variation. It often takes some time, for instance, for field ethologists to begin to see their subject animals as individuals – at first, they all look the same. To take a more commonplace example, my wife and I own two Himalayan cats that happen to be siblings. Despite the fact that the female is a tortoise-shell point, has a smaller head, and a completely different carriage and personality, houseguests invariably can’t distinguish her from her blue-point, big-headed, lay-about brother. My wife and I have simply had more time to make the appropriate, fine associations.

Despite their genetic identity, not even monozygotic twins are phenotypically or behaviorally identical. Indeed, the one place where such identities do exist is the abstract mathematical world that inspired Chomskian linguistics – it’s a rule, for instance, that a line segment of length X is identical to any other of length X. This is an area where Pinker’s intellectual roots are exposed, and they mislead him. (In the non-mathematical world, incidentally, people do have an uncanny knack for associating certain romantic feelings with the same “types” – the same hair, same build, same foibles. It’s no secret. Maybe Pinker just doesn’t get out much.)

Some of his other objections are more persuasive. It is indeed hard to visualize how nets can handle complex combinatorics, or alter the quantification of elements in a problem when they’re the same-but-different, or process recursively unless specifically constructed to do so. (In such cases the connection weights have a tendency to interfere with each other.) On the other hand, all of these problems have the definite air of claims once made by reputable Victorian physicists who asserted the physical impossibility of heavier-than-air flight. Above all, we should know by now that it’s not too smart to bet the farm on something(s) being technically impossible.

Pinker’s treatment of the other key concept in the book – evolution – is equally provocative. He’s clearly very much aware of the principles of, and objections to, the reigning synthesis of Darwinian natural selection and Mendelian genetics. Steering clear of the pan-adaptationism decried by Gould and Richard Lewontin, he rightly observes that not everything about an organism is necessarily adaptive: “A sane person can believe that a complex organ is an adaptation, that is, a product of natural selection, while also believing that features of an organism that are not complex organs are a product of drift or a by-product of some other adaptation”.

Gould and Lewontin once wrote of the “Panglossian paradigm”: the tendency among some evolutionary scientists to mistake how things actually happened for the optimal way things could have happened. Though Pinker disavows it, his work fits the paradigm anyway. For instance, in a discussion of whether the development of intelligent life is inevitable on any life-supporting planet, he compiles a list of the unlikely factors that “made it especially easy and worth their while [for organisms] to evolve better powers of causal reasoning.” First on the list is the primates’ fortunate dependence on the visual sense. Why? Because “Depth perception defines a three-dimensional space filled with movable solid objects…Our capacity for abstract thought has coopted the coordinate system and inventory of objects made available by a well-developed visual system.”
Compare this to other mammals, such as dogs, who rely more on olfactory information:

Rather than living in a three-dimensional coordinate space hung with movable objects, standard mammals [sic] live in a two-dimensional flatland [the ground] which they explore through a zero-dimensional peephole [the nose]…If most mammals think in a cognitive flatland, they would lack the mental models of movable solid objects in 3-D spatial and mechanical relationships that became so essential to our mental life.

Anyone who has seen an earthworm bury itself or a dog sniff his way up the trunk of a tree knows olfactory dependence is not synonymous with living in a “two-dimensional flatland.” Nor does Pinker take note of other 3-D modalities, such as echolocation in bats and cetaceans, which likewise represent a world of “solid objects in 3-D spatial and mechanical relationships.” Faced with such a poverty of imagination with respect to terrestrial creatures, it’s hard to take Pinker’s musings over the unlikelihood of extraterrestrial intelligence very seriously. What about an alien creature living in liquid methane that uses short-wave radar? Or one that lives underground and finds petrocarbon “food” by using seismic “thumps”? What’s wrong with 2-D intelligence anyway? Would a creature able to reason in 4, 8, or 1,000 dimensions be justified in denying the significance of our 3-D intelligence? Perhaps we shouldn’t give up on SETI just yet.

Pinker’s discussion falls prey to the Panglossian paradigm because he thinks a sufficient condition for human intelligence is a necessary one for all intelligence – the particular way we evolved, in other words, is established as “optimal” for invading the cognitive niche. He plays Pangloss again elsewhere, in his criticism of the idea of “meme evolution.” This is the notion, notably suggested by Richard Dawkins, that ideas, like organisms, might reproduce and evolve in the “habitat” of human brains. Sensing an opening for the cultural constructivists, Pinker tries to slam the door by asserting “When ideas are passed around, they aren’t merely copied with occasional typographical errors; they are evaluated, discussed, improved on, or rejected. Indeed, a mind that passively accepted ambient memes would be a sitting duck for exploitation by others and would have quickly been selected against.”

Try telling that to a Scientologist. Unlike in Pinker’s cognitive symposium, real people are actually very good at “passively accepting ambient memes.” It might even be adaptive to do so: Pinker himself suggests the survival benefit of not standing out, of hanging with the herd. In fact, Pinker is telling a variation on a “just so” story here, using an argument for adaptation to justify a point he asserts to be true. This is precisely what Gould and Lewontin warned against when they observed how, wrongly applied, tales of adaptation could be concocted to justify virtually any position. They note “…Since the range of adaptive stories is as wide as our minds are fertile, new stories can always be postulated.” Though Pinker professes an understanding of non-adaptationist factors in evolution, his work clearly falls into that category where, as Gould and Lewontin lament, “Constraints upon the pervasive power of natural selection are recognized…But…are usually dismissed as unimportant or else, more frustratingly, simply acknowledged and then not taken to heart and invoked.”

All of these problems might be traced to the consequences of Pinker’s primary methodology. This is the idea that we can figure out the mind/brain by “reverse engineering” it:

…psychology is engineering in reverse. Reverse engineering is what the boffins at Sony do when a new product is announced by Panasonic, or vice versa. They buy one, bring it back to the lab, take a screwdriver to it, and try to figure out what all the parts are for and how they combine to make the device work.

Up to a point, this seems like a reasonable analogy. Bodies and brains are, after all, kinds of organo-chemical mechanisms, and as Dawkins has notably observed, natural selection is “the blind watchmaker.” Why not pry the back off the timepiece of the mind and take a look?

Trouble is, human engineers and natural selection work in quite different ways. Following C.G. Langton, Daniel Dennett explains in Consciousness Explained:

…human engineers, being farsighted but blinkered, tend to find their designs thwarted by unforeseen side effects and interactions, so they try to guard against them by giving each element in the system a single function, and insulating it from all the other elements. In contrast, Mother Nature…is famously myopic and lacking in goals. Since she doesn’t foresee at all, she has no way of worrying about unforeseen side effects. Not “trying” to avoid them, she tries out designs in which many side effects occur…[and] every now and then there is a serendipitous side effect: two or more unrelated functional systems interact to produce a bonus: multiple functions for single elements.

The difference in how human engineers and the natural one build mechanisms entails more than the obvious fact that organisms self-organize (they grow) and machines get built. It affects every stage of the “design” process. When some capacity evolves in nature (say, flight), Darwinian selection doesn’t start out with a dream and a blank piece of paper – it starts out with an existing, functional organism. If the Wright Brothers had worked this way, they wouldn’t have designed a new machine from scratch. Instead, they would have gradually “retrofitted” some existing vehicle, like a horseless carriage. The resulting “flying flivver” might have taken much longer to realize than a purpose-built flyer; it might have suffered many more failed test flights until it achieved a sustained glide, then powered flight; it might have taken longer to get the heavy weight of the car down and the wingspan just right. In any case, aeronautical history would have been quite different.

All of which goes to show the problem with “reverse engineering” natural mechanisms: you can never be sure a widget was designed for some function, only that it presently serves that function. In the case of the “flying flivver,” it would be useless to wonder how the fenders and the bumper help the car fly better. Those features have to do with the history of structure, not its present function.

Of course, Pinker and every informed adaptationist knows all this. Furthermore, they would argue that certain essential features (like the wings) are so directly necessary in the evolved function that we must invoke adaptation. All true enough. But this is not the same as saying the human mind is “like the Apollo spacecraft…packed with high tech systems, each contrived to overcome its own obstacles.” As Langton argues, each system may well overcome several obstacles, and it pays not to be too categorical in assigning roles to each widget. If I were asked whether the brain is more like the Apollo spacecraft or more like a petunia, I’d have to confess I’m not sure.

The Nature of Nature

Pinker is a master rhetorician. When he is on firm ground, he’s a superbly articulate popularizer. When he isn’t, he spins beautifully, exploits what he can, and knows when to beat a tactical retreat. His wit can disarm criticism.

All of which makes it surprising when his sense of humor deserts him and he reverts to dull partisanship. The ceaseless drumbeat of distortion and belittlement of social scientists is one such puzzling element of his book. These people, we learn, are too dense to understand the problem with Lamarckianism; they’re wrong, wrong, wrong about associationism; they insist on believing in “folklore” about the mind because they’re either bent on “feel good” politics or distracted by moral straw-men like genetic determinism.

If cultural anthropologists agree on any human universal, it is the tendency of all cultures to justify their own cultural constructions by “naturalizing” them. As Cosmides and Tooby argue and Pinker agrees, this has led many anthropologists either to deny any “human nature” exists, or to declare the search for universals as unavoidably an exercise in Western ethnocentrism.

Yet human beings did have an origin, and do have some sort of nature. Dread or misunderstanding of these facts has too often resulted in an incurious particularism that prefers to celebrate, not to explain, difference. If anthropology is traditionally a boat powered by two oars – the study of difference and the study of commonality amongst peoples – then the modern discipline has an empty oarlock and is rowing in circles.

But none of this is to say that “naturalization” doesn’t happen, especially among thinkers who profess totalizing theories. When Pinker is spinning his synthesis with respect to stereoscopic vision and incest avoidance, he talks a good game. But when we are expected to believe that, for instance, most people’s taste in landscapes is a feature of Cosmides and Tooby’s Swiss Army knife, he strays into the full-blown ridiculous. He argues, for instance, that we exhibit a “default habitat preference” for savannas – according to certain cross-cultural surveys, everybody likes “semi-open space…even ground cover, views to the horizon, large trees, changes in elevation, and multiple paths leading out…” Though the very idea that we evolved in savannas is fiercely debated, Pinker conclusively declares “No one likes the deserts and the rainforests.” (Color me weird, then.) Nor does Pinker shy from drawing the logical aesthetic conclusions from this bit of human standard equipment – “…we are designed to be dissatisfied by bleak, featureless scenes and attracted to colorful, patterned ones.” There, I knew there was a reason I prefer Henri Rousseau to Georgia O’Keeffe.

This is naturalizing. Based on such arguments, and observations of the range of human variation, anthropologists et al. may still have quite defensible reservations about importing whole disciplinary paradigms like that of cognitive science into anthropology, history, linguistics, etc. As Pinker himself suggests, it is quite reasonable for people – and that does include social scientists – not to “passively accept ambient memes.”

 

How the Mind Works, Steven Pinker, W.W. Norton, 565 pages
This review by Nick Nicastro

Xenophobia, Homophobia, Psychology, Politics

 

We, at the European Rationalist, rarely stray into areas of immediate political concern. However, the recent controversy surrounding Dr Rowan Williams and his call for parts of Sharia law to be recognised in the UK gives us all an insight into the dynamics of present society and culture. Whilst we encourage peoples of all faiths to abandon irrational belief (indeed, we encourage the abandonment of faith), the considered case Dr Williams makes for inclusion of all peoples in our apparently open and progressive liberal society has to be the least painful way forward. Society can only evolve so fast, and it is clear that many people feel that this suggestion is absolutely nuts. The strain in society is at a level at which many people have started to show knee-jerk reactions. I have always found Dr Williams to make suggestions that provoke soul-searching. If one reads what he has said in full, it is indeed thought-provoking. As long as we push for laws that genuinely show no religious preference we should be fine. I am reminded of an article that I read some time ago on Homophobia, Religion, and Psychology that I think covers many of the same aspects as this debate. From the viewpoint of progressive rational debate, the fact that the Archbishop said what he said is a tremendous credit to the Anglican Christian and English community.

For those that are interested in reading what the Archbishop said, we reproduce his talk in PDF format here:

Islam in English Law: Civil and Religious Law in England – lecture by the Archbishop of Canterbury, Dr Rowan Williams
From Lambeth Palace, 7 February 2008.

Homophobia, a term often applied to inflammatory remarks about homosexuals, can be more aptly described as “the socialized state of fear, threat, aversion, prejudice, and irrational hatred of the feelings of same-sex attraction” (Smith 88). In the following paragraphs, we will examine studies that focus on the effects that religion and psychology can have upon homophobia. In general, there is a positive correlation between religious authoritarianism and homophobia. What we will try to add to the studies – using elaborations of ideas from Sigmund Freud and Erich Fromm – is an answer to questions like the following: What does this have to do with psychology? Why should the average person care about this? What can we do to reverse homophobia?

Most of the studies that we found deal with two factors – religious fundamentalism and right-wing authoritarianism – that correlate strongly with homophobia. The first, an article by Bruce Hunsberger, examines the former of these two facets of religious belief. Religious fundamentalism is defined as “the belief that there is one set of religious teachings that clearly contains the fundamental, basic, intrinsic essential inerrant truth about humanity and deity” (Hunsberger 5). The basic idea is that people who are religious fundamentalists believe that their way is the only true way, and that they must fight against all who oppose it. In the case of Hunsberger’s study, religious fundamentalism was positively correlated with homophobia in both areas in which the tests were conducted, Ghana and Canada.

In the case of this study, there could be many reasons for the relationship between religious fundamentalism and homophobia. Hunsberger concludes that, when men come from same-sex schools, homophobia among religious fundamentalists increases; when women from same-sex schools are evaluated, however, the result is a decrease in homophobia (8).

Now we must go further and provide some psychological insight into this finding…this yearning for solidarity, according to Fromm, extends to “the tribe, the nation, the race, the state, the social class, [and] political parties,” and becomes “the roots of nationalism and racism, which in turn are symptoms of man’s inability to experience others and himself as free human beings” (81). If we extend this idea even further, we can begin to understand the relationship between religious fundamentalism and homophobia. By this definition, one’s religion would become part of things such as the tribe or nation; and homophobia would become just another word for such hatred as nationalism and racism.

In the book Overcoming Heterosexism and Homophobia, Warren J. Blumenfeld creates a link between homophobia and anti-Semitism. He states that, throughout history, dominant groups represent target groups “in a variety of ways in order to maintain control or mastery” (Blumenfeld 131). He also gives the example of the Employment Non-Discrimination Act, which would grant rights to gays and lesbians. Some people, he says, oppose it on the grounds that gay people are using their status as victims as a way to obtain special privileges. This logic infuriated people such as Senator Paul Wellstone, who said it is “precisely the kind of argument that has been made…in behalf of the worst kind of discrimination against Jewish people” (Blumenfeld). The point here is that there are many types of incestuous groups. In making a connection between homophobia and anti-Semitism, we can start to dismantle the hatred.

The second article to address is entitled “Homophobia, Irrationality, and Christian Ideology: Does a Relationship Exist?” The authors of the article, Caroll Plugge-Foust and George Strickland, found that both Christian ideology and irrational beliefs correlate positively with scores on the Homophobia Scale (2). Here the idea is that homophobia is based on an irrational hatred, as we stated in the first sentence of this section. The religious ideas are the same as in the first study. The most important part of this study is the use of Ellis’ Irrational Beliefs Scale. This scale contains eleven items dealing with beliefs common in U.S. culture that, if endorsed by the taker, would indicate a neurosis (Plugge-Foust 7). The study did, in fact, find a correlation between irrational beliefs and homophobia, as well as between Christian ideology and homophobia (Plugge-Foust 9).

What could this mean? This article deals with both the religious and psychological aspects of homophobia, so there is less to elaborate upon. But, for the sake of our argument, let’s elaborate anyway. Fromm, again in the book Psychoanalysis and Religion, touches upon the idea of irrational thinking. He gives the example of a Stalinist:

We talk to an intelligent Stalinist who exhibits a great capacity to make use of his reason in many areas of thought. When we come to discuss Stalinism with him, however, we are suddenly confronted with a closed system of thought, the only function of which is to prove that his allegiance to Stalinism is in line with and not contradictory to reason (Fromm 57)

Again, we can extend Fromm’s thoughts. Since this is simply an example, one can say the same thing about the positive correlation between irrational thinking and homophobia: The homophobic person who is otherwise rational can exhibit great amounts of irrationality when talking about his or her feelings toward homosexuals.

The third article we will discuss is entitled “Religiosity, Authoritarianism, and Homophobia: A Multidimensional Approach,” by Wayne W. Wilkinson. Instead of using fundamentalism or irrationality as a basis for homophobia, Wilkinson uses the term right-wing authoritarianism (RWA), defined as “a sociopolitical construct characterized by submission to recognized authorities and the social norms established by those authorities, and hostility toward groups seen as violating these norms” (57). By this definition, RWA is not very different from religious fundamentalism. Both are characterized by the idea that the group to which the person belongs is right, with the underlying assumption that anyone else is wrong.

The findings of the third study contrast with the first. In this case, people scored low in RWA; they also scored low in homophobia (Wilkinson 63). The most striking part of the study is that these people did not view their world as a “dangerous place typified by ‘menacing outsiders’ threatening the established norms” (Wilkinson 63). Most likely, this schema is the reason for the low levels of homophobia. There are many things to think about regarding this third study. First, the students were from a higher socioeconomic background than those from the first study. We will not dwell on this aspect, but there could be a relationship there. Second, Wilkinson concludes that the low level of RWA – or, as he states, authoritarian self-righteousness – may have led the group to withhold the expression of positive views of gays, such as granting them the same rights as everyone else (64). In other words, they are so reluctant to claim moral authority that they do not even want to say what should be done to counteract homophobia. These findings, of course, are a good thing; and they are also an excellent way to finish our evaluations of the research.

We have talked about Erich Fromm in this part of our project in order to make our point clearer. Fromm most certainly addresses authoritarian religion. In his view, the authoritarian type of religion – whatever type it may be – leads humans to make the kinds of errors in judgment that we have talked about regarding homophobia. One of these is belief in the absolute truth of one’s own views, which leads to incestuous thoughts of nationalism, racism, and, as we have explained, homophobia. Another distortion of authoritarian religion is irrationality. As in the case of Stalinism, people can be rational in all aspects of their life, except when it comes to their attitudes toward homosexuals. Thus far, we have left out the most important part of Fromm’s argument: the humanistic aspect of religion. What does this mean, exactly? We have already seen the humanistic side of religion in this project. The individuals who scored low on the RWA test are examples of this care for man. They know the dangers of hatred such as homophobia. Instead of trying to describe Fromm’s thesis, we will close this part of our project with his own words:

Beyond the attitude of wonder and of concern there is a third element in religious experience, the one which is most clearly exhibited and described by the mystics. It is an attitude of oneness not only in oneself, not only with one’s fellow man, but with all life and, beyond that, with the universe (Fromm 95)

 

 

Works Cited

Blumenfeld, Warren J. “Homophobia and Anti-Semitism: Making the Links.” Overcoming Heterosexism and Homophobia: Strategies that Work. Ed. James T. Sears and Walter L. Williams. New York: Columbia University Press, 1997. 131-140. This essay deals with the ways in which anti-Semitism and homophobia are interconnected. The article starts with a quote from Senator Paul Wellstone in which he decries discrimination against homosexuals, using the argument that it is the same as discriminating against Jews. The essay then describes ways that you can create ice-breaking activities to dispel myths about both gays and Jews.

Fromm, Erich. Psychoanalysis and Religion. New Haven: Yale University Press, 1978. Fromm, in this book, outlines the two types of religion – humanistic and authoritarian – under which, in his opinion, all major religions fall. He also speaks of the methods of incestuous behavior that carry over into society, such as racism and nationalism.

Hunsberger, Bruce; Owusu, Vida; and Duck, Robert. “Religion and Prejudice in Ghana and Canada: Religious Fundamentalism, Right-Wing Authoritarianism, and Attitudes Toward Homosexuals and Women.” International Journal for the Psychology of Religion 9 (1999): 181-195. The studies conducted are the same, just in two different places, in order to contrast or compare each of them. What they found is that, in both places, religious fundamentalism was positively correlated with homophobia; right-wing authoritarianism was positively correlated with negative attitudes toward women, and was therefore moot for this paper. They also found that when men go to same-sex schools, their attitudes toward gays worsen; for women, they get better.

Plugge-Foust, Caroll; and Strickland, George. “Homophobia, Irrationality, and Christian Ideology: Does a Relationship Exist?” Journal of Sex Education and Therapy 25 (2000): 240-245. This study investigated the relationship between homophobia, irrational beliefs, and religious ideology. The sample consisted of about 150 students, who anonymously and voluntarily completed Ellis’ Irrational Beliefs Scale, the Homophobia Scale, and the Doctrinal Label Scale of the Christian Ideology. The study showed a positive correlation between homophobia and irrational beliefs. Conservative Christian ideology was the best predictor of homophobia.

Wilkinson, Wayne W. “Religiosity, Authoritarianism, and Homophobia: A Multidimensional Approach.” The International Journal for the Psychology of Religion 14 (2004): 55-67. This study investigated the relationship between religiosity, right-wing authoritarianism (RWA), and homophobia. In contrast to the other two articles, RWA (or just plain religious ideology) was low in the test-takers, which corresponded with low levels of homophobia. In fact, the researcher concludes that the participants felt positively about homosexuals, but were afraid to voice their opinions for fear of sounding too moralistic.

Homophobia, Religion, and Psychology, Andrew Keller

 

 

Morality & Neuroscience

An Ravelingien reports on the conference ‘Double standards. Towards an integration of evolutionary and neurological perspectives on human morality.’ (Ghent University, 21-22 Oct. 2006)

In Love in the Ruins, Walker Percy tells the story of Tom More, the inventor of the extraordinary ‘ontological lapsometer’1. The lapsometer is a diagnostic tool, a ‘stethoscope of the human soul’. Just as a stethoscope or an EEG can trace certain physical dysfunctions, the lapsometer can measure the frailties of the human mind. The device can measure ‘how deep the soul has fallen’ and allows for early diagnoses of potential suicides, paranoia, depression, or other mood disorders. Bioethicist Carl Elliott refers to this novel to illustrate a well-known debate within psychiatry2. According to Elliott, the image of the physician who uses the lapsometer to unravel the mysteries of the soul is a comically desperate attempt to objectify experiences that cannot accommodate such scientific analysis. His objection harks back to the conflict between a sociological perspective – which would stress the subjective experiences related to the cultural and social context of human psychology – and a biological perspective – which would rather determine the physiological causes of mental and mood dysfunction. It is very likely that debate about the subjective and indefinite nature of some experiences will climax when empirical science is applied to trace and explain the biology of our moral sentiments and convictions. For most of us, I presume, nothing would appear to be more inextricably a part of our personal experience and merit than our moral competence. The conference ‘Double Standards’ questioned this intuition and demonstrated that the concept of ‘morality’ is becoming more and more tangible.

Jan Verplaetse and Johan Braeckman, the organizers of the conference, gathered 13 reputable experts and more than 150 participants to ponder one of the oldest and most fundamental philosophical questions: how did morality come into existence? For this, they drew upon two different scientific approaches: evolutionary psychology and neuroscience. In theory, these disciplines are complementary. Neuroscientists assume that morality is generated by specific neural mechanisms and structures, which they hope to find by way of sophisticated brain imaging techniques. Evolutionary scientists, by contrast, want to figure out what the adaptive value of morality is for it to have evolved. According to them, morality is – like all aspects of our human nature – a product of evolution through selection. Moral and social behavior must have had a selective advantage, from which the relevant cognitive and emotional functions developed. Through an interdisciplinary approach, the alleged functions can direct the neuroscientist in searching for the neurological structures that underlie them. Or, the other way around, the imaging of certain neural circuits should help to discover whether and to what extent our moral intuitions are indeed embedded in our ‘nature.’ During the conference, this double perspective gave rise to several interesting hypotheses.

It appears that neuroscientists have already achieved remarkably uniform results regarding the crucial brain areas that are involved in fulfilling moral tasks. Jorge Moll was the first to use functional MRI studies to show that three major areas are engaged in moral decision making: the frontal lobes, the temporal lobe, and the limbic-paralimbic areas. Other speakers at the conference confirmed this overlapping pattern of neural activity, regardless of differences in the ways in which moral stimuli were presented, and regardless of the specific content of the moral tasks (whether the tasks consisted of complex dilemmas, simple scenarios with an emotional undertone, or references to violence and bodily harm). Since these findings, several researchers have started looking for the biological basis of more specific moral intuitions. Jean Decety, for instance, has found the neural correlates that play a role in the cognitive modulation of empathy. fMRI studies are also being used to compare ‘normal’ individuals with people who show deviant (and in particular criminal/immoral) behavior and thereby to derive new explanations of such atypical behavior. As such, James Blair suggested that individuals with psychopathy have problems with learned emotional responses to negative stimuli. According to him, the common neural circuit activated in moral decision making is in a more general sense involved in a rudimentary form of stimulus reinforcement learning. At least one form of morality is developed by such reinforcement learning: what Blair calls care-based morality. Unlike psychopathic individuals, even very young children realize that there is an important difference between, for instance, the care-based norm ‘do not hit another child’ and the convention-based norm ‘do not talk during class’. In the absence of a clear rule, ‘normal’ individuals will be more easily inclined to transgress social conventions than care-based norms. The reason for this, he proposed, is that transgression of care-based norms confronts us with the suffering of our victim(s). The observation of others in pain, sadness, or anger immediately evokes a negative response, an aversion, in the self, from which we learn to avoid situations with similar stimuli. Blair offered brain images of psychopathic individuals that showed evidence of reduced activity in those parts of the brain that are involved in stimulus reinforcement (the ventromedial prefrontal cortex and the amygdala). Adrian Raine gave an entirely different perspective on ‘immoral behavior,’ suggesting that certain deviances in the prefrontal cortex point to a predisposition towards antisocial behavior. According to Raine, immoral behavior need not be a dysfunction of normal neural circuits; evolution may just as well have shaped the brain to have a predisposition for immoral rather than moral behavior. Antisocial behavior may have a selective advantage: it can be a very effective means of taking others’ resources. As such, the expression of sham emotions (such as faked shame or remorse) can be interpreted as a strategy to mislead others into thinking that one has corrected one’s behavior. Raine finds support for his hypothesis in indications of a strong genetic basis for antisocial behavior. He also offered brain imaging results that show an 11% reduction in prefrontal grey matter in antisocial individuals and reduced activity in the prefrontal cortex of affective murderers.

Will we one day be able to evaluate ‘how deep someone’s morality has fallen’? Will there be a ‘stethoscope of morality’ that can measure the weaknesses of our moral judgments and behaviors? If so, will we be able to cure immoral behavior? Or, conversely, will we be able to augment the brain processes that are involved in our moral competence? Perhaps most importantly, what do we do with the notion of moral responsibility when there is evidence of predispositions towards antisocial behavior? Although there is still a long way to go in understanding the neurobiology of human morality, this conference was an important step in introducing some of the moral dilemmas that may confront us as the field of research progresses. More information is available at www.themoralbrain.be.


 

——————————————————————————–
An Ravelingien, Ph.D., is a fellow of the IEET and an assistant researcher in bioethics at the Department of Philosophy, Ghent University.

The Addicted & Hijacked Brain

People have been using addictive substances for centuries, but only very recently, by using the powerful tools of brain imaging, genetics, and genomics, have scientists begun to understand in detail how the brain becomes addicted. Neuropharmacologists Wilkie A. Wilson, Ph.D., and Cynthia M. Kuhn, Ph.D., explain, for example, that you cannot conclude you are addicted to something merely because you experience withdrawal symptoms. And calling our love of chocolate or football an “addiction” not only trivializes the devastation wrought by addiction, but misses the point that addiction involves a hijacking of the brain’s circuitry, a reprogramming of the reward system, and lasting, sometimes permanent, brain changes. Any effective treatment must address both addiction’s reorganization of the brain and the power of the addict’s memories.

The history of addiction stretches over thousands of years and reveals a persistent pattern: a chemical, often one with medicinal benefits, is discovered and found to be appealing for recreational use. Repeated use, however, leads to compulsive use and destructive consequences. Society then seeks to control use of the chemical. Many well-known, problematic drugs have followed this pattern because they are derived from readily available and common plant products. Nicotine, cocaine, and many narcotics come from plants, and alcohol is produced by fermentation of many grains and fruits. These are products humans have known and used for millennia.

Things began to change in the 19th century. Until then, methods of delivering the active ingredients to the brain were relatively unsophisticated: swallowing and smoking. Swallowing drugs often produces only a slow rise in brain concentrations because plants must be digested and absorbed and the active ingredients must escape destruction in the liver. When people realized that smoking plant products worked better, that became a favored method of delivery. Then we invented even more effective ways of getting drugs to the brain, especially the hypodermic syringe and needle. Now, modern chemistry has enabled us to synthesize potent, highly addictive chemicals, such as amphetamines, that were never available naturally. 

The ability to find new ways to become addicted has raced ahead of public understanding of the addiction process. For example, people often confuse a strong habit with an addiction, asserting that we can be addicted to chocolate, movies, or sports. Most people who are not addiction scientists or treatment professionals fail to understand what happens in the brain as addiction takes hold and how those brain changes may affect us. Yet one need not be an expert to understand how people become addicted, and the benefits of understanding are considerable— not least because to understand addiction is to understand the biological systems that govern our search for pleasure. 

FROM MEDICINE TO DISEASE TO JAIL

First, though, it is worth looking more closely at how addictive substances and their use have made their way into virtually every culture, from the simplest agrarian society to the most advanced technological one, and have provoked rules and sanctions when their power and appeal seemed threatening. 

Fermenting alcohol probably began with agriculture itself, and, by Biblical times, there were prohibitions against misuse of alcohol. During the Middle Ages, the discovery of distillation yielded drinks that were as much as 50 percent alcohol (today’s beer and wine range up to 15 percent alcohol). The enhanced potency, combined with wide availability and decreased social disapproval, caused use of alcohol to spread throughout Europe during the 17th century. The famous print “Gin Lane” is emblematic of the rise of alcohol use and addiction in 18th-century England. Today’s worries about binge drinking by college students are but the most recent iteration of an age-old concern.

Tobacco use followed a similar pattern. The leaves of the tobacco plant contain nicotine, which is both psychoactive and addictive. The plant is native to the Americas and its characteristics were probably known before the arrival of Europeans, although there are no written records by which to verify this. Tobacco first arrived in Europe in the early 16th century with returning Portuguese and Spanish explorers and soon was viewed as a miracle cure for everything from headaches to dysentery—so much so that it helped drive further Portuguese and Spanish colonization in the Americas. As tobacco use spread rapidly, health concerns and public outcry followed. By 1573, the Catholic Church had forbidden smoking in churches. But modern chemical techniques and the Industrial Revolution led to mass production of a perfect nicotine delivery device, the cigarette. The cigarette delivers a single, small dose of inhaled nicotine that enters the brain almost immediately. In the United States, manufactured cigarettes first appeared during the 1860s, and, by 1884, James B. Duke was producing almost a billion cigarettes a year. Protests, such as those by the Women’s Christian Temperance Union, soon followed, with complaints about addiction and other health concerns. The active prosecution of tobacco companies and increased legislation prohibiting smoking during the past decade are but the latest chapter in the history of tobacco use, addiction, and regulation. 

We see the pattern again with cocaine and narcotics. Ancient records indicate that the leaves of the coca plant were chewed by natives of South America to enhance physical endurance, and extracts of the opium poppy were used in Southeast Asia to relieve pain. During the 19th century, European scientists purified both morphine and cocaine. What followed was an explosion of patent medicine manufacturing and sales; entrepreneurs founded future drug company giants such as Merck, Parke Davis, and Squibb Chemical Company, all of which marketed cocaine and narcotics as medicines. These drugs became widely used, and eventually abused, in Europe and the United States. Sigmund Freud’s personal research on cocaine helped to popularize the drug, and the invention of the hypodermic syringe led to injectable analgesic and anesthetic drugs, increasing the potential for abuse. Public concern led to increased governmental regulation, which in the United States first took the form of the Pure Food and Drug Act of 1906 and the Harrison Narcotic Act of 1914. Today, the cycle of invention, popularity, demonizing, and regulation proceeds apace. Cultural acceptance of the benefits that psychoactive drugs can bring coexists with condemnation of excess.

If there is a lesson here, it is that addiction exerts a seemingly fundamental and enduring appeal and power for human beings. Why is this so?

WHAT IS AN ADDICTION?

Many people have a rather archaic view of the nature of addiction. Their misconceptions and confusion tend to revolve around three issues: What is the difference between addiction and a bad habit? What happens in the brain of an addict? What is involved in healing the addicted brain and the addicted person? 

People often claim to be addicted to chocolate, coffee, football, or some other substance or behavior that brings pleasure. This is not likely. Addiction is an overwhelming compulsion, rooted in the alteration of brain circuits that normally regulate our ability to guide our actions to achieve goals. It overrides our ordinary, unaffected judgment. Addiction leads to the continued use of a substance or continuation of a behavior despite extremely negative consequences. An addict will choose the drug or behavior over family, the normal activities of life, employment, and at times even basic survival. When we call our love of chocolate or football an addiction, we are speaking loosely and trivializing the intensity of what can be a devastating disorder. It may help to consider, first, what is not an addiction.

No matter how much you like some drug or activity and how much you choose to involve yourself with it, you are not addicted if you can stop it when the consequences become negative for you. Coffee is an ideal example with which to illustrate this because it contains a powerful drug, caffeine, that can have significant effects on our behavior. Most of us like to drink coffee, but if your doctor told you that the heart attack you just had was precipitated by caffeine and that you would likely have another if you did not stop drinking coffee, what would you do? Most people would miss the buzz, but not so much that they would continue to drink coffee, knowing it would likely kill them. They would stop cold, right then and there. 

Yet people say they are addicted to coffee because they feel bad when they do not use it. This reflects common confusion about two important biological processes: tolerance and withdrawal. Most people, when they abruptly stop drinking coffee, begin to suffer some negative effects within about 24 hours: a nagging headache and general feelings of sleepiness and lethargy. Their experience, however, does not signify addiction. They are suffering from the processes of tolerance and withdrawal. Tolerance occurs when the brain reacts to repeated drug exposure by adapting its own chemistry to offset the effect of the drug—it adjusts itself to tolerate the drug. For example, if the drug inhibits or blocks the activity of a particular brain receptor for a neurotransmitter, the brain will attempt to counteract that inhibition by making more of that particular receptor or by increasing the effectiveness of the receptors that remain. On the other hand, if a drug enhances the activity of a receptor, the brain may make less of the receptor, thus adapting to its over-stimulation. Both conditions represent the process of tolerance, and, in either case, withdrawing the drug quickly leaves the brain with an imbalance because the brain is now dependent on the drug. This is true not only for addictive drugs; many neuroactive drugs from caffeine to antidepressants to sedatives (and even non-neuroactive drugs) cause the adaptation we call tolerance. 

In the case of coffee, the caffeine inhibits the receptors for the neurotransmitter adenosine. When we regularly use caffeine, the brain senses that its adenosine receptors are not working up to par, and it responds by increasing their function, which affects brain cells, blood vessels, and other tissues. Two major functions of adenosine in the brain are to regulate blood flow to the brain and to inhibit the neuronal circuits that control alertness. When the coffee drinker stops his intake of caffeine, he goes into withdrawal, as the receptors for adenosine become less inhibited. With more adenosine receptors functioning, his brain experiences abnormal levels of blood flow in the arteries around it, and he gets a headache. At the same time, the brain centers that keep him alert are suppressed by the excess functioning of adenosine, so he feels sleepy and lethargic. 
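
The logic of this adaptation can be made concrete with a toy calculation. The Python sketch below is only an illustration of the homeostatic idea described above, not a physiological model: the set point, the adaptation rate, the fraction of signaling blocked by the drug, and the collapsing of “receptor function” into a single number are all assumptions made for the example.

# Toy model of tolerance and withdrawal: a drug blocks part of receptor
# signaling, the brain slowly adjusts receptor levels toward a set point,
# and stopping the drug abruptly leaves signaling overshooting that set point.

SET_POINT = 1.0    # signaling level the brain tries to maintain (assumed)
ADAPT_RATE = 0.05  # speed of receptor up/down-regulation (assumed)
BLOCK = 0.4        # fraction of signaling blocked by the drug (assumed)

receptors = 1.0
for day in range(1, 61):
    on_drug = day <= 40                              # daily use, then abrupt quitting
    signal = receptors * (1.0 - (BLOCK if on_drug else 0.0))
    receptors += ADAPT_RATE * (SET_POINT - signal)   # homeostatic adjustment
    if day in (1, 40, 41, 60):
        state = "on drug  " if on_drug else "withdrawn"
        print(f"day {day:2d} ({state}): receptors={receptors:.2f} signal={signal:.2f}")

Run for sixty simulated days, the numbers trace the pattern described in the text: signaling is blunted when the drug first appears, the brain compensates by raising receptor levels until signaling looks normal again, and abrupt cessation leaves signaling overshooting the set point – the withdrawal phase – until the adaptation reverses.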

Now the former coffee drinker is in caffeine withdrawal, feeling miserable, and wanting a cup of coffee because he is sleepy and has a headache. Is he addicted? No, he is tolerant to the caffeine because his brain chemistry has adapted to it and its proper function is dependent on its presence. This will quickly pass, because caffeine withdrawal symptoms usually disappear after a few days, and, unless he is a very unusual person, he will be able to stop using caffeine and hope to avoid another heart attack. His craving is not overwhelming; for example, it does not override his decision to protect himself from another heart attack. 

The relationship between withdrawal and addiction may confuse people because most genuine addicts do experience withdrawal of some sort when they quit, and most scientists think that avoiding withdrawal is one reason addicts keep using the substance to which they are addicted. Alcohol is a good example of how tolerance and withdrawal contribute to addiction. If a person drinks heavily for a long time, his brain will adapt to the sedative effects of the alcohol. The compensation that happens is like the caffeine example above, only with a different neurotransmitter. Alcohol activates receptors in the brain for the neurotransmitter GABA, which normally inhibits brain activity. After long-term alcohol exposure (weeks to months or years), the brain compensates by diminishing the ability of these receptors to function.  The alcoholic is now tolerant to the alcohol, just as the coffee drinker was tolerant to caffeine. 

If the alcoholic abruptly stops drinking, the neuronal circuits in the brain will suffer from excess excitation, because the opposing inhibitory functions have been diminished. The consequences of acute alcohol withdrawal can be lethal, because the hyperexcitability of the brain can cause epileptic seizures as well as instability of blood pressure and heart functions. Fortunately, however, other sedative drugs can be substituted for the alcohol to keep the brain stable, and withdrawal can proceed over a few days. 

Many addictive drugs like alcohol produce tolerance, and addicts experience withdrawal when they try to stop using them. This withdrawal can range from mild or at most moderate discomfort for a drug like marijuana, to extreme discomfort from opiates, to lethal brain instability from sedative agents like alcohol, barbiturates, and benzodiazepines (such as Valium and Ativan). Still, the key point remains: withdrawal discomfort ends in a matter of days to weeks as the brain chemistry normalizes, and this discomfort alone does not signify addiction. 

Are habits addictions? This is a tough question, because such habits range from the mild and innocuous—such as twirling your hair when you are thinking about something—to the dangerous, for example, overeating and gambling. Mild habits can be difficult to stop, but if we can stop when we must, we are not addicted. More dangerous habits or compulsions may be different. In fact, as we discuss later, modern neurobiology suggests that there are some strong similarities between drug addictions and compulsive habits.

THE ADDICTED BRAIN

Scientists now think that the brain changes associated with genuine addiction long outlast the withdrawal phase for any drug. Addiction is characterized by profound craving for a drug (or behavior) that so dominates the life of an addict that virtually nothing can stop the person from engaging in the addictive activity. Addicts will give up anything and everything in their lives for the object of their addiction. They will lose all their money for cocaine, give up loved ones to feed their craving for alcohol, and sometimes give up their lives. The perplexing questions for neuroscientists who study addiction are how the brain learns to crave something so fiercely and how to reverse that craving. 

With new imaging techniques, we can watch the brain function in real time, and we now know that addictive drugs cause the activation of a specific set of neural circuits, called the brain reward system. This system controls much of our motivated behavior, yet few people know much about it. The reward system motivates us to behave in ways, such as eating and having sex, that tend to help us survive as individuals and as a species. It organizes the behaviors that are life-sustaining, provides some of the tools necessary to take the desired actions, and then rewards us with pleasure when we do. Research shows that almost any normal activity we find pleasurable—from hearing great music to seeing a beautiful face—can activate the reward system. When this happens, not only are we stimulated, but these circuits enable our brains to encode and remember the circumstances that led to the pleasure, so that we can repeat the behavior and go back to the reward in the future.

A critical component of this system is the chemical dopamine, which is released from neurons in the reward system circuits and functions as a neurotransmitter. Through a combination of biochemical, electrophysiological, and imaging experiments, scientists have learned that all addictive drugs increase the release of dopamine in the brain. Some increase dopamine much more than any natural stimulus does.

Let us imagine a simplified scenario that illustrates the power of a functioning reward system and our understanding of the role of dopamine. You are at a cocktail party, talking with friends. From time to time, you glance about the room to see who is coming and going, and then you notice an extremely attractive person has entered the room. That person now has your attention. The person is attractive enough that you begin to focus on him or her, and pay less attention to the ongoing conversations among your friends. 

At this point, you have experienced two effects of activating the reward system: attention and focus on the potential reward. Attention is the first of the reward system’s tools, giving you the ability to recognize a potentially rewarding possibility, be it your grandmother’s chocolate cake or this beautiful person. Next, you focus on the person, tending to ignore other aspects of your environment. The dopamine system is active at this point because it is part of the brain circuits that mediate attention; it helps us ignore peripheral stimuli and focus more on whatever we perceive as our task. Finally, in this first stage, perhaps you feel a little rush as the person indicates a mutual interest.

Now things get interesting, as your reward system tells you that there is a possibility of a significant rewarding interaction with this person. This is the point where our understanding of what dopamine does has become more sophisticated in the last 10 years. We once held the simplistic view of dopamine as the “pleasure chemical”; when you did something that felt good, the increase in dopamine was the reason. Experimental psychologists now make clear distinctions between “wanting” something and “liking” something, and dopamine seems to be important for the “wanting” but not necessary for the “liking.” This distinction seems to hold in every species in which it has been tested, from rodents to man. “Wanting” turns a set of neutral sensory stimuli (a face, a scent) into a stimulus that is relevant, or has “incentive salience.” In other words, in our scenario above, activation of dopamine neurons helps signal that the person who enters the room is somebody interesting.   

Studies of animals receiving sweet food treats or having sex usually show that dopamine activity increases not as a result of getting the reward, but in anticipation of the reward. Sophisticated mathematical models of this neuronal activity have led some of these scientists to view the dopamine system as an “error detector” that determines whether things are going as predicted. So if a monkey (or rat or person) is anticipating an expected reward (a kind glance from the person in the above scenario, perhaps), dopamine neurons fire in anticipation of it, and shut down their firing if it is not forthcoming.
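
One standard way of formalizing this “error detector” idea is as a reward prediction error, the quantity at the heart of temporal-difference learning models of dopamine. The short Python sketch below is a minimal illustration under assumed numbers (a single cue, a learning rate of 0.2, a reward of 1.0); it is not a description of the experiments mentioned above.

# Minimal reward-prediction-error sketch (assumed numbers, single cue).
ALPHA = 0.2   # learning rate (assumed)
value = 0.0   # learned prediction of the reward that follows the cue

def trial(reward):
    """One cue-reward trial; returns the prediction error (the dopamine-like signal)."""
    global value
    error = reward - value   # error = reward received minus reward predicted
    value += ALPHA * error   # nudge the prediction toward what actually happened
    return error

for _ in range(10):          # early trials: the reward is surprising, the error is large
    trial(reward=1.0)

print(f"learned prediction:        {value:+.2f}")
print(f"expected reward delivered: error = {trial(1.0):+.2f}")  # close to zero
print(f"expected reward omitted:   error = {trial(0.0):+.2f}")  # negative dip

The point of the sketch is the sign of the error: a fully predicted reward produces almost no signal, while omitting a predicted reward produces a negative one. Capturing the way the response migrates to the cue itself requires the full temporal-difference version of the model, which tracks predictions moment by moment rather than trial by trial.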

The reward system also has the ability to encode cues to help you repeat the experience. You will remember the room where you met this person, the clothes, the food being served, the odor of cologne or perfume, a spoken phrase, and much more. Assuming that things go well, the next time you encounter one of these cues, you will not only remember the encounter, but feel a little craving to repeat it. When a person experiences a positive, pleasurable outcome from an action or event, the release of dopamine and other chemicals alters the brain circuitry, providing tools and encouragement to repeat the event. The memory circuitry stores cues to the rewarding stimulus, so previously neutral cues (a perfume, a line of white powder) become salient. Our brains map the environment in which we experience the rewarding activity by recording the physical space, the people involved, the smells—in fact, all of the sensory experience. In addicts, cues that normally would have no particular importance to survival or pleasure—such as a line of white powder, a cigarette, or a bottle of brown liquid—activate this same reward system. 

But cues alone are not enough; action is necessary to get a reward. The brain’s reward system is organized to engage the areas of the brain that control our ability to take action. The executive area of the brain, located in the prefrontal cortex, enables us to plan and execute complex activities, as well as control our impulses. Humans have a much larger prefrontal cortex, and so a greater capacity for planning and executing complex activities, than other animals do, even our nearest primate relatives. When we experience a rewarding event, the executive center of the brain is engaged. It remembers the actions used to achieve the reward and creates the capacity to repeat the experience. Thus, not only does a pleasurable experience result in pleasant memories, but the executive center of the brain also provides the motivation, rationalization, and activation of other brain areas necessary to have the experience again. And each time the experience is repeated, all of these brain changes—memories and executive-function tasks—become stronger and more ingrained. These planning centers are an important target of dopamine action.

THE HIJACKED BRAIN

Everything we know about addictive drugs suggests that they work through precisely these mechanisms. All addictive drugs activate the reward system by directly raising the levels of dopamine. Although each addictive drug also has its own unique effects, which is why alcohol feels different from cocaine or heroin, stimulation of the dopamine component of the reward system seems to be a common denominator. When addictive drugs enter the brain they artificially simulate a highly rewarding environment. The feelings provided by the drugs activate the “wanting” system just the way a cute person or tasty food would, and the dopamine released influences memory and executive function circuitry to encourage the person to repeat the experience. With every use, the enabling circuits become stronger and more compelling, creating an addiction. Recent imaging studies of the brains of addicts while they were anticipating a fix show that the planning and executive function areas of the prefrontal cortex become highly activated as the addicts plan for the upcoming drug reward. 

As an interesting aside: a new area of study is the mathematical modeling of reward by economists. It should not be a surprise that the mathematical models that predict our consumption of cars, food, and perfume could be applied to more basic reward phenomena, and a group of mathematicians has shown that these models predict a wide range of normal human behaviors. The field of “behavioral economics” has become one of the most exciting frontiers of neuroscience research. Some scientists have proposed that addiction hijacks the normal reward circuitry and so disrupts the normally quantifiable relationships between reward and behavior.
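
As one concrete example of the kind of model meant here (my choice of illustration; the article does not name a specific model), behavioral economists often describe delayed rewards as being discounted hyperbolically, and unusually steep discounting has been reported in addicted individuals. A minimal Python sketch, with assumed numbers:

# Hyperbolic discounting: the present value of an amount falls off as 1/(1 + k*delay).
def present_value(amount, delay_days, k):
    return amount / (1.0 + k * delay_days)

STEEP, SHALLOW = 0.5, 0.01   # discount rates, assumed for illustration

# Steep discounting: a small reward now outweighs a much larger reward in a month.
print(present_value(10, 0, STEEP), present_value(100, 30, STEEP))       # 10.0 vs 6.25
# Shallow discounting: the preference reverses.
print(present_value(10, 0, SHALLOW), present_value(100, 30, SHALLOW))   # 10.0 vs ~76.9

The numbers matter only for the general point: choice rules like this make the relationship between reward and behavior quantifiable, which is what makes deviations from it in addiction measurable in the first place.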

The level of addiction to a drug can vary immensely, depending on the characteristics of the particular drug. If a person uses a drug such as cocaine or amphetamine, which produce a profound dopamine release, the addict’s reward system experiences surges of activation. With repeated use, the circuitry adapts (perhaps becomes tolerant) to dopamine, and normal pleasures, such as sex, become less pleasurable compared with the drug. 

In alcoholics, neuroimaging lets us actually see decreases in the brain’s receptors for dopamine. Since it is hard to study human addicts before their addiction, we have a bit of a “chicken and egg” problem with this finding; we do not know which came first, the low receptor levels or the addiction. We do know from a recent rat study that raising the level of dopamine receptors by a sophisticated molecular strategy (transfection with a virus) caused rats to decrease their alcohol intake.

Some addictive drugs, such as nicotine, might seem rather innocuous, because they do not produce a profound “buzz” or euphoria. How, then, can nicotine be as addictive as it is? We know that nicotine is a reliable dopamine-releasing agent, although the amount of dopamine released is small with each use. People smoke or chew quite frequently, however, giving the brain a large number of exposures to the drug and allowing the reward system to modify the brain to crave the drug and take action to get it. The powerfully addictive effects of nicotine demonstrate that the conscious “liking” of the drug experience is not the most important effect of addictive drugs. Most smokers describe nicotine as relaxing, or anxiety reducing, but not as particularly pleasurable.

This dissociation between liking the drug experience and taking drugs is described by most addicts. Many addicts will say that their initial experiences with addictive drugs were the best they ever had, and that they have spent the remainder of their addiction seeking out a similar high. Addicts do report that when they stop, they go through a period when they are unable to experience pleasure from normally pleasurable activities; this is called “anhedonia.” But the result of the addiction is more than simply missing pleasure, as bad as that is. In an established addiction, the brain’s executive centers have become programmed to take all action necessary to acquire the drug. The person begins to crave the drug and feels compelled to take whatever action—spend money, rob a mini-market, steal from his parents—is necessary to get the drug and the high levels of dopamine that come with it. After a while, seeking out the drug can become an automatic behavior that the addict does not even enjoy.

And yet, the reasons that addicts keep using drugs are more complicated than activation of the reward system by dopamine. We think that long-lasting changes in the production of certain brain molecules are at work. Until recently, researchers patiently focused on single molecules, one at a time, to evaluate their potential role in addiction. Using this approach, we learned how to identify molecules that changed as an addiction developed and remained altered for a long time after drug use stopped, in concert with the long-lasting cravings that people experienced. Some of the molecules identified, such as the dopamine receptors, were expected, but others were not. For example, growth factors that produce long-lasting structural changes in the brain may also contribute to the changes in brain function associated with addiction. 

Scientists now know that the best way to produce long-lasting changes in the brain is to regulate the production of proteins by activating or silencing their genes. With the new ability to track changes simultaneously in thousands of brain molecules, we have started looking for patterns of change in genes. Some of the single molecules targeted earlier, such as the proteins CREB and delta FosB, themselves coordinate the production of families of genes. Furthermore, these families change over different time frames. CREB is important during the early phases of cocaine use, but becomes much less important once addiction is long established. The fos family of proteins is the opposite: many more are changed after long-term exposure to cocaine.

These changes do not go away quickly. The biological memories of the drug can be as profound and long-lasting as any other kinds of memories, and cues can activate the executive system to initiate drug-seeking years after the most recent previous exposure. So addiction is far more than seeking pleasure by choice. Nor is it just the unwillingness to avoid withdrawal symptoms. It is a hijacking of the brain circuitry that controls behavior, so that the addict’s behavior is fully directed to drug seeking and use. With repeated drug use, the reward system of the brain becomes subservient to the need for the drug. Brain changes have occurred that will probably influence the addict for life, regardless of whether or not he continues to use the drug. 

Now back to a question we posed earlier: “How are dangerous habits related to addiction?” Researchers are discovering that behaviors such as promiscuous sex, gambling, and overeating have some commonality with drug addiction, and you can probably imagine why. Nature did not create the brain reward circuit to help us get high on cocaine; this system evolved to help us eat and reproduce, behaviors that are complex but necessary to life. Recent brain imaging studies show that some of the changes that happen in the drug-addicted brain—for example, a decrease in the receptors for the neurochemical dopamine—are also seen in the brains of extreme overeaters. Other researchers are exploring this phenomenon in connection with other types of behaviors. 

TREATING THE ADDICTED BRAIN AND ADDICTED PERSON

How can we help control or reverse addictions? We do not yet have tools to erase the long-lasting brain changes that underlie addiction. The best pharmacological tools that we have now use a simple but effective strategy: an alternative drug is used to stimulate the brain at a low and steady level. This can fend off withdrawal, while providing a mild, almost subliminal, stimulation to the reward system, allowing the brain circuitry to readapt over time from the intense stimulation of daily use of addictive drugs to the very slight stimulation of steady, low levels of the medication. As the brain adapts back toward normality, an addict may gradually decrease the substitute drug until he becomes drug free. The narcotic drugs methadone and buprenorphine are safe and effective examples of this approach. A recently approved drug called acamprosate uses a similar strategy to treat alcoholism by providing a very mild sedative action that resembles alcohol. Is this just a chemical “crutch” that maintains the same brain changes caused by addiction? Perhaps, but by providing a minimal action it allows considerable normalization of brain function. Furthermore, these drugs allow people to reconnect with their families, hold jobs, and be productive members of society.

Why not use a drug that blocks the effects of all addictive drugs, an abstinence-based approach that appeals to some people? The problem with such a drug is that it would also prevent all the normal rewards through which people need to find satisfaction in living. If you invented the perfect reward-blocking drug, nobody would take it at the cost of losing the pleasures of life. Another approach may be found in a new drug called rimonabant, which blocks the actions of the cannabinoid receptor, the brain receptor that the active ingredients in marijuana act upon. There is a tremendous amount of excitement right now about this new drug, and successful trials in weight reduction and smoking cessation have raised hopes that it might prevent certain addictive behaviors and also block the effects of alcohol and narcotics. Recent experiments with “knockout” mice that lack the cannabinoid receptor show that these animals do not drink alcohol, and they will not self-administer narcotics. This is consistent with older studies that have hinted that there is some common thread in the addiction pathway for these three drugs. What is philosophically more appealing about rimonabant is that the effects of drugs are prevented, not mimicked. Time will tell, but its effectiveness against several problems suggests that neuropharmacologists are on the right track.

There will never be a simple pill to regulate such a complicated disease as addiction. The most important contribution that anyone dealing with addicted individuals can make is to recognize that reversing addiction is not just a matter of giving up something pleasurable but of accepting that addicted individuals have undergone a formidable reorganization of their brains. Treating an addict requires dealing with every aspect of this reorganization. 

Acute withdrawal is the first problem that any addict faces after he stops using, and this process plays an important role in maintaining drug-taking behavior. The withdrawal can be a day or two, or many days, even weeks, depending on the particular drug, how long the addict has been using, and how much he has been taking. We must recognize that the executive system in the brain of an addict is programmed to initiate drug seeking in response to cues, so it is critical to help the addict avoid those cues. This usually means removing the addict from the environment where he became addicted. The addict will also have to relearn impulse control; his executive system will have to be retrained to inhibit the impulses toward drug use as they occur. 

Finally, we should recognize that addiction is one of the most powerful memories we can have. These memories are embedded in the brain; we do not forget an addiction any more easily than we forget our first love. People often receive drug treatment more than once, and still relapse. Relapses are unfortunately common in treating addiction, but the same thing happens in treating cancer, and we still keep trying for a cure. We must take the same attitude toward addictive diseases and offer extensive as well as intensive treatment. But, most of all, we must offer understanding, which comes from knowing that addiction lies at the very core of our brains.

Cynthia M. Kuhn and Wilkie A. Wilson, Dana Foundation

Faith & Irrationality

I tend to go to some religious gatherings for a number of social reasons. At such gatherings, sermons and discourses, I have often looked at the twinkle in the eyes of the adherents and wondered how people can be so deluded, or how reasoning can be so flawed. I have studied the egotistical and selfish nature of the preachers, which takes many forms – from the overt ‘I know what you should think because I have thought it out for you’ to ‘I am your humble servant, and I think that these are the solutions to all your searches’. I have looked into the eyes of adherents who are trying to give meaning to their lives for a number of reasons – all primarily psychological. I have studied the reasons people convert from one religion to another (primarily from western to eastern religions), and the selfish reasons for doing so. Psychological escapism is nothing new, and converts will justify their reasons without clarity or candor. The burden of a disturbed mind manipulates irrationality into reason.

Increasingly, many of us want to rid the world of dogmatically held beliefs that are vapid, barbarous, anachronistic and wrong. Many religious people give us their take on how they and their system of beliefs would combat such beliefs – a take that is often scientifically baseless, psychologically uninformed, politically naïve, and counterproductive for the goals we share. We have stated, here on The European Rationalist, that silence in the face of dangerous lunacy, or even in the face of moderate unreasonableness, can be just as culpable as lying.

I was recently reading ‘In Gods We Trust: The Evolutionary Landscape of Religion’ by Scott Atran (anthropologist, University of Michigan) and an article at Edge (www.edge.org), “An Edge Discussion of BEYOND BELIEF: Science, Religion, Reason and Survival, Salk Institute, La Jolla, November 5-7, 2006”, and I find that most of the points I wanted to make as a critique have already been made there.

Most religious people are irrational, as most of us are in many situations in our lives, as when we fall in love or hope beyond reason. Of course, you could be uncompromisingly rational and try whispering in your honey’s ear: “Darling, you’re the best combination of secondary sexual characteristics and mental processing that my fitness calculator has come up with so far.” After you perform this pilot experiment and see how far you get, you may reconsider your approach. If you think that approach absurd to begin with, it is probably because you sincerely feel, and believe in, love.

Empirical research on the cognitive basis of religion over the last two decades has focused on a growing number of converging cross-cultural experiments on “domain-specific cognition” emanating from developmental psychology, cognitive psychology and anthropology. Such experiments indicate that virtually all (non-brain-damaged) human minds are endowed by evolution with core cognitive faculties for understanding the everyday world of readily perceptible substances and events. The core faculties are activated by stimuli that fall into a few intuitive knowledge domains, including folkmechanics (object boundaries and movements), folkbiology (biological species configurations and relationships), and folkpsychology (interactive agents and goal-directed behavior). Sometimes the operation of the structural principles that govern the ordinary and “automatic” cognitive construction of these core domains is pointedly interrupted or violated, as in poetry and religion. In these instances, counterintuitions result that form the basis for the construction of special sorts of counterfactual worlds, including the supernatural: for example, a world that includes self-propelled, perceiving or thinking mineral substances (e.g., the Maya sastun, the crystal ball, the Arab tilsam [talisman]) or beings that can pass through solid objects (angels, ghosts, ancestral spirits).

Religious beliefs are counterintuitive, then, because they violate innate and universal expectations about the world’s everyday structure, including such basic categories of “intuitive ontology” (i.e., the ordinary ontology of the everyday world that is built into any language learner’s semantic system) as person, animal, plant and substance. They are generally inconsistent with fact-based knowledge, though not randomly so. As Dan Sperber and Scott Atran pointed out a quarter of a century ago, beliefs about invisible creatures who transform themselves at will, or who perceive events that are distant in time or space, flatly contradict factual assumptions about physical, biological and psychological phenomena. Consequently, these beliefs are more likely to be retained and transmitted in a population than random departures from common sense, and thus become part of the group’s culture. Insofar as category violations shake basic notions of ontology, they are attention-arresting, and hence memorable.
 
But only if the resultant impossible worlds remain bridged to the everyday world can information be readily stored, evoked and transmitted. For example, you don’t have to learn in Bible class that God could pick up a basketball if you’ve already been taught that He can topple a chariot. And you don’t have to be told that God can become angry if you worship other gods or do things He doesn’t like once you’ve already learned that He’s a jealous God. This is because such further pieces of knowledge are “automatically” inferable from our everyday commonsense understanding of folkphysics and folkbiology (e.g., the relative effort and strength required to displace different-sized objects) and folkpsychology (e.g., how emotions are related to one another and to beliefs). Miracles usually involve a single ontological violation, like a talking bush or a horse riding into the sky, but leave the rest of the everyday commonsense world entirely intact. Experiments show that if ideas are too bizarre, like a talking tea kettle that has leaves and roots like a tree, then they are not likely to be retained in memory over the long run.

Religious worlds with supernaturals who manage our existential anxieties – such as sudden catastrophe, loneliness, injustice and misery – are minimally counterintuitive worlds. An experimental setup for this idea is to consider a 3 x 4 matrix of core domains (folkphysics, folkbiology, folkpsychology) by ontological categories (person, animal, plant, substance). By changing one and only one intuitive relationship among the 12 cells, you then generate what Pascal Boyer calls a “minimal counterintuition.” For example, switching the cell (−folkpsychology, substance) to (+folkpsychology, substance) yields a thinking talisman, whereas switching (+folkpsychology, person) to (−folkpsychology, person) yields an unthinking zombie. But changing two or more cells simultaneously usually leads only to confusion. Our experiments show that minimally counterintuitive beliefs are optimal for retaining stories in human memory (the main results have been replicated by teams of independent researchers; see, for example, articles in the most recent issue of the Journal of Cognition and Culture).
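
For readers who want the design spelled out, the short Python sketch below enumerates that 3 x 4 space. The baseline expectations and the labels are my own gloss on the description above, not the experimenters’ materials; the point is simply that flipping exactly one cell yields a candidate “minimal counterintuition.”

# Enumerate "minimal counterintuitions": flip exactly one cell of the
# domain-by-category matrix of ordinary intuitive expectations.
from itertools import product

DOMAINS = ["folkphysics", "folkbiology", "folkpsychology"]
CATEGORIES = ["person", "animal", "plant", "substance"]

# Ordinary intuition (my gloss): which domains apply to which categories.
BASELINE = {(d, c): True for d, c in product(DOMAINS, CATEGORIES)}
for c in CATEGORIES:
    BASELINE[("folkbiology", c)] = c in ("person", "animal", "plant")
    BASELINE[("folkpsychology", c)] = c in ("person", "animal")

for domain, category in product(DOMAINS, CATEGORIES):
    flipped_sign = "-" if BASELINE[(domain, category)] else "+"
    print(f"({flipped_sign}{domain}, {category})")
# e.g. "(+folkpsychology, substance)" is the thinking talisman,
#      "(-folkpsychology, person)" the unthinking zombie.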

In sum, the conceptual foundations of religion are intuitively given by task-specific, panhuman cognitive domains, including folkmechanics, folkbiology and folkpsychology. Core religious beliefs minimally violate ordinary ontological intuitions about how the world is, with its inescapable problems. This enables people to imagine minimally impossible supernatural worlds that solve existential problems that have no rational solution, including avoiding death or deception. Because religious beliefs cannot be deductively or inductively validated, validation occurs only by ritually addressing the very emotions motivating religion, usually through chant and music, dance and sway, prostration and prayer – all somewhat derivative of primate expressions of social bonding and submission. Cross-cultural experimental evidence encourages these claims.

Are Religions composed of Memes?

Memes are supposed to be cultural artifacts — prototypically ideas — that invade and restructure minds to reproduce themselves (without necessarily benefiting host minds beyond their capacity to service memes) much as genes dispose of physical individuals to gain serial immortality. Derived from the Greek root mimeme, with allusions to memory and mime (and the French word même, “same”), a meme supposedly replicates from mind to mind in ways analogous to how genes replicate from body to body. There is little theoretical analysis or experimental study of memes, though this isn’t surprising because there is no consensual – or even coherent – notion of what a meme is or could be. Candidate memes include a word, sentence, belief, thought, melody, scientific theory, equation, philosophical puzzle, fashion, religious ritual, political ideology, agricultural practice, dance, poem, and recipe for a meal; or a set of instructions for origami, table manners, court etiquette, a car, building, computers, or cellphones.

For genes, there is an operational definition: DNA-encoded units of information that dependably survive reproductive division, that is, meiosis (although crossover can occur anywhere along a strand of DNA, whether at the divisions of functionally defined genes or within them). In genetic propagation, information is transmitted with an extremely high degree of fidelity. In cultural propagation, imitation is the exception, not the rule; the typical pattern is of recurrent, guided transformation. Modular and innate mental structures (like those responsible for folkphysics, folkbiology and folkpsychology) thus play a central role in stabilizing and directing the transmission of beliefs toward points of convergence, or cultural attractors.

Minds structure certain communicable aspects of the ideas produced, and these communicable aspects generally trigger or elicit ideas in other minds through inference (to relatively rich structures generated from often low-fidelity input) and not by high-fidelity replication or imitation. For example, if a mother shows a child an abstract cartoon drawing of an animal that the child has never seen or heard of, and says to her child the equivalent of “this platypus swims” in whatever human language, then any child whose linguistic faculty has matured enough to understand complete sentences, anywhere in the world, will almost immediately infer that mom is talking about: (a) something that belongs to the ontological category animal (because the lexical item “swims,” or its equivalent in another language, is cognitively processed under +animate, which is implicitly represented in every human’s semantic system), (b) this animal belongs to one and only one folk species (because an innately-determined and universal assumption of folkbiology is that animals divide into mutually exclusive folk species), and (c) the animal is probably aquatic (because part of the ordinary meaning of “swims” is moves through water).

Inference in the communication of many religious beliefs, however, is cognitively designed never to come to closure, but to remain open-textured. For example, in a set of classroom experiments, we asked students to write down the meanings of three of the Ten Commandments: (1) Thou Shalt Not Bow Down Before False Idols; (2) Remember the Sabbath; (3) Honor Thy Father and Thy Mother. Despite the students’ own expectations of consensus, interpretations of the commandments showed wide ranges of variation, with little evidence of consensus.

In a serial attempt at replication, a student in a closed room was given one of the Ten Commandments to paraphrase; afterwards the student would call in another student from the hallway and repeat the paraphrase; then the second student would paraphrase the paraphrase and call in a third student; and so on. After 10 iterations the whole set of ten paraphrases was presented to another group of students, who were asked to choose one phrase from a new list of phrases (including the original Ten Commandments) that “best describe the whole set of phrases before you.” Only “Thou shalt not kill” was reliably preferred as a descriptor of the set representing the chain of paraphrases initiated by a Commandment. (By contrast, control phrases such as “two plus two equals four” or “the grass is green” did replicate.)

A follow-up study explored whether members of the same church have some normative notion of the Ten Commandments, that is, some minimal stability of content that could serve for memetic selection. Twenty-three members of a Bible class at a local Pentecostal Church, including the church pastor, were asked to define the three Commandments above, as well as “Thou shalt not kill,” “The Golden Rule,” “Lamb of God,” and “Why did Jesus die?” Only the first two produced anything close to consensus. In prior questioning all subjects agreed that the meanings of the Ten Commandments were fixed and had not changed substantially since Biblical times (so much for intuition).

In another project, students compared interpretations of ideological and religious sayings (e.g., “Let a thousand flowers bloom,” “To everything there is a season”) among 26 control subjects and 32 autistic subjects from Michigan. Autistics were significantly more likely to closely paraphrase and repeat content from the original statement (e.g., “Don’t cut flowers before they bloom”). Controls were more likely to infer a wider range of cultural meanings with little replicated content (e.g., “Go with the flow,” “Everyone should have equal opportunity”) – a finding consistent with previous results from East Asians (who were familiar with “Let a thousand flowers bloom” as Mao’s credo). Only the autistic subjects, who lack the inferential capacity normally associated with aspects of folkpsychology, came close to being “meme machines.” They may be excellent replicators of literal meaning, but they are poor transmitters of cultural meaning.

With some exceptions, ideas do not reproduce or replicate in minds in the same way that genes replicate in DNA. They do not generally spread from mind to mind by imitation. It is biologically prepared, culturally enhanced, richly structured minds that generate and transform recurrent convergent ideas from often fragmentary and highly variable input. Core religious ideas serve as conceptual signposts that help to socially coordinate other beliefs and behaviors in given contexts. Although they have no more fixed or stable propositional content than poetic metaphors do, they are not processed figuratively in the sense of an optional and endless search for meaning. Rather, they are thought to be right, whatever they may mean, and to require those who share such beliefs to commune and converge on an appropriate interpretation for the context at hand. To claim that one knows what Judaism or Christianity is truly about because one has read the Bible, or what Islam is about because one has read the Qur’an and the Hadith, is to believe that there is an essence to religion and religious beliefs. But science (and the history of exegesis) demonstrates that this claim is false.

Humankind does not naturally divide into competing camps of reason and tolerance, on one side, and religion and intolerance, on the other. It is true that “scientists spend an extraordinary amount of time worrying about being wrong and take great pains to prove others so.” The best of our scientists make even greater efforts to prove themselves wrong. But it is historical nonsense to say that “pretending to know things you do not know… is the sine qua non of faith-based religion,” that doubt and attempts to “minimize the public effects of personal bias and self-deception” are alien to religion, or that religion, but not scientific reason, allows “thuggish lunacy.”

Is Augustine’s doubt really on a different plane than Descartes’? Are Gandhi’s and Martin Luther King’s religious appeals to faith and hope in the face of overwhelming material adversity truly beside the point? Did not the narrow focus of science on the evidence and argument of the task at hand allow the production of tens of thousands of nuclear weapons, and are not teams of very able and dedicated scientists today directly involved in constructing plausible scenarios for apocalyptic lunacy? Were not Nazi apologists Martin Heidegger and Werner Heisenberg among Germany’s preeminent men of reason and science (who used their reason and critical thought to apologize for Nazism)? Did not Bertrand Russell, almost everyone’s Hero of Reason (including mine), argue on the basis of clear and concise thought, and with full understanding and acknowledgement of opposing views and criticism, that the United States should nuke Soviet Russia before it got the bomb in order to save humankind from a worse evil? And Newton may have been the greatest genius that ever walked the face of the earth, as Neil deGrasse Tyson tells us, but if you read Newton’s letters at St. John’s College library in Cambridge, you’ll see he was one mean and petty son of a bitch.

The point is not that some scientists do bad things and some religious believers do good things. The issue is whether or not there are reliable data to support the claim that religion engages more people who do bad than good, whereas science engages more people who do good than bad. One study might compare, say, standards of reason or tolerance or compassion among British scientists versus British clergy. My own intuition says it is a wash, but even I wouldn’t trust my own intuitions, and neither should you.