Human Rights and Sentimentality

In a report from Bosnia some fifteen years ago,1 David Rieff said “To the Serbs, the Muslims are no longer human… Muslim prisoners, lying on the ground in rows, awaiting interrogation, were driven over by a Serb guard in a small delivery van”. This theme of dehumanization recurs when Rieff says

A Muslim man in Bosanski Petrovac… [was] forced to bite off the penis of a fellow-Muslim… If you say that a man is not human, but the man looks like you and the only way to identify this devil is to make him drop his trousers – Muslim men are circumcised and Serb men are not – it is probably only a short step, psychologically, to cutting off his prick… There has never been a campaign of ethnic cleansing from which sexual sadism has gone missing.

The moral to be drawn from Rieff’s stories is that Serbian murderers and rapists do not think of themselves as violating human rights. For they are not doing these things to fellow human beings, but to Muslims. They are not being inhuman, but rather are discriminating between the true humans and the pseudohumans. They are making the same sort of distinction as the Crusaders made between humans and infidel dogs, and the Black Muslims make between humans and blue-eyed devils. The founder of my university was able both to own slaves and to think it self-evident that all men were endowed by their creator with certain inalienable rights. He had convinced himself that the consciousness of Blacks, like that of animals, “participate[s] more of sensation than reflection”2. Like the Serbs, Mr. Jefferson did not think of himself as violating human rights.

The Serbs take themselves to be acting in the interests of true humanity by purifying the world of pseudohumanity. In this respect, their self-image resembles that of moral philosophers who hope to cleanse the world of prejudice and superstition. This cleansing will permit us to rise above our animality by becoming, for the first time, wholly rational and thus wholly human. The Serbs, the moralists, Jefferson, and the Black Muslims all use the term “men” to mean “people like us”. They think the line between humans and animals is not simply the line between featherless bipeds and all others. They think the line divides some featherless bipeds from others: There are animals walking about in humanoid form. We and those like us are paradigm cases of humanity, but those too different from us in behavior or custom are, at best, borderline cases. As Clifford Geertz puts it, “Men’s most importunate claims to humanity are cast in the accents of group pride”3.

We in the safe, rich, democracies feel about the Serbian torturers and rapists as they feel about their Muslim victims: They are more like animals than like us. But we are not doing anything to help the Muslim women who are being gang raped or the Muslim men who are being castrated, any more than we did anything in the thirties when the Nazis were amusing themselves by torturing Jews. Here in the safe countries we find ourselves saying things like “That’s how things have always been in the Balkans”, suggesting that, unlike us, those people are used to being raped and castrated. The contempt we always feel for losers – Jews in the thirties, Muslims now – combines with our disgust at the winners’ behavior to produce the semiconscious attitude: “a plague on both your houses”. We think of the Serbs or the Nazis as animals, because ravenous beasts of prey are animals. We think of the Muslims or the Jews being herded into concentration camps as animals, because cattle are animals. Neither sort of animal is very much like us, and there seems no point in human beings getting involved in quarrels between animals.

The human-animal distinction, however, is only one of the three main ways in which we paradigmatic humans distinguish ourselves from borderline cases. A second is by invoking the distinction between adults and children. Ignorant and superstitious people, we say, are like children; they will attain true humanity only if raised up by proper education. If they seem incapable of absorbing such education, that shows they are not really the same kind of being as we educable people are. Blacks, the whites in the United States and in South Africa used to say, are like children. That is why it is appropriate to address Black males, of whatever age, as “boy”. Women, men used to say, are permanently childlike; it is therefore appropriate to spend no money on their education, and to refuse them access to power.

When it comes to women, however, there are simpler ways of excluding them from true humanity: for example, using “man” as a synonym of “human being”. As feminists have pointed out, such usages reinforce the average male’s thankfulness that he was not born a woman, as well as his fear of the ultimate degradation: feminization. The extent of the latter fear is evidenced by the particular sort of sexual sadism Rieff describes. His point that such sadism is never absent from attempts to purify the species or cleanse the territory confirms Catharine MacKinnon’s claim that, for most men, being a woman does not count as a way of being human. Being a nonmale is the third main way of being nonhuman. There are several ways of being nonmale. One is to be born without a penis; another is to have one’s penis cut or bitten off; a third is to have been penetrated by a penis. Many men who have been raped are convinced that their manhood, and thus their humanity, has been taken away. Like racists who discover they have Jewish or Black ancestry, they may commit suicide out of sheer shame, shame at no longer being the kind of featherless biped that counts as human.

Philosophers have tried to clear this mess up by spelling out what all and only the featherless bipeds have in common, thereby explaining what is essential to being human. Plato argued that there is a big difference between us and the animals, a difference worthy of respect and cultivation. He thought that human beings have a special added ingredient which puts them in a different ontological category than the brutes. Respect for this ingredient provides a reason for people to be nice to each other. Anti-Platonists like Nietzsche reply that attempts to get people to stop murdering, raping, and castrating each other are, in the long run, doomed to fail – for the real truth about human nature is that we are a uniquely nasty and dangerous kind of animal. When contemporary admirers of Plato claim that all featherless bipeds – even the stupid and childlike, even the women, even the sodomized – have the same inalienable rights, admirers of Nietzsche reply that the very idea of “inalienable human rights” is, like the idea of a special added ingredient, a laughably feeble attempt by the weaker members of the species to fend off the stronger.

As I see it, one important intellectual advance made in our century is the steady decline in interest in the quarrel between Plato and Nietzsche. There is a growing willingness to neglect the question “What is our nature?” and to substitute the question “What can we make of ourselves?”. We are much less inclined than our ancestors were to take “theories of human nature” seriously, much less inclined to take ontology or history as a guide to life. We have come to see that the only lesson of either history or anthropology is our extraordinary malleability. We are coming to think of ourselves as the flexible, protean, self-shaping animal rather than as the rational animal or the cruel animal.

One of the shapes we have recently assumed is that of a human rights culture. I borrow the term “human rights culture” from the Argentinian jurist and philosopher Eduardo Rabossi. In an article called “Human Rights Naturalized”, Rabossi argues that philosophers should think of this culture as a new, welcome fact of the post-Holocaust world. They should stop trying to get behind or beneath this fact, stop trying to detect and defend its so-called “philosophical presuppositions”. On Rabossi’s view, philosophers like Alan Gewirth are wrong to argue that human rights cannot depend on historical facts. “My basic point”, Rabossi says, is that “the world has changed, that the human rights phenomenon renders human rights foundationalism outmoded and irrelevant”4.

Rabossi’s claim that human rights foundationalism is outmoded seems to me both true and important; it will be my principal topic in this lecture. I shall be enlarging on, and defending, Rabossi’s claim that the question whether human beings really have the rights enumerated in the Helsinki Declaration is not worth raising. In particular, I shall be defending the claim that nothing relevant to moral choice separates human beings from animals except historically contingent facts of the world, cultural facts.

This claim is sometimes called “cultural relativism” by those who indignantly reject it. One reason they reject it is that such relativism seems to them incompatible with the fact that our human rights culture, the culture with which we in this democracy identify ourselves, is morally superior to other cultures. I quite agree that ours is morally superior, but I do not think this superiority counts in favor of the existence of a universal human nature. It would only do so if we assumed that a moral claim is ill-founded if not backed up by knowledge of a distinctively human attribute. But it is not clear why “respect for human dignity” – our sense that the differences between Serb and Muslim, Christian and infidel, gay and straight, male and female should not matter – must presuppose the existence of any such attribute.

Traditionally, the name of the shared human attribute which supposedly “grounds” morality is “rationality”. Cultural relativism is associated with irrationalism because it denies the existence of morally relevant transcultural facts. To agree with Rabossi one must, indeed, be irrationalist in that sense. But one need not be irrationalist in the sense of ceasing to make one’s web of belief as coherent, and as perspicuously structured, as possible. Philosophers like myself, who think of rationality as simply the attempt at such coherence, agree with Rabossi that foundationalist projects are outmoded. We see our task as a matter of making our own culture – the human rights culture – more self-conscious and more powerful, rather than of demonstrating its superiority to other cultures by an appeal to something transcultural.

We think that the most philosophy can hope to do is summarize our culturally influenced intuitions about the right thing to do in various situations. The summary is effected by formulating a generalization from which these intuitions can be deduced, with the help of noncontroversial lemmas. That generalization is not supposed to ground our intuitions, but rather to summarize them. John Rawls’s “Difference Principle” and the U.S. Supreme Court’s construction, in recent decades, of a constitutional “right to privacy” are examples of this kind of summary. We see the formulation of such summarizing generalizations as increasing the predictability, and thus the power and efficiency, of our institutions, thereby heightening the sense of shared moral identity which brings us together in a moral community.

Foundationalist philosophers, such as Plato, Aquinas, and Kant, have hoped to provide independent support for such summarizing generalizations. They would like to infer these generalizations from further premises, premises capable of being known to be true independently of the truth of the moral intuitions which have been summarized. Such premises are supposed to justify our intuitions, by providing premises from which the content of those intuitions can be deduced. I shall lump all such premises together under the label “claims to knowledge about the nature of human beings”. In this broad sense, claims to know that our moral intuitions are recollections of the Form of the Good, or that we are the disobedient children of a loving God, or that human beings differ from other kinds of animals by having dignity rather than mere value, are all claims about human nature. So are such counterclaims as that human beings are merely vehicles for selfish genes, or merely eruptions of the will to power.

To claim such knowledge is to claim to know something which, though not itself a moral intuition, can correct moral intuitions. It is essential to this idea of moral knowledge that a whole community might come to know that most of their most salient intuitions about the right thing to do were wrong. But now suppose we ask: Is there this sort of knowledge? What kind of question is that? On the traditional view, it is a philosophical question, belonging to a branch of epistemology known as “metaethics”. But on the pragmatist view which I favor, it is a question of efficiency, of how best to grab hold of history – how best to bring about the utopia sketched by the Enlightenment. If the activities of those who attempt to achieve this sort of knowledge seem of little use in actualizing this utopia, that is a reason to think there is no such knowledge. If it seems that most of the work of changing moral intuitions is being done by manipulating our feelings rather than increasing our knowledge, that will be a reason to think that there is no knowledge of the sort which philosophers like Plato, Aquinas, and Kant hoped to acquire.

This pragmatist argument against the Platonist has the same form as an argument for cutting off payment to the priests who are performing purportedly war-winning sacrifices – an argument which says that all the real work of winning the war seems to be getting done by the generals and admirals, not to mention the foot soldiers. The argument does not say: Since there seem to be no gods, there is probably no need to support the priests. It says instead: Since there is apparently no need to support the priests, there probably are no gods. We pragmatists argue from the fact that the emergence of the human rights culture seems to owe nothing to increased moral knowledge, and everything to hearing sad and sentimental stories, to the conclusion that there is probably no knowledge of the sort Plato envisaged. We go on to argue: Since no useful work seems to be done by insisting on a purportedly ahistorical human nature, there probably is no such nature, or at least nothing in that nature that is relevant to our moral choices.

In short, my doubts about the effectiveness of appeals to moral knowledge are doubts about causal efficacy, not about epistemic status. My doubts have nothing to do with any of the theoretical questions discussed under the heading of “metaethics”, questions about the relation between facts and values, or between reason and passion, or between the cognitive and the noncognitive, or between descriptive statements and action-guiding statements. Nor do they have anything to do with questions about realism and antirealism. The difference between the moral realist and the moral antirealist seems to pragmatists to be a difference which makes no practical difference. Further, such metaethical questions presuppose the Platonic distinction between inquiry which aims at efficient problem-solving and inquiry which aims at a goal called “truth for its own sake”. That distinction collapses if one follows Dewey in thinking of all inquiry – in physics as well as in ethics – as practical problem-solving, or if one follows Peirce in seeing every belief as action-guiding5.

Even after the priests have been pensioned off, however, the memories of certain priests may still be cherished by the community – especially the memories of their prophecies. We remain profoundly grateful to philosophers like Plato and Kant, not because they discovered truths but because they prophesied cosmopolitan utopias – utopias most of whose details they may have got wrong, but utopias we might never have struggled to reach had we not heard their prophecies. As long as our ability to know, and in particular to discuss the question “What is man?” seemed the most important thing about us human beings, people like Plato and Kant accompanied utopian prophecies with claims to know something deep and important – something about the parts of the soul, or the transcendental status of the common moral consciousness. But this ability, and those questions, have, in the course of the last two hundred years, come to seem much less important. Rabossi summarizes this cultural sea change in his claim that human rights foundationalism is outmoded. In the remainder of this lecture, I shall take up the questions: Why has knowledge become much less important to our self-image than it was two hundred years ago? Why does the attempt to found culture on nature, and moral obligation on knowledge of transcultural universals, seem so much less important to us than it seemed in the Enlightenment? Why is there so little resonance, and so little point, in asking whether human beings in fact have the rights listed in the Helsinki Declaration? Why, in short, has moral philosophy become such an inconspicuous part of our culture?

A simple answer is that between Kant’s time and ours Darwin argued most of the intellectuals out of the view that human beings contain a special added ingredient. He convinced most of us that we were exceptionally talented animals, animals clever enough to take charge of our own future evolution. I think this answer is right as far as it goes, but it leads to a further question: Why did Darwin succeed, relatively speaking, so very easily? Why did he not cause the creative philosophical ferment caused by Galileo and Newton?

The revival by the New Science of the seventeenth century of a Democritean-Lucretian corpuscularian picture of nature scared Kant into inventing transcendental philosophy, inventing a brand-new kind of knowledge, which could demote the corpuscularian world picture to the status of “appearance”. Kant’s example encouraged the idea that the philosopher, as an expert on the nature and limits of knowledge, can serve as supreme cultural arbiter6. By the time of Darwin, however, this idea was already beginning to seem quaint. The historicism which dominated the intellectual world of the early nineteenth century had created an antiessentialist mood. So when Darwin came along, he fitted into the evolutionary niche which Herder and Hegel had begun to colonize. Intellectuals who populate this niche look to the future rather than to eternity. They prefer new ideas about how change can be effected to stable criteria for determining the desirability of change. They are the ones who think both Plato and Nietzsche outmoded.

The best explanation of both Darwin’s relatively easy triumph, and our own increasing willingness to substitute hope for knowledge, is that the nineteenth and twentieth centuries saw, among the Europeans and Americans, an extraordinary increase in wealth, literacy, and leisure. This increase made possible an unprecedented acceleration in the rate of moral progress. Such events as the French Revolution and the ending of the trans-Atlantic slave trade prompted nineteenth-century intellectuals in the rich democracies to say: It is enough for us to know that we live in an age in which human beings can make things much better for ourselves7. We do not need to dig behind this historical fact to nonhistorical facts about what we really are.

In the two centuries since the French Revolution, we have learned that human beings are far more malleable than Plato or Kant had dreamed. The more we are impressed by this malleability, the less interested we become in questions about our ahistorical nature. The more we see a chance to recreate ourselves, the more we read Darwin not as offering one more theory about what we really are but as providing reasons why we need not ask what we really are. Nowadays, to say that we are clever animals is not to say something philosophical and pessimistic but something political and hopeful, namely: If we can work together, we can make ourselves into whatever we are clever and courageous enough to imagine ourselves becoming. This sets aside Kant’s question “What is Man?” and substitutes the question “What sort of world can we prepare for our great-grandchildren?”.

The question “What is Man?” in the sense of “What is the deep ahistorical nature of human beings?” owed its popularity to the standard answer to that question: We are the rational animal, the one which can know as well as merely feel. The residual popularity of this answer accounts for the residual popularity of Kant’s astonishing claim that sentimentality has nothing to do with morality, that there is something distinctively and transculturally human called “the sense of moral obligation” which has nothing to do with love, friendship, trust, or social solidarity. As long as we believe that, people like Rabossi are going to have a tough time convincing us that human rights foundationalism is an outmoded project.

To overcome this idea of a sui generis sense of moral obligation, it would help to stop answering the question “What makes us different from the other animals?” by saying “We can know, and they can merely feel”. We should substitute “We can feel for each other to a much greater extent than they can”. This substitution would let us disentangle Christ’s suggestion that love matters more than knowledge from the neo-Platonic suggestion that knowledge of the truth will make us free. For as long as we think that there is an ahistorical power which makes for righteousness – a power called truth, or rationality – we shall not be able to put foundationalism behind us.

The best, and probably the only, argument for putting foundationalism behind us is the one I have already suggested: It would be more efficient to do so, because it would let us concentrate our energies on manipulating sentiments, on sentimental education. That sort of education sufficiently acquaints people of different kinds with one another so that they are less tempted to think of those different from themselves as only quasi-human. The goal of this manipulation of sentiment is to expand the reference of the terms “our kind of people” and “people like us”.

All I can do to supplement this argument from increased efficiency is to offer a suggestion about how Plato managed to convince us that knowledge of universal truths mattered as much as he thought it did. Plato thought that the philosopher’s task was to answer questions like “Why should I be moral? Why is it rational to be moral? Why is it in my interest to be moral? Why is it in the interest of human beings as such to be moral?”. He thought this because he believed the best way to deal with people like Thrasymachus and Callicles was to demonstrate to them that they had an interest of which they were unaware, an interest in being rational, in acquiring self-knowledge. Plato thereby saddled us with a distinction between the true and the false self. That distinction was, by the time of Kant, transmuted into a distinction between categorical, rigid, moral obligation and flexible, empirically determinable, self-interest. Contemporary moral philosophy is still lumbered with this opposition between self-interest and morality, an opposition which makes it hard to realize that my pride in being a part of the human rights culture is no more external to my self than my desire for financial success.

It would have been better if Plato had decided, as Aristotle was to decide, that there was nothing much to be done with people like Thrasymachus and Callicles, and that the problem was how to avoid having children who would be like Thrasymachus and Callicles. By insisting that he could reeducate people who had matured without acquiring appropriate moral sentiments by invoking a higher power than sentiment, the power of reason, Plato got moral philosophy off on the wrong foot. He led moral philosophers to concentrate on the rather rare figure of the psychopath, the person who has no concern for any human being other than himself. Moral philosophy has systematically neglected the much more common case: the person whose treatment of a rather narrow range of featherless bipeds is morally impeccable, but who remains indifferent to the suffering of those outside this range, the ones he or she thinks of as pseudohumans8.

Plato set things up so that moral philosophers think they have failed unless they convince the rational egotist that he should not be an egotist – convince him by telling him about his true, unfortunately neglected, self. But the rational egotist is not the problem. The problem is the gallant and honorable Serb who sees Muslims as circumcised dogs. It is the brave soldier and good comrade who loves and is loved by his mates, but who thinks of women as dangerous, malevolent whores and bitches.

Plato thought that the way to get people to be nicer to each other was to point out what they all had in common – rationality. But it does little good to point out, to the people I have just described, that many Muslims and women are good at mathematics or engineering or jurisprudence. Resentful young Nazi toughs were quite aware that many Jews were clever and learned, but this only added to the pleasure they took in beating them up. Nor does it do much good to get such people to read Kant, and agree that one should not treat rational agents simply as means. For everything turns on who counts as a fellow human being, as a rational agent in the only relevant sense – the sense in which rational agency is synonymous with membership in our moral community.

For most white people, until very recently, most Black people did not so count. For most Christians, up until the seventeenth century or so, most heathen did not so count. For the Nazis, Jews did not so count. For most males in countries in which the average annual income is under four thousand dollars, most females still do not so count. Whenever tribal and national rivalries become important, members of rival tribes and nations will not so count. Kant’s account of the respect due to rational agents tells you that you should extend the respect you feel for people like yourself to all featherless bipeds. This is an excellent suggestion, a good formula for secularizing the Christian doctrine of the brotherhood of man. But it has never been backed up by an argument based on neutral premises, and it never will be. Outside the circle of post-Enlightenment European culture, the circle of relatively safe and secure people who have been manipulating each other’s sentiments for two hundred years, most people are simply unable to understand why membership in a biological species is supposed to suffice for membership in a moral community. This is not because they are insufficiently rational. It is, typically, because they live in a world in which it would be just too risky – indeed, would often be insanely dangerous – to let one’s sense of moral community stretch beyond one’s family, clan, or tribe.

To get whites to be nicer to Blacks, males to females, Serbs to Muslims, or straights to gays, to help our species link up into what Rabossi calls a “planetary community” dominated by a culture of human rights, it is of no use whatever to say, with Kant: Notice that what you have in common, your humanity, is more important than these trivial differences. For the people we are trying to convince will rejoin that they notice nothing of the sort. Such people are morally offended by the suggestion that they should treat someone who is not kin as if he were a brother, or a nigger as if he were white, or a queer as if he were normal, or an infidel as if she were a believer. They are offended by the suggestion that they treat people whom they do not think of as human as if they were human. When utilitarians tell them that all pleasures and pains felt by members of our biological species are equally relevant to moral deliberation, or when Kantians tell them that the ability to engage in such deliberation is sufficient for membership in the moral community, they are incredulous. They rejoin that these philosophers seem oblivious to blatantly obvious moral distinctions, distinctions any decent person will draw.

This rejoinder is not just a rhetorical device, nor is it in any way irrational. It is heartfelt. The identity of these people, the people whom we should like to convince to join our Eurocentric human rights culture, is bound up with their sense of who they are not. Most people – especially people relatively untouched by the European Enlightenment – simply do not think of themselves as, first and foremost, a human being. Instead, they think of themselves as being a certain good sort of human being – a sort defined by explicit opposition to a particularly bad sort. It is crucial for their sense of who they are that they are not an infidel, not a queer, not a woman, not an untouchable. Just insofar as they are impoverished, and as their lives are perpetually at risk, they have little else than pride in not being what they are not to sustain their self-respect. Starting with the days when the term “human being” was synonymous with “member of our tribe”, we have always thought of human beings in terms of paradigm members of the species. We have contrasted us, the real humans, with rudimentary, or perverted, or deformed examples of humanity.

We Eurocentric intellectuals like to suggest that we, the paradigm humans, have overcome this primitive parochialism by using that paradigmatic human faculty, reason. So we say that failure to concur with us is due to “prejudice”. Our use of these terms in this way may make us nod in agreement when Colin McGinn tells us, in the introduction to his recent book9, that learning to tell right from wrong is not as hard as learning French. The only obstacles to agreeing with his moral views, McGinn explains, are “prejudice, vested interest and laziness”.

One can see what McGinn means: If, like many of us, you teach students who have been brought up in the shadow of the Holocaust, brought up believing that prejudice against racial or religious groups is a terrible thing, it is not very hard to convert them to standard liberal views about abortion, gay rights, and the like. You may even get them to stop eating animals. All you have to do is convince them that all the arguments on the other side appeal to “morally irrelevant” considerations. You do this by manipulating their sentiments in such a way that they imagine themselves in the shoes of the despised and oppressed. Such students are already so nice that they are eager to define their identity in nonexclusionary terms. The only people they have trouble being nice to are the ones they consider irrational – the religious fundamentalist, the smirking rapist, or the swaggering skinhead.

Producing generations of nice, tolerant, well-off, secure, other-respecting students of this sort in all parts of the world is just what is needed – indeed all that is needed – to achieve an Enlightenment utopia. The more youngsters like this we can raise, the stronger and more global our human rights culture will become. But it is not a good idea to encourage these students to label “irrational” the intolerant people they have trouble tolerating. For that Platonic-Kantian epithet suggests that, with only a little more effort, the good and rational part of these other people’s souls could have triumphed over the bad and irrational part. It suggests that we good people know something these bad people do not know, and that it is probably their own silly fault that they do not know it. All they have to do, after all, is to think a little harder, be a little more self-conscious, a little more rational.

But the bad people’s beliefs are not more or less “irrational” than the belief that race, religion, gender, and sexual preference are all morally irrelevant – that these are all trumped by membership in the biological species. As used by moral philosophers like McGinn, the term “irrational behavior” means no more than “behavior of which we disapprove so strongly that our spade is turned when asked why we disapprove of it”. It would be better to teach our students that these bad people are no less rational, no less clearheaded, no more prejudiced, than we good people who respect otherness. The bad people’s problem is that they were not so lucky in the circumstances of their upbringing as we were. Instead of treating as irrational all those people out there who are trying to find and kill Salman Rushdie, we should treat them as deprived.

Foundationalists think of these people as deprived of truth, of moral knowledge. But it would be better – more specific, more suggestive of possible remedies – to think of them as deprived of two more concrete things: security and sympathy. By “security” I mean conditions of life sufficiently risk-free as to make one’s difference from others inessential to one’s self-respect, one’s sense of worth. These conditions have been enjoyed by Americans and Europeans – the people who dreamed up the human rights culture – much more than they have been enjoyed by anyone else. By “sympathy” I mean the sort of reaction that the Athenians had more of after seeing Aeschylus’ The Persians than before, the sort that white Americans had more of after reading Uncle Tom’s Cabin than before, the sort that we have more of after watching TV programs about the genocide in Bosnia. Security and sympathy go together, for the same reasons that peace and economic productivity go together. The tougher things are, the more you have to be afraid of, the more dangerous your situation, the less you can afford the time or effort to think about what things might be like for people with whom you do not immediately identify. Sentimental education only works on people who can relax long enough to listen.

If Rabossi and I are right in thinking human rights foundationalism outmoded, then Hume is a better advisor than Kant about how we intellectuals can hasten the coming of the Enlightenment utopia for which both men yearned. Among contemporary philosophers, the best advisor seems to me to be Annette Baier. Baier describes Hume as “the woman’s moral philosopher” because Hume held that “corrected (sometimes rule-corrected) sympathy, not law-discerning reason, is the fundamental moral capacity”10. Baier would like us to get rid of both the Platonic idea that we have a true self, and the Kantian idea that it is rational to be moral. In aid of this project, she suggests that we think of “trust” rather than “obligation” as the fundamental moral notion. This substitution would mean thinking of the spread of the human rights culture not as a matter of our becoming more aware of the requirements of the moral law, but rather as what Baier calls “a progress of sentiments”11. This progress consists in an increasing ability to see the similarities between ourselves and people very unlike us as outweighing the differences. It is the result of what I have been calling “sentimental education”. The relevant similarities are not a matter of sharing a deep true self which instantiates true humanity, but are such little, superficial, similarities as cherishing our parents and our children – similarities that do not interestingly distinguish us from many nonhuman animals.

To accept Baier’s suggestions, however, we should have to overcome our sense that sentiment is too weak a force, and that something stronger is required. This idea that reason is “stronger” than sentiment, that only an insistence on the unconditionality of moral obligation has the power to change human beings for the better, is very persistent. I think that this persistence is due mainly to a semiconscious realization that, if we hand our hopes for moral progress over to sentiment, we are in effect handing them over to condescension. For we shall be relying on those who have the power to change things – people like the rich New England abolitionists, or rich bleeding hearts like Robert Owen and Friedrich Engels – rather than on something that has power over them. We shall have to accept the fact that the fate of the women of Bosnia depends on whether TV journalists manage to do for them what Harriet Beecher Stowe did for black slaves, whether these journalists can make us, the audience back in the safe countries, feel that these women are more like us, more like real human beings, than we had realized.

To rely on the suggestions of sentiment rather than on the commands of reason is to think of powerful people gradually ceasing to oppress others, or ceasing to countenance the oppression of others, out of mere niceness, rather than out of obedience to the moral law. But it is revolting to think that our only hope for a decent society consists in softening the self-satisfied hearts of a leisure class. We want moral progress to burst up from below, rather than waiting patiently upon condescension from the top. The residual popularity of Kantian ideas of “unconditional moral obligation” – obligation imposed by deep ahistorical noncontingent forces – seems to me almost entirely due to our abhorrence for the idea that the people on top hold the future in their hands, that everything depends on them, that there is nothing more powerful to which we can appeal against them.

Like everyone else, I too should prefer a bottom-up way of achieving utopia, a quick reversal of fortune which will make the last first. But I do not think this is how utopia will in fact come into being. Nor do I think that our preference for this way lends any support to the idea that the Enlightenment project lies in the depths of every human soul. So why does this preference make us resist the thought that sentimentality may be the best weapon we have? I think Nietzsche gave the right answer to this question: We resist out of resentment. We resent the idea that we shall have to wait for the strong to turn their piggy little eyes to the suffering of the weak. We desperately hope that there is something stronger and more powerful that will hurt the strong if they do not – if not a vengeful God, then a vengeful aroused proletariat, or, at least, a vengeful superego, or, at the very least, the offended majesty of Kant’s tribunal of pure practical reason. The desperate hope for a noncontingent and powerful ally is, according to Nietzsche, the common core of Platonism, of religious insistence on divine omnipotence, and of Kantian moral philosophy12.

Nietzsche was, I think, right on the button when he offered this diagnosis. What Santayana called “supernaturalism”, the confusion of ideals and power, is all that lies behind the Kantian claim that it is not only nicer, but more rational, to include strangers within our moral community than to exclude them from it. If we agree with Nietzsche and Santayana on this point, however, we do not thereby acquire any reason to turn our backs on the Enlightenment project, as Nietzsche did. Nor do we acquire any reason to be sardonically pessimistic about the chances of this project, in the manner of admirers of Nietzsche like Santayana, Ortega, Heidegger, Strauss, and Foucault.

For even though Nietzsche was absolutely right to see Kant’s insistence on unconditionality as an expression of resentment, he was absolutely wrong to treat Christianity, and the age of the democratic revolutions, as signs of human degeneration. He and Kant, alas, shared something with each other which neither shared with Harriet Beecher Stowe – something which Iris Murdoch has called “dryness” and which Jacques Derrida has called “phallogocentrism”. The common element in the thought of both men was a desire for purity. This sort of purity consists in being not only autonomous, in command of oneself, but also in having the kind of self-conscious self-sufficiency which Sartre describes as the perfect synthesis of the in-itself and the for-itself. This synthesis could only be attained, Sartre pointed out, if one could rid oneself of everything sticky, slimy, wet, sentimental, and womanish.

Although this desire for virile purity links Plato to Kant, the desire to bring as many different kinds of people as possible into a cosmopolis links Kant to Stowe. Kant is, in the history of moral thinking, a transitional stage between the hopeless attempt to convict Thrasymachus of irrationality and the hopeful attempt to see every new featherless biped who comes along as one of us. Kant’s mistake was to think that the only way to have a modest, damped-down, nonfanatical version of Christian brotherhood after letting go of the Christian faith was to revive the themes of pre-Christian philosophical thought. He wanted to make knowledge of a core self do what can be done only by the continual refreshment and re-creation of the self, through interaction with selves as unlike itself as possible.

Kant performed the sort of awkward balancing act required in transitional periods. His project mediated between a dying rationalist tradition and a vision of a new, democratic world, the world of what Rabossi calls “the human rights phenomenon”. With the advent of this phenomenon, Kant’s balancing act has become outmoded and irrelevant. We are now in a good position to put aside the last vestiges of the ideas that human beings are distinguished by the capacity to know rather than by the capacities for friendship and intermarriage, distinguished by rigorous rationality rather than by flexible sentimentality. If we do so, we shall have dropped the idea that assured knowledge of a truth about what we have in common is a prerequisite for moral education, as well as the idea of a specifically moral motivation. If we do all these things, we shall see Kant’s Foundations of the Metaphysics of Morals as a placeholder for Uncle Tom’s Cabin – a concession to the expectations of an intellectual epoch in which the quest for quasi-scientific knowledge seemed the only possible response to religious exclusionism13.

Unfortunately, many philosophers, especially in the English-speaking world, are still trying to hold on to the Platonic insistence that the principal duty of human beings is to know. That insistence was the lifeline to which Kant and Hegel thought we had to cling14. Just as German philosophers in the period between Kant and Hegel saw themselves as saving “reason” from Hume, many English-speaking philosophers now see themselves saving reason from Derrida. But with the wisdom of hindsight, and with Baier’s help, we have learned to read Hume not as a dangerously frivolous iconoclast but as the wettest, most flexible, least phallogocentric thinker of the Enlightenment. Someday, I suspect, our descendants may wish that Derrida’s contemporaries had been able to read him not as a frivolous iconoclast, but rather as a sentimental educator, another of “the women’s moral philosophers”15.

If one follows Baier’s advice one will not see it as the moral educator’s task to answer the rational egotist’s question “Why should I be moral?” but rather to answer the much more frequently posed question “Why should I care about a stranger, a person who is no kin to me, a person whose habits I find disgusting?”. The traditional answer to the latter question is “Because kinship and custom are morally irrelevant, irrelevant to the obligations imposed by the recognition of membership in the same species”. This has never been very convincing, since it begs the question at issue: whether mere species membership is, in fact, a sufficient surrogate for closer kinship. Furthermore, that answer leaves one wide open to Nietzsche’s discomfiting rejoinder: That universalistic notion, Nietzsche will sneer, would only have crossed the mind of a slave – or, perhaps, the mind of an intellectual, a priest whose self-esteem and livelihood both depend on getting the rest of us to accept a sacred, unarguable, unchallengeable paradox.

A better sort of answer is the sort of long, sad, sentimental story which begins “Because this is what it is like to be in her situation – to be far from home, among strangers”, or “Because she might become your daughter-in-law”, or “Because her mother would grieve for her”. Such stories, repeated and varied over the centuries, have induced us, the rich, safe, powerful people, to tolerate, and even to cherish, powerless people – people whose appearance or habits or beliefs at first seemed an insult to our own moral identity, our sense of the limits of permissible human variation.

To people who, like Plato and Kant, believe in a philosophically ascertainable truth about what it is to be a human being, the good work remains incomplete as long as we have not answered the question “Yes, but am I under a moral obligation to her?”. To people like Hume and Baier, it is a mark of intellectual immaturity to raise that question. But we shall go on asking that question as long as we agree with Plato that it is our ability to know that makes us human.

Plato wrote quite a long time ago, in a time when we intellectuals had to pretend to be successors to the priests, had to pretend to know something rather esoteric. Hume did his best to josh us out of that pretense. Baier, who seems to me both the most original and the most useful of contemporary moral philosophers, is still trying to josh us out of it. I think Baier may eventually succeed, for she has the history of the last two hundred years of moral progress on her side. These two centuries are most easily understood not as a period of deepening understanding of the nature of rationality or of morality, but rather as one in which there occurred an astonishingly rapid progress of sentiments, in which it has become much easier for us to be moved to action by sad and sentimental stories.

This progress has brought us to a moment in human history in which it is plausible for Rabossi to say that the human rights phenomenon is a “fact of the world”. This phenomenon may be just a blip. But it may mark the beginning of a time in which gang rape brings forth as strong a response when it happens to women as when it happens to men, or when it happens to foreigners as when it happens to people like us.

1. “Letter from Bosnia”, New Yorker, November 23, 1992, 82-95.

2. “Their griefs are transient. Those numberless afflictions, which render it doubtful whether heaven has given life to us in mercy or in wrath, are less felt, and sooner forgotten with them. In general, their existence appears to participate more of sensation than reflection. To this must be ascribed their disposition to sleep when abstracted from their diversions, and unemployed in labor. An animal whose body is at rest, and who does not reflect must be disposed to sleep of course”. Thomas Jefferson, “Notes on Virginia”, Writings, ed. Lipscomb and Bergh (Washington, D.C.: 1905), 1:194.

3. Geertz, “Thick Description” in his The Interpretation of Cultures (New York: Basic Books, 1973), 22.

4. Rabossi also says that he does not wish to question “the idea of a rational foundation of morality”. I am not sure why he does not. Rabossi may perhaps mean that in the past – for example, at the time of Kant – this idea still made a kind of sense, but it makes sense no longer. That, at any rate, is my own view. Kant wrote in a period when the only alternative to religion seemed to be something like science. In such a period, inventing a pseudoscience called “the system of transcendental philosophy” – setting the stage for the show-stopping climax in which one pulls moral obligation out of a transcendental hat – might plausibly seem the only way of saving morality from the hedonists on one side and the priests on the other.

5. The present state of metaethical discussion is admirably summarized in Stephen Darwall, Allan Gibbard, and Peter Railton, “Toward Fin de Siècle Ethics: Some Trends”, The Philosophical Review 101 (1992): 115-89. This comprehensive and judicious article takes for granted that there is a problem about “vindicating the objectivity of morality” (127), that there is an interesting question as to whether morals is “cognitive” or “non-cognitive”, that we need to figure out whether we have a “cognitive capacity” to detect moral properties (148), and that these matters can be dealt with ahistorically.

When these authors consider historicist writers such as Alasdair MacIntyre and Bernard Williams, they conclude that they are “[meta]théoriciens malgré eux” who share the authors’ own “desire to understand morality, its preconditions and its prospects” (183). They make little effort to come to terms with suggestions that there may be no ahistorical entity called “morality” to be understood. The final paragraph of the paper does suggest that it might be helpful if moral philosophers knew more anthropology, or psychology, or history. But the penultimate paragraph makes clear that, with or without such assists, “contemporary metaethics moves ahead, and positions gain in complexity and sophistication”.

It is instructive, I think, to compare this article with Annette Baier’s “Some Thoughts On How We Moral Philosophers Live Now”, The Monist 67 (1984): 490. Baier suggests that moral philosophers should “at least occasionally, like Socrates, consider why the rest of society should not merely tolerate but subsidize our activity”. She goes on to ask, “Is the large proportional increase of professional philosophers and moral philosophers a good thing, morally speaking? Even if it scarcely amounts to a plague of gadflies, it may amount to a nuisance of owls”. The kind of metaphilosophical and historical self-consciousness and self-doubt displayed by Baier seems to me badly needed, but it is conspicuously absent in Philosophy in Review (the centennial issue of The Philosophical Review in which “Toward Fin de Siècle Ethics” appears). The contributors to this issue are convinced that the increasing sophistication of a philosophical subdiscipline is enough to demonstrate its social utility, and are entirely unimpressed by murmurs of “decadent scholasticism”.

6. Fichte’s Vocation of Man is a useful reminder of the need that was felt, circa 1800, for a cognitive discipline called philosophy that would rescue utopian hope from natural science. It is hard to think of an analogous book written in reaction to Darwin. Those who couldn’t stand what Darwin was saying tended to go straight back past the Enlightenment to traditional religious faith. The unsubtle, unphilosophical opposition, in nineteenth-century Britain and France, between science and faith suggests that most intellectuals had become unable to believe that philosophy might produce some sort of superknowledge, knowledge that might trump the results of physical and biological inquiry.

7. Some contemporary intellectuals, especially in France and Germany, take it as obvious that the Holocaust made it clear that the hopes for human freedom which arose in the nineteenth century are obsolete – that at the end of the twentieth century we postmodernists know that the Enlightenment project is doomed. But even these intellectuals, in their less preachy and sententious moments, do their best to further that project. So they should, for nobody has come up with a better one. It does not diminish the memory of the Holocaust to say that our response to it should not be a claim to have gained a new understanding of human nature or of human history, but rather a willingness to pick ourselves up and try again.

8. Nietzsche was right to remind us that “these same men who, amongst themselves, are so strictly constrained by custom, worship, ritual gratitude and by mutual surveillance and jealousy, who are so resourceful in consideration, tenderness, loyalty, pride and friendship, when once they step outside their circle become little better than uncaged beasts of prey”. The Genealogy of Morals, trans. Golffing (Garden City, N.Y.: Doubleday, 1956), 174.

9. Colin McGinn, Moral Literacy: or, How to Do the Right Thing (London: Duckworth, 1992), 16.

10. Baier, “Hume, the Women’s Moral Theorist?”, in Eva Kittay and Diana Meyers, eds., Women and Moral Theory (Totowa, N.J.: Rowman and Littlefield, 1987), 40.

11. Baier’s book on Hume is entitled A Progress of Sentiments: Reflections on Hume’s Treatise (Cambridge, Mass.: Harvard University Press, 1991). Baier’s view of the inadequacy of most attempts by contemporary moral philosophers to break with Kant comes out most clearly when she characterizes Allan Gibbard (in his book Wise Choices, Apt Feelings) as focusing “on the feelings that a patriarchal religion has bequeathed to us”, and says that “Hume would judge Gibbard to be, as a moral philosopher, basically a divine disguised as a fellow expressivist” (312).

12. Nietzsche’s diagnosis is reinforced by Elizabeth Anscombe’s famous argument that atheists are not entitled to the term “moral obligation”.

13. See Jane Tompkins, Sensational Designs: The Cultural Work of American Fiction, 1790-1860 (New York: Oxford University Press, 1985), for a treatment of the sentimental novel that chimes with the point I am trying to make here. In her chapter on Stowe, Tompkins says that she is asking the reader “to set aside some familiar categories for evaluating fiction – stylistic intricacy, psychological subtlety, epistemological complexity – and to see the sentimental novel not as an artifice of eternity answerable to certain formal criteria and to certain psychological and philosophical concerns, but as a political enterprise, halfway between sermon and social theory, that both codifies and attempts to mold the values of its time” (126).

The contrast that Tompkins draws between authors like Stowe and “male authors such as Thoreau, Whitman and Melville, who are celebrated as models of intellectual daring and honesty” (124), parallels the contrast I tried to draw between public utility and private perfection in my Contingency, Irony, and Solidarity (Cambridge, England: Cambridge University Press, 1989). I see Uncle Tom’s Cabin and Moby Dick as equally brilliant achievements, achievements that we should not attempt to rank hierarchically, because they serve such different purposes. Arguing about which is the better novel is like arguing about which is the superior philosophical treatise: Mill’s On Liberty or Kierkegaard’s Philosophical Fragments.

14. Technically, of course, Kant denied knowledge in order to make room for moral faith. But what is transcendental moral philosophy if not the assurance that the noncognitive imperative delivered via the common moral consciousness shows the existence of a “fact of reason” – a fact about what it is to be a human being, a rational agent, a being that is something more than a bundle of spatio-temporal determinations? Kant was never able to explain how transcendental knowledge could be knowledge, but he was never able to give up the attempt to claim such knowledge.

On the German project of defending reason against Hume, see Fred Beiser, The Fate of Reason: German Philosophy From Kant to Fichte (Cambridge, Mass.: Harvard University Press, 1987).

15. I have discussed the relation between Derrida and feminism in “Deconstruction, Ideology and Feminism: A Pragmatist View”, forthcoming in Hypatia, and also in my reply to Alexander Nehamas in Lire Rorty (Paris: éclat, 1992). Richard Bernstein is, I think, basically right in reading Derrida as a moralist, even though Thomas McCarthy is also right in saying that “deconstruction” is of no political use.

Richard Rorty, Belgrade Circle Journal.

Philosophy at the End of the Millennium: Existentialism, Nietzsche, Stirner, Postmodernism. Now what?

It seems right to begin with Kierkegaard – acknowledged as the father of existentialism. In his first book Kierkegaard gave a description of three philosophical positions or ways of life: i) a cultured form of worldly hedonism; ii) the life of a judgmental, dutiful moralist; iii) a spirituality which transcends both worldly hedonism and the rules of social morality or ordinary justice.

He called the book Either/Or. For he contended that, as such positions are discrete and self-contained, based on their own unique values, and as reason and logic can’t prove which position is objectively more true or superior, a subjective either/or decision, a free leap of faith, is required to adopt any one and commit oneself to it. Free choice here means choice in the face of the inability to establish the objective rightness of the decision; hence, choice taken in irresolvable uncertainty; hence, choice begetting angst – anxiety that we are completely wrong.

Kierkegaard rejected the Hegelian philosophy dominant in his day. It claimed that by use of reason we can all see how a position evolves out of previous ones and represents a rational advance. Reason can compare and assess positions. If we follow the logic of cultural evolution we make a smooth transition from one to another and eventually arrive at a shared final conclusion: the ultimate position objectively superior to all others. We won’t need a leap of faith. Reason will guide and assure us we’ve arrived at the highest truth. Then we can all go home.

Nietzsche and postmodernism similarly reject the idea that reason can establish objective truth and that positions or ways of life can be compared to see which one is ultimate. Nietzsche is famous for his perspectivism, ie, his argument that philosophies reflect different perspectives on reality and that all such perspectives are founded on diverse culturally relative assumptions and values. We can’t prove objective truth since the criteria for the truth – for what gets called true in a particular culture – vary relative to historical time and place. There are no independent criteria by which we can judge between positions. Moreover, behind logic stands evaluation: eg, that one values being rational, or questioning, or reflective, or analytical, or dialectical, or that one is bothered about non-contradiction, logical determinations of reality, and the like. After all, a late-medieval figure like Martin Luther can declare that reason is the devil’s whore – ie, that reason is a corrupt faculty, part of our fallen and sinful nature: not a reliable faculty to use in pursuit of truth. It will seduce us away from truth, which can only be found, says Luther, in a God-given scriptural revelation.

So, the value of reason appears relative and can be put in question. Other cultures have not valued it as much as we have in modern times. Nietzsche raises the question of why we want truth at all rather than illusion and suggests it is only a kind of imperialism, or a piece of moral naiveté, to assume truth is worth more than myth or appearance. Moreover, what we call truths are just our more triumphant fictions: ie, certain fictions, simplifications, and the like, come to the fore at a certain point in time and, if they triumph, they get called truths by most people in that culture. Thus, truth is basically a concept expressing a people’s incapacity to think otherwise. It reflects limitation, a degree of disempowerment. Our convictions are our prisons. At the same time, though, the temptation of truth is that it promises a power, viz, the security and superiority of feeling that we live in the truth or possess the truth – as against others who are in the wrong. So, Nietzsche famously analyses truth and philosophy in terms of an underlying will-to-power.

Postmodernism is close. Foucault also analyses what’s called knowledge in terms of power – eg, that a group which successfully portrays itself as having knowledge thereby acquires power, and that such knowledges arise via discrepancies of power in society between so-called experts and those not in the know: between the haves and the have-nots in society, the dominant and less dominant in education. It is the dominant elites that determine what gets to be called the canon of knowledge – shoring up their privileged positions and passing the canon down to future generations. There’s no guarantee the canon, or dominant regime of discourse, is truth rather than a temporarily triumphant fiction serving certain vested interests. (This may have crossed one’s mind before!)

Also close is Lyotard, who calls the many positions grand narratives, or stories of truth, and refers to them as language games. The games are discrete and circular, for they are founded on their own unique set of values and contain within themselves their own game rules or criteria for truth, knowledge, evidence, proof, right method, and the like. There is no objectively true game, since there is no independent position from which you could judge between the games to decide which one is best. Hence, the games are said to be incommensurable – ie, they can’t be measured or compared for their real truth-value. Truths and values are relative to the game you are playing. To say one language game is intrinsically or objectively superior to another would be as absurd as saying that soccer is intrinsically or objectively better than cricket. Games are simply different, not inherently better or worse.

Also similar to Nietzsche is Baudrillard’s notion of simulation and seduction. We don’t live in the real as such, he says, but in our cultural simulation of reality. In late-capitalist consumer society, where mass media dominate, the mainstream cultural simulation is selected and mediated over and over again. It is reinforced through endless repetitions: hyper-mediated, hyper-realized. The simulation thereby becomes the hyperreal, the realer-than-real: an overdetermined simulation which appears natural, normal, an obvious truth.

Meanwhile, the real itself is a void, a nullity, a desert of the real, as Baudrillard puts it. All cultures are seduced by their truths. Moreover, seduction is not rational, or it is pre-rational, more basic than the rational. For to be rational already presupposes one has been seduced by the ideals of reason. Hence, Baudrillard seems to be in agreement with Nietzsche that behind reason stands evaluation or the mysterious non-rational – the other of reason – which Baudrillard calls seduction. However, Baudrillard, unlike Nietzsche and Foucault, is uncommitted to the view that seduction operates through power or a will-to-power, or even through desire, as some others would have it. Hence, he says we should forget Foucault – presumably Nietzsche too, at least on this point.

How then does seduction operate if not through power or desire? Actually, this is undecidable. For to analyze seduction in terms of power, or desire, or some other factor, be it psychological, psychoanalytic, natural and empirical, or supernatural and non-empirical, would already presuppose a seduction, ie, that one has been seduced by this or that discourse or perspective. Rather, the ultimate sources of seduction remain mysterious, a kind of secret rule of the game. We find ourselves seduced, we know not how or why. One thing remains, though: whatever position or way of life we are seduced by, there is no way we can establish its objective or essential truth. It has value only relative to our seduction. To say our seduction is objectively best would be as absurd as Romeo saying Juliet is objectively best. He may feel she is, but he can’t establish this as a truth for others. So the implication of seduction theory in particular, and postmodernism in general, is that beauty and truth are in the eye of the beholder. Hence, it’s said that truth is dead in postmodernity – ie, essential truth, objective truth, is an outmoded notion, a concept from a dead language game of the past.

So, in the light of Kierkegaard and existentialism, Nietzsche and postmodernism, philosophical positions and ways of life now appear as perspectives, simulations, or discrete and discontinuous language games; or in more dramatic terms: at the end of the millennium, truth is dead. But was Kierkegaard right to say free choice or a leap of faith is required to jump the gaps? Is there free choice here? Is there even a self which is free to make such a choice? Does it have free will? On these questions we find Nietzsche and postmodernism parting company with Kierkegaard and existentialism. Let’s consider.

Descartes is the father of modern philosophy, or what’s called modernity by postmoderns. The emphasis is on the self and related concepts, such as autonomy, responsibility, accountability, free will, free choice, individuality, and the like. It begins with the Cartesian “I think, therefore I am”. Several things are implied: that there is a self, that the self is a causal agent, that the self can control thought and action through free will, that the self is a free moral agent – ie, accountable and responsible. Philosophers like Descartes, Kant, and Hegel stressed the rationality of the self and said the self is most free when most rational. Kierkegaard and existentialism object. Nevertheless, they still concur on the free self, free choice, deliberation and decision, responsibility and accountability. Therefore, we have to say that existentialism belongs to modernity.

Now, what about Nietzsche? He rejects the “I think” in no uncertain terms. It is arbitrary to assume the “I” creates or controls thought and action. After all, thoughts, beliefs, actions, decisions, and the like, can be generated by underlying and unconscious agencies. This, of course, connects to will-to-power. Will-to-power can operate in us at levels below the level of conscious awareness or control. The sense of having a free subjectivity, a free self, is itself an illusion generated by will-to-power in the human organism. Moreover, Nietzsche declares that the doctrine of free will is “a hangman’s metaphysics” – ie, a fiction invented by certain resentful and vengeful groups in the past so that others – criminals, conquering tribes, masters – can be held accountable and responsible and duly condemned, punished, or damned. Belief in free will thus serves to rationalize and legitimate righteous indignation and revenge under the fiction of justice and desert. The idea caught on.

Similarly, postmodernism decentres the self, ie, it undermines the ideology of the free self by pointing to factors which condition who we are, what we can think or say or believe, or what we can do. One catch-phrase is: the self does not speak language, but language speaks the self – ie, the cultural language or language games we are brought up in condition our sense of subjectivity and the possibilities of thought. We may think we are free agents, but actually we are speaking and acting in accordance with our historical conditioning and cultural limitations.

So Nietzsche and postmodernism differ radically from Kierkegaard and existentialism in so far as the latter rely heavily on an assumption of free subjectivity reminiscent of Descartes. There can be no existential free choice, or free and accountable leaps of faith, if the self is merely a simulation of selfhood, as Baudrillard might say, determined by modernity’s cultural code. Moreover, in the light of this, Nietzsche is surely not an existentialist and existentialism is not the heir to his thought; rather, postmodernism is. Indeed, postmodernism could well be described as a kind of neo-Nietzscheanism.

In sum: Kierkegaard the existentialist argued that positions can’t be compared by reason alone and that objective truth is impossible, then declared that a responsible, accountable, free leap of faith is required. He assumed the reality of free will or free subjectivity as the final ground of our action and commitment. Nietzsche and postmodernism object. Underlying factors, such as power, desire, cultural conditioning, language limitations, regimes of discourse, seduction, and the like, must be taken into account. Existentialism is itself a version of the hangman’s metaphysics. Are we speaking at a hangman’s society?!

What now of Max Stirner? Where does he stand? Stirner was writing at much the same time as Kierkegaard, in the 1840s, and in a similar intellectual environment. Like Kierkegaard he rejected the dominant Hegelianism in which he was schooled. So in some ways he is similar to Kierkegaard, especially in that he too provides a sustained critique of rationalist metaphysics and objective truth. Moreover, at first glance he seems to be arguing in favour of free subjectivity, the free self or free ego, and free individualism. Thus, he might seem to belong in the existentialist camp. However, this is rather misleading. If we look more closely we find he is not committed to the idea of a free self or ego, and that, contrary to initial appearances and to the claims of his critics and commentators, he is not advocating individualist egoism at all.

Well, this needs some explaining. Stirner certainly argues against objective truth arrived at through reason, proposing instead that positions have been adopted in the past for underlying egoistic reasons of self-interest. Desire had more to do with it than reason. However, as with will-to-power, this egoistic will did not always operate at the conscious level of deliberation or control. Most of the time people have been unconscious or involuntary egoists, as Stirner puts it, ie, they may have thought they were choosing a position purely because of its truth, but since no position can exhibit its truth, the real motives were psychological, egoistic in the sense of being self-serving or apparently advantageous.

This is summed up in Stirner’s saying, “Nothing is sacred but by my bending the knee” – meaning: nothing is simply given as sacred or true or right or valuable in itself, but only acquires this appearance of value by our elevating it to this sublime status, disempowering ourselves in relation to it. We project its value, declare it sacred, untouchable, inviolable, thereby losing the capacity to take back its value again, or annul it. We do this because we feel, however dimly, however unconsciously, it is advantageous to be aligned with the sacred.

However, Stirner argues we are, rather, disadvantaged in the process. For we become addicted to the sacred truth, and, as Stirner sees it, a better – more empowering, more reliable, more immediate, more liberating – mode of happiness, a happiness of non-addiction, can be found by undermining and annulling every sacred truth. We achieve this through realizing nothing is sacred of itself but only appears sacred via our projection – by our bending the knee. Seeing it is not sacred or inviolable in itself, we find we can violate it, ie, take back its value and annul it, thus letting go of it.

Example: consider people who fall romantically in love. At one level they feel it is advantageous to be thus enthralled – and so they pursue it: their own thralldom, their own servitude. The other becomes a sacred object or idol to which one becomes addicted, attached. One becomes emotionally dependent. There are certain highs involved, to be sure, which explains the temptation.

But there’s the downside. We are subservient in that our sense of emotional wellbeing is vulnerable to the other’s will or changeability. As Stirner would say, we have fallen prey to tributariness – ie, we pay the other too much tribute, give the other too much weight, value or power. In short, we make the other sacred by bending the knee. This is the pattern of idolatry. The same applies to everything – eg, God, truths, faiths, beliefs, ideologies, reason, discourse, thought, and even the self or ego. We can make a little idol out of anything.

Do we possess our objects of belief and desire or do they possess us? For Stirner, re-phrasing Hamlet, to possess or be possessed – that is the question. Possessing them without them possessing us means we retain the capacity to take back value any time and cancel, suspend or annul it – ie, we can absolve ourselves of the thing, we can let it go, be non-attached and independent in relation to it. We can, for example, let that old lover go, let that old God go, let that old truth go, let even life itself go – let everything go. To be able to have and enjoy things without them having you describes the non-attached condition Stirner calls Ownness. We come into our own, we develop maturity, when we can have and not have in this way.

God and the truth are dead for Stirner in that he can let them go. He is radically uncommitted. Indeed, he is not concerned for anything except “the self-enjoyment of life” – akin to what the Greeks called “eudemonia” – ie, philosophical good spirits. To attain and enjoy good spirits is Stirner’s purpose. Attainment comes via the realization that nothing is sacred except by our bending the knee, and via exercising the capacity to take back all things and annul their value or power over us. This implies we annul all the objects of belief and desire, hence, all the objects of hope and fear and time. What then remains? Only what Stirner calls “creative nothingness” – ie, the ongoing unfolding of life itself here and now without names, conceptualizations, divisions, limits. For these are all objects of belief or desire, potential idols. And we take these back and annul them. So it is no longer a matter of fear and hope, of time, of mediation. The immediate self-enjoyment of creative nothingness is realized where there are no idols left standing to block it. It is the free creative act of life-affirmation, of life affirming itself in and through us: a life-enjoyment without reason, that is, for no reason except itself because, well, enjoyment is enjoyable – which seems obvious.

Now, is this egoism? I say not. For ordinary egoism is the pursuit of enjoyment in time via the objects of belief and desire. And self-enjoyment is precisely not this. On the contrary, self-enjoyment is the radical alternative to ordinary egoism. But is it not egoism at least in the sense that Stirner believes in the free ego or individual self of egoism, as the commentators say? No, again. Stirner is not attached or committed to self or ego, since self or ego is simply a concept, an object of belief or desire, one more potential idol. He annuls it along with the rest. Note that Stirner’s motto throughout the book is not “I have set my affair on the ego or egoism”. His motto is, “I have set my affair on nothing.” Creative nothingness is the last word in his discourse, and on the last page of the book even the idea of the ego or owner is taken back, annulled, returned to the creative nothing from whence it came.

Stirner doesn’t belong in the existentialist camp because he is not committed to key existentialist notions: eg, self, free will, authenticity, accountability, responsibility. He would absolve himself of all such notions. He would not make an idol of them. Well, then, shall we say Stirner is more like a postmodern? After all, he was one of the first to use the term “modernity” to describe the previous period of philosophical culture, and he says that his own position – ownness, or self-enjoyment – comes after this, and so by implication is post-modern. In fact, a good case can be made that he was way ahead of his time, that critics and commentators have failed to understand him, and that he anticipated many themes of postmodernism a hundred and fifty years ago.

However, what Stirner most resembles, it seems to me, is Taoistic Zen. After all, Taoistic Zen is also all about radical non-attachment to any objects of temporal desire or belief and a contemplative openness to and appreciation of the Tao, understood as the nameless, unconceptualized Way of reality. As the first line of the Tao Te Ching says, the Tao or Way that can be named is not the real Tao or Way itself. Thus, the Tao is akin to Stirner’s creative nothingness and the contemplative appreciation of the Tao is akin to Stirner’s practice of immediate self-enjoyment.

What about the similarity between Taoistic Zen, Stirner, and postmodernism? Well, in so far as postmoderns are committed to discourse itself, or to the terms of their discourse – whether power, or desire, or deconstruction, or simulation, or seduction, etc. – and make a sacred idol out of them, then there would be little similarity. However, in so far as ironic detachment from discourse is hinted at in some texts – notably in the case of Baudrillard – then there may be a similarity. In Baudrillard, in his rather extreme brand of postmodernism, there is an ongoing unresolved ambiguity or equivocation over whether his discourse is to be taken as a serious or sacred truth about the real or whether he is instead engaged in a kind of provocative and ironic game with the reader. The former is suggested by his description of himself as a moralist and metaphysician. The latter is suggested by references to his text as theory-fiction and by pronouncements that the secret of theory is that there is no longer any truth in theory. In short, Baudrillard prevaricates on this crucial issue. And so, in the end, one must forget Baudrillard.

Stirner privileges the calm contemplative self-enjoyment of creative nothingness above all, and he seems rather scornful of other pursuits. He prefers aloof retreat from the world and seems to have gone on to live the rest of his life this way. The same goes for mainstream Taoistic Zen. However, postmodern writers, including Baudrillard, who at least flirts with the void and contemplative silence, tend to privilege discourse or writing as such, and so churn out endless books – even if they are books of theory which argue we can’t write books of theory any more. This seems to be the state of play in philosophy as we approach the end of the millennium.

Which leaves me with one last question to address tonight. Is there a way forward from here into the next millennium, a way beyond the positions outlined so far, a way beyond even postmodernism: a post-postmodernism perhaps? Is there life after theory? This strikes me as being the primary research question in philosophy at the present time. And to judge by the number of books and compilations with the word “after” in the title, I wouldn’t be alone.

I’ll advance the following conjectures. If the first millennium, the medieval millennium, pre-modernity, can be categorized as the Age of Faith – ie, where religious faith, piety, theology, supernaturalism, etc., increasingly preoccupied cultural life; and if the second millennium, the modern millennium, can be categorized as the Age of Reason – ie, where theorizing, reasoning, science, humanism, critical thinking – eventually leading to late-twentieth-century postmodern irony, ambiguity, and nihilism – increasingly preoccupied cultural life; then perhaps the next millennium might be characterized differently from both and be called the Age of Art. This would be an age after theory, an age which is post-religious and post-rational – or, in sum, post-truth, and, therefore, also post-irony, post-nihilism, even post-Baudrillard, even post-postmodern: an age where art and artistic effects come to the fore and preoccupy cultural life.

Art existed in previous ages, of course. However, each age has a dominant principle which other interests serve, and in those ages art served the dominant principles of faith or reason. So, in medieval times reason and art were pressed into the service of faith: faith went in search of understanding through reason in theology and in search of aesthetic self-expression through religious art. In modern times, faith and art are pressed into the service of reason: faith becomes either a rational faith, faith within the bounds of reason alone, as Kant had it, or a faith in reason itself; and art becomes rational, humanist, realist, socialist, critical, avant-garde, etc., following the evolving trends of critical theory.

What I envisage, then, is an age where art really comes into its own, ie, where artistic creativity and effect, aesthetic quality and interest, become the dominant principle, and faith and reason are pressed into its service. Faith becomes faith in art as a way of life: an artistic faith – in art and imagination we trust, rather than in God we trust (or in science). Reason and its associated qualities – logical argument, order, proportion, method, clarity, coherence, concision, discursive elegance, etc. – are employed in so far as they contribute in a work to its aesthetic quality. The latter, then, is what counts, not reason itself. So good or bad in such an age is not decided by a dominant religion or piety, nor by a dominant rational methodology or science, but by degree of artistic appeal.

For example, consider theory – ie, the old representational language game: a game purporting to contain knowledgeable propositions truthfully representing reality as it is – eg, God exists, electrons exist, the self exists, freedom exists, etc. However, representational language turns on epistemology – ie, the study of knowledge – which claimed to give the logos or knowledgeable account of knowledge, the truth about the truth. It claimed to know what knowledge is and exhibit its possibility. This always was an absurd undertaking, however, founded in a paradox. For to know what knowledge is presupposes we already know what knowledge is in claiming to have knowledge about knowledge. Put another way, to say a criterion of the truth is a true criterion of the truth either presupposes the criterion already and so begs the question, or else sets up an infinite regress by bringing in another criterion. The upshot is simple: epistemology is impossible; hence, knowledge and truth are impossible; hence, the age-old representational language game of theory is impossible; hence, we need to move beyond representational language and the claim that theory contains statements as true representations of reality.

We cease prevaricating and unambiguously drop the pretense that theory is really saying anything about reality at all. But what then can it be doing? Is there another way of intending or understanding a text? There surely is. Literature, creative writing, fiction, theatre, poetry – these do not have to claim to be representing reality. They can be an alternative to the representational language game of truth. A novel, for instance, might be a complete fabrication from beginning to end, an exercise of the artistic imagination, a fantasy work. However, it can still have merit, ie, aesthetic merit, if it succeeds in generating aesthetic arousal and interest in the reader. So theory, after theory, must be understood this way: as creative writing, literature, prose-poetry, art. This still allows that there can be good and bad theory, but good and bad is not determined by criteria such as truth-content, representational correspondence to reality, or verisimilitude, but by aesthetics.

In short, in the blink of an eye – perhaps we should make it the stroke of midnight bringing in the year 2000? – everyone becomes an artist. Thus: philosophers, theologians, fundamentalists, mystics, scientists, sociologists, critical thinkers – all artists, all exercising their creative imaginations, expressing themselves, inventing theory. No longer any ambiguity about it: theory is theory-fiction. We start from there. We drop the irony and pretense of truth and switch over to a purely aesthetic paradigm. We all become artists, artists all the time, even in our own heads. For thought – ongoing internal discourse – no longer represents reality either. Everyday thinking itself is art, is imagination, is story-telling. Of course, we can be relatively good or bad artists. The criterion is not representational truth, if truth is dead, but turns on aesthetics: broadly speaking, on the degree to which whatever is generated is pleasing or interesting.

Here are some dictionary synonyms for the word “interesting” – absorbing, arousing, amusing, appealing, attractive, compelling, curious, engaging, engrossing, entertaining, gripping, intriguing, novel, original, provocative, stimulating, thought-provoking, unusual. These and related aesthetic terms, such as beautiful, sublime, elegant, inspiring, moving, etc., now take over from the old terms associated with the dead language of representation, such as truth, knowledge, correspondence, coherence, pragmatism, probability, proof, evidence, demonstration, verification, falsification, legitimation, etc. So observe that, where once Lyotard reported a legitimation crisis regarding theory, there is no longer a legitimation crisis, since, after theory, theory no longer makes claims which require legitimation. Rather, whatever value theory-fiction has turns on its aesthetic merits. The quest, therefore, is no longer a quest for the truth – which always was an impossibility – but, rather, the point of theory and every other aesthetic creation is simply this: to make life more interesting!

Observe those who are down, depressed, dull, in the doldrums, those for whom life has lost its spice, for whom life seems meaningless, who may even contemplate suicide. Life holds no interest for them. What they need is arousal – that which would enable them to find life more interesting. That is where art comes in. Art is therapy. Art is the endless capacity of the human imagination to create and re-create interest in life, and thereby, meaning and value. And it comes in all shapes and forms: not just books, paintings, films, music, but also religion, science, mythology, philosophy, debate, psychoanalysis, politics, Zen meditation, whatever. Everything is theatre. Go to a church or ashram or zendo – or, for that matter, a parliament – and the theatricality is obvious. Less obvious, but no less theatrical, are our therapy rooms, science labs, and lecture halls. Note the costumes, the props, the role plays, the standards of good and bad form, the rules of procedure – the stage directions, in other words. There is no truth to be found in any of it. Nevertheless, it can be extremely interesting. What’s more, it keeps us all alive and kicking.

All we need do now is create more art as best we can – more inventive art, more pleasing art, more arousing art, more comprehensive art – art for its own sake, where art is the dominant ethos and everything else, eg, faith, reason, virtue, is subservient to the aesthetic principle. Moreover, it is no longer a matter of saying one art form is inherently better than another, eg, that one religion is better than another, or that science is better than religion, or vice versa, or that meditation or contemplation is better than intellectual work or an active life in the world. For they are all equivalent as art forms, and to say one is superior would be like saying horror movies are inherently better than tragedies or comedies. It is merely a matter of what makes life seem more fascinating to you. So generate and enjoy! After theory, this can be done more freely and with a clear intellectual conscience. For truth is no longer a constraint. If it interests one to think there are fairies at the bottom of the garden, then one can entertain the thought, and thereby entertain oneself. After all, this is no more or less true than that there is a God or an electron at the bottom of the garden. Indeed, perhaps fairies ride about on electrons and angels still dance on pinheads. As for the Big Bang, that’s a particularly stirring form of science fiction – though a fashion which, quite possibly, will be outmoded in fifty or a hundred years.

But at this point perhaps we need to consider two typical objections to life as art. First: that it is escapist. However, to claim that devotion to art is mere escapism from reality presupposes one can prove what reality is. And after theory, this can’t be done. Moreover, after theory, any theory of reality is itself art. Thus, the objection is outmoded. That’s why entertaining fairies at the bottom of the garden is just as valid as entertaining electrons (if electrons are entertaining). Second: that life devoted to art is morally irresponsible. This again presupposes truth, this time a truth of morality. Moreover, after theory, moral theory is itself an art form, as is the ethical self. That is: one finds a type of character attractive, hence one is drawn to those who exhibit it, more or less, and one tends to create it as a preferred self-image. Thus, in an age of art, ethics turns on aesthetics, rather revamping the “beautiful soul” idea – except it is no longer claimed that beauty has an objective or universal standard or that we ought to conform to one. Beauty is contextual, as is morality. However, if we are concerned as artists to be aesthetically appealing, the likely way is to become more beautiful and interesting, whether in appearance or character. A pain in the bum is a poor artist in the medium of morality. Of course, one may be good in some other way. But if the ideal in an age of art is maximum comprehensive artistry, it behoves us to develop our artistic talents in as many mediums as possible, as best we can, including the medium of morality. In this way, we become eclectic artists, somewhat Renaissance-like. So virtue is included in an age of art, as are faith and reason, under the dominant aesthetic principle.

If there is anything to avoid, it is simply that which usually makes for bad art – such as: the ugly, the displeasing, the inelegant, the irritating, the banal, the clichéd, the commonplace, the stereotypical, the repetitious, the overdone, the long-winded, the unoriginal, the uninspired, the dull, the boring, the superficial, the inept, the poorly crafted, the technically unproficient, the juvenile, the unripe, the jaded, the stale, etc. We apply such criteria when adjudicating things in context – eg, a play, an academic essay, a poem, a painting, a scientific paper, a thesis, a political manifesto, a dance, a sermon, a news report, a character, a song, and so forth. Experienced judges usually find themselves in agreement with other experienced judges in the same field. Still, judgments are subjective in that they reflect and express one’s lack of interest or pleasure in the work – a deficit of aesthetic arousal.

We might note that the disturbing, the unsettling, the occasionally discordant or displeasing, is not always an objection to a piece. It depends on how these elements fit into and complete a whole which overall may be aesthetically pleasing.

Which leads to my penultimate point tonight. People have no idea what reality as a whole is. Indeed, it is quite comic when they think they do. At such times they appear as perceptive as the soap box they’re standing on (which is entertaining in its own way). At any rate, things appear thus after theory. This opens a strategy of re-enchantment. For whenever some discord arises in life, some painful episode, to avoid disenchantment we just have to realize that within the whole this discord may play a positive essential part. It may be a fine artistic touch, a piece of finesse lending grace to the total picture, even if grace is currently incognito. In other words, in an age of art after theory, it is easy to entertain ourselves with the idea that reality itself is an artistic work and that this is how our sufferings can be justified and accommodated. After all, the truth of the idea is no longer relevant. All that matters is that it be a re-enchanting idea to engage with. One just needs to contemplate reality in this fashion, as a perfect aesthetic Whole or Way or Tao, to defuse the blues.

Finally, it will no doubt have occurred to the perceptive person that my discourse tonight must be, according to its own lights, beyond truth. This is so. It is only an argument. An argument could be completely convincing to everyone who hears it, and yet still be false. So what has it to do with truth? My discourse, therefore, is merely intended as a piece of creative writing which may or may not provoke a lively aesthetic effect. Its sole purpose is to interest or re-enchant, at least its author. If nothing else, it has achieved that.

Existentialist Society Lecture. 2nd Nov. 1999.