About Truth

Many people talk about truth – it is one of the central and largest topics in philosophy, and it has been a subject of discussion in its own right for thousands of years. Many faiths claim exclusivity of (revealed) Truth and have modified their language over time so as to take account of absurd inconsistencies in their beliefs. A huge variety of issues in philosophy relate to truth, either by relying on theses about truth or by implying them.

This article concentrates on the main themes in the study of truth in the contemporary philosophical literature. It attempts to survey the key problems and theories of current interest, and to show how they relate to one another. A number of other entries investigate many of these topics in greater depth; generally, discussion of the principal arguments is left to them. The goal of this article is only to provide an overview of the current theories.

The problem of truth is, in a way, easy to state: what are truths, and what (if anything) makes them true? But this simple statement masks a great deal of controversy. Whether there is a metaphysical problem of truth at all, and if there is, what kind of theory might address it, are all standing issues in the theory of truth.

Some quotations and aphorisms about truth:

The truth that makes men free is for the most part the truth which men prefer not to hear.
Herbert Agar

Materialism coarsens and petrifies everything, making everything vulgar, and every truth false.
Henri Frederic Amiel

An epigram is a flashlight of a truth; a witticism, truth laughing at itself.
Minna Antrim

Truth sits upon the lips of dying men.
Matthew Arnold

Everything we hear is an opinion, not a fact. Everything we see is a perspective, not the truth.
Marcus Aurelius

Not being known doesn’t stop the truth from being true.
Richard Bach

There is no original truth, only original error.
Gaston Bachelard

Truth is a good dog; but always beware of barking too close to the heels of an error, lest you get your brains kicked out.
Francis Bacon

Truth is so hard to tell, it sometimes needs fiction to make it plausible.
Francis Bacon

Truth emerges more readily from error than from confusion.
Francis Bacon

Truth is the daughter of time, not of authority.
Francis Bacon

What is truth? said jesting Pilate; and would not stay for an answer.
Francis Bacon

You never find yourself until you face the truth.
Pearl Bailey

Falsehood is cowardice, the truth courage.
Hosea Ballou

Man can certainly keep on lying… but he cannot make truth falsehood. He can certainly rebel… but he can accomplish nothing which abolishes the choice of God.
Karl Barth

Truth is meant to save you first, and the comfort comes afterward.
Georges Bernanos

As scarce as truth is, the supply has always been in excess of the demand.
Josh Billings

When you want to fool the world, tell the truth.
Otto von Bismarck

A truth that’s told with bad intent beats all the lies you can invent.
William Blake

Truth never penetrates an unwilling mind.
J. L. Borges

Truth, though it has many disadvantages, is at least changeless. You can always find it where you left it.
Phyllis Bottome

The truth is always exciting. Speak it, then. Life is dull without it.
Pearl S. Buck

Truth is always strange, stranger than fiction.
Lord Byron

A dog barks when his master is attacked. I would be a coward if I saw that God’s truth is attacked and yet would remain silent.
John Calvin

Truth, like light, blinds. Falsehood, on the contrary, is a beautiful twilight that enhances every object.
Albert Camus

Truth, of course, must of necessity be stranger than fiction; for we have made fiction to suit ourselves.
G. K. Chesterton

The truth is incontrovertible, malice may attack it, ignorance may deride it, but in the end; there it is.
Winston Churchill

This is the truth: as from a fire aflame thousands of sparks come forth, even so from the Creator an infinity of beings have life and to him return again.
Marcus Tullius Cicero

Truth is so rare that it is delightful to tell it.
Emily Dickinson

God offers to every mind its choice between truth and repose. Take which you please – you can never have both.
Ralph Waldo Emerson

Truth is beautiful, without doubt; but so are lies.
Ralph Waldo Emerson

Truth hurts – not the searching after; the running from!
John Eyberg

There is no truth. There is only perception.
Gustave Flaubert

God is, even though the whole world deny him. Truth stands, even if there be no public support. It is self-sustained.
Mohandas Gandhi

I worship God as Truth only. I have not yet found Him, but I am seeking after Him. I am prepared to sacrifice the things dearest to me in pursuit of this quest. Even if the sacrifice demanded my very life, I hope I may be prepared to give it.
Mohandas Gandhi

Wisdom is found only in truth.
Johann Wolfgang von Goethe

First and last, what is demanded of genius is love of truth.
Johann Wolfgang von Goethe

The truth isn’t always beauty, but the hunger for it is.
Nadine Gordimer

Truth, like a torch, the more it’s shook it shines.
William Hamilton

Truth is the torch that gleams through the fog without dispelling it.
Claude Adrien Helvétius

To attempt seeing Truth without knowing Falsehood. It is the attempt to see the Light without knowing the Darkness. It cannot be.
Frank Herbert

The truth has a million faces, but there is only one truth.
Hermann Hesse

Live truth instead of professing it.
Elbert Hubbard

Proverbs are always platitudes until you have personally experienced the truth of them.
Aldous Huxley

Truth consists of having the same idea about something that God has.
Joseph Joubert

Truth forever on the scaffold, wrong forever on the throne.
James Russell Lowell

All credibility, all good conscience, all evidence of truth come only from the senses.
Friedrich Nietzsche

There is no such thing as a harmless truth.
Gregory Nunn

The truth knocks on the door and you say, go away, I’m looking for the truth, and it goes away. Puzzling.
Robert M. Pirsig

Truth is like the sun. You can shut it out for a time, but it ain’t goin’ away.
Elvis Presley

One fool will deny more truth in half an hour than a wise man can prove in seven years.
Coventry Patmore

Truth is polymorphic, multi-functional, multi-layered and with many abstractions.
Dr Rationalist

The absolute truth is the thing that makes people laugh.
Carl Reiner

People say they love truth, but in reality they want to believe that which they love is true.
Robert J. Ringer

Truth does not do as much good in the world as the semblance of truth does evil.
Duc de La Rochefoucauld

The truth. It is a beautiful and terrible thing, and must therefore be treated with great caution.
J. K. Rowling

Search for the truth is the noblest occupation of man; its publication is a duty.
Madame de Stael

The truth will set you free. But first, it will p*ss you off.
Gloria Steinem

If you shut the door to all errors truth will be shut out.
Rabindranath Tagore

It’s no wonder that truth is stranger than fiction. Fiction has to make sense.
Mark Twain

The words of truth are always paradoxical.
Lao Tzu

Truth is one, but error is manifold.
Simone Weil

Truth provokes those whom it does not convert.
Bishop Thomas Wilson

The logic of the world is prior to all truth and falsehood.
Ludwig Wittgenstein

Truly, the beginning of anything and its end are alike touching.
Kenko Yoshida

The truth is on the march and nothing will stop it.
Emile Zola

Correspondence theory

Correspondence theories claim that true beliefs and true statements correspond to the actual state of affairs. This type of theory attempts to posit a relationship between thoughts or statements on the one hand, and things or objects on the other. This class of theories holds that the truth or the falsity of a representation is determined in principle solely by how it relates to objective reality, by whether it accurately describes that reality. For example, there is a true distance from the Earth to the Moon, and a journey there can succeed only if that distance is accurately known.

Correspondence theory traditionally operates on the assumption that truth is a matter of accurately copying “objective reality” and then representing it in thoughts, words and other symbols. More modern theorists have stated that this ideal cannot be achieved independently of some analysis of additional factors. For example, language plays a role in that all languages have words that are not easily translatable into another. The German word Zeitgeist is one such example: one who speaks or understands the language may “know” what it means, but any translation of the word fails to accurately capture its full meaning (this is a problem with many abstract words, especially compounds of the kind that German and agglutinative languages form freely). Thus, language itself adds an additional parameter to the construction of an accurate truth predicate.

Proponents of several of the theories below have gone farther to assert that there are yet other issues necessary to the analysis, such as interpersonal power struggles, community interactions, personal biases and other factors involved in deciding what is seen as truth.

Coherence theory

For coherence theories in general, truth requires a proper fit of elements within a whole system. Very often, though, coherence is taken to imply something more than simple logical consistency; often there is a demand that the propositions in a coherent system lend mutual inferential support to each other. So, for example, the completeness and comprehensiveness of the underlying set of concepts is a critical factor in judging the validity and usefulness of a coherent system. A pervasive tenet of coherence theories is the idea that truth is primarily a property of whole systems of propositions, and can be ascribed to individual propositions only according to their coherence with the whole. Among the assortment of perspectives commonly regarded as coherence theory, theorists differ on the question of whether coherence entails many possible true systems of thought or only a single absolute system.

Some variants of coherence theory are claimed to characterize the essential and intrinsic properties of formal systems in logic and mathematics. However, formal reasoners are content to contemplate axiomatically independent and sometimes mutually contradictory systems side by side, for example, the various alternative geometries. On the whole, coherence theories have been criticized as lacking justification when applied to other areas of truth: assertions about the natural world, empirical data in general, and claims about practical matters of psychology and society, particularly when coherence is invoked without support from the other major theories of truth.

Constructivist theory

Social constructivism holds that truth is constructed by social processes, is historically and culturally specific, and is in part shaped through power struggles within a community. Constructivism views all of our knowledge as “constructed,” because it does not reflect any external “transcendent” realities (as a pure correspondence theory might hold). Rather, perceptions of truth are viewed as contingent on convention, human perception, and social experience. Constructivists believe that representations of physical and biological reality, including race, sexuality, and gender, are socially constructed.

Consensus theory

Consensus theory holds that truth is whatever is agreed upon, or in some versions, might come to be agreed upon, by some specified group. Such a group might include all human beings, or a subset thereof consisting of more than one person.

Pragmatic theory

The three most influential forms of the pragmatic theory of truth were introduced around the turn of the 20th century by Charles S. Peirce, William James, and John Dewey. Although there are wide differences in viewpoint among these and other proponents of pragmatic theory, they hold in common that truth is verified and confirmed by the results of putting one’s concepts into practice.

Peirce defines truth as follows: “Truth is that concordance of an abstract statement with the ideal limit towards which endless investigation would tend to bring scientific belief, which concordance the abstract statement may possess by virtue of the confession of its inaccuracy and one-sidedness, and this confession is an essential ingredient of truth.” This statement emphasizes Peirce’s view that ideas of approximation, incompleteness, and partiality, what he describes elsewhere as fallibilism and “reference to the future”, are essential to a proper conception of truth. Although Peirce uses words like concordance and correspondence to describe one aspect of the pragmatic sign relation, he is also quite explicit in saying that definitions of truth based on mere correspondence are no more than nominal definitions, which he accords a lower status than real definitions.

William James’s version of pragmatic theory, while complex, is often summarized by his statement that “the ‘true’ is only the expedient in our way of thinking, just as the ‘right’ is only the expedient in our way of behaving.” By this, James meant that truth is a quality the value of which is confirmed by its effectiveness when applying concepts to actual practice (thus, “pragmatic”).

John Dewey, less broadly than James but more broadly than Peirce, held that inquiry, whether scientific, technical, sociological, philosophical or cultural, is self-corrective over time if openly submitted for testing by a community of inquirers in order to clarify, justify, refine, and/or refute proposed truths.

Minimalist (deflationary) theories

A number of philosophers reject the thesis that the concept or term truth refers to a real property of sentences or propositions. These philosophers are responding, in part, to the common use of truth predicates (e.g., that some particular thing “…is true”), which was particularly prevalent in philosophical discourse on truth in the first half of the 20th century. From this point of view, to assert the proposition “‘2 + 2 = 4’ is true” is logically equivalent to asserting the proposition “2 + 2 = 4”, and the phrase “is true” is completely dispensable in this and every other context. These positions are broadly described as deflationary theories of truth, since they attempt to deflate the presumed importance of the words “true” or truth; as disquotational theories, to draw attention to the disappearance of the quotation marks in cases like the above example; or as minimalist theories of truth.

Whichever term is used, deflationary theories can be said to hold in common that “[t]he predicate ‘true’ is an expressive convenience, not the name of a property requiring deep analysis.” Once we have identified the truth predicate’s formal features and utility, deflationists argue, we have said all there is to be said about truth. Among the theoretical concerns of these views is the need to explain away those special cases where the concept of truth does appear to have peculiar and interesting properties.

In addition to highlighting such formal aspects of the predicate “is true”, some deflationists point out that the concept enables us to express things that might otherwise require infinitely long sentences. For example, one cannot express confidence in Michael’s accuracy by asserting the endless sentence:

Michael says, ‘snow is white’ and snow is white, or he says ‘roses are red’ and roses are red or he says … etc.

But it can be expressed succinctly by saying: Whatever Michael says is true.
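The generalizing role of the truth predicate described above can be sketched in code. This is only a toy model, and every name in it (`facts`, `holds`, `michael_says`) is hypothetical; it shows how one finite, quantified claim stands in for the endless sentence:

```python
# Toy illustration of the deflationary point: the truth predicate lets
# us express what would otherwise need an endless sentence.
# All names here (facts, holds, michael_says) are hypothetical.

# A tiny "world": which reported sentences actually obtain.
facts = {
    "snow is white": True,
    "roses are red": True,
    "grass is purple": False,
}

def holds(sentence: str) -> bool:
    """Disquotation: '<sentence>' is true iff what it reports obtains."""
    return facts.get(sentence, False)

michael_says = ["snow is white", "roses are red"]

# "Whatever Michael says is true" -- one finite claim standing in for
# the unbounded conjunction of instances ("he says 'snow is white' and
# snow is white, and he says 'roses are red' and roses are red, and ...").
michael_is_accurate = all(holds(s) for s in michael_says)
print(michael_is_accurate)  # True in this toy model
```

The `all(...)` call quantifies over whatever Michael says, however long the list grows, whereas spelling out each instance separately would require a sentence of unbounded length.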

Performative theory of truth

Attributed to P. F. Strawson is the performative theory of truth which holds that to say “‘Snow is white’ is true” is to perform the speech act of signaling one’s agreement with the claim that snow is white (much like nodding one’s head in agreement). The idea that some statements are more actions than communicative statements is not as odd as it may seem. Consider, for example, that when the bride says “I do” at the appropriate time in a wedding, she is performing the act of taking this man to be her lawful wedded husband. She is not describing herself as taking this man. In a similar way, Strawson holds: “To say a statement is true is not to make a statement about a statement, but rather to perform the act of agreeing with, accepting, or endorsing a statement. When one says ‘It’s true that it’s raining,’ one asserts no more than ‘It’s raining.’ The function of [the statement] ‘It’s true that…’ is to agree with, accept, or endorse the statement that ‘it’s raining.'”

Redundancy and related theories

According to the redundancy theory of truth, asserting that a statement is true is completely equivalent to asserting the statement itself. For example, making the assertion that ” ‘Snow is white’ is true” is equivalent to asserting “Snow is white”. Redundancy theorists infer from this premise that truth is a redundant concept; that is, it is merely a word that is traditionally used in conversation or writing, generally for emphasis, but not a word that actually equates to anything in reality. This theory is commonly attributed to Frank P. Ramsey, who held that the use of words like fact and truth was nothing but a roundabout way of asserting a proposition, and that treating these words as separate problems in isolation from judgment was merely a “linguistic muddle”.

A variant of redundancy theory is the disquotational theory, which uses a modified form of Tarski’s schema: to say that ‘“P” is true’ is to say that P. Yet another version of deflationism is the prosentential theory of truth, first developed by Dorothy Grover, Joseph Camp, and Nuel Belnap as an elaboration of Ramsey’s claims. They argue that sentences like “That’s true”, when said in response to “It’s raining”, are prosentences, expressions that merely repeat the content of other expressions. In the same way that “it” means the same as “my dog” in the sentence “My dog was hungry, so I fed it”, “That’s true” is supposed to mean the same as “It’s raining” if you say the latter and I then say the former. These variations do not necessarily follow Ramsey in asserting that truth is not a property, but rather can be understood to say that, for instance, the assertion “P” may well involve a substantial truth, and the theorists in this case are minimalizing only the redundancy or prosentence involved in statements such as “that’s true.”
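The schema attributed above to Tarski can be displayed explicitly. This is the standard textbook form of the equivalence (T-) schema, not a formula taken from this article:

```latex
% Disquotational (T-) schema: one instance for each sentence P.
\text{``}P\text{'' is true} \iff P
% Example instance:
\text{``snow is white'' is true} \iff \text{snow is white}
```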

Deflationary principles do not apply to representations that are not analogous to sentences, and also do not apply to many other things that are commonly judged to be true or otherwise. Consider the analogy between the sentence “Snow is white” and the character Snow White, both of which can be true in some sense. To a minimalist, saying “‘Snow is white’ is true” is the same as saying “Snow is white”, but to say “Snow White is true” is not the same as saying “Snow White”.

Truth in Ethics

Controversies, such as the Freedom of Speech debate at the Oxford Union, always bring people back to the questions of what truth is, what ethics are, and whether the two relate to each other in any critical manner. To this extent, we have, on The European Rationalist, written and referenced a number of previous articles on the subject: How Could I Be Wrong? How Wrong Could I Be?; Delusions, Beliefs; Theism, Atheism, and Rationality; Science and Truth; etc. At the level of general public debate we begin to think of issues such as whether free speech (and consequently what we believe and why we believe it) has an envelope beyond which it becomes unacceptable to current norms and metrics. The only problem is who decides – the media barons, big business, religious groups, powerful minorities?

I came across the book True to Life and found a quite good review of it by Kieran Setiya that I think is worth reading to start off with.

In True to Life, Michael Lynch sets out to defend “four truisms about truth”: truth is objective, a “cognitive good”, a worthy goal of inquiry, and something valuable in itself. On the back cover, Nussbaum says that the book “performs a major public service”.  

The argument of the book is intricate, though it is presented with an enviably light touch. It begins with the platitude that a belief is correct if and only if its object is a true proposition; deduces that, if p is true, it is good to believe p, other things being equal; interprets this as final or non-instrumental value; and concludes that truth is itself a normative property, and, given Moore’s “open question argument”, an irreducible one: “If truth matters, reductive naturalism is false.”

In a different context, it would be interesting to engage with these steps, each of which is controversial. Here, my focus is rhetorical. Who is Lynch writing for, and what are his chances of convincing them?

I think he cannot be writing for the post-modernist “enemies of truth” alleged to inhabit our English Departments. They will rightly feel that they are not taken seriously here. There is no mention of Derrida, and only a page or two on Foucault. In any case, the whole operation will seem to them naïvely unhistorical. To engage with them, one has to sink, or rise, to their level – as in Literature Against Itself.

Perhaps the aim of the book is prophylactic: it is meant to forestall the attractions of subjectivism and the cynical equation of truth with power. But if this is his persuasive task, Lynch has adopted an unfortunate strategy. Arguing that one cannot accept the value of truth without Moorean non-naturalism is bad salesmanship, even if it is sound. It is not just the post-modern crowd who cannot stomach Principia Ethica: most philosophers find its commitments incredible.

The effect of True to Life, if it carries conviction, will thus be to mire the truisms about truth in a swamp of metaphysics, to entrench the suspicion that those who believe in the possibility and the value of objective truth inhabit a Platonic jungle. As I said, that might be so – I haven’t tried to engage with Lynch’s arguments – but it would be terrible news. This truth might be one of those we do better not to believe.

How Could I Be Wrong? How Wrong Could I Be?

One of the striking, even amusing, spectacles to be enjoyed at the many workshops and conferences on consciousness these days is the breathtaking overconfidence with which laypeople hold forth about the nature of consciousness – their own in particular, but everybody’s by extrapolation. Everybody’s an expert on consciousness, it seems, and it doesn’t take any knowledge of experimental findings to secure the home truths these people enunciate with such conviction.

One of my goals over the years has been to shatter that complacency, and secure the scientific study of consciousness on a proper footing. There is no proposition about one’s own or anybody else’s conscious experience that is immune to error, unlikely as that error might be.  I have come to suspect that refusal to accept this really quite bland denial of what would be miraculous if true lies behind most if not all the elaboration of fantastical doctrines about consciousness recently defended. This refusal fuels the arguments about the conceivability of zombies, the importance of a first-person science of consciousness, intrinsic intentionality and various other hastily erected roadblocks to progress in the science of consciousness.

You can’t have infallibility about your own consciousness. Period. But you can get close – close enough to explain why it seems so powerfully as if you do. First of all, the intentional stance (Dennett, 1971, 1987) guarantees that any entity that is voluminously and reliably predictable as an intentional system will have a set of beliefs (including the most intimate beliefs about its personal experiences) that are mainly true.  So each of us can be confident that in general what we believe about our conscious experiences will have an interpretation according to which we are, in the main, right. How wrong could I be? Not that wrong. Not about most things. There has to be a way of nudging the interpretation of your manifold of beliefs about your experience so that it comes out largely innocent of error though this might not be an interpretation you yourself would be inclined to endorse.  This is not a metaphysical gift, a proof that we live in the best of all possible worlds. It is something that automatically falls out of the methodology: when adopting the intentional stance, one casts about for a maximally charitable (truth-rendering) interpretation, and there is bound to be one if the entity in question is hale and hearty in its way.

But it does not follow from this happy fact that there is a path or method we can follow to isolate some privileged set of guaranteed-true beliefs. No matter how certain you are that p, it may turn out that p is one of those relatively rare errors of yours, an illusion, even if not a grand illusion. But we can get closer, too.  Once you have an intentional system with a capacity for communicating in a natural language, it offers itself as a candidate for the rather special role of self-describer, not infallible but incorrigible in a limited way: it may be wrong, but there may be no way to correct it. There may be no truth-preserving interpretation of all of its expressed opinions (Dennett, 1978, 1991) about its mental life, but those expressed opinions may be the best source we could have about what it is like to be it. A version of this idea was made (in-)famous by Richard Rorty back in his earlier incarnation as an analytic philosopher, and has been defended by me more recently in The Case for Rorts (Dennett, 2000). There I argue that if, for instance, Cog, the humanoid robot being developed by Rodney Brooks and his colleagues at MIT, were ever to master English, its own declarations about its subjectivity would systematically tend to trump the third-person opinions of its makers, even though they would be armed, in the limit, with perfect information about the micro-mechanical implementation of that subjectivity. This, too, falls out of the methodology of the intentional stance, which is the only way (I claim) to attribute content to the states of anything.

The price we pay for this near-infallibility is that our heterophenomenological worlds may have to be immersed in a bath of metaphor in order to come out mainly true. That is, our sincere avowals may have to be rather drastically reconstrued in order to come out literally true. For instance, when we sincerely tell our interrogators about the mental images we’re manipulating, we may not think we’re talking about convolutions of data-structures in our brain – we may well think we’re talking about immaterial ectoplasmic composites, or intrinsic qualia, or quantum perturbations in our microtubules! But if the interrogators rudely override these ideological glosses and disclaimers of ours and forcibly re-interpret our propositions as actually being about such data-structure convolutions, these propositions will turn out to be, in the main, almost all true, and moreover deeply informative about the ways we solve problems, think about the world, and fuel our subjective opinions in general. (In this regard, there is nothing special about the brain and its processes; if you tell the doctor that you have a certain sort of traveling pain in your gut, your doctor may well decide that you’re actually talking about your appendix, whatever you may think you’re talking about, and act accordingly.)

Since we are such reflective and reflexive creatures, we can participate in the adjustment of the attributions of our own beliefs, and a familiar philosophical move turns out to be just such reflective self-re-adjustment, but not a useful one. Suppose you say you know just what beer tastes like to you now, and you are quite sure you remember what beer tasted like to you the first time you tasted it, and you can compare, you say, the way it tastes now to the way it tasted then. Suppose you declare the taste to be the same. You are then asked: does anything at all follow from this subjective similarity in the way of further, objectively detectable similarities? For instance, does this taste today have the same higher-order effects on you as it used to have? Does it make you as happy or as depressed, or does it enhance or diminish your capacity to discriminate colors, or retrieve synonyms, or remember the names of your childhood friends? Or have your other, surrounding dispositions and habits changed so much in the interim that it is not to be expected that the very same taste (the same quale, one may venture to say, pretending to know what one is talking about) would have any of the same effects at this later date? You may very well express ignorance about all such implications. All you know, you declare, is that this beer now tastes just like that first beer did (at least in some ineffable, intrinsic regard) whether or not it has any of the same further effects or functions. But by explicitly jettisoning all such implications from your proposition, you manage to guarantee that it has been reduced to a vacuity. You have jealously guarded your infallibility by seeing to it that you’ve adjusted the content of your claim all the way down to zero. You can’t be wrong, because there’s nothing left to be right or wrong about.

This move is always available, but it availeth nought. It makes no difference, by the way, whether you said the beer tastes the same or different; the same point goes through if you insist it tastes different now. Once your declaration is stripped of all powers of implication, it is an empty assertion, a mere demonstration that this is how you fancy talking at this moment. Another version of this self-vacating move can be seen, somewhat more starkly, in a reaction some folks opt for when they have it demonstrated to them that their color vision doesn’t extend to the far peripheries of their visual fields: they declare that, on the contrary, their color vision in the sense of color experience does indeed extend to the outer limits of their phenomenal fields; they just disavow any implications about what this color experience they enjoy might enable them to do, e.g., identify by name the colors of the objects there to be experienced! They are right, of course, that it does not follow from the proposition that one is having color experiences that one can identify the colors thus experienced, or do better than chance in answering “same or different?” questions, or use color differences to detect shapes (as in a color-blindness test), to take the most obvious further effects. But if nothing follows from the claim that their peripheral field is experienced as colored, their purported disagreement with the researchers’ claim that their peripheral field lacks color altogether evaporates.

O’Regan and Noë (2001) argue that my heterophenomenology makes the mistake of convicting naive subjects of succumbing to a grand illusion.

But is it true that normal perceivers think of their visual fields this way [as in sharp detail and uniform focus from the center out to the periphery]? Do normal perceivers really make this error? We think not. … Normal perceivers do not have ideological commitments concerning the resolution of the visual field. Rather, they take the world to be solid, dense, detailed and present, and they take themselves to be embedded in and thus to have access to the world. [pXXX]

My response to this was:

Then why do normal perceivers express such surprise when their attention is drawn to facts about the low resolution (and loss of color vision, etc.) of their visual peripheries? Surprise is a wonderful dependent variable, and should be used more often in experiments; it is easy to measure and is a telling betrayal of the subject’s having expected something else. These expectations are, indeed, an overshooting of the proper expectations of a normally embedded perceiver-agent; people shouldn’t have these expectations, but they do. People are shocked, incredulous, dismayed; they often laugh and shriek when I demonstrate the effects to them for the first time. (Dennett, 2001, pXXXX)

O’Regan and Noë (see also Noë, Pessoa, and Thompson (2000), Noë (2001), and Noë and O’Regan, forthcoming) are right that it need not seem to people that they have a detailed picture of the world in their heads. But typically it does. It also need not seem to them that they are not zombies, but typically it does. People like to have ideological commitments. They are inveterate amateur theorizers about what is going on in their heads, and they can be mighty wrong when they set out on these paths.

For instance, quite a few theorizers are very, very sure that they have something that they sometimes call original intentionality. They are prepared to agree that interpretive adjustments can enhance the reliability of the so-called reports of the so-called content of the so-called mental states of a robot like Cog, because those internal states have only derived intentionality, but they are of the heartfelt opinion that we human beings, in contrast, have the real stuff: we are endowed with genuine mental states that have content quite independently of any such charitable scheme of interpretation.  That’s how it seems to them, but they are wrong.

How could they be wrong? They could be wrong about this because they could be wrong about anything because they are not gods. How wrong could they be?  Until we excuse them for their excesses and re-interpret their extravagant claims in the light of good third-person science, they can be utterly, bizarrely wrong. Once they relinquish their ill-considered grip on the myth of first-person authority and recognize that their limited incorrigibility depends on the liberal application of a principle of charity by third-person observers who know more than they do about what is going on in their own heads, they can become invaluable, irreplaceable informants in the investigation of human consciousness.


  • Dennett, D.C. (1971). Intentional systems. Journal of Philosophy, 68, 87–106.
  • Dennett, D.C. (1978). How to change your mind. In Brainstorms. Cambridge, MA: MIT Press.
  • Dennett, D.C. (1987). The intentional stance. Cambridge, MA: MIT Press.
  • Dennett, D.C. (1991). Consciousness explained. Boston: Little, Brown; London: Allen Lane, 1992.
  • Dennett, D.C. (2000). The case for rorts. In R. Brandom (Ed.), Rorty and his critics. Oxford: Blackwell.
  • Dennett, D.C. (2001). Surprise, surprise. Commentary on O’Regan and Noë (2001). Behavioral and Brain Sciences, 24(5), pp.xxxx.
  • O’Regan, J.K. & Noë, A. (2001). Behavioral and Brain Sciences, 24(5), pp.xxxxx.
  • Noë, A., Pessoa, L. & Thompson, E. (2000). Beyond the grand illusion: What change blindness really teaches us about vision. Visual Cognition, 7, 93–106.
  • Noë, A. (2001). Experience and the active mind. Synthese, 129, 41–60.
  • Noë, A. & O’Regan, J.K. Perception, attention and the grand illusion. Psyche, 6(15). URL: http://psyche.cs.monash.edu.au/v6/psyche-6-15-noe.html

From “How could I be wrong? How wrong could I be?” by Daniel C. Dennett, Center for Cognitive Studies, Tufts University, Medford, MA 02155. Special issue of the Journal of Consciousness Studies on The Grand Illusion, January 13, 2002.

Delusions, Beliefs

I spotted the following article in The Psychologist, Vol. 16, and it made me realise how many people of different faiths, beliefs, and mindsets could reasonably be considered deluded. Indeed, I know the case of a Swedish convert to an Eastern religion to whom I could relate this article (in fact, I can relate it to many followers of Western and Eastern religions). Standing back and looking at ourselves in a reasonably objective way is crucial for a rationalist. Unfortunately, it is a luxury that eludes many people. Being critical of everything and everyone is an intellectual honesty, and a burden, that not all people are capable of carrying. Of course, intellectual honesty is a casualty for those who want certainty in their lives at the expense of truth, and it is intellectual terrorism for those who centre their lives around the zealous propagation of religion, faith, and unquestioning culture.

Early in his third month in office, President Reagan was on his way to address a conference when John Hinckley fired six gunshots at point-blank range, wounding the president and three of his entourage. In the controversial trial that followed, three defence psychiatrists successfully argued that Hinckley was not guilty by reason of insanity, on the grounds that he was suffering from the delusion that the assassination would cause Jodie Foster, the actress from Taxi Driver (a film with which Hinckley was obsessed), to fall in love with him. In the same year the award-winning author Philip K. Dick, whose books have been turned into major Hollywood films such as Blade Runner, Total Recall and Minority Report, published one of his last books. The sprawling and eccentric VALIS is a novel based on delusions resulting from his own psychotic breakdown, which he drew on for much of his prolific career (see box 1).

From these and many other examples, it would appear that unusual or unlikely beliefs have significant consequences and continue to captivate the interest of many of us. But to examine such claims we need to know what is meant by a delusion. How do delusions differ from other abnormal beliefs? Does the study of delusions provide a productive way of understanding beliefs?

Box 1: Philip K. Dick
Many novels and short stories by Philip K. Dick contain elements from the delusions he suffered regarding identity and the nature of reality. Dick described many bizarre experiences and came to believe that human development was controlled by an entity called VALIS (Vast Active Living Intelligence System) and that his perception of Orange County, California was an illusion disguising the fact that he was really living in first-century Rome. There were multiple reasons for Dick’s bizarre beliefs, given his share of trauma, phobias and drug abuse, but it is likely that many of the delusions he wrote about stemmed from psychotic episodes he experienced as a sufferer and as an observer of others. This alone makes his work of great psychological interest. However, Dick also seems to have had some knowledge of contemporary psychology himself, incorporating as he did the work of Penfield, Vygotsky and Luria (among others) into his stories.

Defining issues

Delusions are one of the most important constructs used by psychiatrists to diagnose patients who are considered to have lost touch with reality (Maher, 1988). For Jaspers (1963), one of the founders of modern psychiatry, delusions constituted the ‘basic characteristic of madness’ despite being ‘psychologically irreducible’.

More significantly, the detection of delusions has ‘enormous implications for diagnosis and treatment, as well as complex notions concerning responsibility, prediction of behaviour, etc.’ (David, 1999). Yet, as pointed out by many commentators (see Jones, 1999), the clinical usage of the term delusion and its distinction from other abnormal beliefs involve a host of semantic and epistemological difficulties. Predominant amongst these is our belief that delusions are (to a large extent) self-evident; that is, that they constitute a type of belief that (almost) everyone else would recognise as pathological. This, however, is more apparent than real, as is reflected in the many different opinions that surround the definition of the construct (Berrios, 1991; Garety & Hemsley, 1994; Spitzer, 1990). Indeed, David (1999) has suggested ‘there is no acceptable (rather than accepted) definition of a delusion’ (p.17).

For most of us, however, these thorny issues of definition can be sidestepped by choosing to adopt the descriptive and widespread characterisation offered by the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM-IV). This established psychiatric nosology text considers a delusion to be, first and foremost, a form of belief: a belief whose acceptance and subsequent behaviour can constitute the grounds for insanity. But no justification is offered and the statement itself amounts to a belief in delusions. More explicitly, the standard definition characterises delusions as false, based on an incorrect inference about external reality and different from what almost everyone else believes (APA, 1994). Other features such as degree of conviction and imperviousness to persuasion do not set delusions apart from other beliefs (Garety & Hemsley, 1994).

Delusions – An abnormal belief by any other name

Despite differences in emphasis, most definitions consider two criteria to be significant when establishing a delusion: falsifiability and bizarreness. Simply described, ‘bizarre delusions are generally impossible, whereas non-bizarre delusions are generally improbable’ (Sedler, 1995, p.256). The DSM-IV distinguishes these as follows: a non-bizarre delusion may involve situations that in principle could occur in real life but are thought (by the psychiatrist) to be highly improbable and therefore potentially falsifiable; a bizarre or fantastic belief, however, is considered impossible and therefore assumed to be one not normally held by others in the culture or society. The problem with each of these definitions lies not with the differential distinction, but with the absence of agreed operational definitions as to how these criteria are arrived at clinically.

The DSM definition does not specify how one might set about establishing the falseness or bizarreness of the belief; nor how one could know whether the belief was the product of an impaired inference, such as occurs in paranoid patients, who show a tendency to jump to conclusions in situations requiring probabilistic reasoning (Bentall, 1994). Here we turn to some specific problems.

Falsifiability Non-bizarre delusions involve situations and events that could occur in real life, such as believing that one is being followed, infected, poisoned or deceived by another. Therefore the ‘falsifiability’ criterion can mean that psychiatrists are often required to make judgements on claims of marital infidelity, persecution or conspiracy in the workplace (Jones, 1999), where the available relevant evidence is either limited, cannot be ascertained within the confines of the consulting room, or lies beyond the forensic capabilities of the clinician. As pointed out by Young (2000), ‘many of the beliefs considered to be delusions do not meet these criteria (or are not tested against them) in practice’ (p.47). This can have some curious consequences (see ‘The Martha Mitchell effect’, box 2).

Accordingly, this falsity criterion has been rightly questioned (Spitzer, 1990). Moreover, it is unclear what level of evidence would be required to consider a belief ‘incontrovertibly false’ and whether judgements should be based on the ‘balance of probabilities’ or the more stringent test of ‘beyond reasonable doubt’. ‘Delusional’ beliefs, consequently, may not be false (Heise, 1988) or even firmly sustained (Myin-Germeys et al., 2001).

Bizarre beliefs The attribution that a delusion is bizarre is typically defined in terms of beliefs considered not normally held by other members of a person’s culture or society. This, however, often first involves the psychiatrist’s own evaluation as regards the plausibility of the belief; after which the psychiatrist considers whether it is one typically sustained by the others in the person’s culture. Although both evaluations may be related, they need not be. If, based on his or her own beliefs and experience, the psychiatrist considers the belief sufficiently bizarre, then presumably a diagnosis of delusion can be made independent of ascertaining the actual prevalence of the belief in the patient’s culture. 

Box 2: The Martha Mitchell Effect
Sometimes improbable patient reports are erroneously assumed to be symptoms of mental illness (Maher, 1988). The ‘Martha Mitchell effect’ refers to the tendency of mental health practitioners not to believe the experience of the wife of the American attorney general, whose persistent reports of corruption in the Nixon White House were initially dismissed as evidence of delusional thinking, until later proved correct by the Watergate investigation. Such examples demonstrate that delusional pathology can often lie in the failure or inability to verify whether the events have actually taken place, however improbable they might intuitively appear to the busy clinician. Clearly, there are instances ‘where people are pursued by the Mafia’ or are ‘kept under surveillance by the police’, and where they rightly suspect ‘that their spouse is unfaithful’ (Sedler, 1995). As Joseph H. Berke (1998) wrote, even paranoids have enemies! For understandable and obvious reasons, however, little effort is invested by clinicians into checking the validity of claims of persecution or harassment, and without such evidence the patient could be labelled delusional.

The DSM definition, however, clearly assumes that the criterion of abnormality or bizarreness should be obvious, given that the belief is one not ordinarily accepted by other members of a person’s culture or subculture. This is not necessarily a reliable strategy: many studies of psychiatrists show poor interrater reliability for ratings of bizarre beliefs (Flaum et al., 1991; Junginger et al., 1992). Moreover, most clinicians are not in a position to know or find out whether such beliefs comprise those normally accepted, except by direct comparison with those of his or her own peer group. One method of comparison is the use of large-scale surveys, but most clinical judgements on the prevalence of beliefs in society are not typically informed by empirical evidence.

In fact, beliefs in unscientific or parapsychological phenomena are not statistically uncommon (see Della Salla, 1999), and were this criterion alone employed as a sufficient condition, then many of us at times might be classified as delusional (Moor & Tucker, 1979). Large-scale marketing research polls carried out in the UK and North America consistently reveal that significant numbers of people within society hold strong beliefs about the paranormal. For example, a 1998 UK survey found that 41 per cent of respondents believed in communication with the dead, and 49 per cent believed in heaven – but only 28 per cent in hell (‘Survey of paranormal beliefs’, 1998). Such surveys also reveal important cultural differences in held beliefs. In many Western countries opinion polls confirm that large numbers believe in god(s) and hold other paranormal beliefs (Taylor, 2003). Consequently, religious beliefs, including praying to a deity, are not typically considered delusional, while believing and claiming that one is a deity (see ‘The Three Christs of Ypsilanti’, box 3) or that one’s spouse has been replaced (see ‘Capgras delusion’, box 4) typically are.

The existence of high levels of conviction in what might be considered abnormal, unscientific or paranormal beliefs raises important questions for mental health workers when justifying the notion of bizarre beliefs on purely conceptual or statistical grounds. As pointed out by French (1992), most beliefs are based upon ‘personal experiences perhaps supported by reports of trusted others, and the general cultural acceptance that such phenomena are indeed genuine’.

Although clinically important, the conceptual basis for the criteria of falsification or impossibility clearly breaks down under scrutiny. It is also problematic because psychotic symptoms such as delusions and hallucinations are not inevitably associated with the presence of a psychiatric disorder (Johns & van Os, 2001). Consequently, patients with DSM-IV-type delusions do not constitute a homogeneous group.

Box 3: The Three Christs Of Ypsilanti
In 1959 social psychologist Milton Rokeach brought together three schizophrenic patients in the same psychiatric ward in Ypsilanti, Michigan, all of whom suffered from the Messiah complex – each believed he was Jesus Christ. Rokeach was interested in seeing whether these mutually exclusive delusions would interact and affect the extent of conviction and content of each patient’s delusional beliefs. In his book Rokeach (1964/1981) records how each patient dealt with this conflict, one by avoidance, one by relinquishing his delusion and the other by attributing the identity claims of his compatriots to mental illness. Whilst this study would be considered ethically dubious today, it was one of the most original forays into the study of psychopathology where the explicit aim was to inform normal belief processes.

More often than not the decision about whether or not a belief is delusional is made on pragmatic grounds – namely, the evidential consequences of the belief, including the extent of personal distress, potential or actual injury, or social danger generated by the belief. Sometimes the decision may be simple – Cotard’s delusion, a person’s belief that they are dead, may be assessed differently from a delusion of grandeur such as believing that you are dating a famous TV star.

Can delusions tell us about ‘normal’ beliefs?

Notwithstanding difficulties with the standard psychiatric definitions, most people accept that normal beliefs play an essential and fundamental role in establishing mental reference points from which to explain and interact with the world. It is impossible to understand racism, prejudice, and political and religious conflict without considering discrepancies in fundamental belief systems. Fodor (1983) indicated that beliefs comprise a ‘central’ cognitive process and should be regarded as qualitatively different from the modular processes that have been well exploited by cognitive neuropsychologists (Coltheart, 1999). The proposition, however, is not matched by any clear consensus in neuropsychological accounts of what constitutes the cognitive or neural mechanisms involved, the evolutionary functions, or how such beliefs can be changed and maintained.

Jones (1999) describes beliefs as mental forms that incorporate the capacity to influence behaviour and cognition and govern the way people think and what they do. But the debate as to what defines a belief or belief state rumbles on, and some researchers have instead opted to examine the ways in which damage or change to known cognitive processes can affect belief formation, as communicated or acted upon by patients diagnosed as suffering from delusions.

Bryant (1997) observed that over the past 20 years a variety of cognitive models of belief formation have drawn ‘empirical support from evidence that delusions can be elicited in normal individuals undergoing anomalous experiences (Zimbardo et al., 1981), the prevalence of delusions in neuropathological disturbances of sensory experience (Ellis & Young, 1990), reasoning deficits in deluded patients (Garety et al., 1991) and the tendency for deluded patients to make external attributions following negative life events (Kaney & Bentall, 1989)’ (p.44). Recent developments from cognitive neuropsychiatry have shown how detailed investigations of monodelusional conditions (e.g. Capgras) can help to generate testable theories of delusion, face recognition and normal belief formation (Ellis & Lewis, 2001). But this potentially rich vein of research for cognitive neuropsychiatry (see Coltheart and Davis, 2000; Halligan & David, 2001) does not necessarily imply that delusions are the primary source of psychopathology in patients diagnosed as psychotic.

Since most patients requiring psychiatric help have fully formed delusions by the time they are clinically diagnosed, establishing the causal factors responsible for the delusion is difficult. The neuropsychological or neurophysiological abnormalities observed could just as easily be interpreted as the product rather than the cause of these mental disorders.

However, if the formation of delusions as abnormal beliefs is the product of selective but as yet unspecified cognitive disturbance (e.g. in reasoning, thinking, attribution) then studying delusions may inform our understanding of how this psychopathology impacts on normal belief systems. Either way, they provide a platform for elucidating the cognitive architecture of belief formation itself.

Box 4: Capgras Delusion
Following a car crash in September 1995 Alan Davies became convinced that his wife of 31 years had died in the accident and had been replaced by someone with whom he did not want to share his life. Diagnosed as suffering from Capgras syndrome, Mr Davies was awarded £130,000 damages after it was claimed that his rare psychiatric syndrome was caused by the crash that he and his wife, Christine, had survived. Despite suffering only minor physical injury he came to regard his wife, whom he now called Christine II, as an imposter and became stressed by any show of affection (de Bruxelles, 1999).

Future directions from a useful past

Despite the concept of delusion being common parlance in psychiatry and society, it is only in the last 20 years that serious attempts have been made to define and understand the construct in formal cognitive terms (Bentall et al., 2001; Coltheart & Davis, 2000; Garety & Hemsley, 1994).

One area that has been either ignored or relegated to a mysterious box in belief formation diagrams is the influence of our current ‘web of beliefs’ on the adoption or rejection of new beliefs. Stone and Young (1997) strongly argued that belief formation may involve weighing up explanations that are observationally adequate against those that fit within a person’s current belief set. However, a plausible account of the process by which beliefs may be integrated into such a belief set, or by which such a pre-existing set may influence how we generate beliefs about our perceptual world, has not been widely adopted.

Philosophers and social psychologists have attempted to piece together some of this network – and with some success. Quine and Ullian (1978) set out some philosophical principles by which a web of belief should operate. Of particular interest is their principle that beliefs are more easily shed, adopted or altered when the resulting network disruption is minimal, and that beliefs are validated by their relationships with existing beliefs. Moreover, they claim that any belief ‘can be held unrefuted no matter what, by making enough adjustments in other beliefs’ (p.79) – though sometimes this results in madness. Based on the idea that not all beliefs (or links) are created equal, empirical work has shown that particular beliefs can be differentiated by the amount and strength of the other beliefs on which they rely for justification (Maio, 2002).

One theoretical framework that we are exploring in Cardiff is that provided by coherence theory (Thagard, 2000) when considering dynamic models of belief processes in action. Our working model describes how active beliefs can be evaluated for their acceptability by how well they cohere into existing belief sets. Beliefs and the constraints between them (for example, believing that Elvis is alive would constrain you to reject the belief that he is buried at Graceland) can be given values or weights. These allow an overall measure of coherence to be calculated and also permit a quantitative measure of disruption when beliefs are added, discarded or revised.
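The weighted-constraint idea above can be sketched concretely. What follows is a minimal, illustrative Thagard-style coherence calculation, not the Cardiff group’s actual working model; the belief names and weights are invented for the example.

```python
# Minimal sketch of a coherence calculation over a small belief network.
# Beliefs are nodes; constraints are weighted links. A positive weight
# rewards giving two beliefs the same status (both accepted or both
# rejected); a negative weight rewards giving them opposite statuses.
# All belief names and weights here are invented for illustration.

def coherence(status, weights):
    """Sum of w * s_a * s_b over all constrained pairs.

    status:  dict mapping belief -> +1 (accepted) or -1 (rejected)
    weights: dict mapping (belief_a, belief_b) -> constraint weight
    """
    return sum(w * status[a] * status[b] for (a, b), w in weights.items())

# "Elvis is alive" competes with "Elvis is buried at Graceland".
weights = {
    ("elvis_alive", "elvis_buried"): -1.0,       # mutually exclusive
    ("elvis_buried", "death_certificate"): 0.8,  # documentary support
    ("elvis_alive", "sighting_report"): 0.3,     # weak anecdotal support
}

conventional = {"elvis_alive": -1, "elvis_buried": +1,
                "death_certificate": +1, "sighting_report": -1}
credulous = {"elvis_alive": +1, "elvis_buried": +1,
             "death_certificate": +1, "sighting_report": +1}

print(round(coherence(conventional, weights), 2))  # → 2.1
print(round(coherence(credulous, weights), 2))     # → 0.1
```

Accepting every belief at once violates the negative constraint and scores lower, which gives a quantitative sense to the claim that adding, discarding or revising beliefs ‘disrupts’ the network.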

Sensory input may be a constraint in itself, with the threshold for believing things obtained from your own senses (‘I believe it was raining this morning’) considered higher than those taken on authority alone (‘I believe it was raining during the Battle of Waterloo’). This hierarchy may partly explain why in some cases delusional beliefs can be adopted over very short periods and with such conviction, and involve the sufferer dramatically revising other beliefs to cohere with their new-found preoccupation. Unusual experiences, which may accompany brain injury or mental illness, may also provide direct perceptual support for unlikely or bizarre beliefs that cause a radical reorganisation of a previously conservative belief network.

However, there must be more to pathological beliefs than simply reacting to unusual experiences, otherwise our belief systems would be in a constant state of flux. Influences on the ways in which individuals establish links between beliefs and their subsequent relevance for the individual also need to be taken into account when trying to explain why delusions are often considered bizarre.

A coherence theory account can address some of these problems by allowing reasoning biases to be modelled via damage to the constraints between beliefs. Of particular advantage to this approach is that coherence models can be implemented as artificial neural networks. This means the model can address predictions from neuropsychiatry. For example, Spitzer (1995) has argued for the role of dopamine modulation in perceiving significance. He likens the role of dopamine to a perceptual ‘signal-to-noise ratio’ contrast control, where too little modulation could mean we make no useful distinction between meaningful and nonmeaningful information.

Too much, however, could lead us to see significance and meaning in perceptual information that we might otherwise ignore, causing, according to Spitzer, a range of unusual and unlikely beliefs. Given the heterogeneity and complexity of the factors involved, not least of agreeing a common language to describe and access the construct of abnormal beliefs in question, it would seem sensible to adopt an eclectic approach to delusions – one that links understanding from neuroscience, cognitive and social psychology. This would allow ‘abnormal’ and delusional beliefs to be understood as arising not simply from damaged biological mechanisms or information processing modules, but from cognitive beings firmly situated within their social milieu. Such an approach might also better allow us to treat patients with distressing beliefs, as well as provide a clearer insight into how each of us comes to hold our own beliefs, be they viewed by others as mundane, profound or peculiar.
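Spitzer’s signal-to-noise analogy can be caricatured with a gain-modulated softmax. This is an illustrative toy, not the equations of his 1995 model; the input values and the gain settings are invented for the sketch.

```python
import math

def significance(signals, gain):
    """Softmax with a gain ('contrast') parameter: higher gain sharpens
    the output distribution, lower gain flattens it toward uniformity."""
    exps = [math.exp(gain * s) for s in signals]
    total = sum(exps)
    return [e / total for e in exps]

signals = [0.2, 0.3, 1.0]  # one genuinely salient input among noise

low = significance(signals, 0.5)   # under-modulation: nearly uniform,
                                   # no useful meaningful/meaningless split
high = significance(signals, 8.0)  # over-modulation: one input swamps
                                   # everything else
print(low)
print(high)
```

With excessive gain, whichever input happens to edge ahead is amplified into overwhelming ‘significance’, which is the flavour of Spitzer’s point that over-modulation can make us see meaning in information we would otherwise ignore.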


American Psychiatric Association (1994). Diagnostic and statistical manual of mental disorders (4th edn). Washington, DC: Author.
Bentall, R.P., Corcoran, R., Howard, R., Blackwood, N. & Kinderman, P. (2001). Persecutory delusions: A review and theoretical integration. Clinical Psychology Review, 21, 1143–1192.
Berke, J.H. (1998). Even paranoids have enemies: New perspectives on paranoia and persecution. London and New York: Routledge.
Berrios, G.E. (1991). Delusions as ‘wrong beliefs’: A conceptual history. British Journal of Psychiatry, 14, 6–13.
Bryant, R.A. (1997). Folie à famille: A cognitive study of delusional beliefs. Psychiatry, 60, 44–50.
Coltheart, M. (1999). Modularity and cognition. Trends in Cognitive Science, 3(3), 115–120.
Coltheart, M. & Davis, M. (Eds.) (2000). Pathologies of belief. Oxford: Blackwell.
David, A.S. (1999). On the impossibility of defining delusions. Philosophy, Psychiatry and Psychology, 6, 17–20.
de Bruxelles, S. (1999, 5 March). Crash victim thinks wife is an imposter. The Times, p.7.
Della Salla, S. (Ed.) (1999). Mind myths. New York: Wiley.
Ellis, H.D. & Lewis, M.B. (2001). Capgras delusion: A window on face recognition. Trends in Cognitive Sciences, 5(4), 149–156.
Flaum, M., Arndt, S. & Andreasen, N.C. (1991). The reliability of ‘bizarre’ delusions. Comprehensive Psychiatry, 32, 59–65.
Fodor, J. (1983). The modularity of mind. Cambridge, MA: MIT Press.
French, C.C. (1992). Factors underlying belief in the paranormal: Do sheep and goats think differently? The Psychologist, 5, 295–299.
Garety, P.A. & Hemsley, D.R. (1994). Delusions: Investigations into the psychology of delusional reasoning. Oxford: Oxford University Press.
Halligan, P.W. & David, A.S. (2001). Cognitive neuropsychiatry: Towards a scientific psychopathology. Nature Reviews Neuroscience, 2, 209–215.
Heise, D.R. (1988). Delusions and the construction of reality. In T. Oltmanns & B. Maher (Eds.) Delusional beliefs (pp.259–272). Chichester: Wiley.
Jaspers, K. (1963). General psychopathology (7th edn; J. Hoenig & M. Hamilton, Trans.). Manchester: Manchester University Press.
Johns, L.C. & van Os, J. (2001). The continuity of psychotic experiences in the general population. Clinical Psychology Review, 21, 1125–1141.
Jones, E. (1999). The phenomenology of abnormal belief. Philosophy, Psychiatry and Psychology, 6, 1–16.
Junginger, J., Barker, S. & Coe, D. (1992). Mood theme and bizarreness of delusions in schizophrenia and mood psychosis. Journal of Abnormal Psychology, 101, 287–292.
Maher, B. (1988). Anomalous experience and delusional thinking: The logic of explanations. In T.F. Oltmanns & B.A. Maher (Eds.) Delusional beliefs (pp.15–33). Chichester: Wiley.
Maio, G.R. (2002). Values – truth and meaning. The Psychologist, 15, 296–299.
Moor, J.H. & Tucker, G.J. (1979). Delusions: Analysis and criteria. Comprehensive Psychiatry, 20, 388–393.
Myin-Germeys, I., Nicolson, N.A. & Delespaul, P.A.E.G. (2001). The context of delusional experiences in the daily life of patients with schizophrenia. Psychological Medicine, 31, 489–498.
Quine, W.V. & Ullian, J.S. (1978). The web of belief (2nd edn). Toronto: Random House.
Rokeach, M. (1981). The three Christs of Ypsilanti. New York: Columbia University Press. (Original work published 1964)
Sedler, M.J. (1995). Understanding delusions. The Psychiatric Clinics of North America, 18, 251–262.
Spitzer, M. (1990). On defining delusion. Comprehensive Psychiatry, 31, 377–397.
Spitzer, M. (1995). A neurocomputational approach to delusions. Comprehensive Psychiatry, 36, 83–105.
Stone, T. & Young, A.W. (1997). Delusions and brain injury: The philosophy and psychology of belief. Mind and Language, 12, 327–364.
Survey of paranormal beliefs. (1998, 2 February). Daily Mail.
Thagard, P. (2000). Coherence in thought and action. Cambridge, MA: MIT Press.

Beliefs About Delusions, Vaughan Bell, Peter Halligan and Hadyn Ellis.
Published in The Psychologist Vol 16 No 8, pages 418-423.

Science and Truth

Science never gives up searching for truth, since it never claims to have achieved it. It is civilizing because it puts truth before all else, including personal interests. These are grand claims, but so is the enterprise in which scientists share.

How do we encourage the civilizing effects of science? First, we have to understand science. Scientia is knowledge. Only in the popular mind is it equated with facts. That is flattering, since facts are incontrovertible. But it is also demeaning, since facts are meaningless. Science, by contrast, is story-telling; it searches for a beginning, middle and end.

What we see is, as a consequence, culturally conditioned and could be construed to mean that our conclusions are simply a matter of taste. They are not. Though we explore in a culturally-conditioned way, the reality we sketch is universal. This, at its most basic, makes science a humane pursuit; it acknowledges the commonality of people’s experience.

This, in turn, implies a commonality of human worth. If we treasure our own experience, and regard it as real, we must treasure the experience of others — reality is none the less precious if it presents itself to someone else. All are discoverers, and if we disenfranchise any, all suffer.

Our understanding of science will inform public policy toward it. If seeing is a skill, then we should rely on those who have that skill to determine what science we do. But in Canada we routinely offend against that principle. We have, for example, numerous “Centres of Excellence” because we recognize that the skill on which discovery depends is possessed by few. But when evaluating such centres, we give only a legislated 20-per-cent weighting to “excellence” and a preposterous 80 per cent to considerations of “socio-economic worth.”

Our assessment of socio-economic worth is largely a sham. We scientists should not lend ourselves to it — though we routinely do. We should, instead, insist on applying the criterion of quality. (That this criterion is real is evidenced by the awesome success of peer-reviewed science in our times). Have scientists failed to explain science? Seemingly. Have we too often kept silent because it was expedient? Undoubtedly.

Though neglectful of their responsibility to protect science, scientists are increasingly aware of their responsibility to society. But what is that?

Some dreamers demand that scientists only discover things that can be used for good. That is impossible. Science gives us a powerful vocabulary, and it is impossible to produce a vocabulary with which one can only say nice things.

Others think it the responsibility of scientists to coerce the rest of society, because they have the power that derives from special knowledge. But scientists must work through democratic channels; anything else would be incredible arrogance. Still, plenty of responsibilities remain, and in the time I’ve been a scientist I’ve seen huge changes in our perception of them.

A major issue in the late 1950s was whether Canada should acquire nuclear weapons. The United States was trying to get Canada to do the decent thing, and arm itself with nukes for the defence of North America. Individual scientists like myself pointed to the dangers of radioactive fallout over Canada if we were to launch nuclear weapons to intercept incoming bombers. On the face of it, this was technical advice. But more truthfully it was a philosophical position. We chose to make calculations concerning fallout because we were opposed to the acquisition of nuclear weapons. I do not mean to discount the technical element. I merely want to stress that what the scientist sees is influenced by what he believes.

Another public debate had to do with nuclear-fallout shelters. Technical arguments were, once more, advanced to illustrate the absurdity of sheltering a nation from a determined nuclear attack. At a deeper level, however, we were appalled by the abandonment of attempts at co-existence in favour of the life of a mole. Better to die in the pursuit of civilized values, we believed, than in a flight underground. We were offering a value system couched in the language of science.

Around 1970, my scientist friends in the United States indoctrinated me in a fresh question of policy. In the war in Vietnam, the United States was using herbicides (Agent Orange) and a tear gas (CS). This could be construed as a contravention of the Geneva Protocol banning the use of chemical weapons (one of the few instruments of international law regulating the use of weapons, and therefore precious).

I went off to see our ministers of Defence and Foreign Affairs, and the Prime Minister. God knows how I got into their offices, but I did. They gave me a hard time, protesting, “These things are used for killing weeds and for riot control; how can you say they are weapons of war?” The answer was that when employed to prosecute a war, they had become weapons of war. They were being used to expose the enemy, so as to kill him.

One does not need to be a chemist to make that point. But it helps to come from a community with a commitment to objectivity, and a degree of independence from special interests. Under scientific and moral pressure, the Canadian government conceded publicly that it considered the use of these weapons in Vietnam a contravention of the Geneva Protocol. Washington was not pleased.

What we in the scientific community were seeking, in our idealism, was a world ruled by law. The moral force that we brought to this debate derived from our membership in an international community ruled by law, albeit unwritten law — for without the acceptance and enforcement of standards of probity, there would be no functioning scientific community.

And without steps being taken to widen this realm of rule-based co-operation beyond the narrow bounds of science and similar professions, there will be anarchy and, ultimately, all-out war. Technology has made such war intolerable. The solution lies not in more technology, but in less war.

When, in March 1983, President Reagan announced the Strategic Defense Initiative (SDI), popularly known as Star Wars, this issue was clearly joined. President Reagan was offering a technical fix to the threat of nuclear war. The SDI was to be the scientist’s antidote to the nuclear poison. However, in the process of distributing this illusory antidote, we were to abandon the only genuine defence against nuclear missiles, which lay — as it still lies — in institutionalized restraint.

The SDI was an invitation to a new arms race, one in nuclear shields, which would proceed in parallel to the continuing arms race in swords. With missile defences back in the news, this is a lesson to remember.

In the course of these political struggles, scientists became increasingly aware of themselves as an international non-governmental organization. This NGO bases itself, I claim, not primarily on its technical expertise but on its moral tenets. In science we have a group of individuals supporting one another, worldwide, in an endeavour whose success depends upon placing the truth ahead of personal advantage. Not all succeed in doing this, but all are agreed as to the necessity. In science, truth must take precedence not only over individual advantage, but also over “group advantage” — sectional interests such as nationality, creed or ethnicity.

This assertion of higher purpose has made all scholars supporters of human rights and puts to rest the notion that what we are offering is primarily technical expertise. It is the moral force of science — evident in such individuals as Bertrand Russell and Andrei Sakharov — that makes it effective. And our community’s voyage of self-discovery will lead us to a more active support of democracy, wherever it is threatened.

That notion would have seemed preposterous when I began my life as a scientist. No longer. Today Academies of Science use their influence around the world in support of human rights. They should do the same for democracy, for the death of democracy is the death of free enquiry. The bell tolls for us.

John Polanyi is a Nobel Prize winner in chemistry at the University of Toronto.

Theism, Atheism, and Rationality

This article, “Theism, Atheism, and Rationality,” is by Alvin Plantinga.

Atheological objections to the belief that there is such a person as God come in many varieties. There are, for example, the familiar objections that theism is somehow incoherent, that it is inconsistent with the existence of evil, that it is a hypothesis ill-confirmed or maybe even disconfirmed by the evidence, that modern science has somehow cast doubt upon it, and the like. Another sort of objector claims, not that theism is incoherent or false or probably false (after all, there is precious little by way of cogent argument for that conclusion), but that it is in some way unreasonable or irrational to believe in God, even if that belief should happen to be true. Here we have, as a centerpiece, the evidentialist objection to theistic belief. The claim is that none of the theistic arguments (deductive, inductive, or abductive) is successful; hence there is at best insufficient evidence for the existence of God. But then the belief that there is such a person as God is in some way intellectually improper: somehow foolish or irrational. A person who believed without evidence that there are an even number of ducks would be believing foolishly or irrationally; the same goes for the person who believes in God without evidence. On this view, one who accepts belief in God but has no evidence for that belief is not, intellectually speaking, up to snuff. Among those who have offered this objection are Antony Flew, Brand Blanshard, and Michael Scriven. Perhaps more important is the enormous oral tradition: one finds this objection to theism bruited about on nearly any major university campus in the land.

The objection in question has also been endorsed by Bertrand Russell, who was once asked what he would say if, after dying, he were brought into the presence of God and asked why he had not been a believer. Russell’s reply: “I’d say, ‘Not enough evidence, God! Not enough evidence!'” I’m not sure just how that reply would be received; but my point is only that Russell, like many others, has endorsed this evidentialist objection to theistic belief.

Now what, precisely, is the objector’s claim here? He holds that the theist without evidence is irrational or unreasonable; what is the property with which he is crediting such a theist when he thus describes him? What, exactly, or even approximately, does he mean when he says that the theist without evidence is irrational? Just what, as he sees it, is the problem with such a theist? The objection can be seen as taking at least two forms; and there are at least two corresponding senses or conceptions of rationality lurking in the nearby bushes. According to the first, a theist who has no evidence has violated an intellectual or cognitive duty of some sort. He has gone contrary to an obligation laid upon him, perhaps by society, or perhaps by his own nature as a creature capable of grasping propositions and holding beliefs. There is an obligation, or something like an obligation, to proportion one’s beliefs to the strength of the evidence. Thus according to John Locke, a mark of a rational person is “the not entertaining any proposition with greater assurance than the proof it is built upon will warrant,” and according to David Hume, “A wise man proportions his belief to the evidence.”

In the nineteenth century we have W.K. Clifford, that “delicious enfant terrible” as William James called him, insisting that it is monstrous, immoral, and perhaps even impolite to accept a belief for which you have insufficient evidence:

Whoso would deserve well of his fellow in this matter will guard the purity of his belief with a very fanaticism of jealous care, lest at any time it should rest on an unworthy object, and catch a stain which can never be wiped away.[1]

He adds that if a belief has been accepted on insufficient evidence, the pleasure is a stolen one. Not only does it deceive ourselves by giving us a sense of power which we do not really possess, but it is sinful, stolen in defiance of our duty to mankind. That duty is to guard ourselves from such beliefs as from a pestilence, which may shortly master our body and spread to the rest of the town.[2]

And finally: To sum up: it is wrong always, everywhere, and for anyone to believe anything upon insufficient evidence.[3] (It is not hard to detect, in these quotations, the “tone of robustious pathos” with which James credits Clifford.)

On this view theists without evidence (my sainted grandmother, for example) are flouting their epistemic duties and deserve our disapprobation and disapproval. Mother Teresa, for example, if she has no arguments for her belief in God, stands revealed as a sort of intellectual libertine: someone who has gone contrary to her intellectual obligations and is deserving of reproof and perhaps even disciplinary action. Now the idea that there are intellectual duties or obligations is difficult but not implausible, and I do not mean to question it here. It is less plausible, however, to suggest that I would or could be going contrary to my intellectual duties in believing, without evidence, that there is such a person as God. For, first, my beliefs are not, for the most part, within my control. If, for example, you offer me $1,000,000 to cease believing that Mars is smaller than Venus, there is no way I can collect. But the same holds for my belief in God: even if I wanted to, I couldn’t (short of heroic measures like coma-inducing drugs) just divest myself of it. (At any rate there is nothing I can do directly; perhaps there is a sort of regimen that, if followed religiously, would issue, in the long run, in my no longer accepting belief in God.) But, secondly, there seems no reason to think that I have such an obligation. Clearly I am not under an obligation to have evidence for everything I believe; that would not be possible. But why, then, suppose that I have an obligation to accept belief in God only if I accept other propositions which serve as evidence for it? This is by no means self-evident or just obvious, and it is extremely hard to see how to find a cogent argument for it.

In any event, I think the evidentialist objector can take a more promising line. He can hold, not that the theist without evidence has violated some epistemic duty (after all, perhaps he can’t help himself), but that he is somehow intellectually flawed or disfigured. Consider someone who believes that Venus is smaller than Mercury, not because he has evidence, but because he read it in a comic book and always believes whatever he reads in comic books; or consider someone who holds that belief on the basis of an outrageously bad argument. Perhaps there is no obligation he has failed to meet; nevertheless his intellectual condition is defective in some way. He displays a sort of deficiency, a flaw, an intellectual dysfunction of some sort. Perhaps he is like someone who has an astigmatism, or is unduly clumsy, or suffers from arthritis. And perhaps the evidentialist objection is to be construed, not as the claim that the theist without evidence has violated some intellectual obligation, but as the claim that he suffers from a certain sort of intellectual deficiency. The theist without evidence, we might say, is an intellectual gimp.

Alternatively but similarly, the idea might be that the theist without evidence is under a sort of illusion, a kind of pervasive illusion afflicting the great bulk of mankind over the great bulk of the time thus far allotted to it. Thus Freud saw religious beliefs as “illusions, fulfillments of the oldest, strongest, and most insistent wishes of mankind.”[4] He sees theistic belief as a matter of wish-fulfillment. Men are paralyzed by and appalled at the spectacle of the overwhelming, impersonal forces that control our destiny but mindlessly take no notice, no account of us and our needs and desires; they therefore invent a heavenly father of cosmic proportions, who exceeds our earthly fathers in goodness and love as much as in power.

Religion, says Freud, is the “universal obsessional neurosis of humanity,” and it is destined to disappear when human beings learn to face reality as it is, resisting the tendency to edit it to suit our fancies. A similar sentiment is offered by Karl Marx:

Religion . . . is the self-consciousness and the self-feeling of the man who has either not yet found himself, or else (having found himself) has lost himself once more. But man is not an abstract being . . . Man is the world of men, the State, society. This State, this society, produce religion, produce a perverted world consciousness, because they are a perverted world . . . Religion is the sigh of the oppressed creature, the feelings of a heartless world, just as it is the spirit of unspiritual conditions. It is the opium of the people.

The people cannot be really happy until it has been deprived of illusory happiness by the abolition of religion. The demand that the people should shake itself free of illusion as to its own condition is the demand that it should abandon a condition which needs illusion.[5]

Note that Marx speaks here of a perverted world consciousness produced by a perverted world. This is a perversion from a correct, or right, or natural condition, brought about somehow by an unhealthy and perverted social order. From the Marx-Freud point of view, the theist is subject to a sort of cognitive dysfunction, a certain lack of cognitive and emotional health. We could put this as follows: the theist believes as he does only because of the power of this illusion, this perverted neurotic condition. He is insane, in the etymological sense of that term; he is unhealthy. His cognitive equipment, we might say, isn’t working properly; it isn’t functioning as it ought to. If his cognitive equipment were working properly, working the way it ought to work, he wouldn’t be under the spell of this illusion. He would instead face the world and our place in it with the clear-eyed apprehension that we are alone in it, and that any comfort and help we get will have to be of our own devising. There is no Father in heaven to turn to, and no prospect of anything, after death, but dissolution. (“When we die, we rot,” says Michael Scriven, in one of his more memorable lines.) Now of course the theist is likely to display less than overwhelming enthusiasm about the idea that he is suffering from a cognitive deficiency, that he is under a sort of widespread illusion endemic to the human condition. It is at most a liberal theologian or two, intent on novelty and eager to concede as much as possible to contemporary secularity, who would embrace such an idea. The theist doesn’t see himself as suffering from cognitive deficiency.
As a matter of fact, he may be inclined to see the shoe as on the other foot; he may be inclined to think of the atheist as the person who is suffering, in this way, from some illusion, from some noetic defect, from an unhappy, unfortunate, and unnatural condition with deplorable noetic consequences. He will see the atheist as somehow the victim of sin in the world, his own sin or the sin of others. According to the book of Romans, unbelief is a result of sin; it originates in an effort to “suppress the truth in unrighteousness.” According to John Calvin, God has created us with a nisus or tendency to see His hand in the world around us; a “sense of deity,” he says, “is inscribed in the hearts of all.” He goes on:

Indeed, the perversity of the impious, who though they struggle furiously are unable to extricate themselves from the fear of God, is abundant testimony that this conviction, namely, that there is some God, is naturally inborn in all, and is fixed deep within, as it were in the very marrow. . . . From this we conclude that it is not a doctrine that must first be learned in school, but one of which each of us is master from his mother’s womb and which nature itself permits no man to forget.[6]

Were it not for the existence of sin in the world, says Calvin, human beings would believe in God to the same degree and with the same natural spontaneity displayed in our belief in the existence of other persons, or an external world, or the past. This is the natural human condition; it is because of our presently unnatural sinful condition that many of us find belief in God difficult or absurd. The fact is, Calvin thinks, one who does not believe in God is in an epistemically defective position, rather like someone who does not believe that his wife exists, or thinks that she is a cleverly constructed robot that has no thoughts, feelings, or consciousness. Thus the believer reverses Freud and Marx, claiming that what they see as sickness is really health and what they see as health is really sickness. Obviously enough, the dispute here is ultimately ontological, or theological, or metaphysical; here we see the ontological and ultimately religious roots of epistemological discussions of rationality. What you take to be rational, at least in the sense in question, depends upon your metaphysical and religious stance. It depends upon your philosophical anthropology.

Your view as to what sort of creature a human being is will determine, in whole or in part, your views as to what is rational or irrational for human beings to believe; this view will determine what you take to be natural, or normal, or healthy, with respect to belief. So the dispute as to who is rational and who is irrational here can’t be settled just by attending to epistemological considerations; it is fundamentally not an epistemological dispute, but an ontological or theological dispute. How can we tell what it is healthy for human beings to believe unless we know or have some idea about what sort of creature a human being is? If you think he is created by God in the image of God, and created with a natural tendency to see God’s hand in the world about us, a natural tendency to recognize that he has been created and is beholden to his creator, owing his worship and allegiance, then of course you will not think of belief in God as a manifestation of wishful thinking or as any kind of defect at all. It is then much more like sense perception or memory, though in some ways much more important. On the other hand, if you think of a human being as the product of blind evolutionary forces, if you think there is no God and that human beings are part of a godless universe, then you will be inclined to accept a view according to which belief in God is a sort of disease or dysfunction, due, perhaps, to a sort of softening of the brain.

So the dispute as to who is healthy and who diseased has ontological or theological roots, and is finally to be settled, if at all, at that level. And here I would like to present a consideration that, I think, tells in favor of the theistic way of looking at the matter. As I have been representing that matter, theist and atheist alike speak of a sort of dysfunction, of cognitive faculties or cognitive equipment not working properly, of their not working as they ought to. But how are we to understand that? What is it for something to work properly? Isn’t there something deeply problematic about the idea of proper functioning? What is it for my cognitive faculties to be working properly? What is it for a natural organism (a tree, for example) to be in good working order, to be functioning properly? Isn’t working properly relative to our aims and interests? A cow is functioning properly when she gives milk; a garden patch is as it ought to be when it displays a luxuriant preponderance of the sorts of vegetation we propose to promote. But then it seems patent that what constitutes proper functioning depends upon our aims and interests. So far as nature herself goes, isn’t a fish decomposing in a hill of corn functioning just as properly, just as excellently, as one happily swimming about chasing minnows? But then what could be meant by speaking of “proper functioning” with respect to our cognitive faculties? A chunk of reality (an organism, a part of an organism, an ecosystem, a garden patch) “functions properly” only with respect to a sort of grid we impose on nature, a grid that incorporates our aims and desires. But from a theistic point of view, the idea of proper functioning, as applied to us and our cognitive equipment, is no more problematic than, say, that of a Boeing 747’s working properly. Something we have constructed (a heating system, a rope, a linear accelerator) is functioning properly when it is functioning in the way it was designed to function. My car works properly if it works the way it was designed to work. My refrigerator is working properly if it refrigerates, if it does what a refrigerator is designed to do.

This, I think, is the root idea of working properly. But according to theism, human beings, like ropes and linear accelerators, have been designed; they have been created and designed by God. Thus the theist has an easy answer to the relevant set of questions: What is proper functioning? What is it for my cognitive faculties to be working properly? What is cognitive dysfunction? What is it to function naturally? My cognitive faculties are functioning naturally when they are functioning in the way God designed them to function. On the other hand, if the atheological evidentialist objector claims that the theist without evidence is irrational, and if he goes on to construe irrationality in terms of defect or dysfunction, then he owes us an account of this notion. Why does he take it that the theist is somehow dysfunctional, at least in this area of his life?

More importantly, how does he conceive of dysfunction? How does he see dysfunction and its opposite? How does he explain the idea of an organism’s working properly, or of some organic system or part of an organism’s thus working? What account does he give of it? Presumably he can’t see the proper functioning of my noetic equipment as its functioning in the way it was designed to function; so how can he put it? Two possibilities leap to mind. First, he may be thinking of proper functioning as functioning in a way that helps us attain our ends. In this way, he may say, we think of our bodies as functioning properly, as being healthy, when they function in the way we want them to, when they function in such a way as to enable us to do the sorts of things we want to do. But of course this will not be a promising line to take in the present context; for while perhaps the atheological objector would prefer to see our cognitive faculties function in such a way as not to produce belief in God in us, the same cannot be said, naturally enough, for the theist. Taken this way, the atheological evidentialist’s objection comes to little more than the suggestion that the atheologian would prefer it if people did not believe in God without evidence. That would be an autobiographical remark on his part, having the interest such remarks usually have in philosophical contexts.

A second possibility: proper functioning and allied notions are to be explained in terms of aptness for promoting survival, either at an individual or species level.

There isn’t time to say much about this here; but it is at least immediately evident that the atheological objector would then owe us an argument for the conclusion that belief in God is indeed less likely to contribute to our individual survival, or the survival of our species, than is atheism or agnosticism. But how could such an argument go? Surely the prospects for a non-question-begging argument of this sort are bleak indeed. For if theism (Christian theism, for example) is true, then it seems wholly implausible to think that widespread atheism would be more likely to contribute to the survival of our race than widespread theism.

By way of conclusion: a natural way to understand such notions as rationality and irrationality is in terms of the proper functioning of the relevant cognitive equipment. Seen from this perspective, the question whether it is rational to believe in God without the evidential support of other propositions is really a metaphysical or theological dispute. The theist has an easy time explaining the notion of our cognitive equipment’s functioning properly: our cognitive equipment functions properly when it functions in the way God designed it to function. The atheological evidentialist objector, however, owes us an account of this notion. What does he mean when he complains that the theist without evidence displays a cognitive defect of some sort? How does he understand the notion of cognitive malfunction?





[1] W.K. Clifford, “The Ethics of Belief,” in Lectures and Essays (London: Macmillan, 1879), p. 183.

[2] Ibid., p. 184.

[3] Ibid., p. 186.

[4] Sigmund Freud, The Future of an Illusion (New York: Norton, 1961), p. 30.

[5] Karl Marx, “Introduction to a Critique of the Hegelian Philosophy of Right,” in K. Marx and F. Engels, Collected Works, vol. 3 (London: Lawrence & Wishart, 1975).

[6] John Calvin, Institutes of the Christian Religion, trans. Ford Lewis Battles (Philadelphia: Westminster Press, 1960), 1.3 (pp. 43-44).