Cognitive Neuroscience of Emotion

Book Review: Cognitive Neuroscience of Emotion, edited by Richard D. Lane, M.D., Ph.D., and Lynn Nadel, Ph.D. New York, Oxford University Press, 2000, 431 pp., $60.00; $35.00 (paper).

Evidence linking specific psychological faculties to localized brain areas has been available for only 150 years, yet distinctions among the features of mental life have been made for thousands of years. Aristotle, for example, divided brain function into cognitive, emotive, and willful processes. This ancient distinction between cognition and emotion is reflected in the structure of the American Psychiatric Association's various DSMs, which begin with a section on the disorders of cognition and distinguish them from disorders of mood, and in most training programs, where psychiatrists receive little training in the disorders of cognition and neurologists receive little or no training in the disorders of emotion.

The utility of the distinction between cognition and emotion and questions about its grounding in the brain’s neural substrate are explored in Cognitive Neuroscience of Emotion.  This volume is the product of a meeting held at the University of Arizona and is to be commended for presenting a variety of viewpoints on this fascinating question.

Is an alternative view viable? Does research support the view that the distinction between emotion and cognition is artificial? The book addresses these questions by beginning with overviews by Damasio and by Clore and Ortony. They review studies in which techniques used by cognitive scientists have been applied to the study of emotion. Damasio concludes that emotions and feelings should be distinguished and that the failure to make this distinction lies behind the paucity of studies of the brain basis of emotion. The evidence that he offers to support his view is weak, but he does demonstrate the value of studying both emotion and cognition with the same neuropsychological and imaging methods. The distinction between emotion as an externally observable state and feelings as an internal experiential state could be validated if distinct physiological substrates were found for the two. Even if the distinction cannot be supported, it may serve a utilitarian purpose because physiological responses are easier to study. At this point in history, claims that the internal experience of feelings (or emotions) will yield to the methods of neuroscience are similar to the claim that the neural basis of consciousness will be discovered—promises rather than results. Nevertheless, data-based approaches to these issues are the only way to move beyond the realm of rhetoric.

The second section of this book reviews the potential role of the amygdala in the genesis, persistence, and interpretation of emotion. The demonstration that emotional expression is associated with a locus or loci does not prove that the distinction between cognition and emotion is artificial, but these chapters review a wide range of experiments whose results suggest that processes traditionally called “cognitive” are operative in animal behaviors that appear to reflect human emotion.  The third section reviews human research more directly.  Lesion studies and skin conductance studies support the contention that the amygdala is involved in human emotion, but they also demonstrate the involvement of other brain regions.  The hypothesis that the complex mental phenomena we refer to as emotion involve multiple structures, pathways, and molecular mechanisms is supported by much more data than the hypothesis that there is a single “emotion center.” The last section of the book examines the effects of brain lesions on emotional function. Again, there is abundant evidence linking emotional activation to multiple neural pathways.  This line of work does not directly address whether cognition and emotion are distinct or entwined brain capacities, but it does provide a set of methods by which neural mechanisms can be dissected.
Although the central question of the book is yet to be answered, this volume demonstrates clearly that the emotion/cognition interface is an important area of study and that progress is being made along many fronts. The ultimate answer may well be that both views of the emotion/cognition dichotomy are true: these states share some circuitry and molecular mechanisms but also involve distinct loci and mechanisms. A much better understanding of the neural substrates of both emotion and cognition will undoubtedly develop, but new conceptual models of CNS organization and function may be needed before a comprehensive understanding emerges. Readers interested in the even more radical view that our current conception of the term "emotion" needs to be rethought and perhaps even discarded will enjoy Paul Griffiths' book, What Emotions Really Are: The Problem of Psychological Categories (1).

1. Griffiths PE: What Emotions Really Are: The Problem of Psychological Categories. Chicago, University of Chicago Press, 1997

Baltimore, Md.


Truth in Ethics

Controversies, such as the Freedom of Speech debate at the Oxford Union, always bring people back to the questions of what truth is, what ethics are, and whether the two relate to each other in any critical manner. To this extent, we have, on The European Rationalist, written and referenced a number of previous articles on the subject: How Could I Be Wrong? How Wrong Could I Be?; Delusions, Beliefs; Theism, Atheism, and Rationality; Science and Truth; etc. At the level of general public debate we begin to think of issues such as whether free speech (and consequently what we believe and why we believe it) has an envelope beyond which it becomes unacceptable to current norms and metrics. The only problem is who decides – the media barons, big business, religious groups, powerful minorities?

I came across the book True to Life, and found a quite good review of it by Kieran Setiya that I think is worth reading to start off with.

In True to Life, Michael Lynch sets out to defend “four truisms about truth”: truth is objective, a “cognitive good”, a worthy goal of inquiry, and something valuable in itself. On the back cover, Nussbaum says that the book “performs a major public service”.  

The argument of the book is intricate, though it is presented with an enviably light touch. It begins with the platitude that a belief is correct if and only if its object is a true proposition; deduces that, if p is true, it is good to believe p, other things being equal; interprets this as final or non-instrumental value; and concludes that truth is itself a normative property, and, given Moore’s “open question argument”, an irreducible one: “If truth matters, reductive naturalism is false.”

In a different context, it would be interesting to engage with these steps, each of which is controversial. Here, my focus is rhetorical. Who is Lynch writing for, and what are his chances of convincing them?

I think he cannot be writing for the post-modernist “enemies of truth” alleged to inhabit our English Departments. They will rightly feel that they are not taken seriously here. There is no mention of Derrida, and only a page or two on Foucault. In any case, the whole operation will seem to them naïvely unhistorical. To engage with them, one has to sink, or rise, to their level – as in Literature Against Itself.

Perhaps the aim of the book is prophylactic: it is meant to forestall the attractions of subjectivism and the cynical equation of truth with power. But if this is his persuasive task, Lynch has adopted an unfortunate strategy. Arguing that one cannot accept the value of truth without Moorean non-naturalism is bad salesmanship, even if it is sound. It is not just the post-modern crowd who cannot stomach Principia Ethica: most philosophers find its commitments incredible.

The effect of True to Life, if it carries conviction, will thus be to enmire the truisms about truth in a swamp of metaphysics, to entrench the suspicion that those who believe in the possibility and the value of objective truth inhabit a Platonic jungle. As I said, that might be so – I haven't tried to engage with Lynch's arguments – but it would be terrible news. This truth might be one of those we do better not to believe.

Manipulating your mind

Manipulating your mind – What will science discover about our brains, and how are we going to deal with it?

The Decade of the Brain, proclaimed by US President George Bush in 1990, passed without making much of an obvious impact. But it did in fact produce considerable scientific advances in neurobiology, giving scientists an exponentially increasing knowledge of how the brain works and the means to manipulate biochemical processes within and between nerve cells. This knowledge is slowly trickling down to society as well, be it to the pharmaceutical industry, to parents concerned about their child's performance in school, to students looking for chemical helpers to pass their exams, or to military researchers who have an obvious interest in keeping soldiers awake and alert.

“Unlike the many claimed applications of genetics… diagnostic and therapeutic products from neurobiological research are already available”

The ability to fiddle with the brain with ever-increasing effectiveness has also created critical questions about how to use this knowledge. Francis Fukuyama, in Our Posthuman Future, Leon Kass, Chairman of the US President's Council on Bioethics, and Steven Rose, a neurobiologist at the Open University, UK, are the most prominent and outspoken critics of the use of psychopharmaceuticals and other neurological techniques to analyse and interfere with human mental capabilities. Their concerns have also captured the attention of neurobiologists, ethicists, philosophers and the lay public, who are all slowly realising the enormous potential of modern neuroscience. “People closely identify themselves with their brains, they don't with their genes,” said Arthur L. Caplan, Professor of Bioethics at the University of Pennsylvania, Philadelphia, PA, USA.

Although these debates started in the late 1990s, it took the general public a bit longer to take notice—The New York Times and The Economist did not pick up on the issue until 2002. “There is a great amount of information about the brain but no one’s paying attention to the ethics,” Caplan said. “The attention of ethicists went to genetics because of the Human Genome Project…so we had to jump-start the ethics [in neurobiology].” But that is rapidly changing. Unlike the many claimed applications of genetics, such as gene therapy or molecular medicine, diagnostic and therapeutic products from neurobiological research are already available. Caplan sees four major controversial areas: the definition and diagnosis of certain types of behaviour, such as aggression, terrorism or poor performance in school; the use of drugs to alter such behaviour; questions about moral responsibility—with people going to court and saying ‘this man isn’t responsible because his brain is abnormal’; and eventually new debates about racial and gender differences.

These controversies are not just anticipated: most are already occurring. Society’s pursuit of perfection entails ‘treating’ whatever is not desirable—be it bad mood, aggression or forgetfulness. Many people take herbal memory enhancers, such as ginkgo biloba, even though they are probably no more effective than sugar or coffee. But neurobiology adds a new twist. By understanding the brain’s workings at the chemical level, it paves the way for much more efficient ways to tweak brain function. And many psychopharmaceuticals already enjoy a much broader popularity beyond treating neurological and psychiatric diseases. “When you think of the millions of pills that people take as anti-anxiety drugs, how many of these people are really anxious? Probably just a small percentage,” said James L. McGaugh, Director of the Center for the Neurobiology of Learning and Memory at the University of California, Irvine, CA, USA. Millions of school children in the USA are prescribed antipsychotic drugs or are treated for depression and attention deficit and hyperactivity disorder (ADHD), and the numbers in Western Europe are also increasing (Brower, 2003). There is an epidemic of new behavioural disorders: ADHD, seasonal affective disorder (SAD), post-traumatic stress disorder (PTSD), panic disorder (PD), narcissistic personality disorder (NPD), borderline personality disorder (BPD), antisocial personality disorder (APD), histrionic personality disorder (HPD)—soon we will run out of letter combinations to abbreviate them all. The explosive increase in prescriptions for Ritalin® for school children has already prompted questions about the apparent epidemic of ADHD. “Now it’s not that Ritalin is not effective in sedating an over-active kid, it certainly is, but it’s turning a complex social relationship into a problem inside the brain of a child and therefore inside the genes of a child,” said Rose (see interview, in this issue).

In a way, Ritalin is neuroethics “in a nutshell”, commented Wrye Sententia, co-director of the Center for Cognitive Liberty and Ethics (CCLE), a non-profit education, law and policy center in Davis, CA, USA, and head of its programme on neuroethics. The debate over the drug covers social, ethical and legal issues: who defines behaviour and behavioural disorder, who should control treatment, how should society react to drug misuse, and is it ethical to use drugs to gain an advantage over others? These are valid questions that apply equally to neuroethics in general.

Neuropharmaceuticals have already found applications outside a medical setting. Like amphetamines before it, Ritalin is increasingly used by healthy people to help them focus their attention. Similarly, the development of new drugs to influence the biochemistry of brain function also has broad economic potential outside the medical setting. Most memory-enhancing drugs available to treat Alzheimer's, such as donepezil, galantamine or rivastigmine, inhibit cholinesterase to slow down the turnover of the neurotransmitter acetylcholine in the synapse. New drugs in the development pipeline will act on other compounds in the biochemical pathway that encodes memory: Cortex Pharmaceuticals (Irvine, CA, USA) are studying compounds called Ampakines®, which act on the AMPA receptor. This receptor responds to glutamate, which is itself involved in memory acquisition. Another class of drugs under development acts on the cAMP responsive element-binding protein (CREB), the last step in establishing long-term memory. “What we would expect is that drugs that enhance CREB signalling would be specific to inducing long-term memory and not affect upstream events of memory, such as memory acquisition and short term memory,” explained Tim Tully, Professor at Cold Spring Harbor Laboratory (NY, USA) and founder of Helicon Therapeutics (Farmingdale, NY, USA), one of two companies now working on drugs to increase CREB function.

None of these drugs, however, tackles brain degeneration itself, the cause of Alzheimer’s and other neurodegenerative diseases, but instead they delay the disease by squeezing a little more out of the remaining brain material. Consequently, they will also work on healthy people. Not surprisingly, the pharmaceutical industry has a great interest in this non-medical use of memory-enhancing drugs, according to McGaugh: “The Alzheimer market is a very important one, but small. The real market is everyone else out there who would like to learn a little easier. So they take a pill in place of studying harder.” Tully warned about the dangers of this off-label use of memory enhancers. The side effects of the first generation of memory drugs are a risk that should not be taken when there is no reason, he said. And this may never become an application, due to other intrinsic side effects. “Maybe it is not a good thing to have memory enhanced chronically every day for the rest of your life. Maybe that will produce psychological side effects, like cramp your head with too many things you can’t forget,” Tully said.

“The strong military interest in psychopharmaceuticals also presents another conundrum: if the military allows their off-label use, it would be hard to call for a ban on their civil use…”

Although memory is important, so too is the ability to forget negative experiences. As long-term memory is largely enhanced by stress hormones and emotional arousal, a horrendous event can overload the system and lead to PTSD: patients persistently re-experience the trauma. Researchers at Harvard University are now studying propranolol, a beta-blocker commonly used as a cardiac drug, as a means to decrease PTSD. Similarly, Helicon Therapeutics is working on CREB suppressors to achieve the same goal: forgetting unwanted memories. These drugs could be valuable for rape victims, survivors of terrorist attacks or young soldiers suffering from PTSD as a result of battlefield experiences. Nevertheless, an ethical debate over memory suppressors has emerged. Kass has described them as the “morning-after pill for just about anything that produces regret, remorse, pain or guilt” (Baard, 2003). But “if the soldier should be shot in the leg, he is treated. They mend the wounds. Now why wouldn’t they mend the mental wounds? On what moral grounds?” countered McGaugh. “We need the right regulations and we need the right education of society so that the social acceptance of how to use such drugs is appropriate,” said Tully. “Just to give the drug to every soldier that has been out in the field, that would be an abuse… A commander-in-chief, one would hope, would decide against such a use based on his education and on his advisors telling him scientists and experts have discussed this issue and it’s immoral to do something like that.”

“Freedom of thought is situated at the core of what it means to be a free person”

Cognitive enhancement is of just as much military interest as the treatment of PTSD. German fighter pilots in World War II took amphetamines to stay alert during British bombing raids at night. During the war against Iraq, US fighter and bomber pilots used drugs to keep awake during the long flights to and from their targets, which with briefing and debriefing could easily exceed 24 hours. Not surprisingly, the US Air Force is carrying out research on how donepezil could improve pilots’ performance. The strong military interest in psychopharmaceuticals also presents another conundrum: if the military allows their off-label use, it would be hard to call for a ban on their civil use, as Kass has suggested.

Neurological advances are not limited to new drugs. Brain imaging techniques, such as functional magnetic resonance imaging (fMRI) or positron emission tomography (PET), offer enormous potential for analysing higher behaviour. While neurologists originally used them to analyse basic sensory, motor and cognitive processes, they are now increasingly being used by psychologists and philosophers to investigate the mechanics of social and moral attitudes, reasoning and perceptions (Illes et al, 2003). Joshua Greene, a graduate student at Princeton University's Center for the Study of Brain, Mind and Behavior, put his human subjects into an fMRI scanner and presented them with hypothetical scenarios in which they had to make a decision between two more or less bad outcomes of the situation (Greene et al, 2001). The results of the studies show how the brain weighs emotional and rational reasoning against each other in its decision-making. Potentially, this could be used as a sophisticated lie detector to see if someone answers a question spontaneously or after considerable reasoning. Other studies showed that the brain reacts differently at first sight when seeing a person of the same or a different skin colour (Hart et al, 2000; Phelps et al, 2000). That does not necessarily mean that everyone is a racist, but refinement of such methods could unveil personal prejudices or preferences. The use of brain scans to evaluate people's talents or dispositions will therefore draw as much interest as the drugs used to manipulate them. “Parents will be falling over themselves to take these tests,” Caplan said. In contrast to Kass and other conservative critics, he therefore argues that regulation will not make sense but that it should be left to the individual to make decisions about whether to undergo diagnostic tests for behaviour or take behaviour-modifying drugs.
“Medicine, business and the public will have to negotiate these boundaries,” Caplan said, but he remains worried that “peer pressure and advertising and marketing will make us take those pills.” Rose also does not call for a ban, but wants society to take control of these new advances and their applications, based on democratic decisions.

The use of these new tests and drugs may cause another problem. Going back to Ritalin, Sententia explained that an important reason for the apparent increase in ADHD may be overcrowded classrooms and overworked teachers, who are quick to label a child with ADHD rather than call for improvements in the school. “From the top down there is a clear message to put these kids on drugs,” Sententia said. Society should instead “put the parents’ rights back into focus” and better educate parents about behavioural disorders. This would give them more freedom to make their own decisions for their child “so they are not at the mercy of doctors or teachers,” she continued. Such “cognitive liberty”, as Sententia described it, would have to rest on better public education and understanding about the risks and benefits, the potentials and myths of neurobiology. “What I think we need to do in the next five or ten years is discuss exactly what is appropriate and inappropriate in applying these things,” said Tully. “Now is the time for education.”

This does not, however, solve the question of who controls diagnostic tools and treatment in the case of people who are not free or able to make their own decisions—such as children, prison inmates or psychiatric patients. CCLE, for instance, filed an amicus curiae (‘friend of the court’) brief to the US Supreme Court on behalf of Charles T. Sell, to argue against a court order requiring Sell to be injected with psychotropic drugs to make him mentally competent to stand trial for insurance fraud. Sententia sees some limitations, however, to cognitive freedom. Children do not enjoy the same civil rights as adults, but it should be the parents—not teachers or schools—who make the decisions about the diagnosis and treatment of their children, she said. Prison inmates also lose some of their individual rights when they are convicted, Sententia continued, and this may include their right to refuse medication. “The legal system will have to decide how to use this knowledge about the brain,” Caplan commented, in light of the “tremendous tension between brain privacy and social interest in controlling dangerous behaviour.” Sententia therefore stressed that all decisions about diagnosis and treatment must at least be in accordance with the US Constitution and the United Nations Declaration of Human Rights.

Some of the most important applications of this right to privacy concern using brain scans as a sophisticated lie detector for prisoners seeking parole, foreigners applying for a visa or employers testing their employees’ honesty. “What and how you think should be private,” Sententia said, because “freedom of thought is situated at the core of what it means to be a free person.” Caplan also expects more pressure from society in future to make sure that no such tests are performed without informed consent.

“The use of brain scans to evaluate people’s talents or dispositions will therefore draw as much interest as the drugs used to manipulate them”

Equally, Caplan, Sententia and others believe that individuals should be free to use neurological technology to enhance their mental abilities outside a medical setting. This is in contrast to the prohibitive stance taken by Kass and other conservatives who argue that it would be neither ‘natural’ nor fair to those who choose not to use such enhancement. “It’s not clear to me that all forms of enhancement are bad,” commented Adina Roskies, a neuroscientist and philosopher at the Massachusetts Institute of Technology’s Department of Linguistics and Philosophy (Cambridge, MA, USA). “There are all sorts of things that we do today that enhance our life prospects and that are not considered to be bad. … We’re far away from the ‘natural’ order already.” Thus, in some cases, instead of controlling or even restricting these new possibilities, it would be better if society focuses on trying to ensure that everyone has access to them, she continued. Given the increasing interest that the public is showing in the new possibilities offered by neuroscience, it may be too late for restrictions anyway. “There is no way of stopping this tide, the genie is out of the bottle,” Sententia said, “so the question is: how can we navigate this sea of change?”


  1. Baard E (2003) The guilt-free soldier. The Village Voice, Jan 22
  2. Brower V (2003) Analyse this. EMBO Rep 4: 1022–1024
  3. Greene JD, Sommerville RB, Nystrom LE, Darley JM, Cohen JD (2001) An fMRI investigation of emotional engagement in moral judgement. Science 293: 2105–2108
  4. Hart A, Whalen P, McInerney S, Fischer H, Rauch S (2000) Differential response in the human amygdala to racial outgroup versus ingroup stimuli. Neuroreport 11: 2351–2355
  5. Illes J, Kirschen MP, Gabrieli JDE (2003) From neuroimaging to neuroethics. Nat Neurosci 6: 205
  6. Phelps EA, O'Connor KJ, Cunningham WA, Funayama ES, Gatenby JC, Gore JC, Banaji MR (2000) Performance on indirect measures of race evaluation predicts amygdala activation. J Cogn Neurosci 12: 729–738

Manipulating your mind – What will science discover about our brains, and how are we going to deal with it? Holger Breithaupt & Katrin Weigmann, EMBO reports 5, 3, 230–232 (2004)


Neurology and Law

Imagine this futuristic courtroom scene. The defence barrister stands up, and pointing to his client in the dock, makes this plea: “The case against Mr X must be dismissed. He cannot be held responsible for smashing Mr Y’s face into a pulp. He is not guilty, it was his brain that did it. Blame not Mr X, but his overactive amygdala.”

The legal profession in America is taking an increasing interest in neuroscience. There is a flourishing academic discipline of “neurolaw” and neurolawyers are penetrating the legal system. Vanderbilt University recently opened a $27 million neuroimaging centre and hopes to enrol students in a programme in law and neuroscience. In the courts, as in the trial of serial rapist and murderer Bobby Joe Long, brain-scan evidence is being invoked in support of pleas of diminished responsibility. The idea is abroad that developments in neuroscience – in particular the observation of activity in the living brain, using techniques such as functional magnetic resonance imaging – have shown us that we are not as free, or as accountable for our actions, as we traditionally thought.

Defence lawyers are licking their lips at the possibility of (to use law professor Jeffrey Rosen’s succinct phrase) placing “the brain on the stand” to take the rap on behalf of the client. Though they failed to cut much ice in Long’s case, arguments that blame lies not with the defendant but with his overactive amygdala (supposedly responsible for aggressive emotions) or his underactive frontal lobes (supposedly responsible for inhibiting the expression of such emotions) are being deployed with increasing frequency. If our brains are in charge, and bad behaviour is due to them, our attitude to criminal responsibility, to punishment (the balance between rehabilitation and retribution) and to preventive detention of individuals thought to have criminal tendencies may all have to change.

Before we invest millions in “neurolaw” centres, however, we need to remind ourselves that observations of brain activity in the laboratory can explain very few things about us. We have no neural explanation for: sensations; the differences between sensations; the way our consciousness coheres at any particular time and over time; our relationship to an explicit past and an explicit future; our sense of being a self; and our awareness of other people as having minds like ourselves. All of these are involved in ordinary, waking behaviour. The confident assertion that “his brain made him do it”, except in well-attested cases – such as the automatisms associated with certain forms of epilepsy or the disinhibited behaviour that may follow severe brain injury – therefore goes beyond our current knowledge or understanding.

Those who blame the brain should be challenged as to why they stop at the brain when they seek the causes of bad behaviour. Since the brain is a physical object, it is wired into nature at large. “My brain made me do it” must mean (ultimately) that “The Big Bang” made me do it. Neuro-determinism quickly slides into determinism tout court.

And there is a contradiction built into the plea of neuromitigation. The claim “my brain made me do it” suggests that I am not my brain; even that my brain is some kind of alien force. One of the founding notions of neurolaw, however, is that the person is the brain. If I were my brain, then “My brain made me do it” would boil down to “I made me do it” and that would hardly get me off the hook. And yet, if I am not identical with my brain, why should a brain make me do anything? Why should this impersonal bit of matter single me out?

The brain is, of course, the final common pathway of all actions. You can’t do much without a brain. Decapitation is, in most instances, associated with a decline in IQ.

Nevertheless, there is a difference between events that owe their origin to the stand-alone brain – for example the twitching associated with an epileptic fit – and actions that do not. While we do not hold someone responsible for an epileptic fit, we do hold them responsible for driving against medical advice and causing a fatal crash. The global excuse “my brain made me do it” would reduce life to a condition of status epilepticus.

In practice, most brain-blamers are not prepared to deny everyone’s responsibility for anything and everything. While the brain is blamed for actions that attract moral disapprobation or legal sanction, people do not normally pass responsibility on to their brains for good actions or for neutral actions such as pouring a cup of tea or just getting up for a stretch after a long sit down. When asked why he is defending a particular client, a barrister is unlikely to say: “My brain made me do it, your honour.” This pick-and-mix neuro-determinism is grounds for treating a plea of “neuro-mitigation” with caution.

So we still retain the distinction between events such as epileptic fits that can be attributed to brain activity and those that we attribute to persons who are more than mere neural activity. Deciding on the boundaries of our responsibility for events in which we are implicated cannot be handed over to neuroscientists examining the activity of the isolated brain in the laboratory. As Stephen Morse, a professor of law, has reminded us, it is people, not brains, who commit crimes and “neuroscience . . . can never identify the mysterious point at which people should be excused responsibility for their actions”. That moral, legal question must be answered not in laboratories but in courtrooms and legislatures.

Meanwhile, the neuromitigation of blame has to be treated with suspicion except in those instances where there is unambiguous evidence of grossly abnormal brain function, or of abnormal mental function due to clear-cut illness that may have its origin in brain disease. Our knowledge of the relationship between brain and consciousness, brain and self, and brain and agency is so weak and so conceptually confused that the appeal to neuroscience in the law courts, the police station or anywhere else is premature and usually inappropriate. And, I would suggest, it will remain both premature and inappropriate. Neurolaw is just another branch of neuromythology.


Why blame me? It was all my brain’s fault: the dubious rise of ‘neurolaw’
Raymond Tallis, The Times, October 24, 2007

Philosophy at The End Of The Millennium: Existentialism, Nietzsche, Stirner, Postmodernism. Now what?

It seems right to begin with Kierkegaard – acknowledged as the father of existentialism. In his first book, Kierkegaard gave a description of three philosophical positions or ways of life: i) a cultured form of worldly hedonism; ii) the life of a judgmental, dutiful moralist; iii) a spirituality which transcends both worldly hedonism and the rules of social morality or ordinary justice.

He called the book Either/Or. For he contended that, as such positions are discrete and self-contained, based on their own unique values, and as reason and logic can’t prove which position is objectively more true or superior, a subjective either/or decision, a free leap of faith, is required to adopt any one and commit oneself to it. Free choice here means choice in the face of the inability to establish the objective rightness of the decision; hence, choice taken in irresolvable uncertainty; hence, choice begetting angst – anxiety that we are completely wrong.

Kierkegaard rejected the Hegelian philosophy dominant in his day. It claimed that by use of reason we can all see how a position evolves out of previous ones and represents a rational advance. Reason can compare and assess positions. If we follow the logic of cultural evolution we make a smooth transition from one to another and eventually arrive at a shared final conclusion: the ultimate position objectively superior to all others. We won’t need a leap of faith. Reason will guide and assure us we’ve arrived at the highest truth. Then we can all go home.

Nietzsche and postmodernism similarly reject the idea that reason can establish objective truth and that positions or ways of life can be compared to see which one is ultimate. Nietzsche is famous for his perspectivism, ie, his argument that philosophies reflect different perspectives on reality and that all such perspectives are founded on diverse culturally relative assumptions and values. We can’t prove objective truth since the criteria for the truth – for what gets called true in a particular culture – vary relative to historical time and place. There are no independent criteria by which we can judge between positions. Moreover, behind logic stands evaluation: eg, that one values being rational, or questioning, or reflective, or analytical, or dialectical, or that one is bothered about non-contradiction, logical determinations of reality, and the like. After all, a late-medieval thinker like Martin Luther can declare that reason is the devil’s whore – ie, that reason is a corrupt faculty, part of our fallen and sinful nature: not a reliable faculty to use in pursuit of truth. It will seduce us away from truth, which can only be found, says Luther, in a God-given scriptural revelation.

So, the value of reason appears relative and can be put in question. Other cultures have not valued it as much as we have in modern times. Nietzsche raises the question why we want truth at all rather than illusion and suggests it is only a kind of imperialism, or piece of moral naiveté, to assume truth is worth more than myth or appearance. Moreover, what we call truths are just our more triumphant fictions: ie, certain fictions, simplifications, and the like, come to the fore at a certain point in time and if they triumph they get called truths by most people in that culture. Thus, truth is basically a concept expressing a people’s incapacity to think otherwise. It reflects limitation, a degree of disempowerment. Our convictions are our prisons. At the same time, though, the temptation of truth is that it promises a power, viz, the security and superiority of feeling we live in the truth or possess the truth – as against others who are in the wrong. So, Nietzsche famously analyses truth and philosophy in terms of an underlying will-to-power.

Postmodernism is close. Foucault also analyses what’s called knowledge in terms of power – eg, that a group which successfully portrays itself as having knowledge thereby acquires power, and that such knowledges arise via discrepancies of power in society between so-called experts and those not in the know: between the haves and the have-nots in society, the dominant and less dominant in education. It is the dominant elites which determine what gets to be called the canon of knowledge – shoring up their privileged positions and passing the canon down to future generations. There’s no guarantee the canon, or dominant regime of discourse, is truth rather than a temporarily triumphant fiction serving certain vested interests. (This may have crossed one’s mind before!)

Also close is Lyotard, who calls the many positions grand narratives, or stories of truth, and refers to them as language games. The games are discrete and circular, for they are founded on their own unique set of values and contain within themselves their own game rules or criteria for truth, knowledge, evidence, proof, right method, and the like. There is no objectively true game, since there is no independent position from which you could judge between the games to decide which one is best. Hence, the games are said to be incommensurable – ie, they can’t be measured or compared for their real truth-value. Truths and values are relative to the game you are playing. To say one language game is intrinsically or objectively superior to another would be as absurd as saying that soccer is intrinsically or objectively better than cricket. Games are simply different, not inherently better or worse.

Also similar to Nietzsche is Baudrillard’s notion of simulation and seduction. We don’t live in the real as such, he says, but in our cultural simulation of reality. In late-Capitalist consumer society, where mass media dominate, the mainstream cultural simulation is selected and mediated over and over again. It is reinforced through endless repetitions: hyper-mediated, hyper-realized. The simulation thereby becomes the hyperreal, the realer-than-real: an overdetermined simulation which appears natural, normal, an obvious truth.

Meanwhile, the real itself is a void, a nullity, a desert of the real, as Baudrillard puts it. All cultures are seduced by their truths. Moreover, seduction is not rational, or it is pre-rational, more basic than the rational. For to be rational already presupposes one has been seduced by the ideals of reason. Hence, Baudrillard seems to be in agreement with Nietzsche that behind reason stands evaluation or the mysterious non-rational – the other of reason – which Baudrillard calls seduction. However, Baudrillard, unlike Nietzsche and Foucault, is uncommitted to the view that seduction operates through power or a will-to-power, or even through desire, as some others would have it. Hence, he says we should forget Foucault – presumably Nietzsche too, at least on this point.

How then does seduction operate, if not through power or desire? Actually, this is undecidable. For to analyze seduction in terms of power, or desire, or some other factor, be it psychological, psychoanalytic, natural and empirical, or supernatural and non-empirical, would already presuppose a seduction, ie, that one has been seduced by this or that discourse or perspective. Rather, the ultimate sources of seduction remain mysterious, a kind of secret rule of the game. We find ourselves seduced, we know not how or why. One thing remains though: whatever position or way of life we are seduced by, there is no way we can establish its objective or essential truth. It has value only relative to our seduction. To say our seduction is objectively best would be as absurd as Romeo saying Juliet is objectively best. He may feel she is, but he can’t establish this as a truth for others. So the implication of seduction theory in particular, and postmodernism in general, is that beauty and truth are in the eye of the beholder. Hence, it’s said that truth is dead in postmodernity – ie, essential truth, objective truth, is an outmoded notion, a concept from a dead language game of the past.

So, in the light of Kierkegaard and existentialism, Nietzsche and postmodernism, philosophical positions and ways of life now appear as perspectives, simulations, or discrete and discontinuous language games; or in more dramatic terms: at the end of the millennium, truth is dead. But was Kierkegaard right to say free choice or a leap of faith is required to jump the gaps? Is there free choice here? Is there even a self which is free to make such a choice? Does it have free will? On these questions we find Nietzsche and postmodernism part company with Kierkegaard and existentialism. Let’s consider.

Descartes is the father of modern philosophy, or what’s called modernity by postmoderns. The emphasis is on the self and related concepts, such as autonomy, responsibility, accountability, free will, free choice, individuality, and the like. It begins with the Cartesian “I think therefore I am”. Several things are implied: that there is a self, that the self is a causal agent, that the self can control thought and action through free will, that the self is a free moral agent – ie, accountable and responsible. Philosophers like Descartes, Kant and Hegel stressed the rationality of the self and said the self is most free when most rational. Kierkegaard and existentialism object. Nevertheless, they still concur on free self, free choice, deliberation and decision, responsibility and accountability. Therefore, we have to say that existentialism belongs to modernity.

Now, what about Nietzsche? He rejects the “I think” in no uncertain terms. It is arbitrary to assume the “I” creates or controls thought and action. After all, thoughts, beliefs, actions, decisions, and the like, can be generated by underlying and unconscious agencies. This, of course, connects to will-to-power. Will-to-power can operate in us at levels below the level of conscious awareness or control. The sense of having a free subjectivity, a free self, is itself an illusion generated by will-to-power in the human organism. Moreover, Nietzsche declares: the doctrine of free will is “a hangman’s metaphysics” – ie, a fiction invented by certain resentful and vengeful groups in the past so that others – criminals, conquering tribes, masters – can be held accountable and responsible and duly condemned, punished, or damned. Belief in free will thus serves to rationalize and legitimate righteous indignation and revenge under the fiction of justice and desert. The idea caught on.

Similarly, postmodernism decentres the self, ie, it undermines the ideology of the free self by pointing to factors which condition who we are, what we can think or say or believe, or what we can do. One catch-phrase is: the self does not speak language, but language speaks the self – ie, the cultural language or language games we are brought up in condition our sense of subjectivity and the possibilities of thought. We may think we are free agents, but actually we are speaking and acting in accordance with our historical conditioning and cultural limitations.

So Nietzsche and postmodernism differ radically from Kierkegaard and existentialism in so far as the latter rely heavily on an assumption of free subjectivity reminiscent of Descartes. There can be no existential free choice, or free and accountable leaps of faith, if the self is merely a simulation of selfhood, as Baudrillard might say, determined by modernity’s cultural code. Moreover, in the light of this, Nietzsche is surely not an existentialist and existentialism is not the heir to his thought; rather, postmodernism is. Indeed, postmodernism could well be described as a kind of neo-Nietzscheanism.

In sum: Kierkegaard the existentialist argued that positions can’t be compared by reason alone and that objective truth is impossible, then declared that a responsible, accountable, free leap of faith is required. He assumed the reality of free will or free subjectivity as the final ground of our action and commitment. Nietzsche and postmodernism object. Underlying factors, such as power, desire, cultural conditioning, language limitations, regimes of discourse, seduction, and the like, must be taken into account. Existentialism is itself a version of the hangman’s metaphysics. Are we speaking at a hangman’s society?!

What now of Max Stirner? Where does he stand? Stirner was writing at much the same time as Kierkegaard, in the 1840s, and in a similar intellectual environment. Like Kierkegaard he rejected the dominant Hegelianism in which he was schooled. So in some ways he is similar to Kierkegaard, especially in that he too provides a sustained critique of rationalist metaphysics and objective truth. Moreover, at first glance he seems to be arguing in favour of free subjectivity, the free self or free ego, and free individualism. Thus, he might seem to belong in the existentialist camp. However, this is rather misleading. If we look more closely we find he is not committed to the idea of a free self or ego, and that, contrary to initial appearances and to his critics and commentators, he is not advocating individualist egoism at all.

Well, this needs some explaining. Stirner certainly argues against objective truth arrived at through reason, proposing instead that positions have been adopted in the past for underlying egoistic reasons of self-interest. Desire had more to do with it than reason. However, as with will-to-power, this egoistic will did not always operate at the conscious level of deliberation or control. Most of the time people have been unconscious or involuntary egoists, as Stirner puts it, ie, they may have thought they were choosing a position purely because of its truth, but since no position can exhibit its truth, the real motives were psychological, egoistic in the sense of being self-serving or apparently advantageous.

This is summed up in Stirner’s saying, “Nothing is sacred but by my bending the knee” – meaning: nothing is simply given as sacred or true or right or valuable in itself, but only acquires this appearance of value by our elevating it to this sublime status, disempowering ourselves in relation to it. We project its value, declare it sacred, untouchable, inviolable, thereby losing the capacity to take back its value again, or annul it. We do this because we feel, however dimly, however unconsciously, it is advantageous to be aligned with the sacred.

However, Stirner argues we are, rather, disadvantaged in the process. For we become addicted to the sacred truth, and, as Stirner sees it, a better – more empowering, more reliable, more immediate, more liberating – mode of happiness, a happiness of non-addiction, can be found by undermining and annulling every sacred truth. We achieve this through realizing nothing is sacred of itself but only appears sacred via our projection – by our bending the knee. Seeing it is not sacred or inviolable in itself, we find we can violate it, ie, take back its value and annul it, thus letting go of it.

Example: consider people who fall romantically in love. At one level they feel it is advantageous to be thus enthralled -and so they pursue it: their own thralldom, their own servitude. The other becomes a sacred object or idol to which one becomes addicted, attached. One becomes emotionally dependent. There are certain highs involved, to be sure, which explains the temptation.

But there’s the down side. We are subservient in that our sense of emotional wellbeing is vulnerable to the other’s will or changeability. As Stirner would say, we have fallen prey to tributariness – ie, we pay the other too much tribute, give the other too much weight, value or power. In short, we make the other sacred by bending the knee. This is the pattern of idolatry. The same applies to everything – eg, God, truths, faiths, beliefs, ideologies, reason, discourse, thought, and even the self or ego. We can make a little idol out of anything.

Do we possess our objects of belief and desire, or do they possess us? For Stirner, re-phrasing Hamlet, to possess or be possessed – that is the question. Possessing them without them possessing us means we retain the capacity to take back value any time and cancel, suspend or annul it – ie, we can absolve ourselves of the thing, we can let it go, be non-attached and independent in relation to it. We can, for example, let that old lover go, let that old God go, let that old truth go, let even life itself go – let everything go. To be able to have and enjoy things without them having you describes the non-attached condition Stirner calls Ownness. We come into our own, we develop maturity, when we can have and not have in this way.

God and truth are dead for Stirner in that he can let them go. He is radically uncommitted. Indeed, he is not concerned for anything except “the self-enjoyment of life” – akin to what the Greeks called “eudemonia”, ie, philosophical good spirits. To attain and enjoy good spirits is Stirner’s purpose. Attainment comes via the realization that nothing is sacred except by our bending the knee, and via exercising the capacity to take back all things and annul their value or power over us. This implies we annul all the objects of belief and desire, hence, all the objects of hope and fear and time. What then remains? Only what Stirner calls “creative nothingness” – ie, the ongoing unfolding of life itself here and now without names, conceptualizations, divisions, limits. For these are all objects of belief or desire, potential idols. And we take these back and annul them. So it is no longer a matter of fear and hope, of time, of mediation. The immediate self-enjoyment of creative nothingness is realized where there are no idols left standing to block it. It is the free creative act of life-affirmation, of life affirming itself in and through us: a life-enjoyment without reason, that is, for no reason except itself, because, well, enjoyment is enjoyable – which seems obvious.

Now, is this egoism? I say not. For ordinary egoism is the pursuit of enjoyment in time via the objects of belief and desire. And self-enjoyment is precisely not this. On the contrary, self-enjoyment is the radical alternative to ordinary egoism. But is it not egoism at least in the sense that Stirner believes in the free ego or individual self of egoism, as the commentators say? No, again. Stirner is not attached or committed to self or ego, since self or ego is simply a concept, an object of belief or desire, one more potential idol. He annuls it along with the rest. Note that Stirner’s motto throughout the book is not “I have set my affair on the ego or egoism”. His motto is, “I have set my affair on nothing.” Creative nothingness is the last word in his discourse, and on the last page of the book even the idea of the ego or owner is taken back, annulled, returned to the creative nothing from whence it came.

Stirner doesn’t belong in the existentialist camp because he is not committed to key existentialist notions: eg, self, free will, authenticity, accountability, responsibility. He would absolve himself of all such notions. He would not make an idol of them. Well, then, shall we say Stirner is more like a postmodern? After all, he was one of the first to use the term “modernity” to describe the previous period of philosophical culture, and he says that his own position -ownness, or self-enjoyment -comes after this, and so by implication is post-modern. In fact, a good case can be made that he was way ahead of his time, that critics and commentators have failed to understand him, and that he anticipated many themes of postmodernism a hundred and fifty years ago.
However, what Stirner most resembles, it seems to me, is Taoistic Zen. After all, Taoistic Zen is also all about radical non-attachment to any objects of temporal desire or belief and a contemplative openness to and appreciation of the Tao, understood as the nameless, the unconceptualized, Way of reality. As the first line of the Tao Te Ching says, the Tao or Way that can be named is not the real Tao or Way itself. Thus, the Tao is akin to Stirner’s creative nothingness and the contemplative appreciation of the Tao is akin to Stirner’s practice of immediate self-enjoyment.

What about similarity between Taoistic Zen, Stirner, and postmodernism? Well, in so far as postmoderns are committed to discourse itself, or the terms of their discourse – whether power, or desire, or deconstruction, or simulation, or seduction, etc. – and make a sacred idol out of them, then there would be little similarity. However, in so far as ironic detachment from discourse is hinted at in some texts – notably in the case of Baudrillard – then there may be a similarity. In Baudrillard, in his rather extreme brand of postmodernism, there is an ongoing unresolved ambiguity or equivocation over whether his discourse is to be taken as a serious or sacred truth about the real or whether he is instead engaged in a kind of provocative and ironic game with the reader. The former is suggested by his description of himself as a moralist and metaphysician. The latter is suggested by references to his text as theory-fiction and by pronouncements that the secret of theory is that there is no longer any truth in theory. In short, Baudrillard prevaricates on this crucial issue. And so, in the end, one must forget Baudrillard.

Stirner privileges the calm contemplative self-enjoyment of creative nothingness above all, and he seems rather scornful of other pursuits. He prefers aloof retreat from the world, and he seems to have gone on to live the rest of his life this way. The same goes for mainstream Taoistic Zen. However, postmodern writers, including Baudrillard, who at least flirts with the void and contemplative silence, tend to privilege discourse or writing as such, and so churn out endless books – even if they are books of theory which argue we can’t write books of theory any more. This seems to be the state of play in philosophy as we approach the end of the millennium.

Which leaves me with one last question to address tonight. Is there a way forward from here into the next millennium, a way beyond the positions outlined so far, a way beyond even postmodernism: a post-postmodernism perhaps? Is there life after theory? This strikes me as being the primary research question in philosophy at the present time. And to judge by the number of books and compilations with the word “after” in the title, I wouldn’t be alone.

I’ll advance the following conjectures. If the first millennium, the medieval millennium, pre-modernity, can be categorized as the Age Of Faith – ie, where religious faith, piety, theology, supernaturalism, etc. increasingly preoccupied cultural life; and if the second millennium, the modern millennium, can be categorized as the Age Of Reason – ie, where theorizing, reasoning, science, humanism, critical thinking – eventually leading to late-twentieth century postmodern irony, ambiguity, and nihilism – increasingly preoccupied cultural life; then perhaps the next millennium might be characterized differently from both and be called the Age Of Art. This would be an age after theory, an age which is post-religious and post-rational – or in sum, post-truth, and, therefore, also post-irony, post-nihilism, even post-Baudrillard, even post-postmodern: an age where art and artistic effects come to the fore and preoccupy cultural life.

Art existed in previous ages, of course. However, each age has a dominant principle which other interests serve, and in those ages art served the dominant principles of faith or reason. So, in medieval times reason and art were pressed into the service of faith: faith went in search of understanding through reason in theology, and in search of aesthetic self-expression through religious art. In modern times, faith and art are pressed into the service of reason: faith becomes either a rational faith, faith within the bounds of reason alone, as Kant had it, or a faith in reason itself; and art becomes rational, humanist, realist, socialist, critical, avant-garde, etc., following the evolving trends of critical theory.

What I envisage, then, is an age where art really comes into its own, ie, where artistic creativity and effect, aesthetic quality and interest, become the dominant principle, and faith and reason are pressed into its service. Faith becomes faith in art as a way of life: an artistic faith – in art and imagination we trust, rather than in God we trust (or in science). Reason and its associated qualities – logical argument, order, proportion, method, clarity, coherence, concision, discursive elegance, etc. – are employed in so far as they contribute in a work to its aesthetic quality. The latter, then, is what counts, not reason itself. So good or bad in such an age is not decided by a dominant religion or piety, nor by a dominant rational methodology or science, but by degree of artistic appeal.
For example, consider theory – ie, the old representational language game: a game purporting to contain knowledgeable propositions truthfully representing reality as it is – eg, God exists, electrons exist, the self exists, freedom exists, etc. However, representational language turns on epistemology – ie, the study of knowledge – which claimed to give the logos or knowledgeable account of knowledge, the truth about the truth. It claimed to know what knowledge is and exhibit its possibility. This always was an absurd undertaking, however, founded in a paradox. For to know what knowledge is presupposes we already know what knowledge is in claiming to have knowledge about knowledge. Put another way, to say a criterion of the truth is a true criterion of the truth either presupposes the criterion already and so begs the question, or else sets up an infinite regress by bringing in another criterion. The upshot is simple: epistemology is impossible; hence, knowledge and truth are impossible; hence, the age-old representational language game of theory is impossible; hence, we need to move beyond representational language and the claim that theory contains statements as true representations of reality.

We cease prevaricating and unambiguously drop the pretense that theory is really saying anything about reality at all. But what then can it be doing? Is there another way of intending or understanding a text? There surely is. Literature, creative writing, fiction, theatre, poetry – these do not have to claim to be representing reality. They can be an alternative to the representational language game of truth. A novel, for instance, might be a complete fabrication from beginning to end, an exercise of the artistic imagination, a fantasy work. However, it can still have merit, ie, in an aesthetic sense, if it succeeds in generating aesthetic arousal and interest in the reader. So theory, after theory, must be understood this way: as creative writing, literature, prose-poetry, art. This still allows that there can be good and bad theory, but good and bad is not determined by criteria such as truth-content, representational correspondence to reality, or verisimilitude, but by aesthetics.

In short, in the blink of an eye – perhaps we should make it at the stroke of midnight bringing in the year 2000? – everyone becomes an artist. Thus: philosophers, theologians, fundamentalists, mystics, scientists, sociologists, critical thinkers – all artists, all exercising their creative imaginations, expressing themselves, inventing theory. No longer any ambiguity about it: theory is theory-fiction. We start from there. We drop the irony and pretense of truth and switch over to a purely aesthetic paradigm. We all become artists, artists all the time, even in our own heads. For thought – ongoing internal discourse – no longer represents reality either. Everyday thinking itself is art, is imagination, is story-telling. Of course, we can be relatively good or bad artists. The criterion is not representational truth, if truth is dead, but turns on aesthetics: broadly speaking, on the degree to which whatever is generated is pleasing or interesting.

Here are some dictionary synonyms for the word “interesting”: absorbing, arousing, amusing, appealing, attractive, compelling, curious, engaging, engrossing, entertaining, gripping, intriguing, novel, original, provocative, stimulating, thought-provoking, unusual. These and related aesthetic terms, such as beautiful, sublime, elegant, inspiring, moving, etc., now take over from the old terms associated with the dead language of representation, such as truth, knowledge, correspondence, coherence, pragmatism, probability, proof, evidence, demonstration, verification, falsification, legitimation, etc. So observe that, where once Lyotard reported a legitimation crisis regarding theory, there is no longer a legitimation crisis, since, after theory, theory no longer makes claims which require legitimation. Rather, whatever value theory-fiction has turns on its aesthetic merits. The quest, therefore, is no longer a quest for the truth – which always was an impossibility – but, rather, the point of theory and every other aesthetic creation is simply this: to make life more interesting!

Observe those who are down, depressed, dull, in the doldrums, those for whom life has lost its spice, for whom life seems meaningless, who may even contemplate suicide. Life holds no interest for them. What they need is arousal -that which would enable them to find life more interesting. That is where art comes in. Art is therapy. Art is the endless capacity of the human imagination to create and re-create interest in life, and thereby, meaning and value. And it comes in all shapes and forms: not just books, paintings, films, music, but also: religion, science, mythology, philosophy, debate, psychoanalysis, politics, Zen meditation, whatever. Everything is theatre. Go to a church or ashram or zendo -or for that matter, a parliament -and the theatricality is obvious. Less obvious, but no less theatrical, are our therapy rooms, science labs, and lecture halls. Note the costumes, the props, the role plays, the standards of good and bad form, the rules of procedure -the stage directions, in other words. There is no truth to be found in any of it. Nevertheless, it can be extremely interesting. What’s more, it keeps us all alive and kicking.

All we need do now is create more art as best we can -more inventive art, more pleasing art, more arousing art, more comprehensive art -art for its own sake, where art is the dominant ethos and everything else, eg, faith, reason, virtue, is subservient to the aesthetic principle. Moreover, it is no longer a matter of saying one art form is inherently better than another, eg, that one religion is better than another, or that science is better than religion, or vice versa, or that meditation or contemplation is better than intellectual work or an active life in the world. For they are all equivalent as art forms, and to say one is superior would be like saying horror movies are inherently better than tragedies or comedies. It is merely a matter of what makes life seem more fascinating to you. So generate and enjoy! After theory, this can be done more freely and with a clear intellectual conscience. For truth is no longer a constraint. If it interests one to think there are fairies at the bottom of the garden, then one can entertain the thought, and thereby entertain oneself. After all, this is no more or less true than that there is a God or an electron at the bottom of the garden. Indeed, perhaps fairies ride about on electrons and angels still dance on pinheads. As for the Big Bang, that’s a particularly stirring form of science fiction -however, a fashion which, quite possibly, will be outmoded in fifty or a hundred years.

But at this point perhaps we need to consider two typical objections to life as art. First: that it is escapist. However, to claim devotion to art is mere escapism from reality presupposes one can prove what is reality. And after theory, this can’t be done. Moreover, after theory, any theory of reality is itself art. Thus, the objection is outmoded. That’s why entertaining fairies at the bottom of the garden is just as valid as entertaining electrons (if electrons are entertaining). Second: life devoted to art is morally irresponsible. This again presupposes truth, this time a truth of morality. Moreover, after theory, moral theory is itself an art form, as is the ethical self. That is: one finds a type of character attractive, hence one is drawn to those who exhibit it, more-or-less, and one tends to create it as a preferred self-image. Thus, in an age of art, ethics turns on aesthetics, rather revamping the “beautiful soul” idea -except it is no longer claimed beauty has an objective or universal standard or that we ought to conform to one. Beauty is contextual, as is morality. However, if we are concerned as artists to be aesthetically appealing, the likely way is to become more beautiful and interesting, whether in appearance or character. A pain in the bum is a poor artist in the medium of morality. Of course, one may be good in some other way. But if the ideal in an age of art is maximum comprehensive artistry, it behoves us to develop our artistic talents in as many mediums as possible, as best we can, including the medium of morality. In this way, we become eclectic artists, somewhat Renaissance-like. So virtue is included in an age of art, as is faith and reason, under the dominant aesthetic principle.

If there is anything to avoid, it is simply that which usually makes for bad art, such as: the ugly, the displeasing, the inelegant, the irritating, the banal, the clichéd, the commonplace, the stereotypical, the repetitious, the overdone, the long-winded, the unoriginal, the uninspired, the dull, the boring, the superficial, the inept, the poorly crafted, the technically unproficient, the juvenile, the unripe, the jaded, the stale, etc. We apply such criteria when adjudicating things in context -eg, a play, an academic essay, a poem, a painting, a scientific paper, a thesis, a political manifesto, a dance, a sermon, a news report, a character, a song, and so forth. Experienced judges usually find themselves in agreement with other experienced judges in the same field. Still, judgments are subjective in reflecting and expressing one’s lack of interest or pleasure in the work, a deficit of aesthetic arousal.

We might note that the disturbing, the unsettling, the occasionally discordant or displeasing, is not always an objection to a piece. It depends on how these elements fit into and complete a whole which overall may be aesthetically pleasing.

Which leads to my penultimate point tonight. People have no idea what reality as a whole is. Indeed, it is quite comic when they think they do. At such times they appear as perceptive as the soap box they’re standing on (which is entertaining in its own way). At any rate, things appear thus after theory. This opens a strategy of re-enchantment. For whenever some discord arises in life, some painful episode, to avoid disenchantment we just have to realize that within the whole this discord may play a positive essential part. It may be a fine artistic touch, a piece of finesse lending grace to the total picture, even if grace is currently incognito. In other words, in an age of art after theory, it is easy to entertain ourselves with the idea that reality itself is an artistic work and that this is how our sufferings can be justified and accommodated. After all, the truth of the idea is no longer relevant. All that matters is that it be a re-enchanting idea to engage with. One just needs to contemplate reality in this fashion, as a perfect aesthetic Whole or Way or Tao, to defuse the blues.

Finally, it will no doubt have occurred to the perceptive person that my discourse tonight must be, according to its own lights, beyond truth. This is so. It is only an argument. An argument could be completely convincing to everyone who hears it, and yet still be false. So what has it to do with truth? My discourse, therefore, is merely intended as a piece of creative writing which may or may not provoke a lively aesthetic effect. Its sole purpose is to interest or re-enchant, at least its author. If nothing else, it has achieved that.


Existentialist Society Lecture. 2nd Nov. 1999.


Society and Porn

As a society we are further from turning off the porn than we have ever been. Pornography is everywhere – it masquerades as “gentlemen’s entertainment” in the form of clubs, it infiltrates advertising and there are even plans to send it to mobile phones.

In the US, with the pornography industry bringing in up to $US15 billion (£8 billion) annually, people spend more on porn every year than they do on movie tickets and all the performing arts combined.

Each year, in Los Angeles alone, more than 10,000 hardcore pornographic films are made, against an annual Hollywood average of 400 movies.

Pornography is not only bigger business than ever before, it is also more acceptable, more fashionable, more of a statement of cool. From pieces “in praise of porn” in normally sober magazines, to Victoria Coren and Charlie Skelton’s book, published last year, about making a porn film, to the news that Val Kilmer is to play the part of pornography actor John Holmes in a new mainstream movie, there is a widespread sense that anyone who suggests pornography might have any kind of adverse effect is laughably out of touch.


Coren and Skelton, former Erotic Review film critics, focus on their flip comic narrative, scarcely troubling themselves with any deeper issues. “In all our years of watching porn,” they write, in a rare moment of analysis that isn’t developed any further, “we have never properly resolved what we think about how, why and whether it is degrading to women. We suspect that it might be. We suspect that pornography might be degrading to everybody.”

With pornography, it seems as if the sheer scale of the phenomenon has, in time-honoured capitalist fashion, conferred its own respectability; as a result, serious analysis is hard to come by. Only occasionally is there broadcasting that gives any kind of insight.

The British documentary Hardcore, shown two years ago, told the story of Felicity, a single mother from Essex, England, who travelled to Los Angeles hoping to make a career in pornography.

Arriving excited, and clear about what she would not do – anal sex, double-vaginal penetration – she ended up being coerced into playing a submissive role and agreeing to anal sex.

Felicity – the vicissitudes of whose own troubled relationship with her father were mirrored by the cruelty of the men with whom she ended up working – eventually escaped back to Britain.

Hardcore offered a rare, unadorned look at the inside of the industry, as did Pornography: The Musical, albeit in a more surreal form, with actors interrupting sex to break into song.

Yet what about the millions who consume pornography, the men – for they are, despite pornographers’ claims about growing numbers of female fans, mostly men – who habitually use it? How are they affected? Is pornography, as most these days claim, a harmless masturbatory diversion?

There are suggestions that a heavy diet of porn might encourage men inappropriately to expect sex. Is that true? And what about more profound effects? How does it affect relationships? Is it addictive?

Does it encourage rape, pedophilia, sexual murder? Surely tough questions need to be asked.

First, though, some definitions. According to the Shorter Oxford Dictionary, the word “pornography” dates to 1864, when it described “the life, manners, etc of prostitutes or their patrons”.

More recently, it has come to signify material, in the words of Chambers, “intended to arouse sexual excitement”. Its most common themes, however, are power and submission. By contrast, “erotica”, which is pretty hard to find now, carries additional connotations of “amorousness” and is far less concerned with control and domination. No, it is pornography plain and simple, from venerable “wrist mags” such as Playboy, to the almost daily bombardment of teaser pornographic emails, that confronts all of us ceaselessly.

The received wisdom, pushed hard by mass-market magazines such as FHM, is that men derive a pretty uncomplicated enjoyment from pornography. That, certainly, is the argument put forward by such proponents as the British food writer A.A. Gill, who has directed his own pornographic film, and the musician Moby, who once said in an interview: “I like pornography – who doesn’t? I don’t really trust men who claim to not be interested in porn. We’re biologically programmed to respond to the sight of people having sex.”

Danny Plunkett, then features editor of Loaded, takes an equally relaxed view. “We know that a lot of people enjoy it and take it with a pinch of salt. We certainly don’t view it as dangerous.”

But is it as simple as this? One of my best friends is a man for whom pornography has apparently never held even the slimmest interest. Moby may choose to distrust him, but his sex life otherwise has always seemed to me perfectly robust. He is, however, so much in the minority as to seem almost an oddity.

For most men, at some point in their lives, pornography has held a strong appeal and, before any examination of its effects, this fact has to be addressed. Like many men, I first saw pornography during puberty. At boarding school, dog-eared copies of Mayfair and Knave magazines were stowed behind toilet cisterns; this borrow-and-return library system was considered absolutely normal, seldom commented upon and either never discovered by the masters or tacitly permitted. Long before my first sexual relationship, porn was my sex education.

No doubt (though we’d never have admitted it then) my friends and I were driven to use porn through loneliness: being away from home, we longed for love, closeness, unquestioning acceptance. The women over whom we masturbated – the surrogate mothers, if you like – seemed to be offering this but, of course, they were never going to provide it.

The untruths it taught me on top of this disappointment – that women are always available, that sex is about what a man can do to a woman – I am only now, more than two decades on, finally succeeding in unlearning.

From men everywhere come similar stories. Nick Samuels, 46, an electrical contractor – now, with a wife and four children, the very image of respectable fatherhood – says he first discovered the power of pornographic images at the age of 16, when he found a copy of Mayfair in his father’s garage. “I can even remember the picture. There was a woman walking topless past a building site and the builders were ogling her from the scaffolding. It was pretty soft stuff, but it heightened my senses and kicked off my interest in pornography. Before long, I was reading Whitehouse and then, through a friend at my squash club, I was introduced to hardcore videos.”

Si Jones, a 39-year-old north London vicar who regularly counsels men trying to “come off” pornography, admits that, for him, too, it was his introduction to sex. “As a teenager, I watched porn films with my friends at the weekend. It was just what you did. It was cool, naughty and everyone was doing it.” Set against today’s habit of solitary internet masturbation, Jones’s collegiate introduction to porn seems peculiarly sociable.

Today, boys no longer clandestinely circulate magazines after school; nor do they need to rummage through their father’s cupboards in search of titillating material. Access to internet pornography has never been easier, its users never younger, and the heaviest demand, according to research published in the New York Times, is for “‘deviant’ material including pedophilia, bondage, sadomasochism and sex acts with various animals”.

At its most basic level, pornography answers natural human curiosity. Adolescent boys want to know what sex is about and porn certainly demonstrates the mechanics. David Morgan, consultant clinical psychologist and psychoanalyst at the Portman Clinic in London, which specialises in problems relating to sexuality and violence, describes this phase as “transitional, like a rehearsal for the real thing. The problem with pornography begins when, instead of being a temporary stop on the way to full sexual relations, it becomes a full-time place of residence”.

Morgan’s experience of counselling men addicted to porn has convinced him that “the more time you spend in this fantasy world, the more difficult it becomes to make the transition to reality. Just like drugs, pornography provides a quick fix, a masturbatory universe people can get stuck in. This can result in their not being able to involve anyone else”.

For most men, the way pornography objectifies sex strikes a visceral emotional chord. Psychotherapists Michael Thompson and Dan Kindlon, in their book Raising Cain: Protecting The Emotional Life of Boys, suggest that objectification, for boys, starts early. “By adolescence, a boy wakes up most mornings with an erection. This can happen whether he is in a good or bad mood, whether it is a school day or a weekend . . . Boys enjoy their own physical gadgetry. But the feeling isn’t always, ‘Look what I can do!’ The feeling is often, ‘Look what it can do!’ – again, a reflection of the way a boy views his instrument of sexuality as just that: an object.

“What people might not realise when they justly criticise men for objectifying sex – viewing sex as something you do, rather than part of a relationship – is that the first experience of objectification of sexuality in a boy’s life comes from his experience of his own body, having this penis that makes its own demands.”

But the roots go back further still. Research has shown that boy babies are treated more harshly than their female counterparts and, as they grow up, boys are taught that success is achieved through competition. In order to deal with this harsh masculine world, boys can learn not to trust their own feelings and not to express their emotions. They become suspicious of other men, with whom they’re in competition, after all, and as a result they often feel lonely and isolated.

Yet men, as much as women, hunger for intimacy. For many males, locked into a life in which self-esteem has grown intrinsically entwined with performance, sex assumes an almost unsustainable freight of demands and needs. Not only does the act itself become almost the only means through which many men can feel intimate and close, but it is also the way in which they find validation. And sex itself, of course, cannot possibly satisfy such demands.

It is into this troubled scenario that porn finds such easy access. For in pornography, unlike in real life, there is no criticism, real or imagined, of male performance. Women are always, in the words of the average internet site, “hot and ready”, eager to please. In real life, by contrast, men find women are anything but: they have higher job status, they demand that they be sexually satisfied, and they are increasingly opting to combine career and motherhood.

Men, say psychologists, also feel threatened by the “emotional power” they perceive women wielding over them. Unable to feel alive except when in relationships with women, they are at the same time painfully aware that their only salvation from isolation comes in being sexually acceptable to women.

This sense of neediness can provoke intense anger that, all too often, finds expression in porn. Unlike real life, the pornographic world is a place in which men find their authority unchallenged and in which women are their willing, even grateful servants. “The illusion is created,” as one male writer on pornography puts it, “that women are really in their rightful place and that there is, after all, no real and serious challenge to male authority.”

Seen in this light, the patently ridiculous pornography scenario of the pretty female apartment-hunter (or hitch-hiker, driver with broken-down car, or any number of similar such vulnerable roles) who is happy to let herself be gang-banged by a group of overweight, hairy-shouldered couch potatoes makes perfect psychological sense.

The porn industry, of course, dismisses such talk, yet occasionally comes a glimmer of authenticity. Bill Margold, one of the industry’s longest-serving film performers, was interviewed in 1991 by psychoanalyst Robert Stoller for his book Porn: Myths for the Twentieth Century. Margold made no attempt to gloss over the realities. “My whole reason for being in this industry is to satisfy the desire of the men in the world who basically don’t care much for women and want to see the men in my industry getting even with the women they couldn’t have when they were growing up. So we brutalise a woman sexually; we’re getting even for lost dreams.”

As well as “eroticising male supremacy”, in the words of anti-porn campaigner John Stoltenberg, pornography also attempts to assuage other male fears, in particular that of erection failure. According to psychoanalytical thinking, pornography answers men’s fetishistic need for visual proof of phallic potency. Lynne Segal, professor of psychology and gender studies at Birkbeck College, University of London, writes: “Men’s specific fears of impotence, feeding off infantile castration anxiety, generate hostility towards women. Through pornography, real women can be avoided, male anxiety soothed and delusions of phallic prowess indulged, by intimations of the rock-hard, larger-than-life male organ.”

Pornography, in other words, is a lie. It peddles falsehoods about men, women and human relationships. In the name of titillation, it seduces vulnerable, lonely men – and a small number of women – with the promise of intimacy, and delivers only a transitory masturbatory fix.

Increasingly, though, men are starting to be open about the effect pornography has had upon them. David McLeod, a marketing executive, explains the cycle. “I’m drawn to porn when I’m lonely, particularly when I’m single and sexually frustrated. But I can easily get disgusted with myself. After watching a video two or three times, I’ll throw it away and vow never to watch another again. But my resolve never lasts very long.”

He has, he says, “seen pretty much everything. But once you start going down that slope, you get very quickly jaded”.

Like many men, McLeod is torn. Quick to claim that porn has “no harmful effects”, he is also happy to acknowledge the contradictory fact that it is “deadening”. Andy Philips, a Leeds art dealer and, at 38, a father for the first time, says there have been times when he has been “a very heavy user”. His initial reaction, like that of many of the men to whom I spoke, is studiedly jokey: “I love porn.” Yet, as he grows more contemplative, he admits: “I’ve always used it secretly, never as part of a relationship. It’s always been like the other woman on the side. It’s something to do with being naughty, I guess.”

Again and again, despite now being married, he is drawn back. “You can easily get too much of it. It’s deadening, nullifying, gratuitous, unsatisfying. At one point I was single for three years and I used a lot of porn then. After a while, it made me feel worse. I’d feel disgusted with myself and have a huge purge.”

Extended exposure to pornography can have a raft of effects. By the time Nick Samuels had reached his mid-20s, it was altering his view of what he wanted from a sexual relationship. “I used to watch porn with one of my girlfriends, and I started to want to try things I’d seen in the films: anal sex, or threesomes.” Sometimes, he says, this was OK – “she was an easygoing person”. At other times, “it shocked her”.

Married for 15 years, he admits he has carried the same sexual expectations into the marital bedroom. “There’s been real friction over this; my wife simply isn’t that kind of person. And it’s only now, after all these years, that I’m beginning to move on from it. Porn is like alcoholism; it clings to you like a leech.”

Psychoanalyst Estela Welldon, author of the classic text Mother, Madonna, Whore, has treated couples for whom such scenarios spiralled out of control. “A lot of men involve their partners in the use of porn. Typically, they will say, ‘Don’t you want a better sex life?’ I have seen cases in which first the woman has been subjected to porn and then they have used their own children for pornographic purposes.”

When couples use porn together – a growing trend, if anecdotal evidence is anything to go by – there is, says Welldon, “an illusory sense that they are getting closer together. Then they film themselves having sex and feel outside themselves. This dehumanising aspect is an important part of pornography. It dehumanises the other person, the relationship, and any intimacy”.

Even when in a loving sexual relationship, men who have used porn say that, all too often, they see their partner through a kind of “pornographic filter”. This effect is summed up eloquently by US sociologist Harry Brod, in Segal’s essay Sweet Sorrows, Painful Pleasures: “There have been too many times when I have guiltily resorted to impersonal fantasy because the genuine love I felt for a woman wasn’t enough to convert feelings into performance. And in those sorry, secret moments, I have resented deeply my lifelong indoctrination into the aesthetic of the centrefold.”

Running like a watermark through all pornography use, according to Morgan at the Portman Clinic, is the desire for control. This need, he says, has its roots in early childhood. “A typical example might be a boy with fairly absent parents, either in emotional terms or in actual fact.” The boy, wishing his parents were more present – more within his control, as it were – can grow up wishing “to find something over which he can have control. Pornography fills that space”.

But the user of pornography is also psychologically on the run, Welldon adds. “People who use pornography feel dead inside, and they are trying to avoid being aware of that pain. There is a sense of liberation, which is temporary: that’s why pornography is so repetitive – you have to go back again and again.”

Lost in a world of pornographic fantasy, men can become less inclined, as well as increasingly less able, to form lasting relationships. In part, this is due to the underlying message of pornography. Ray Wyre, a specialist in sexual crime, says pornography “encourages transience, experimentation and moving between partners”.

Morgan goes further: “Pornography does damage,” he says, “because it encourages people to make their home in shallow relationships.”

Jan Woolf believes it might also prevent a relationship getting started. A former special needs teacher, she lasted only six months as a film censor in 2001. During this time, she watched hundreds of hours of hardcore videos. At the time, she was single. “If I’d been in the early stages of a relationship, it would have been very difficult, because I’d have been watching what I might have been expected to be doing, except it would never have been like that.” She left the job because the porn was starting to make her feel “depressed – I wanted my lively mind back”.

The more powerful the sense of pre-existing internal distress, the more compelling becomes the pull towards pornography. For John-Paul Day, a 50-year-old Edinburgh architect in his first “non-addictive” sexual relationship, the experience of being a small boy with a dying mother drove him to seek solace in masturbation. He says he has been “addicted” to pornography his entire adult life. “The thing about it is that, unlike real life, it is incredibly safe,” Day says. “I’m frightened of real sex, which is unscripted and unpredictable. And so I engage in pornography, which is totally under my control. But, of course, it also brings intense disappointment, precisely because it is not what I’m really searching for. It’s rather like a hungry person standing outside the window of a restaurant, thinking that they’re going to get fed.”

Day, who has attended meetings of Sex Addicts Anonymous for 12 years, says, “pornography is central to my own sex addiction in as much as sex addiction has to do with the use of fantasy as a way of escaping from reality. Even in my fantasies about ‘real’ people, I am really transforming them into pieces of walking pornography. It is not the reality of who they are that I focus on, but the fantasy I project on to them”.

Like drugs and drink, pornography – as Day has realised – is an addictive substance. Porn actor Kelly Cooke, one of the stars of Pornography: the Musical, says this applies on either side of the camera. “It got to the point where I considered having sex the way most people consider getting a hamburger. But when you try to give it up – that’s when you realise how addictive it is, both for consumers and performers. It’s a class A drug, and it’s hell coming off it.”

The cycle of addiction leads one way – towards ever harder material. Morgan believes “all pornography ends up with S&M (sadomasochism)”. The infamous Carnegie Mellon study of porn on the internet found that images of hardcore sex were in far less demand than more extreme material. Images of women engaging in acts of bestiality were hugely popular.

The mechanics of the pornographic search – craving, discovery of the “right” image, masturbation, relief – makes it, says Morgan, work like “a sort of drug, an antidepressant”.

The myth about porn, as a witness told the 1983 Minneapolis City Council public hearings on it, is that “it frees the libido and gives men an outlet for sexual expression. This is truly a myth. I have found pornography not only does not liberate men, but on the contrary is a source of bondage. Men masturbate to pornography only to become addicted to the fantasy. There is no liberation for men in pornography. (It) becomes a source of addiction, much like alcohol. There is no temporary relief. It is mood-altering. And reinforcing, ie, ‘you want more’ because ‘you got relief’. It is this reinforcing characteristic that leads men to want the experience they have in pornographic fantasy to happen in real life”.

In its most severe form, this can lead to sexual crime, though the links between the two remain controversial and much argued-over. Wyre, from his work with sex offenders, says: “It is impossible not to believe pornography plays a part in sexual violence. As we constantly confront sex offenders about their behaviour, they display a wide range of distorted views that they then use to excuse their behaviour, justify their actions, blame the victim and minimise the effect of their offending. They seek to make their own behaviour seem normal, and interpret the behaviour of the victim as consent, rather than a survival strategy. Pornography legitimises these views.”

One of the most extreme examples of this is Ted Bundy, the US serial sexual murderer executed for his crimes in January 1989. The night before his death, he explained his addiction to pornography in a radio interview: “It happened in stages, gradually . . . My experience with . . . pornography that deals on a violent level with sexuality is that, once you become addicted to it, and I look at this as a kind of addiction like other kinds of addiction, I would keep looking for more potent, more explicit, more graphic kinds of material. Like an addiction, you keep craving something which is harder, harder, something which gives you a greater sense of excitement, until you reach the point where the pornography only goes so far . . . It reaches that jumping-off point where you begin to wonder if, maybe, actually doing it will give you that which is beyond just reading about it or looking at it.”

Bundy, as damaged as he was, stopped short of blaming pornography for his actions, though it was, he believed, an intrinsic part of the picture. “I tell you that I am not blaming pornography . . . I take full responsibility for whatever I’ve done and all the things I’ve done . . . I don’t want to infer that I was some helpless kind of victim. And yet we’re talking about an influence that is the influence of violent types of media and violent pornography, which was an indispensable link in the chain . . . of events that led to behaviours, to the assaults, to the murders.” In the understated words of Wyre: “The very least pornography does is make sexism sexy.”

The average man, of course, whatever his consumption of pornography, is no Bundy. Yet for those who have become addicted, the road to a pornography-free life can be long and arduous. Si Jones advises accountability. “Make your computer accountable, let other people check what you’ve been looking at.”

And the alternative to pornography, says Morgan, isn’t always easy. “Relationships are difficult. Intimacy, having a good relationship, loving your children, involves work. Pornography is fantasy in the place of reality. But it is just that: fantasy. Pornography is not real and the only thing human beings get nourishment from is reality: real relationships. And, anyway, what do you want to say when you get to the end of your life? That you wish you’d spent more time (masturbating) on the internet? I hardly think so.”



Rationalism and Freethinking – Some Basics

Rationalism as a philosophy is defined as the use of reason and logic as the reliable basis for testing any claim of truth, seeking objective knowledge about reality, and making judgments and drawing conclusions about it. Although rationalism must ultimately rely on sense perceptions, it must couple those perceptions with logic and evidence. To be consistent with logic, the thought process of a rationalist must be free from the logical fallacies catalogued in many introductory books on logic or critical thinking. There is no place for personal bias or emotion in rationalism, although emotion and rationalism are not mutually exclusive; each has its place. More on this later.

Freethinking, which is sometimes confused with rationalism, is defined as the free forming of views about reality independent of authority or dogma, be it from a divine or human source. If we stick to the strict definitions, then freethinking is not synonymous with rationalism. One need not be strictly rational to be a freethinker. One is allowed the leeway to believe or form any opinion, not necessarily rational (essentially “think as you like”), as long as it is not influenced by existing religious, cultural or traditional dogma or authority. A postmodernist (read: intellectual anarchist) may claim to be a freethinker by this non-restrictive definition. Rationalism is much more restrictive: it enforces logic and evidence as the guiding principles for thinking and forming opinions.

So although rationalism invariably leads to freethinking, freethinking does not necessarily imply rationalism, since freethinking may include irrational views, beliefs and personal biases. I have attempted to provide my own precise definitions in a recent post (Faith Philosophy and Dogma) to help set the criteria for freethinkers/freethinking.

I must point out that I have tried to define and explain rationalism in the sense it is commonly understood today. I have not tried to approach the concept from the perspective of the history of philosophy. In the philosophical literature, rationalism has historically been used to mean a certain epistemological school. The epistemic rationalism of Descartes, Spinoza, Wolff, Leibniz et al. postulated that human knowledge is attainable a priori through intellect alone, independent of the senses. To them the true source of knowledge was innate ideas; sense perception was a poor or incomplete source of knowledge. Rationalism stood in contrast with empiricism, whose principal proponents were Locke, Hume and Berkeley. But my definition of rationalism is, in my view, more meaningful, pragmatic and consistent with contemporary scientific thinking.

Rationalism as a philosophy demands a strict mental discipline that many find hard to implement in their thoughts and actions. Many may not even be aware that they are not being strictly rational. The reason is that some mistakenly associate rationalism with certain ideals and outlooks that do not necessarily follow from it. Rationalism as a philosophy inevitably leads to the scientific method through logic and critical thinking. Therefore a rationalist cannot subscribe a priori to any ideology, political or otherwise, nor can a rationalist make a statement of truth that is not a strict proposition.

So a rationalist cannot claim to be a strict atheist, i.e., cannot assert that “God does not exist”, since God is not a logically well-defined and meaningful concept; all definitions of God in any religious context run into contradictions and logical inconsistencies. So the existence and non-existence of God are both logically meaningless to a rationalist. A rationalist can only take a noncognitivist position on the God question. For more details on this issue, please carefully review the following two articles:


Does this mean a rationalist cannot have any opinion at all about anything? Of course not. If an opinion does not contradict logic, evidence or observation, rationalism does not prevent one from forming a tentative opinion. For example, it is not against rationalism to hypothesize about all the POSSIBLE causes of a crime when definite evidence pointing to the actual cause is missing. The same can be said about theories to explain certain facts of reality. That is what science is about. Scientific speculation is just that. Theories are just possible explanations of facts and observations; before they are established, they are just scientific opinions. But the important point is that rationalist opinions, although not yet proven, should nevertheless be consistent with logic and observations (i.e., not contradict them) and should not use ill-defined terms.

Rationalism cannot be a basis for subscribing to a political party based on any dogma, or for expressing an a priori affiliation with or support for a non-dogma-based political party. One can certainly do so as a human, out of emotional need or bias, but not DUE TO rationalism. For a rationalist who chooses to be guided by pure rationalism rather than emotion, support for a non-dogma-based political party should be based on policies, performance, efficiency and other objective criteria, and thus need not be static, but can change with an ongoing assessment of whether those criteria are being met. There is no such concept as party loyalty in a rationalist's vocabulary. Some intellectuals believe that a certain stand in an ideological, social or political controversy is required by rationalism, e.g., leftist ideology, the pro-choice stand on abortion, or the nurturist stand in the nature-nurture debate, to name a few. Many of them commit the fallacy of appeal to emotion (invoking patriotism/nationalism) to justify an uncritical adoption of one side of a political issue.

To a rationalist, an a priori biased stand is not consistent with rationalism. A rationalist should be prepared to accept whichever viewpoint scientific and logical reasoning leads to, even if that goes against the popular trend of thinking. Rationalism is ruthless; it does not pander to one's emotional needs or wishes, or care about political correctness.

In personal life, that means a rationalist has to acknowledge and be critical of unpleasant facts, if necessary, about his or her near and dear ones, if the evidence so suggests. Being able to separate facts from personal biases is an essential hallmark of rationalism. By the same token, a rationalist has to acknowledge, and criticize if need be, the shortcomings of the race, religion or language he or she belongs to, in a detached way, free from personal bias, as well as acknowledge the superiority of another race or religion in a certain aspect, if objective evidence suggests so. Rationalism also does not imply an a priori assumption that all bads or wrongs are equal just because political correctness says so. Rationalism demands doing the required homework to quantify and recognize shades of right and wrong in morality, and shades of good and bad in attributes, by objective criteria when applicable. This requires intellectual courage and integrity, as it can potentially incur the scorn of the majority, for whom the priority is loyalty, pride, patriotism and the like. But rationalism does not recognize such mental constructs or set such priorities. It cares only for logic and evidence.

Rationalism does not allow taking a stand just because it is politically correct or popular. Many intellectuals associate the terms liberal, progressive, etc. with rationalism/freethinking. But liberal, progressive, etc. are usually understood and judged by which stand one takes on certain issues, e.g., pro-choice on abortion, leftist ideology in politics, the nurturist stand in the nature/nurture debate, or a puritanical belief that all bads are equal (i.e., cultural and moral relativism). But rationalism does not require one to adopt such positions, and on certain issues scientific evidence and logic may in fact lead to the opposite stand. I will not dwell at length on the specifics of the scientific evidence in all such cases, as that is a topic of its own and I am only interested in the general aspects of rationalism in this essay.

A small example may help to illustrate the rationalistic approach to an issue. IF we adopt the axiom that ending a “life” is morally wrong, THEN the act of abortion is by definition morally wrong, since biology tells us that a fetus has a life of its own. There is no value judgment involved; that conclusion is derived by purely logical inference. (Notice the IF..THEN.. construct.) Whether we should adopt “ending life is morally wrong” as an axiom is of course not dictated by rationalism. But in fact we can derive that axiom from rationalism if we adopt another axiom as more fundamental, for example the axiom that we should do whatever is needed to increase the odds of the survival of the human species. In that case, rational thinking using evolutionary biology tells us that IF we adopt the precept “ending life is morally wrong”, THEN it increases the odds of the survival of the human species (again notice the IF..THEN.. construct). Whether we should consider “increasing the odds of the survival of the human species” a moral imperative is of course beyond rationalism. It is an intuitive moral axiom.
This example clearly shows that rationalism does have a role in formulating moral precepts, barring the most primitive moral axioms. Even humanism is not strictly derived from rationalism. Humanism follows from rationalism if the postulate “we should put priority on the welfare of the maximum number of humans irrespective of race, color, creed, ethnicity, etc.” is added to it. It must be noted that all religions and dogmas claim human welfare as their goal as well. What differentiates their view of humanism from rational humanism is that for them, that goal is claimed to be achievable only through the implementation of their dogma. So dogma comes first for them. Not only that, the priority for welfare in most religions and dogmas is reserved for their followers. Rational humanism does not make that distinction. Once humanism is arrived at through rationalism, the notions of democracy and secularism follow as corollaries.

Rationalism + Human good → Humanism → Democracy → Secularism
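The IF..THEN derivations above can be sketched as a toy forward-chaining program; this is only a hypothetical illustration of how precepts follow mechanically once axioms and IF..THEN rules are granted (the axiom and rule strings here are my own labels, not a formal moral system):

```python
def derive(axioms, rules):
    """Return the closure of `axioms` under IF..THEN `rules`.

    `axioms` is a set of accepted statements; `rules` is a list of
    (premise, conclusion) pairs read as "IF premise THEN conclusion".
    """
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in known and conclusion not in known:
                known.add(conclusion)  # the conclusion now follows
                changed = True
    return known

# Illustrative axiom and rule, echoing the example in the text.
axioms = {"ending a life is morally wrong"}
rules = [
    # IF ending a life is wrong THEN abortion is wrong
    # (since biology tells us a fetus has a life of its own).
    ("ending a life is morally wrong", "abortion is morally wrong"),
]
print(sorted(derive(axioms, rules)))
```

The point of the sketch is the one made in the text: the inference step is purely logical, while the choice of the starting axioms lies outside rationalism.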

Another point many may have wondered about: how can we decide who is a rationalist? After all, followers of every religion or dogma claim they believe in logic and reason. Doesn't everyone have their own logic, and every religion its own? So how can one not be rational? This is a tricky question that can lead to a slippery slope if not clarified beforehand. Cultural and moral relativists and postmodernists exploit that slippery slope to argue that all views are equal, that nothing is more valid than anything else, and so on. The logic and evidence referred to in rationalism are shared by humanity with an overwhelming consensus crossing race, religion and affiliation. In other words, they are universal. Modern logic finds much in common with the logic of the early Greek, Hindu and Buddhist philosophers, as well as the early Muslim rationalists (the Mutazilites) during the time of the House of Wisdom in Baghdad. This logic has been perfected and improved by later philosophers, like Locke, Hume and Kant, and by many mathematicians and logicians of the twentieth century. This is the logic taught with taxpayers' funding in the public schools of most nations of the world, as well as in secular private schools. This is also the logic that has WORKED. It has been the basis of the scientific method, which has been so successful, has changed the world, and has made predictions about nature that were tested and verified to be true. It is also leading humanity toward continued advancement. It is no surprise that this is the logic people have staked their money on teaching and learning. There is a set of unambiguous rules for valid logical reasoning, both informal and formal, taught in elementary logic classes, that can act as a guide to resolve dilemmas, ambiguities, paradoxes, contradictions, disputes and the like. It is also important to note that claims must be backed up not just by logic but by evidence and objectivity as well, both of which are lacking in the claims of religious and other dogmas.
Contrast that with the “logic” that person “A” uses to rationalize his own belief, or the “logic” of religion “X” used to rationalize that religion. Such “logic” is not shared universally, nor has it demonstrated its utility by producing any predictions, inventions or innovations, or by leading to the discovery of any fundamental truth about nature or reality. A “logic” invented as a dedicated ploy to justify one dogma or belief is no logic at all; besides, such logic has no universal appeal.

It should suffice to note that a dogma by definition is not based on logic and evidence, so trying to justify a dogma by logic is a fallacy to begin with, and thus contrary to rationalism. It is quite intriguing to see vocal champions of religious dogmas even among some Ph.D.s from reputable universities, who are not ashamed to claim that their belief is supported by logic and evidence!

Rationalism also implies skepticism. Skepticism requires one to doubt any claim to truth unless it is proven by evidence and logic, and to suspend belief or judgment in the absence thereof; this clearly follows from rationalism. In personal life, such skepticism forces one to refrain from forming judgments or drawing hasty conclusions about a person or any claim of truth. In the absence of evidence or logic, a skeptic should stay in a “do nothing”, i.e., neutral mode. This neutral mode is a state most minds do not recognize, and it takes some effort to become at ease with it. Most feel tempted to rush to an opinion one way or the other, even in the absence of any supporting data. Only if and when the evidence or logic is available can a skeptic form an opinion, one dictated by the evidence and logic, not by wishful desires or biases.

A rationalist has to have the intellectual courage to acknowledge unpleasant truths. A rationalist never gains, materially or otherwise, by being rational. It is simply a philosophy that rationalists find intuitively appealing.

Let me now turn to some mistaken notions about rationalism that are quite common. Many think that rationalism means an arrogant claim to infallibility, that rationalism never admits of being wrong, and that it denies the possibility that logic itself may be wrong! All of these stem from a lack of careful reflection. First, that one could be wrong is a trivial and self-evident fact. It is like saying that one cannot be sure of reaching one's destination, since the flight may crash. ACKNOWLEDGING the limits and uncertainties in one's knowledge is a matter of humility, and humility is a personality trait.

Rationalism is a philosophy, not a trait. Rationalism neither prevents nor mandates possessing a certain personality trait. Second, to say that “logic” itself may be wrong is to commit a fallacy, because judging something as “wrong” needs a logic of its own. One cannot use logic to judge that same logic as wrong! We have assumed that there exists only one system of logic that works best; until we find a better system, it is a fallacy to judge that logic as wrong. But saying that the “logic” is not wrong does not mean that one cannot make mistakes. Mistakes are due to an individual's limits or flaws in applying logic, not to logic itself. And there is no better way to overcome those limits than logic itself. In any case, the humility of admitting the self-evident fact of fallibility is built into the scientific method.

The scientific method, which is derived from rationalism, is based on the premise that there is no absolute or final truth, and that any conclusion about reality is always tentative, subject to continual revision in light of further evidence. But one must not conclude that just because in a certain instance someone predicted the truth correctly by non-rational means (intuition, a guess), intuition is therefore superior to rationalism as a means of seeking truth. For example, if a coin is tossed, an intuitionist may intuitively guess that the coin will come up heads. A rationalist cannot predict the outcome on the basis of logic and science (it is an incredibly complex calculation). If the coin does fall heads up, does that prove intuition superior to rationalism? Of course not.

Let me now clarify what rationalism is not, or cannot do. It is mistaken to believe that rationalism can solve or prevent all problems in life. It cannot, because the truth in many situations in life is not known in advance, as would be needed to make the right decision. Rationalism is limited by the knowledge or truth needed to make an informed decision to solve or prevent a problem. In an indeterministic situation, intuitive guesses and judgments are inevitable, and the intuition of a rational person is not guaranteed to be right. So in those situations in life where there are unknowns and uncertainties, intuitive guesswork cannot be avoided. Rationalism may offer some guidelines for making the best guesses, but it cannot guarantee success. For example, rationalism cannot guarantee one will make the right choice in marriage or a relationship. Rationalism cannot prevent one from making mistakes in life. Neither gamble nor risk in life can be totally averted through rationalism. More generally, from a utilitarian point of view, rationalism is no guarantee of material success in individual life.
Rationalism is a principle based on logic and evidence. In an imperfect world, that is not always the sure route to material success, just as honesty is not. But the value of rationalism goes beyond personal gains or interests. Its value lies in the collective improvement of the quality of human life through the rationalistic approach. Consider the cost human society has paid, and is paying, in dollars and man-hours for believing in dogmas and faiths that have no logic or evidence as their basis. How much time and how many resources are spent on religious rituals; how much suffering and persecution has the enforcement of some of the cruelest religious dogmas brought to many decent humans? If the majority of a society adopted rationalism as their personal philosophy, such wastage and social evils could be abolished or minimized, and society would prosper faster. A common view is that morality is beyond rationalism. I think that is mistaken. Although the moral axioms at the bottom of a moral system may have to be assumed arbitrarily, based on intuition, once the axioms are accepted, further moral precepts based on them can certainly be rationally analyzed or developed. Rationalism is a product of the human mind; so is morality. There is no a priori reason for them to be unconnected. In the ultimate analysis, since it is the laws of nature that created the human brain and thus rationalism, it should in principle be possible to formulate a moral system based on the same laws of nature via rationalism. It may have to be an evolutionary process.

It must also be emphasized that not all human brains are equally capable of, or programmed for, rational thinking. There is no guarantee that rationalism can be inculcated by preaching or training. The human brain, being inherently complex, has varying degrees of potential for each type of thinking. It is possible that certain brains are more susceptible to cues that trigger rational thinking, while others are impervious to any such cues. There are some Ph.D.s from renowned universities who, even after being exposed to some of the finest rationalistic arguments, writings and philosophical essays, continue to defend religious dogmas, sometimes even using the very same rationalistic arguments and language they read! They are impervious to any rational cues at all. The majority of humans, however, are susceptible to cues of either dogmatist preaching or rationalist thinking. They are up for grabs, so to speak: the fence-sitters, the swing voters in the rationalism vs. dogmatism election, metaphorically speaking. It does not make much sense to say “thou shalt be rational”. The best that those who value and cherish rationalism can do is to target this majority, present them with examples of rational arguments to refute or critique issues, and debunk the claims of mystics, godmen and other charlatans by logical means and evidence. This can be done through electronic and print media, or preferably, where possible, through practical workshops, as has been done in many rural outbacks of India. I also strongly suggest that rationalism be included in high school curricula. While it may be unrealistic to expect this in the current environment of many countries where religious sentiments run high, especially if rationalism is pitted against the popular religion, it may be acceptable to include rationalism as a general philosophy emphasizing reason and evidence over blind faith and superstition.
Leading educators and academics need to take the lead in lobbying the relevant authorities for such curricular changes.

Next, to many, rationalism means robbing one of the sense of beauty, romanticism, love and compassion, i.e., leaving one heartless and devoid of emotion. This is a big myth. Rationalism stresses separating the head from the heart, not REPLACING the heart with the head. Certain things are intrinsically rooted in instinct, and thus beyond rationalism. Love, fear, altruism, conscience (the sense of right and wrong): these are biologically rooted instincts. Instincts are not controllable by, or influenced by, rationalism. Instincts are more or less hardwired in our genes and manifested through the workings of the limbic system of our brain, whereas rationalism results from the thought processes made possible by the evolution of the cerebral cortex. Humans possess both of these brain components. So a rational person can feel an instinctive fear in certain environments, or feel passionate love for a certain person. What differentiates a rational person from a less rational or more emotional person is the synaptic connectivity in the cerebral cortex, not in the limbic system. So when it comes to primal instincts controlled by the limbic system, for example self-preservation, the difference disappears. In a life-threatening situation, control is automatically taken over by the limbic system from the cerebral cortex, the biological instinct of aggression may kick in, and at that point whatever one does is not subject to rationalism anymore. Taste is also instinctive; rationalism has nothing to do with it.

Although rationalism does not decide or control our tastes and emotions, it can EXPLAIN (or at least try to, through the scientific method) the basis of such emotions and likes or dislikes. Rationalism cannot affect or control love, but it can certainly help explain the biological origin (in both evolutionary and biochemical terms) of love, morality and other human values and attributes. The same can be said about all other instincts and emotions. A good example is the book Why We Feel: The Science of Human Emotions by Victor Johnston. So being rational does not by any means deprive one of those instincts, tastes and emotions, because they are an integral part of being human, rational or not. Rationalism enables humans to understand and explain the underlying basis of emotions; it does not rob us of them. A neurologist does not lose his mind (brain) in trying to understand the workings of the brain, nor does an evolutionary biologist cease to be a loving mate or parent in trying to explain and understand the biological roots of love, simply because we have no control over our biological instincts, whether we are rational or not. Rationalism can, however, help control the impulses that emotions may lead to. In biological language, although the generation of emotions in the limbic system itself cannot be controlled, the impulsive ACTS (e.g., aggression) that those emotions often lead to can be controlled by the feedback of the cerebral cortex over the limbic system.
Another “reason” many view rationalism with cynical eyes is the belief that humanitarian acts should come from an emotional impulse, not from a rationalization process that does not factor compassion into the decision. At first look it may seem a noble view, putting heart before head. But as I pointed out, compassion and humanitarian acts are derived from altruism, a biologically rooted instinct, so rationalism cannot affect it. Rationalism can, however, manage the altruistic instinct in a way that ensures its optimum utilization. Impulsive altruistic acts do not always lead to the best results; rationalism can help channel our altruistic instincts in the most optimal manner. At a very personal level, even a rationalist can (and often does) act out of impulse and perform an act of humanitarianism or compassion, since doing so is not contradicted by logic. Compassion should not REPLACE rationalism, but must be accompanied by it. A good example is a judge granting leniency to the convicted on compassionate grounds; the compassion follows only after a thorough rational analysis of the crimes committed. Rationalism is truly applicable to forming opinions and judgments, learning the truth and solving problems, but not to instincts or impulses that are non-judgmental, non-intrusive and innocuous.

Lastly, I would be remiss if I did not point out the challenge that rationalism faces from the postmodernist thinking that seems to be gaining ground in recent years. Postmodernists challenge that golden product of rationalism, the scientific method, by insisting that it is just one among many EQUALLY valid routes to truth and deserves no special privileged status. This is nothing but intellectual anarchism.
Postmodernists are armchair social scientists who have fallen far behind modern scientific paradigms and feel threatened by the scientific approach the social sciences are adopting (or rather, being forced to adopt). They are watching with frustration as one social discipline after another loses ground to the exact sciences. Unable to face up to the challenge of the sciences, some of them have chosen the treacherous art of deconstruction and misapplied it to the scientific method. So rationalism now faces challenges on two fronts: religious dogma (which Europeans successfully faced during the Renaissance), and postmodernism, a new challenge that must be faced. The need to emphasize rationalism is thus greater now than ever. I hope my fellow Mukto-Monas will share my passion for rationalism.

Aparthib Zaman,