(This article appeared in the 7 December 2006 issue of Nature.)
Can brain scans of a racist, liar or psychopath accurately tell whether that person will persecute, fib or kill? No, say experts in the ethics of neuroscience, who are increasingly concerned that such images will be used to make dangerous legal or social judgements about people’s behaviour. They say it is time for scientists, lawyers and philosophers to speak up about the limitations of such techniques.
“Lawyers want to know ‘Can I put somebody on the scanner and tell if they’re racist?’” says Elizabeth Phelps, a psychologist and neuroscientist at New York University who has studied the brain’s response to race. “We as a group of scientists have to be able to say that we can’t make that distinction.” Phelps spoke at a panel discussion on the emerging field of neuroethics held in New York last week.
Neuroscientists increasingly use technologies such as functional magnetic resonance imaging to see how blood flow in the brain changes when we see pictures, recall memories or make decisions. But these images are prompting concerns about how they might be over-interpreted or misused (see Nature 435, 254–255; 2005).
Outside the lab, neuroimaging is being touted as a way to detect lies (see Nature 437, 457; 2005) or to predict what shoppers might buy. There have been suggestions that brain imaging could be used to screen police officers for race bias by showing them faces of particular ethnicities.
But most scientists say that studies of behavioural or physical responses — for example, a person’s reaction to people of different races in real life — should trump imaging every time. That’s because interpreting brain scans, and correlating them with actions, is inaccurate at best. All we can really gain from such studies is a more nuanced understanding of behaviour, says Phelps.
The persuasiveness of brain scans has already drawn them into the courtroom. In a landmark US Supreme Court case decided in March 2005 (Roper v. Simmons), several leading scientific groups, including the American Medical Association, the American Psychiatric Association and the National Mental Health Association, filed briefs to support the premise that teenagers are less rational than adults. The evidence included a brain-imaging study showing that the prefrontal cortex, which governs impulse control and reasoning, develops late in adolescence (see Nature 442, 865–867; 2006), and could explain some irrational aspects of teenage behaviour.
Many groups thought this study could help persuade the court to rule against the death penalty. And although the court did rule against the death penalty for those younger than 18, it chose not to cite the brain-imaging study, relying instead on behavioural studies showing that adolescents are more impulsive, more vulnerable to peer pressure and more affected by stress.
Stephen Morse, a professor of law and psychiatry at the University of Pennsylvania, Philadelphia, thinks it was a wise decision. Although the imaging study helped to explain why a particular group might have different behaviour, the behaviour itself is more important than the changes seen in the brain, he says. Citing the study would have given it too much credibility, he adds, and opened the door for further claims that imaging predicts behaviour. “The legal and moral claims being made [from imaging studies involving very few people] are far too extensive.”
Morse is a founding member of the Neuroethics Society, set up earlier this year by a group of lawyers, philosophers and scientists to address issues raised by the use of brain scans and other future applications of neuroscience (see Nature 441, 907; 2006). Many neuroscientists are concerned about inappropriate applications of their research, but they rarely come out and say so. Scientists should speak up, but it will also take lawyers and sociologists to lay out the concerns, says Morse. “We need scientists to say what they know and what they don’t know,” he says. “But the implications are not a science question, they are a moral question, a social question, a legal question.”