While Chance’s current research was motivated by the events of 9/11, Langleben’s interest in deception was sparked by
an article he read several years ago. It stated that children with attention deficit disorder have trouble lying.

“I thought, ‘Now what does this mean?’” he recalls. “I’ve had some patients with ADD, and they had no trouble telling me all kinds of stories about why they didn’t show up for an appointment.” What the article meant, he realized, was that children with ADD have trouble lying successfully. “They tend to just blurt [the truth] out.” That was consistent with the prevailing hypothesis that persons with ADD have trouble with response inhibition. It also got Langleben thinking about the connection between response inhibition and deception. When Langleben moved to Penn to do substance-abuse research, subjects frequently lied to him about their addictions. They would deny their substance abuse even when this denial endangered their lives. His interest in deception renewed, Langleben came up with the idea of testing a model of it with fMRI.

The areas of the brain activated during the card game he tested fit nicely into a neuroscience framework, he says. The anterior cingulate’s job is to select a response when you have more than one choice.

Langleben points to a group of paintings on the wall. “Say I’m asking which one of these pictures you want to take home. You know I’d like you to take that one, but you like this other one better. The anterior cingulate is going to get busy there, because you know it is better for you to do something to please me [than tell the truth].” The left prefrontal cortex, in turn, tells the motor cortex that’s in charge of the hand to press the button in the MRI experiment to answer yes or no. In a lie, “it requires more work to redirect the thumb to some other place.” Despite the neat fit of this model, Langleben bets that different areas would activate during different kinds of deception. “This is not the lie center.”

He continues to fine-tune the study, which in its latest version gives the subject a choice of which cards to hide and separates the process of giving instructions from the testing itself. “The subject who’s doing the deception has to believe the deception is not known to the target.” To have two different people instructing and testing also helps remove an appearance of “endorsement” by the investigator, which could throw off the experiment.

Though pure research is his first interest, Langleben says this technology could lead to a lie detector, as long as it’s applied judiciously. According to Langleben, defendants would have to consent to fMRI testing; otherwise it wouldn’t work. “It only takes moving around the head, and the whole thing is [thrown] off.”

“If Daniel Langleben’s technology ever works, then we have an enormous ethical issue,” says Paul Wolpe: “Where is it appropriately used? If I say I’m willing to get in the scanner and show you I’m telling the truth, that’s one thing. But can the court tell me to get in the scanner? Is that self-incrimination?” It was easier for ethicists to cope with the old polygraph machines, he suggests, since their very unreliability placed them beyond the ethical pale.


Pills Not Prisons

“Is a world with more pharmacology and less prison better or worse than a world with more prison but less pharmacology?” Dr. Lawrence Sherman, the Albert M. Greenfield Professor of Human Relations and professor of sociology, posed that question in 2002 when he addressed the American Society of Criminology. Sherman, who directs the Fels Center of Government and the Jerry Lee Center of Criminology [“A Passion for Evidence,” March/April 2000], says that treatment for mental illness as an alternative to prison could be one part of the prescription for an “emotionally intelligent justice system.”

The combination of criminal justice and psychopharmaceuticals—or “neurocorrectors,” as Martha Farah calls them—alarms some ethicists. “Using chemicals to suppress ‘aggression’ or sexual desire is a threat to fundamental human nature and autonomy in a way that incarceration is not,” Wolpe says. “Why not surgery? How do you insure compliance? Do doctors then become penal workers, enforcing pharmaceutical punishments? I doubt that state medical boards would allow that. It seems there are many issues here.”

Farah writes that medicating criminals makes us uneasy in ways that sentencing someone to take anger-management classes does not. “In anger-management class, a person is free to think, ‘This is stupid. No way am I going to use these methods.’ In contrast, the mechanism by which Prozac curbs impulsive violence cannot be accepted or resisted in the same way.”

But for Sherman the potential for cultivating “a range of pharmaceutical responses to serious crime” is worth exploring. “If selective serotonin reuptake inhibitor pills reduce violence 75 percent among depressed people, as tests so far suggest, then perhaps the next research question is what are the costs of producing that benefit—in side effects to people taking the pills, or in loss of other personality functions?”

Great strides could be made by a research partnership of ethicists, biopsychologists, and criminologists, he says—provided there is no “major disaster.” Noting the lawsuits against pill manufacturers brought by the survivors of depressed patients who committed suicide while on medication, Sherman says: “What we can’t prevent is a public controversy that blames the drug. But what we can do is large field tests over a long period to be very sure that the drug produces less murder rather than more.” At stake is not just pharmacology in the abstract, he adds, “but the lives of people in prison—two million—every day getting raped or murdered because nobody believes that we can provide more effective ways to prevent crime.”

To argue that convicted offenders aren’t capable of giving informed consent is to “take away their rights just as much as when you put them in prison,” Sherman says. The victim’s wishes should also be taken into consideration. Research shows that “when the victim is given a choice between pure retribution and opportunity to meet with the offender and to find some way to turn the offender’s life around [perhaps through medication], many victims—perhaps the majority across all crimes—will choose to meet with the offender.

“The crude picture of ‘pop a pill, solve a problem’ overstates the reliance on pharmacology that is likely to come out of this kind of research and development,” he adds. It is far more likely that pharmacology will be one part of a “combination of victim-centered responses to crime [that] fosters increased social support for offenders remaining law abiding.”

A portrait of August Vollmer, the founder of the American Society of Criminology in 1941, hangs above the fireplace in Sherman’s office. Vollmer believed that one day science—“especially interventions with violent personalities”—would be able to prevent crime, Sherman says. “Before the pharmacological revolution, the prevailing view in science was that Vollmer was writing science fiction. But just as Jules Verne’s rockets came true, perhaps a humane way of controlling violent behavior by means other than prison will also come true.”


Getting to Know You,
Getting to Know All About You

Despite an explosion in fMRI-based research, there are still limits on what a brain scan can tell us about our next-door neighbor or any individual.

“It’s not the case right now in 2003 that you can stick somebody in a scanner and say, ‘Oh yes, this person definitely suffers from bipolar disorder, attention deficit disorder, or has a history of drug abuse or what have you,’” Farah says. Most of the studies are only able to glean average differences among groups of people.

“But even now, scans can be somewhat informative, some of the time,” she says. “If an individual happens to be at one end or another of a continuum, you might well be able to infer something about the person from their brain scan.” One example is the work of Turhan Canli of SUNY-Stony Brook, which has shown a correlation between patterns of brain activation during certain tasks and personality traits like extroversion and neuroticism. “A lot of his scans look kind of middling and ambiguous, but certain scans just scream ‘extrovert’ or ‘introvert’ and, he tells me, those scans invariably predict the person’s personality correctly. So while the state of the art is not yet up to classifying individuals in general, it can from time to time reveal something of an individual’s psychology.”

Where the technology will take us next is hard to predict. “All I can say about the progress of cognitive neuroscience is that, so far, it has surprised us with its speed,” Farah says. Just a decade ago, “the idea of personality showing up in brain images would probably have been ridiculed as being naïve.” MRI has steadily evolved and improved, and scientists continue to learn more about where in the brain—and in what situations—to look for differences. “I see no reason that all kinds of personal psychological characteristics will not one day be reliably measured with functional neuroimaging.”


From time to time Caplan says he gets a letter from someone convinced that a machine has been implanted in his or her brain for mind-control purposes. Perhaps they heard about the robo-rat project at SUNY Downstate, in which investigators steered lab rats along routes of their choosing by remotely stimulating electrodes implanted in the animals’ brains. This and earlier animal experiments have led some to fear that it’s possible to control people’s behavior through such technology.

For the record, says Kenneth Foster, “It’s not happening now—and it may be possible conceptually, but it’s not likely to be much of an issue in the future.” Though it’s important to consider all the possible implications of this technology, he says, “The Brave New World is [not] already here.” All of which raises the point that it’s possible to worry too much about progress in the neurosciences.

“All of the journalists I’ve talked to want to hear about the dangers, the dehumanization—and the creepier the better,” says Farah. “I think [neuroethicists] would serve society better if we reframed the issues. Instead of ‘What must we guard against?’ we should ask, ‘What can this new knowledge do for us, and how can we deploy it to bring the most benefit to humanity and the least risk?’”


© 2004 The Pennsylvania Gazette
Last modified 01/19/04

Who’s Minding the Brain?
By Susan Frith
Illustration by Jon Sarkin

