Truth and consequences

Q&A / With colleagues across Penn, this psychiatry professor has found a way to use a brain scan to tell truth from lie.

Penn psychiatry professor Daniel Langleben

“You always have to ask yourself: Would you want to end up in your own machine?”


--Penn psychiatry professor Daniel Langleben

Daniel Langleben’s research, which indicates that brain scan technology can be used to tell if a person is lying, has been picked up by news outlets worldwide. Some have even hyped Langleben’s findings as a new-age crime-fighting tool—maybe even a way to stop future acts of terrorism.

But even though Langleben admits there are exciting real-world applications for the work—work he says would not have been possible without contributions from Penn colleagues such as Ruben Gur and Anna Rose Childress, among others—he is just as excited by the basic neuroscientific findings that the work is generating.

Langleben, an assistant professor of psychiatry, did not consciously set out to invent a lie-detector test. In fact, the road that led him here to Penn—and his most recent finding that functional magnetic resonance imaging (fMRI) can discriminate truth from lie with around 90 percent accuracy—is a winding one that started, nearly a decade ago, on the West Coast.


Penn psychiatry professor Daniel Langleben and colleagues

Q. Your recent research on deception has its roots with your interest in Attention Deficit/Hyperactivity Disorder, right?

A. I started working on ADHD in 1997 at Stanford. As part of the work, I was reviewing literature on ADHD and seeing some children with ADHD. In late 1997 or early 1998, people started coming up with newer hypotheses for ADHD. Up until then, it was, ‘Oh, they’re hyper kids.’ But then people started wondering, what’s really wrong? What’s the common denominator? The hypothesis then and even now was linked to deficient response inhibition—a deficient ability to inhibit responses to incoming stimuli. Suddenly ADHD became one of the most interesting neuroscience constructs … because it became an entire idea of how we all operate. It can be something very important.

Q. How did researchers test this idea?

A. People started coming up with tasks. ... The classic task is called Go/No Go. You have a screen in front of you, and things pop up on the screen—a letter. And you are told, when that letter shows up, you press the button. Now that’s an easy test. You will probably make a few errors, though, and you can already see that some people will make more errors than others, just because they’re not paying attention, and that’s part of ADHD. But we’re focusing on response inhibition, and you need to do something to discriminate between people who stop their responses well and those who don’t. So you tell them, press the button for all letters except for the letter A. … The children with ADHD are expected to make some errors.
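The task described above has a simple structure that can be sketched in a few lines. This is a toy simulation, not code from the study: the letters, error rates, and function names here are all illustrative assumptions. It shows the two error types the task separates—commission errors (pressing on the forbidden letter, a failure of response inhibition) and omission errors (missing a go letter, a lapse of attention).

```python
import random

def run_go_no_go(n_trials=20, no_go_letter="A", seed=0):
    """Simulate scoring of a Go/No-Go task (illustrative only).

    Participants press for every letter EXCEPT the no-go letter.
    A press on the no-go letter is a commission error (failed
    response inhibition); a missed press on a go letter is an
    omission error (lapsed attention).
    """
    rng = random.Random(seed)
    letters = [rng.choice("ABCDE") for _ in range(n_trials)]

    commission = omission = 0
    for letter in letters:
        if letter == no_go_letter:
            # Hypothetical subject fails to inhibit 50% of the time.
            pressed = rng.random() < 0.5
            if pressed:
                commission += 1
        else:
            # Hypothetical subject misses 10% of go trials.
            pressed = rng.random() < 0.9
            if not pressed:
                omission += 1
    return commission, omission
```

Under the response-inhibition hypothesis, a subject with ADHD would show an elevated commission-error count relative to controls, which is what makes the task a usable probe.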

Q. How does this link up to what you’re doing now?

A. In 1998, I was looking for papers on ADHD and found this paper—a tangential paper, really. But it was the first time where deception and response inhibition were put in the same article. [The author] was talking about how the ability to deceive is another example of behavioral control, which was a pretty novel thing to say. I looked at this and said, ‘Oh, that’s interesting. What about deception and [brain] imaging?’ She was not thinking about imaging, because she was not doing it, but I was. We had already put the Go/No Go test in the scanner.

So now, I was thinking ADHD and deception and the scanner—all these pieces in the same box. So I did some checking on the web to see if anyone else was doing this, with imaging, and there was nothing. There were absolutely zero papers. I had the idea for about a year, trying to figure out what I was going to do with it.

Q. You had to find a way to test it, right?

A. The only missing piece was how to operationalize deception. It was around this point that I had moved from Stanford to Penn. What I found from the deception literature was that there is the polygraph, which is ... a system of measuring physiological responses combined with a specific way of asking questions, which is called a control question test, or CQT. But there is also another way of asking questions. It’s called GKT and it stands for “guilty knowledge test,” but that’s a misnomer. It has nothing to do with guilt. It’s more about you having been exposed to an item before, or not. It’s really a prior knowledge test. … From that, you can build information from the brain.
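The logic of the GKT as Langleben describes it—prior exposure, not guilt—can be sketched as a comparison between a probe item and irrelevant items. This is a toy illustration under assumed names and numbers, not the statistical method used in the papers: in the fMRI setting the "response" would be an estimate of brain activity, but here it is just a number.

```python
def gkt_score(responses, probe_item):
    """Toy Guilty Knowledge Test comparison (illustrative only).

    responses: dict mapping each presented item to a measured
    response magnitude. The probe is an item that only someone
    with prior exposure would recognize; the rest are irrelevant
    foils. A probe response that stands well above the average
    foil response suggests prior knowledge of the item.
    """
    irrelevant = [v for item, v in responses.items() if item != probe_item]
    baseline = sum(irrelevant) / len(irrelevant)
    # Positive score: probe response exceeds the foil baseline.
    return responses[probe_item] - baseline
```

For example, `gkt_score({"red": 1.0, "blue": 1.1, "green": 3.2}, "green")` compares the probe "green" against a foil baseline of 1.05, yielding a markedly positive score.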

Q. So your idea was to use GKT, in combination with the brain scan, to tell if someone was lying. What happened?

A. We made a little task that was very similar to GKT, and lo and behold, when you look at it, it looks almost just like Go/No Go. … It was very simple, really. This was not rocket science. It was just putting together things that already existed. There were a few other groups that were doing it at the time, but we managed to kind of beat them to it. I think we were the first to present it at a national conference, and that’s what the news outlets took to.

Q. That initial result led to even more work, right? Earlier this fall, you had a couple more papers come out along with some Penn colleagues that generated even more press attention.

A. My interest, actually, was not really in lie detection. My interest was in the phenomenon itself. It’s an interesting neuroscience phenomenon. It’s related to ADHD and response inhibition.

But then we were asked, “Can you really use this research for lie detection? What is between what you’ve already done and lie detection?” I said what’s between it is that we have to be able to do it in a single [person]. And so we had to bring in the heavy guns to do that.

[Recently] we got the results. We got two papers out of it, and they were published in September, one day apart. We basically showed that you can do [the test] in single subjects, you can do it on a trial-by-trial basis, and you can do it with different statistical approaches.

Q. Your results show that fMRI is accurate for detecting lying, under those circumstances, around 90 percent of the time. So what’s the possible application?

A. We have an ongoing project that is increasing the stakes, and increasing the emotional impact and making it even more robust, and applying it especially to the investigation of organizations. … Any kind of crime, including terrorism, happens in the brain. Terrorism and crime happen here first, that’s undeniable. And if we can keep it there, that would be nice. It would be good I think even for the perpetrator of the crime to have not done it, or in some way not to bring it to the catastrophic proportions of a mature plan. More than that, the investigative techniques available for people today are terrible. People are either resorting to torture, which is just out of the question, or resorting to legal questions that don’t bring them [answers] fast enough. Forensic neuroscience is going to happen. It’s already being proposed, but it’s like people are reluctant to touch it.

Q. Why is that?

A. They say it’s mind-reading, or intrusion, but it’s not really. If you don’t want to cooperate with fMRI, you won’t have it. ... And we are also very carefully watching the ethical implications of this. We’re aware of them. It’s easy to just pooh-pooh them and say, ‘Well this is too important, too urgent, let’s do it.’ Not always. And it can come back to bite us. You always have to ask yourself: Would you want to end up in your own machine? Because Dr. Guillotin did.


Originally published on November 3, 2005