In theory, a markedly improved method of lie detection could have as profound an impact as DNA evidence.

The most egregious liar I ever knew was someone I never suspected until the day that, suddenly and irrevocably, I did. Twelve years ago, a young man named Stephen Glass began writing for The New Republic, where I was an editor. He quickly established himself as someone who was always onto an amusingly outlandish story—like the time he met some Young Republican types at a convention, gathered them around a hotel-room minibar, then, with guileless ferocity, captured their boorishness in print. I liked Steve; most of us who worked with him did. A baby-faced guy from suburban Chicago, he padded around the office in his socks. Before going on an errand, Steve would ask if I wanted a muffin or a sandwich; he always noticed a new scarf or a clever turn of phrase, and asked after a colleague’s baby or spouse. When he met with editors to talk about his latest reporting triumph, he was self-effacing and sincere. He’d look us in the eye, wait for us to press him for details, and then, without fidgeting or mumbling, supply them.

One day, the magazine published an article by Steve about a teen-ager so diabolically gifted at hacking into corporate computer networks that C.E.O.s paid him huge sums just to stop messing with them. A reporter for the online edition of Forbes was assigned to chase down the story. You can see how Steve’s journalism career unravelled if you watch the movie “Shattered Glass”: Forbes challenged the story’s veracity, and Steve—after denying the charges, concocting a fake Web site, and enlisting his brother to pose as a victimized C.E.O.—finally confessed that he’d made up the whole thing. Editors and reporters at the magazine investigated, and found that Steve had been inventing stories for at least a year. The magazine disavowed twenty-seven articles.

After Steve’s unmasking, my colleagues and I felt ashamed of our gullibility. But maybe we shouldn’t have. Human beings are terrible lie detectors. In academic studies, subjects asked to distinguish truth from lies answer correctly, on average, fifty-four per cent of the time. They are better at guessing when they are being told the truth than when they are being lied to, accurately classifying only forty-seven per cent of lies, according to a recent meta-analysis of some two hundred deception studies, published by Bella DePaulo, of the University of California at Santa Barbara, and Charles Bond, Jr., of Texas Christian University. Subjects are often led astray by an erroneous sense of how a liar behaves. “People hold a stereotype of the liar—as tormented, anxious, and conscience-stricken,” DePaulo and Bond write. (The idea that a liar’s anxiety will inevitably become manifest can be found as far back as the ancient Greeks, Demosthenes in particular.) In fact, many liars experience what deception researchers call “duping delight.”

Aldert Vrij, a psychologist at the University of Portsmouth, in England, argues that there is no such thing as “typical” deceptive behavior—“nothing as obvious as Pinocchio’s growing nose.” When people tell complicated lies, they frequently pause longer and more often, and speak more slowly; but if the lie is simple, or highly polished, they tend to do the opposite. Clumsy deceivers are sometimes visibly agitated, but, over all, liars are less likely to blink, to move their hands and feet, or to make elaborate gestures—perhaps they deliberately inhibit their movements. As DePaulo says, “To be a good liar, you don’t need to know what behaviors really separate liars from truthtellers, but what behaviors people think separate them.”

A liar’s testimony is often more persuasive than a truthteller’s. Liars are more likely to tell a story in chronological order, whereas honest people often present accounts in an improvised jumble. Similarly, according to DePaulo and Bond, subjects who spontaneously corrected themselves, or said that there were details that they couldn’t recall, were more likely to be truthful than those who did not—though, in the real world, memory lapses arouse suspicion.

People who are afraid of being disbelieved, even when they are telling the truth, may well look more nervous than people who are lying. This is bad news for the falsely accused, especially given that influential manuals of interrogation reinforce the myth of the twitchy liar. “Criminal Interrogation and Confessions” (1986), by Fred Inbau, John Reid, and Joseph Buckley, claims that shifts in posture and nervous “grooming gestures,” such as “straightening hair” and “picking lint from clothing,” often signal lying. David Zulawski and Douglas Wicklander’s “Practical Aspects of Interview and Interrogation” (1992) asserts that a liar’s movements tend to be “jerky and abrupt” and his hands “cold and clammy.” Bunching Kleenex in a sweaty hand is another damning sign—one more reason for a sweaty-palmed, Kleenex-bunching person like me to hope that she’s never interrogated.

Maureen O’Sullivan, a deception researcher at the University of San Francisco, studies why humans are so bad at recognizing lies. Many people, she says, base assessments of truthfulness on irrelevant factors, such as personality or appearance. “Baby-faced, non-weird, and extroverted people are more likely to be judged truthful,” she says. (Maybe this explains my trust in Steve Glass.) People are also blinkered by the “truthfulness bias”: the vast majority of questions we ask of other people—the time, the price of the breakfast special—are answered honestly, and truth is therefore our default expectation. Then, there’s the “learning-curve problem.” We don’t have a refined idea of what a successful lie looks and sounds like, since we almost never receive feedback on the fibs that we’ve been told; the co-worker who, at the corporate retreat, assured you that she loved your presentation doesn’t usually reveal later that she hated it. As O’Sullivan puts it, “By definition, the most convincing lies go undetected.”

Maybe it’s because we’re such poor lie detectors that we have kept alive the dream of a foolproof lie-detecting machine. This February, at a conference on deception research, in Cambridge, Massachusetts, Steven Hyman, a psychiatrist and the provost of Harvard, spoke of “the incredible hunger to have some test that separates truth from deception—in some sense, the science be damned.”

This hunger has kept the polygraph, for example, in widespread use. The federal government still performs tens of thousands of polygraph tests a year—even though an exhaustive 2003 National Academy of Sciences report concluded that research on the polygraph’s efficacy was inadequate, and that when it was used to investigate a specific incident after the fact it performed “well above chance, though well below perfection.” Polygraph advocates cite accuracy estimates of ninety per cent—which sounds impressive until you think of the people whose lives might be ruined by a machine that fails one out of ten times. The polygraph was judged thoroughly unreliable as a screening tool; its accuracy in “distinguishing actual or potential security violators from innocent test takers” was deemed “insufficient to justify reliance on its use.” And its success in criminal investigations can be credited, in no small part, to the intimidation factor. People who believe that they are in the presence of an infallible machine sometimes confess, and this is counted as an achievement of the polygraph. (According to law-enforcement lore, the police have used copy machines in much the same way: They tell a suspect to place his hand on a “truth machine”—a copier in which the paper has “LIE” printed on it. When the photocopy emerges, it shows the suspect’s hand with “LIE” stamped on it.)