Imagine: A convicted drunk driver who needs to convince a judge he hasn’t had a drink in years. A father in a custody battle who needs to prove he did not abuse his spouse. A suspected corporate thief who needs to prove his innocence.

These are just some of the people willing to pay $5,000 or more to expose their brains to scientists in hopes of proving that they are telling the truth.

Knowing for certain when someone is lying is the stuff of dystopian science fiction – and the hope of cops and spies around the world.

And, if some aggressive technology entrepreneurs get their way, the technology will become a reality, coming soon to courts and interrogation rooms near you.

What makes this possible is functional magnetic resonance imaging (fMRI), which has already transformed the world of neuroscience.

Cephos Corporation and No Lie MRI are two American companies at the forefront of using fMRI technology to verify truth in the corporate, government and legal realms.

Cephos, based in Massachusetts, argues that its techniques fit all the legal standards and tests to be admissible in court.

No Lie, based in California, has scores of clients interested in using its process as evidence of their truthfulness in court hearings.

Steven J. Laken, founder of Cephos – whose corporate motto is “Our Business Is the Truth,” and whose mission statement reads, “We believe Truth is among the most valuable of commodities” – claims a 93 percent success rate in distinguishing truth-telling from lying. (Humans are typically able to tell the difference between truth and lies about 50 percent of the time, and traditional lie detector machines average around 85 percent.)

Both companies are confident that within months, judges will allow fMRI results to be admitted in trials. Despite the corporate confidence that Cephos and No Lie exude, legal scholars, neuroscientists and ethicists are much less optimistic.

The technology behind the fMRI is relatively simple. Slide a person into a tube that is essentially a large magnet. Measure blood flow in the brain and produce an image filled with bright and dark spots. The fMRI reads your brain in real time by measuring the flow and use of oxygen.

The theory behind the fMRI’s ability to detect a lie is based on human physiology. It takes more effort to tell a lie than to tell the truth, so, if you are lying, your brain works harder and more oxygen is used. Put the person in an fMRI machine, ask questions and interpret the relatively bright and dark spots.
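
To make that premise concrete, here is a minimal sketch (in Python) of the kind of comparison the theory implies: score the brain activation recorded during the questions under scrutiny against activation recorded during control questions the subject answers truthfully, and flag answers with markedly higher activation. This is an illustration of the reasoning only; the function name, data shapes and threshold are assumptions for the example, not anything Cephos or No Lie MRI has disclosed.

```python
# A minimal sketch of the idea described above, not any company's actual method.
# Assumes we already have per-question BOLD-signal summaries from a region of
# interest; array shapes and the 1.5 threshold are illustrative assumptions.
import numpy as np

def flag_effortful_answers(truth_scans, test_scans, threshold=1.5):
    """truth_scans, test_scans: arrays of shape (n_questions, n_voxels),
    mean BOLD signal per voxel recorded while each question was answered."""
    baseline_mean = truth_scans.mean()
    baseline_std = truth_scans.std()
    # Score each test question's average activation against the truthful baseline.
    scores = (test_scans.mean(axis=1) - baseline_mean) / baseline_std
    # The lie-detection theory equates unusually high activation ("the brain
    # working harder," using more oxygen) with deception.
    return scores > threshold, scores

# Made-up example: the second test question shows markedly higher activation.
rng = np.random.default_rng(0)
truth = rng.normal(1.0, 0.1, size=(10, 500))
test = np.vstack([rng.normal(1.0, 0.1, size=(1, 500)),
                  rng.normal(1.4, 0.1, size=(1, 500))])
flags, scores = flag_effortful_answers(truth, test)
print(flags, scores.round(2))   # expect roughly [False  True] and scores near [0, 4]
```

The critics quoted below take aim at exactly this kind of inference: elevated activation may mean many things besides deception, and a threshold tuned on laboratory subjects may say little about anyone else.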

Throughout the United States, fMRIs are used in labs and hospitals to study brain injuries, states of meditation, physical centers of mental disorders, happiness, lust, emotional states and a plethora of other states of mind. But the attacks of 9/11 gave a whole new urgency to the idea.

Jonathan Moreno, a senior fellow at the Center for American Progress and author of Mind Wars: Brain Research and National Defense, estimates that at least 50 U.S. labs began studying the use of neuroscience for lie detection after 9/11, many of them funded by the Department of Defense.

“There’s enormous pressure coming from the government for this,” says Paul Root Wolpe, a bioethicist at the University of Pennsylvania. “There is reason to believe a lot of money and effort is going into creating these technologies.”

But as quickly as the interest has grown, so, too, have concerns over its implications.

Henry T. Greely, a Stanford University law professor and a leading critic of using fMRI for lie detection, argues that there are three fundamental problems. First, there is no evidence that the technology works. Second, there’s no evidence that lies can be detected. And third, there’s no regulation of the field. In effect, Greely says, “Anyone can promise anything.”

He rejects the claim of high lie-detecting accuracy, largely because the experiments are conducted in controlled settings. Cephos and No Lie base their results on studies with students being dishonest and honest in artificial situations. Greely says such studies tell us nothing about real life.

Ethicists, too, are alarmed. Although the emphasis these days is on the accused proving their innocence, giving credence to the technology could invite misuse or abuse.

The MacArthur Foundation’s Law and Neuroscience Project is one of a series of initiatives across the United States trying to make sense of the burgeoning use of neuroscience in legal matters.

According to Michael S. Gazzaniga, the project’s director, “The risk that science rejected for use in courts – due to the stringent requirements for accuracy – may still be used widely in society for other purposes is always present.”

The traditional lie detector device – which requires hooking the subject up to wires to record pulse, blood pressure and breathing – burst into the world in the 1920s when medical student John Larson and police officer Leonarde Keeler announced they had created a truth machine.

The lie detector test has had a troubled history ever since. Ken Alder, a historian at Northwestern University and author of The Lie Detectors: The History of an American Obsession, describes the concept of using technology to sort out truth and lies as “uniquely American.”

Greely, Wolpe and others detect two major conceptual mistakes in the whole idea of lie detection. First, measuring the actions of the brain does not tell you anything about what the brain is actually thinking. Second, lying should be judged not by machines but by people.

It echoes a fundamental dilemma of the classic lie detector: What does the device actually measure? Is a change in pulse and blood pressure evidence of a lie, or simply evidence of nervousness?

Alder’s exhaustive exploration of how a technological obsession can go wrong is rife with examples of what Gazzaniga worries about.

Despite the Supreme Court’s 1998 rejection of lie detector technology on the grounds that it was not reliable, lie detectors are still used in job interviews, security clearances and interrogations.

In April 2008, the Pentagon started issuing hand-held lie detectors to soldiers in Afghanistan. It argued that the device’s inaccuracy didn’t matter so long as it gave soldiers an edge in confronting possible terrorists. This “erring on the side of technology” over reality is what scares many observers as fMRI technology spreads.

Jonathan Marks, a bioethicist at Penn State University who studies interrogation techniques in the war on terror, says the use of fMRIs could, in fact, increase the use of torture.

“[P]eople [could] begin to say, ‘the fMRI picked him out as a terrorist so let us give him a going over in the interrogation room,’” says Marks. “Contrary to the view that fMRI will render torture obsolete, it might become a license for further abuse of detainees because its readings will convince people that they have a terrorist on their hands.”

Nightmare scenarios, like the ones Marks suggests, have the American Civil Liberties Union (ACLU) concerned.

Barry Steinhardt, director of the ACLU’s Technology and Liberty Project, says fMRIs need to be kept in check.

“There are certain things that have such powerful implications for our society – and for humanity at large – that we have a right to know how they are being used so that we can grapple with them as a democratic society,” says Steinhardt. “These brain-scanning technologies are far from ready for forensic uses and, if deployed, will inevitably be misused and misunderstood.”