Across Europe, ePassport gates have been used for many years. Now newer technology is being developed. Getty Images / Alex Grimm / Staff

Refugees claiming asylum in Europe are subject to an extensive and intense bureaucracy. Starting next year, it may become even more difficult, as several EU member states are trialling a technology that claims to analyse travellers’ expressions for indications that they may be lying.

The technology will be used for law enforcement purposes but also as a potential way to make travel for individuals quicker. However, some scientists aren’t convinced it’s an accurate or moral use of technology.


According to a document published by the European Commission, a company called European Dynamics, based in Luxembourg, has created a system called iBorderCtrl, which will analyse the facial expressions of those at the border, looking for “micro-expressions”.

These are small facial tics, lasting between 1/25th and 1/5th of a second, which could suggest that individuals are not telling the truth. The system works through a two-step process. The first is a “pre-screening”, where travellers use an online application to upload pictures of their passports and proofs of funds, before answering questions, via webcam, which will be personalised to the traveller’s “gender, ethnicity and language”.


The second step takes place at the border. At this point, the system will compare images of the individual’s facial expressions from pre-screening with photos taken during previous border crossings. The document does not say what will happen for people who have not crossed the border before.

After this, the “potential risk” from the traveller will be “recalculated”, which is the point at which the border guard will take over from the automated system. The six-month trials are taking place in Hungary, Greece and Latvia, and will cost €4.5 million, all of which is funded by the European Union.


After the six-month trial is completed, its results will be analysed and a decision will be taken on whether the system should be rolled out more widely. There is no guarantee that the technology will be adopted.

The system has the potential to speed up travel for citizens. However, the trial's webpage states it will be used to “catch illegal immigrants” and contribute to the “prevention of crime and terrorism”.



While this might sound like a scenario from a dystopian novel, the use of technology and science to enforce a border is not particularly new. Louise Amoore, a professor at Durham University who researches the ethics of algorithms, coined the term “biometric border” in a paper in 2005, stating that technology will increasingly be used to enforce boundaries, real or imposed.


“The trials of the algorithms at borders will yield yet more training data, and is extremely likely to allow errors and false positives to be folded back into the algorithm,” she says. “There is no doubt in the research I and others have done that people have been wrongly detained, questioned, stopped, searched and even deported on the basis of a technology that has huge propensity for false positives.”

While the technical framework of the project spans multiple areas of border security, the evaluation aspects rely on convolutional neural network (CNN) algorithms, which are commonly used in facial recognition – a technology that isn't always accurate.

They generate clusters of data from the videos or images on which they are trained – such as those taken at the border – and these clusters are then given a group attribute, such as the probability of telling the truth. Amoore describes this process as a “machine-generated threshold of truthfulness”, rather than a “lie-deception” tool akin to a polygraph, as it was previously reported to be (polygraphs themselves have dubious scientific grounding).

However, videos and images can be subject to misinterpretation by machines, as Joy Buolamwini explores in her project “The Coded Gaze”, which documents a tendency for non-white faces not to be labelled accurately – a particular concern when those faces may already be associated with mistrust and deception in the first place.


Chris Frith, a fellow at the Royal Society and a neuroscientist at UCL, says that the technology itself is likely to be vague and not very useful. “It is very difficult to see how any such techniques can distinguish ‘lying’ from general fear and anxiety,” he says. “People are always hoping for some quick, cheap solution, and also like to claim the objectivity associated with science, so perhaps there’s been an increase in cherry picking scientific techniques and ignoring the majority of scientific opinions.”

A report from the Royal Society, as part of its Brain Waves project exploring neuroscience’s application in society, stated that law enforcement had previously identified this use of neuroscience as a potential area for further development.


Evidence shows differences in brain function when someone is lying, but this is not conclusive. There are also many ways to fool such a system – it is well documented that people can convince themselves they are telling the truth if they lie often enough, or they can train themselves in “counter-measures”.

Andrew Balmer, a senior lecturer in sociology at Manchester University who has studied lie detection, says any attempt to find a truth is inherently political. “Lies can become habitual and automatic, especially if we tell them frequently, or we practice them,” he says. “So lie detection is scientifically and sociologically invalid. Any attempt to objectively determine whether someone is lying is doomed to fail.”


"There are many cases in the USA, where the polygraph machine is used widely, of serious injustices resulting from this kind of interrogation," he adds. "We have done well in the UK and in other parts of Europe to keep lie detection out of our justice system. We shouldn't let it take hold now."

In a description of the new European trial, its co-ordinator said the project is designed to go beyond biometric technologies. "We’re employing existing and proven technologies – as well as novel ones – to empower border agents to increase the accuracy and efficiency of border checks," George Boultadakis of European Dynamics said in a press release.

This is not the first time that governments have sought to enforce borders with science and tech.

In the UK, border control officials have been using automated facial recognition gates for several years. These ePassport gates allow travellers to present their passport to a machine, whose cameras match the passport image to their face. The gates are widely used.

In a different situation, the UK government proposed (and trialled for a short period of time) DNA and isotope testing at its border in 2009. This scheme was supposed to identify whether asylum seekers and refugees were from the areas they claimed to be from – such as Somalia – which could affect their chances of safe refuge. It was also used to confirm that members of families were related to each other, which could affect their visa applications. Scientists at the time said it set a dangerous precedent, and the scheme was discontinued.

More recently, home secretary Sajid Javid apologised in late October to prospective immigrants who had their DNA sequenced as a condition of applying to live in the UK with support from their family. And, as was widely reported in July of this year, children separated from their families at the US-Mexico border would undergo DNA testing to find their parents, raising serious ethical concerns.


However, while this use of technology could be misplaced, the tools themselves are useful in a wider sense – for example, these kinds of neural networks are used effectively, often for life-saving purposes, in the detection of tumours. “In both cases, there is an assumption that a machine can make visible something that a human cannot see,” says Amoore. “But in one of the instances, the science is used in a way that is unethical.”
