An EU-funded project is developing an ‘intelligent control system’ to test third-country nationals who reach the EU’s external borders, including a sophisticated analysis of their facial gestures.

The Intelligent Portable Border Control System, iBorderCtrl, is a set of protocols and computer procedures meant to scan faces and flag the ‘suspicious’ reactions of travellers who lie about their reasons for entering the Schengen area.

The AI-based screening system will check up to 38 facial micro-gestures of travellers – such as eye direction, pupil dilation, minimal voice changes and micro-expressions undetectable to human guards – recorded while travellers answer a series of questions at the checkpoints.

iBorderCtrl was developed with funding from the EU’s Horizon 2020 research and innovation programme and involves 13 partner institutions across Europe.

The European Commission is contributing €4.5 million to the development of the new tool, which began in September 2016 and is managed by European Dynamics, a Luxembourg company. The test phase is scheduled to conclude in August 2019.


While the system aims to tighten control of Europe’s borders and contribute to the prevention of crime and terrorism, the Commission says it is also meant to speed up the flow of the rising number of people entering the EU every year.

“We’re employing existing and proven technologies – as well as novel ones – to empower border agents to increase the accuracy and efficiency of border checks,” project coordinator George Boultadakis of European Dynamics told the Commission after the pilot was announced.

“iBorderCtrl will collect data that will move beyond biometrics and on to biomarkers of deceit,” he added.

The pilot project consists of a two-stage procedure and works on a voluntary basis. In the first step, travellers seeking to enter the EU take a pre-screening test at home while filing an online application, uploading various documents including pictures of their passport, visa and proof of funds.

They then answer questions from a computer-animated border guard in a webcam interview. The questions are adapted to each applicant, and the responses are analysed by an AI “deception detection” system developed at Manchester Metropolitan University.


According to the Commission, the system is, however, not meant to replace human border guards: “Border officials will use a hand-held device to automatically cross-check information, comparing the facial images captured during the pre-screening stage to passports and photos taken on previous border crossings.”

The results are accessed at the border checkpoint, where, in the second step, travellers hand over their passport and visa details at a terminal and undergo a second screening.

“After the traveller’s documents have been reassessed, and fingerprinting, palm vein scanning and face matching have been carried out, the potential risk posed by the traveller will be recalculated. Only then does a border guard take over from the automated system,” the Commission added.

“It does not make a fully automated decision. It provides a risk score,” Keeley Crockett of Manchester Metropolitan University explained in a promotional video.

Travellers flagged as “low risk” will go through only a short re-evaluation of their information for entry, while those the AI system flags as “higher-risk” will be subjected to more detailed checks.
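The actual iBorderCtrl scoring logic is not public. As a purely illustrative sketch of the two-tier triage the Commission and Crockett describe – all feature names, weights and thresholds below are invented for this example – a risk score could be combined and mapped to an outcome like this:

```python
# Purely illustrative: the real iBorderCtrl scoring method is not public.
# Feature names, weights and the threshold are invented for this sketch.

def risk_score(features: dict, weights: dict) -> float:
    """Combine per-signal scores (each in [0, 1]) into a weighted average."""
    total_weight = sum(weights.values())
    return sum(features[k] * weights[k] for k in weights) / total_weight

def triage(score: float, threshold: float = 0.5) -> str:
    """Map a numeric risk score to the two outcomes described in the article."""
    if score >= threshold:
        return "higher-risk: detailed checks"
    return "low-risk: short re-evaluation"

# Hypothetical traveller with mild interview flags and a minor document issue:
weights = {"interview_flags": 0.5, "document_mismatch": 0.3, "biometric_mismatch": 0.2}
traveller = {"interview_flags": 0.2, "document_mismatch": 0.1, "biometric_mismatch": 0.0}
print(triage(risk_score(traveller, weights)))  # low-risk: short re-evaluation
```

The key point, as Crockett stresses, is that the system outputs a score for a human guard to act on, not an automated decision.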

The project will be tested in real conditions in Hungary, Greece and Latvia.

According to the project website, the 175 km border between Hungary and Serbia, which lies on the main migration route, is considered a hotspot. To “test the proposed system in a relevant environment”, the Hungarian National Police will run tests at two border checkpoints with “significant traffic.”

The Greek pilot will test five different cases – pedestrian, bus, vehicle, passenger train and freight train crossings – with the border between Greece and FYR Macedonia considered a hotspot. The Latvian pilot will cover 31 border crossing points (five airports, 10 seaports, three railways and 13 road crossings).

The UK, Germany, Poland, Spain and Cyprus have expressed interest in participating in the project following the initial trials.

Accuracy and bias

As a member of the iBorderCtrl team told New Scientist, early testing revealed that the digital border guard had only a 76% success rate, though the team is “quite confident” that can be raised to 85%.
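A back-of-the-envelope calculation illustrates what such a success rate can mean in practice. The traveller volume and share of actual deceivers below are invented assumptions, and “accuracy” is naively treated as both the true-positive and true-negative rate; even so, the arithmetic shows how a modest error rate produces many false flags when the vast majority of travellers are truthful:

```python
# Illustrative base-rate arithmetic; all input numbers are assumptions,
# and "accuracy" is naively applied to both deceivers and truthful travellers.

def false_flags(travellers: int, deceiver_share: float, accuracy: float):
    """Return (correctly flagged deceivers, wrongly flagged truthful travellers)."""
    deceivers = travellers * deceiver_share
    truthful = travellers - deceivers
    true_pos = deceivers * accuracy        # deceivers correctly flagged
    false_pos = truthful * (1 - accuracy)  # truthful travellers wrongly flagged
    return round(true_pos), round(false_pos)

# Assume 100,000 travellers with 1% deceiving, at the reported 76% rate:
print(false_flags(100_000, 0.01, 0.76))  # (760, 23760)
```

Under the same invented assumptions, even the team’s 85% target would still wrongly flag 14,850 truthful travellers – the sort of concern the critics quoted below raise.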

Nevertheless, critics point out that there are many factors that leave room for error.

Concerns about accuracy also arise because participation is voluntary and the system “offers ease of control for travellers willing to cooperate with authorities to speed up the border control.”

“In such circumstances, I do not see what ground truths may exist in order to assess the robustness, reliability or accuracy of the system, nor how the relevant algorithms will be trained to detect biomarkers for deceptive behaviours,” said Antoinette Rouvroy, a researcher of the FNRS at the Center for Research in Information, Law and Society (CRIDS).


“Or we would have to presume that illegal immigrants and potential criminals would voluntarily opt in for the automatic detection, which is not very plausible,” she added.

Past studies have also shown that facial recognition algorithms suffer from significant biases, disproportionately affecting women and minorities in particular.

For example, facial recognition algorithms developed by IBM, Microsoft and Face++ had error rates up to 35% higher when classifying the gender of darker-skinned women than of lighter-skinned men, according to a study published earlier this year.

“Before deploying such AI systems in any sector of activity or government, a careful assessment should be made of the unavoidable biases involved in any system of detection, classification and forward-looking evaluation of people and behaviours.

“Digitisation is a transcription of reality, and there is no neutral transcription of reality,” said Rouvroy.