Identity theft is often a multi-layered process. Once a thief gets one bit of your information, they try to use it to get more. The hackers behind the 2015 data breach of the US Internal Revenue Service (IRS), for example, used personal information they’d previously stolen from thousands of Americans to answer security questions on the IRS website, and in turn get access to their tax returns.

The security questions asked about personal details, like, “On which of the following streets have you lived?” and, “What is your total scheduled monthly mortgage payment?”

The hackers in the IRS case successfully got through that security measure, but what if the agency had a system in place that could detect whether the person answering the questions really was who they claimed to be? In a recent study conducted in Italy, researchers demonstrated how such a system could work.

In the study, published recently in PLoS One, the researchers quizzed 40 respondents about their personal details. Half of the respondents were asked to answer the questions truthfully, but the other half were given details about fake identities they had to memorize and use in the quiz.

The computer quiz kept track of the movement of each respondent’s mouse as they answered the questions, and noted how the fakes differed from the truth-tellers when they moved the cursor from the bottom of the screen to the answers at the top.

The quiz consisted of 12 questions like, “Do you live in Padua?” and “Are you Italian?” Those covered details an identity thief could easily memorize and answer, but then the quiz threw them a curveball.

“What is your zodiac sign?” it asked in the second series of 12 questions, which were designed to be easy for the genuine respondents but harder for the fakers to work out.

“While truth-tellers easily verify questions involving the zodiac,” the study says, “liars do not have the zodiac immediately available, and they have to compute it for a correct verification. The uncertainty in responding to unexpected questions may lead to errors.”
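That extra computation is the crux of the technique: a genuine respondent just knows their sign, while an impostor has to derive it from a memorized fake birth date. As a rough illustration, the lookup a liar would have to perform mentally can be sketched in a few lines (the sign boundary dates below are the commonly cited ones and vary by a day or so between sources):

```python
from datetime import date

# Each entry is (last month, last day, sign): a birth date on or before
# that boundary falls under that sign. The final entry wraps Capricorn,
# which spans the year boundary.
ZODIAC = [
    (1, 19, "Capricorn"), (2, 18, "Aquarius"), (3, 20, "Pisces"),
    (4, 19, "Aries"), (5, 20, "Taurus"), (6, 20, "Gemini"),
    (7, 22, "Cancer"), (8, 22, "Leo"), (9, 22, "Virgo"),
    (10, 22, "Libra"), (11, 21, "Scorpio"), (12, 21, "Sagittarius"),
    (12, 31, "Capricorn"),
]

def zodiac_sign(birthdate: date) -> str:
    """Return the Western zodiac sign for a birth date."""
    for month, day, sign in ZODIAC:
        # Tuple comparison checks month first, then day.
        if (birthdate.month, birthdate.day) <= (month, day):
            return sign
    return "Capricorn"  # defensive fallback; the table covers all dates

print(zodiac_sign(date(1990, 8, 5)))  # → Leo
```

A truth-teller skips this table entirely, which is why the unexpected question produces the hesitation and errors the researchers were looking for.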

After the researchers trained a machine-learning algorithm to analyze the mouse-movement data collected from the quizzes, they found that was indeed the case: the algorithm was able to discern the fake responses from the real ones 95% of the time.
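The shape of that pipeline — summarize each mouse trajectory into features, then train a classifier to separate liars from truth-tellers — can be sketched with toy stand-ins. Everything below is illustrative: the simulated paths, the two features (path length and sideways deviation), and the simple nearest-centroid classifier are assumptions, not the study's actual measures or model.

```python
import math
import random

random.seed(0)

def simulate_path(wobble):
    """Synthetic mouse path from the bottom of the screen (y=0) to the
    answer at the top (y=100); larger `wobble` means wider sideways drift."""
    return [(random.gauss(0, wobble), y) for y in range(0, 101, 5)]

def features(path):
    """Two toy features: total path length and max sideways deviation."""
    length = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    deviation = max(abs(x) for x, _ in path)
    return (length, deviation)

# Hypothetical training data: truth-tellers move straighter than liars.
truths = [features(simulate_path(wobble=2)) for _ in range(50)]
lies = [features(simulate_path(wobble=10)) for _ in range(50)]

def centroid(rows):
    return tuple(sum(col) / len(col) for col in zip(*rows))

c_truth, c_lie = centroid(truths), centroid(lies)

def classify(feat):
    """Nearest-centroid rule: assign the label of the closer centroid."""
    return "lie" if math.dist(feat, c_lie) < math.dist(feat, c_truth) else "truth"

# Evaluate on fresh simulated respondents.
test_set = [(features(simulate_path(2)), "truth") for _ in range(20)] \
         + [(features(simulate_path(10)), "lie") for _ in range(20)]
accuracy = sum(classify(f) == label for f, label in test_set) / len(test_set)
print(f"accuracy: {accuracy:.2f}")
```

In the study itself the trajectories came from real respondents and the classifier was trained on richer motion features, but the principle is the same: the hesitation induced by unexpected questions leaves a measurable signature in how the cursor travels.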

“From a cognitive point of view,” the study said, “it is confirmed that unexpected questions may be used to uncover deception.”

The study also noted, however, that “unexpected questions require answers to be carefully crafted and this may be a limitation in online automatic usage of the technique.”