You’ve probably heard of fingerprint scans, iris scans, and perhaps even eye gaze scans, but what about footstep-based biometrics? New research published on the preprint server arXiv.org investigates the use of artificial intelligence (AI) in identifying a person by their footsteps alone.

Researchers at the Indian Institute of Technology Delhi describe the system in a paper titled “Person Identification using Seismic Signals generated from Footfalls.” It’s based on a fog computing architecture, which employs edge devices to carry out much of the computing, storage, and communication involved in data collection. (This cuts down on costs by minimizing bandwidth and energy requirements, the team noted.)

“[With our approach], individuals are only required to walk through the active region of the sensor,” they wrote. “Human identification systems have significant applications in various areas.”

The system consists of three layers: things (sensors paired with low-end processors and transceivers); fog (embedded processors and transceivers); and cloud (a server). The things layer, which in this implementation consists of a Raspberry Pi Zero, a geophone (a ground motion transducer that converts ground movement into voltage), and a long-range transceiver module, automatically extracts the portion of the seismic signal that represents a footfall and compresses it before sending it over ZigBee to the fog layer. The fog layer (a Raspberry Pi 3 Model B) receives the footfall signal, decompresses it, extracts important features from it, and classifies the signal before passing it on to the cloud over Ethernet or Wi-Fi. Lastly, the cloud performs the final inference.
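To make the division of labor concrete, here is a minimal sketch of that three-layer flow in Python. Everything in it is an assumption for illustration: the threshold-based footfall extraction, the toy "compression" (dropping every other sample), the peak/energy features, and the nearest-neighbour matcher are stand-ins, not the paper's actual algorithms.

```python
def things_layer(raw_signal, threshold=0.5):
    """Things layer: extract the footfall portion of the signal (here,
    crudely, samples above a noise threshold) and 'compress' it by
    keeping every other sample before transmission."""
    segment = [s for s in raw_signal if abs(s) > threshold]
    return segment[::2]  # crude stand-in for real compression

def fog_layer(compressed):
    """Fog layer: 'decompress' (duplicate samples back) and extract
    simple illustrative features: peak amplitude and signal energy."""
    decompressed = [s for s in compressed for _ in (0, 1)]
    peak = max(abs(s) for s in decompressed)
    energy = sum(s * s for s in decompressed)
    return {"peak": peak, "energy": energy}

def cloud_layer(features, profiles):
    """Cloud layer: match the feature vector to the closest enrolled
    profile (a toy nearest-neighbour classifier)."""
    def dist(name):
        p = profiles[name]
        return ((p["peak"] - features["peak"]) ** 2
                + (p["energy"] - features["energy"]) ** 2)
    return min(profiles, key=dist)
```

A usage example under the same assumptions: with enrolled profiles `{"alice": {"peak": 1.0, "energy": 3.0}, "bob": {"peak": 2.0, "energy": 12.0}}`, passing a short signal through all three layers returns the closest enrolled identity.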

To train machine learning models capable of distinguishing between steps (and by extension, people), the researchers captured footfalls’ characteristics in both the time and frequency domains, along with their length and cadence (the gap between two consecutive footsteps). Over the course of a month, they used a geophone to collect roughly 46,000 footfall events from eight barefoot test participants, which the team claims is the largest dataset of its kind.
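The cadence feature is easy to illustrate. The sketch below, with made-up timestamps and function names, computes the gaps between consecutive detected footfalls and their mean; how the paper actually detects footfall onsets is not reproduced here.

```python
def cadence_intervals(footfall_times):
    """Return the gaps (in seconds) between consecutive footfalls,
    given their detection timestamps in ascending order."""
    return [b - a for a, b in zip(footfall_times, footfall_times[1:])]

def mean_cadence(footfall_times):
    """Average gap between consecutive footfalls."""
    gaps = cadence_intervals(footfall_times)
    return sum(gaps) / len(gaps)
```

For example, footfalls detected at 0.0 s, 0.52 s, 1.01 s, and 1.55 s yield gaps of 0.52, 0.49, and 0.54 seconds, averaging a little over half a second per step.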

They posit that in the real world, data collection would be best accomplished by dividing a “monitoring area” — e.g., a college or factory — into “zones” (factory floors, departments) and sub-zones (rooms, hospital wards).

In the process of model training, the team found that about 875 footsteps per class (roughly eight minutes of walking) were required to achieve accuracy greater than 85 percent, though their final results beat that baseline. During testing, the best-performing AI system matched individuals to their footsteps 92.29 percent of the time from just seven consecutive footsteps.
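Using several consecutive footsteps suggests that per-step predictions are being aggregated into a single decision. One plausible aggregation strategy, sketched here purely as an assumption (the paper's exact method may differ), is a majority vote over the per-step classifier outputs:

```python
from collections import Counter

def identify_from_steps(per_step_predictions):
    """Combine per-footstep identity predictions into one decision
    by majority vote over a window of consecutive steps."""
    counts = Counter(per_step_predictions)
    return counts.most_common(1)[0][0]

# Seven consecutive steps, individually classified; the vote smooths
# out the two mislabeled steps.
window = ["alice", "alice", "bob", "alice", "carol", "alice", "bob"]
identify_from_steps(window)  # → "alice"
```

The appeal of voting over a window is that a classifier that is only, say, 85 percent accurate per step can be much more accurate over seven steps, since a wrong answer requires most of the window to be misclassified at once.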

A notable drawback of the system is its inability to ID more than one person at a time — two or more confuse the system. The researchers leave this to future work, but believe the current iteration could be reliably used to register classroom or workshop attendance, detect intruders, and control home appliances.

“The main advantages of this type of biometric system are [that] seismic sensors can be easily camouflaged; evading detection is impossible because footstep patterns are inimitable; it does not breach [an] individual’s privacy; [and it’s] less sensitive to environmental parameters and beyond the capacity of an individual to decode and manufacture the raw signal,” they wrote.