In 1844 the first finger-operated electrical device for long-distance communication was invented. It was soon discovered that operators using this machine, the telegraph, were unwittingly disclosing more information than the message itself: experienced telegraph operators were able to identify their colleagues by listening to patterns in the rhythm of the Morse code being transmitted1. During World War II, American intelligence developed a methodology called “The Fist of the Sender” to distinguish telegraph messages sent by allies from those sent by enemies2. With the advent of the computer, the same identification concept was adapted to the computer keyboard. In 1983, J. Garcia filed the first patent describing a method to identify a person by their style of typing on a computer keyboard3. Over the last 30 years, many pattern recognition algorithms have been proposed to certify the identity of a person from typing-derived features4,5, now known as keystroke dynamics. Fig. 1 shows examples of the time-based features commonly employed for user identification. Two recent reviews4,5 compare various keystroke dynamics classification methods, some of which achieve excellent accuracy, with identification rates higher than 95%. The main challenge these methods face is the need to account for user variability due to physical or psychological variations, an aspect reported in various publications5,6. Our hypothesis is that while physical or psychological variations may undermine accuracy for biometric applications, they can be leveraged to infer the psychomotor status of the subject who is typing.
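As a concrete illustration of the time-based features shown in Fig. 1, the two most common keystroke dynamics variables can be computed directly from timestamped key-down/key-up events. The event structure and the timings in the sketch below are hypothetical; hold time and flight time follow their standard definitions:

```python
from dataclasses import dataclass

@dataclass
class KeyEvent:
    key: str
    press_t: float    # key-down timestamp, in seconds (hypothetical values)
    release_t: float  # key-up timestamp, in seconds

def hold_times(events):
    """Hold time (HT): how long each key stays depressed."""
    return [e.release_t - e.press_t for e in events]

def flight_times(events):
    """Flight time: gap between releasing one key and pressing the next."""
    return [b.press_t - a.release_t for a, b in zip(events, events[1:])]

# Hypothetical two-key sequence: "h" then "i"
seq = [KeyEvent("h", 0.000, 0.095), KeyEvent("i", 0.180, 0.270)]
print([round(t, 3) for t in hold_times(seq)])    # hold times, seconds
print([round(t, 3) for t in flight_times(seq)])  # flight time, seconds
```

Features like these, accumulated over natural typing sessions, form the raw material for the classification methods discussed below.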

Figure 1 A pictorial representation of keystroke dynamics variables.

Current algorithms are highly specialized for biometric identification and are not tuned to characterize health-related variations. These algorithms span a vast range of families: basic statistical features7,8,9, Bayesian analysis10,11, autoregressive models12, hidden Markov models13,14, artificial neural networks15,16 and other machine learning techniques17,18. Each of these approaches is employed to grant or deny access to a computer system; hence, a primary requirement is reliance on a small number of key presses, in order to avoid placing an excessive burden on the user who needs to access the system. In light of these constraints, algorithms are typically tailored to recognize typing patterns in known, pre-defined text, which helps achieve high classification performance. In contrast, we want to extract relevant psychomotor-related information regardless of the text typed and without changing an individual's daily workflow. By employing software that operates in the background of one's daily activities, we could longitudinally monitor typing patterns and use them to infer or detect changes in health or state, especially considering the number of daily interactions one typically has with keyboards, touchscreen devices and other appliances.

Inverse kinematic analysis of the forces involved in key presses, performed by means of high-speed cameras in conjunction with force sensors19, electromyography20 and computer models21, identifies three phases in a single keystroke: key mechanism compression, finger impact and fingertip pulp compression, followed by release. The key mechanism compression starts when the finger first makes contact with the key and ends when the key has been fully pressed; it accounts for ~12% of the total Hold Time (HT). When the key reaches full compression, maximum finger deceleration and peak force occur; this phase accounts for another ~11% of the hold time. Then, the tip of the finger moves down less than a millimeter due to skin compression and is finally released from the depressed key; this phase lasts for the remaining ~77% of the hold time. Interestingly, the duration of this last phase is not correlated with the forces involved in the first two19. Jindrich et al.22 compared finger tapping kinematics on four structurally different keyboards with three different hand postures, finding that kinematics, endpoint forces, net joint torques and energy production showed similar patterns.
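These reported proportions allow a rough, back-of-the-envelope split of a measured hold time into its three phases. The sketch below simply treats the ~12%/~11%/~77% figures cited above as fixed fractions, which is of course a simplification of the underlying biomechanics:

```python
def split_hold_time(ht_ms, fractions=(0.12, 0.11, 0.77)):
    """Partition a hold time (in ms) into the three keystroke phases,
    using the approximate proportions reported in the kinematic studies."""
    phases = ("key_compression", "finger_impact", "pulp_compression_release")
    return {name: ht_ms * f for name, f in zip(phases, fractions)}

# For a typical ~100 ms hold time:
print(split_hold_time(100.0))
```

For a 100 ms hold time this yields roughly 12 ms of key compression, 11 ms around peak force, and 77 ms of pulp compression and release.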

Previous studies aimed at explaining the neurobiology of typing and keystroke dynamics have revealed that hold times are generally very short, typically around 100 milliseconds19,20; still, keystrokes engage both cortical and subcortical brain networks, which have been identified by functional neuroimaging studies. Witt et al.23 compared and contrasted 38 independent studies (22 fMRI and 16 PET) focused solely on finger tapping in order to identify the activated brain areas. Across all studies, the consistently activated areas were the primary sensorimotor cortex (SM1), supplementary motor area (SMA), basal ganglia (BG) and cerebellum. Additionally, clusters of activation were observed in the premotor and parietal cortices; these regions are known to play an important role in the transformation of sensory input into motor output and in the production of complex motor tasks. Thus, impairment related to these areas may be detectable via changes in typing patterns.

In order to test this hypothesis, we selected a condition known as “sleep inertia” as a proof of principle. This psychomotor effect has been described as a state of grogginess, impaired cognition, reduced motor dexterity and disorientation upon awakening24. Its impact on a subject's performance is comparable to that of 24 hours of sleep deprivation25 or inebriation26,27. Although always present upon awakening, the effect is maximal after the slow-wave (deep) sleep phase, typically lasting 15 to 30 minutes in healthy subjects28,29.

In this study we present a novel algorithm, based on the evolution of key press latencies, or hold times, that is able to detect the psychomotor decline induced by sleep inertia in healthy subjects. Fig. 1 shows a graphical representation of hold time and the other keystroke dynamics variables.