This section describes the series of experiments carried out with the two polydactyly subjects, P1 and P2, to investigate the neuromechanics and function of their hands. Some experiments additionally involved a group of control subjects with five-fingered hands. The study was approved by the institutional ethics committees at the University of Freiburg, Imperial College London, EPFL and King’s College London. Each subject gave informed consent prior to starting every experiment.

MRI analysis of hand anatomy

The underlying anatomy of the hand of subject P1 was visualized using MRI in the Department of Perinatal Imaging and Health, King’s College London. T1-weighted, inversion recovery and proton density images were acquired with a 1.5 Tesla Siemens Aera system (Erlangen, DE). Images could not be acquired from subject P2 due to a metallic dental implant.

Hand biomechanics

A dedicated hand interface to measure the isometric force of each finger (shown in Fig. 2a) was developed at the Human Robotics group, Imperial College London, to investigate the force capability of left or right fingers in individuals with five- or six-fingered hands. The hand was placed horizontally on the interface as shown in Fig. 2a. Five or six of the eight 3D-printed supports, each affixed to a load cell (HTC), could slide linearly to accommodate a left or right hand of any size, so that the subject could comfortably exert a vertical force with the tip of each finger.

Forces across all fingers were recorded at 128 Hz. Experiments were carried out with this interface on the two polydactyly subjects as well as on a population of 13 control subjects (six females) with five-fingered hands, aged between 25 and 35 years. The subjects were seated in front of a table with the interface positioned on top of it, so that the forearm rested on the table in a natural position.

Initially, subjects were asked to exert the maximal possible force with a single finger. This maximal force (MF) was recorded for each finger separately, starting with the thumb and ending with the little finger. Figure 2b shows the MF for five- and six-fingered subjects. Using these data, the enslaving $e_{ij}$, characterizing the dependence between fingers i and j, was computed as

$$e_{ij} = \frac{F_j\left( i \right)}{\mathrm{MF}_j},$$ (1)

where i is the finger that generates MF, $F_j(i)$ is the force produced simultaneously by finger j, and $\mathrm{MF}_j$ is the maximal force of finger j. The enslaving for five- and six-fingered subjects is presented in Fig. 2d.
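The enslaving computation of Eq. (1) can be sketched as follows; the force values here are invented toy data, not measurements from the study:

```python
import numpy as np

# Hypothetical sketch of Eq. (1). max_force[j] holds MF_j for each of the
# n fingers; force[i, j] is F_j(i), the force produced by finger j while
# finger i exerts its maximal force (so force[i, i] == max_force[i]).
n = 6
rng = np.random.default_rng(0)
max_force = rng.uniform(20.0, 60.0, size=n)          # MF_j in newtons (toy values)
force = 0.1 * max_force * rng.uniform(size=(n, n))   # small off-diagonal F_j(i)
np.fill_diagonal(force, max_force)

# Enslaving matrix: e_ij = F_j(i) / MF_j
enslaving = force / max_force[np.newaxis, :]
```

The diagonal of the matrix is 1 by construction (each finger is fully "enslaved" to itself); the off-diagonal entries quantify how strongly finger j co-contracts when finger i produces its maximal force.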

Then the subjects were asked to maintain 10%, 20%, or 30% of MF during 15-s trials. Three trials were carried out at each force level, totaling 3 × 3 × 5 = 45 or 3 × 3 × 6 = 54 trials per session for five- and six-fingered subjects, respectively. Five-fingered subjects carried out only one session, while the six-fingered subjects performed two (subject P1) or three (subject P2) sessions. The data from this experiment were used to examine how the force variability depends on the amount of force exerted. In each trial, the force variability was computed as the standard deviation of the force across the window from sample 1300 to sample 1800 (approximately 10.2–14.1 s at 128 Hz), which was selected so that the subjects were correctly exerting the required force during this period in almost all trials. Five trials (one trial in a control subject, two trials in subject P1 and two trials in subject P2) were excluded from the analysis as they showed extraordinarily high fluctuations of the force across time, indicating that the task was not carried out successfully on these trials. Figure 2c shows the standard deviation of the force as a function of the magnitude of the force for five- and six-fingered subjects.
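A minimal sketch of the per-trial variability measure, using a synthetic force trace in place of the recorded data:

```python
import numpy as np

# Hypothetical force trace for one 15-s trial sampled at 128 Hz, holding
# e.g. 20% of a 50 N maximal force with some motor/measurement noise.
fs = 128
rng = np.random.default_rng(1)
trace = 0.2 * 50.0 + 0.5 * rng.standard_normal(15 * fs)

# Variability = SD of the force over samples 1300-1800 (about 10.2-14.1 s),
# i.e. the steady-hold part of the trial.
variability = float(np.std(trace[1300:1800]))
```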

We also computed the enslaving for the 10%, 20%, and 30% MF tasks (Supplementary Fig. 2); here the normalization by the maximal force $\mathrm{MF}_j$ was replaced by 10%, 20%, or 30% of the maximal force, respectively.

Functional MRI

P1 and a group of nine control participants with five-fingered hands took part in the fMRI experiment. P2 was excluded due to a metallic dental implant. In a block design, participants performed a tapping movement for 20 s with a single finger (20 taps per block, 1 tap per second), followed by 10 s of rest. Four blocks were performed for each finger in pseudo-randomized order (24 trials for P1 and 20 trials for controls). P1 performed two sessions, one for each hand. Controls performed only one session, with the right hand. All participants were trained on the movements before entering the fMRI scanner.

Images were acquired on a short-bore head-only 7T scanner (Siemens Medical, Germany) with a 32-channel Tx/Rx rf-coil (Nova Medical, Germany). Functional images were acquired using a sinusoidal readout EPI sequence23 and comprised 28 axial slices. Slices were placed over the central sulcus (approximately orthogonal to the central sulcus) in order to cover the primary motor cortices (voxel resolution 1.3 × 1.3 × 1.3 mm3; TR = 2 s, FOV = 210 mm, TE = 27 ms, flip angle = 75°, GRAPPA = 2). Anatomical images were acquired using an MP2RAGE sequence24 in order to allow the precise localization of the precentral sulcus (see below) and for display purposes (TE = 2.63 ms, TR = 7.2 ms, TI1 = 0.9 s, TI2 = 3.2 s, TRmprage = 5 s). To aid coregistration between the functional and the anatomical images, a whole brain EPI volume was also acquired with the same inclination used in the functional runs (81 slices, voxel resolution 1.3 × 1.3 × 1.3 mm3, FOV = 210 mm, TE = 27 ms, flip angle = 75°, GRAPPA = 2). Subjects were scanned in supine position.

All images were analyzed using the SPM8 software (Wellcome Centre for Human Neuroimaging, London, UK). Preprocessing of fMRI data included slice-timing correction, spatial realignment, smoothing (FWHM = 2 mm) and coregistration with the anatomical images. Caret 5 (Van Essen Laboratory, Washington University School of Medicine) was used for surface visualization. To localize the voxels included in the analysis of activation patterns (Supplementary Fig. 3), a first GLM analysis was computed, which included one regressor per finger (6 for P1 and 5 for controls) and six rigid-body movement regressors. A functional mask for finger movements was defined as the voxels active in the F-contrast associated with any type of finger movement (p < 0.05 FWE). In addition, an anatomical mask corresponding to the sensorimotor cortex was designed using published probabilistic cytoarchitectonic maps25,26,27. The anatomical mask included the primary motor cortex M1 (Brodmann areas 4a and 4p) and the primary somatosensory cortex S1 (Brodmann areas 3a, 3b, 1 and 2). The anatomical mask was back-projected onto the native space of each participant. This led to 2190 voxels in the left hemisphere of P1 for right finger movements, 2037 voxels in the right hemisphere of P1 for left finger movements, and 343.8 ± 417.1 (mean ± std) voxels in the left hemisphere of controls for right finger movements (Supplementary Fig. 3).

To analyze the activation patterns associated with each trial of finger movement within the selected voxels, a second GLM analysis was computed, which included one regressor for each finger-tapping trial (24 for P1 and 20 for controls) and six rigid-body movement regressors. Separately for each participant, the beta estimates for each tapping trial were extracted within the selected voxels (resulting in a trials × voxels matrix). These high-dimensional patterns were projected to two dimensions by classical multidimensional scaling (MDS), which finds low-dimensional projections that approximately preserve the pairwise distances between the high-dimensional activation patterns14. As the distance metric for the MDS, we used the cross-validated Mahalanobis distance14. For the five-fingered control group, MDS was carried out for each subject separately. As MDS projections involve an arbitrary rotation, we aligned the projections of the individual subjects using Procrustes alignment14. Standard error ellipses shown in Fig. 2e were computed from the covariance across subjects. As the Procrustes alignment can also remove some of the true inter-subject variability14, we used a Monte-Carlo procedure to estimate a correction and adjusted the standard error ellipses accordingly14. For the polydactyly subject P1, we computed the covariance by bootstrapping the trials. For each bootstrap sample an MDS projection was computed, and the bootstrapped MDS projections were aligned using Procrustes alignment. The standard error ellipses (Fig. 2e, Supplementary Fig. 4) were computed from the covariance across the bootstrapped MDS projections, adjusted by correction factors estimated with a Monte-Carlo procedure14.
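The MDS projection step can be sketched with classical MDS on a toy distance matrix. For simplicity, this sketch uses plain Euclidean distances between invented activation patterns rather than the cross-validated Mahalanobis distance used in the study, and it omits the Procrustes alignment and bootstrap steps:

```python
import numpy as np

def classical_mds(D, ndim=2):
    """Classical MDS: embed points so Euclidean distances approximate D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)             # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:ndim]      # keep the largest ndim
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Hypothetical pairwise distances between per-finger activation patterns
rng = np.random.default_rng(2)
patterns = rng.standard_normal((6, 300))       # 6 fingers x 300 voxels (toy data)
D = np.linalg.norm(patterns[:, None, :] - patterns[None, :, :], axis=-1)
embedding = classical_mds(D)                   # 6 x 2 projection
```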

Finger localization task

A finger localization task20 was conducted to investigate the perceived hand shape of P1, P2, and a group of nine controls. Participants were blindfolded and their hand was placed below a structure topped by a 2D grid. They had to point on the grid with the index finger of the free hand towards the cued locations on the tested hand. They were required to identify three locations on each finger: the first knuckle, the second knuckle and the tip (a total of 18 locations per hand for P1 and P2, and 15 locations for controls). Each location was tested six times for P1 and P2, and four times for controls. The task was conducted for both hands in P1 and P2, and only for the right hand in controls. The task was conducted once with tactile cueing, i.e. the target locations were touched with a plastic filament, and once with verbal cueing, i.e. the target locations were named orally. The localization error was measured for each tested location as the 2D Euclidean distance between the reported position on the grid and the real position of the tested location on the grid (Fig. 2f). Similar results were obtained with tactile and verbal cueing; we report only the results from tactile cueing.

Object manipulation and common movement tasks

Experimental setup: The subjects were seated in front of a desk during the two tasks described below. An electromagnetic motion capture system (Polhemus Liberty 240/16-16) was used to record the hand and finger movements during the object manipulation and the common movement tasks (see Supplementary Fig. 5A). The hands were kept within 0.6 m of the main Polhemus system to maintain the recording noise below 0.005 mm. In total, 12 or 14 sensors, respectively, were attached to the hand and fingers of five- or six-fingered subjects using medical tape. Every sensor measured three Cartesian coordinates for the position and three angles for the orientation relative to the main station. Each sensor was connected to the Polhemus system by plastic-insulated aluminum wires. Two large sensors (maximum dimensions 9 × 11 × 6 mm3, 9.1 g) were placed on the skin on top of the middle and thumb metacarpal bones. The others were small sensors (cylindrical, 17.3 mm length, 1.8 mm outer diameter, <1 g), which were placed at the distal and proximal phalanges of each finger. Measurements were recorded at 120 Hz.

Object manipulation task: The two polydactyly subjects and 13 control subjects with five-fingered hands (six females, mean age 24.8 years, standard deviation 2.0) participated in an object manipulation task. The experimental procedure was adapted from ref. 21. We chose 50 objects with different shapes, sizes, textures and materials (see Supplementary Fig. 5B). The objects contained no metal or paramagnetic materials, so as not to interfere with the magnetic-field-based Polhemus measurements. The subjects were blindfolded and were given the objects one by one. They had to explore each object with one hand and guess what it was (see Supplementary Movie 4). Each object was explored for 30 s. When an object was recognized in less than 30 s, the subject was asked to explore special features of the object such as tips, edges, etc.

Common movement tasks: The two polydactyly subjects and eight of the 13 subjects with five-fingered hands who carried out the object manipulation task (five females, mean age 24.3 years, standard deviation 2.0) also performed four common movement tasks (see also Supplementary Movie 5). Tying shoelaces: The ends of two shoelaces were fixed on a table and the subjects were required to tie the laces with two hands. Flipping book pages: The subjects were given a book and had to flip pages using one hand only. Napkin folding: The subjects received a paper napkin and had to fold it into a specific shape (as used in restaurants) and in a specific sequence using both hands. Rolling a towel: Subjects were given a towel and asked to roll it into a cylinder using both hands. Five minutes of movement per task were recorded, during which subjects were asked to repeat the task as often as they wanted.

Data analysis: The position of every small sensor relative to the large sensor on the middle metacarpal bone was used for further analysis. Raw positional measurements were smoothed with a Savitzky-Golay filter (third order, length 41 sample points, equivalent to 341.67 ms). Movement velocities were computed from the raw positional measurements with a first-derivative Savitzky-Golay filter of the same order and length.
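The smoothing and differentiation steps can be sketched with SciPy's `savgol_filter` (a standard Savitzky-Golay implementation; the study's exact implementation is not specified), applied here to a synthetic one-dimensional position trace:

```python
import numpy as np
from scipy.signal import savgol_filter

fs = 120                                    # Polhemus sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)
pos = np.sin(np.pi * t)                     # hypothetical 0.5 Hz sensor position

# Third-order Savitzky-Golay filter, 41-sample window (41/120 s = 341.67 ms)
pos_smooth = savgol_filter(pos, window_length=41, polyorder=3)

# Velocity via the first-derivative Savitzky-Golay filter (delta = sample period)
vel = savgol_filter(pos, window_length=41, polyorder=3, deriv=1, delta=1 / fs)
```

For this toy sinusoid the analytic velocity is π·cos(πt), which the derivative filter closely reproduces away from the edges of the trace.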

Analysis of finger (in)dependence: To assess the (in)dependence of finger movements, we estimated the mutual information between the movements of different fingers. The mutual information between two continuous stochastic signals X and Y is defined as:

$$I\left( {X,Y} \right) = \int_X \int_Y p\left( {x,y} \right)\,{\mathrm{log}}_2\!\left[ {\frac{p\left( {x,y} \right)}{p\left( x \right)p\left( y \right)}} \right]{\mathrm{d}}x\,{\mathrm{d}}y,$$ (2)

where p(x, y) is the joint probability density function of X and Y, and p(x) and p(y) are the marginal probability density functions of X and Y, respectively. Note that the mutual information is symmetric, i.e. \(I(X,Y) = I(Y,X)\). In the case of multivariate Gaussian density functions, Eq. (2) simplifies to

$$I\left( {X,Y} \right) = \frac{1}{2}\,{\mathrm{log}}_2\!\left[ {\frac{{\mathrm{det}}\left( {\sigma _X} \right){\mathrm{det}}\left( {\sigma _Y} \right)}{{\mathrm{det}}\left( {\sigma _{XY}} \right)}} \right],$$ (3)

where $\sigma_X$ and $\sigma_Y$ are the covariance matrices of the marginal densities of X and Y and $\sigma_{XY}$ is the covariance matrix of the joint density. A more intuitive understanding of the mutual information can be gained for univariate normal signals X and Y, for which Eq. (3) further simplifies to

$$I\left( {X,Y} \right) = {\mathrm{log}}_2\sqrt {\frac{1}{{1 - r(X,Y)^2}}},$$ (4)

where r(X, Y) is the Pearson correlation coefficient between X and Y. To estimate the mutual information between two fingers, we used the six-dimensional position measurements from the two sensors at each finger, estimated the covariance matrices from the time series of movement positions and applied Eq. (3).
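Equation (3) can be implemented directly from sample covariance matrices; the toy signals below also illustrate the univariate special case of Eq. (4):

```python
import numpy as np

def gaussian_mi(X, Y):
    """Mutual information in bits under a joint Gaussian assumption (Eq. 3)."""
    s_x = np.atleast_2d(np.cov(X, rowvar=False))
    s_y = np.atleast_2d(np.cov(Y, rowvar=False))
    s_xy = np.atleast_2d(np.cov(np.hstack([X, Y]), rowvar=False))
    _, ld_x = np.linalg.slogdet(s_x)    # log-determinants for numerical stability
    _, ld_y = np.linalg.slogdet(s_y)
    _, ld_xy = np.linalg.slogdet(s_xy)
    return 0.5 * (ld_x + ld_y - ld_xy) / np.log(2.0)

# Hypothetical 1-D signals: y is correlated with x
rng = np.random.default_rng(3)
x = rng.standard_normal((5000, 1))
y = 0.8 * x + 0.6 * rng.standard_normal((5000, 1))

r = np.corrcoef(x[:, 0], y[:, 0])[0, 1]
mi = gaussian_mi(x, y)   # equals log2(sqrt(1 / (1 - r^2))) in the 1-D case
```

In the study, X and Y would be the six-dimensional position time series of the two sensors at each of two fingers; the same function applies unchanged to multivariate inputs.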

Prediction of individual finger movements from movements of other fingers: The movement of each individual finger was predicted from the movements of the other fingers. For six-fingered subjects the prediction was carried out with and without the supernumerary finger; the latter to facilitate comparison with the results from five-fingered subjects. The x/y/z-positions of the two sensors at each finger constituted the six-dimensional movement vector of that finger. These six components were individually predicted from the 24- or 30-dimensional movement vectors of the remaining four or five fingers. Prediction was done using linear least-squares regression and nonlinear support vector regression. We used twofold cross-validation with chronological splits of the data to avoid overfitting. The quality of prediction was quantified by computing the coefficient of determination (R2) between predicted and actual movement for each component of the six-dimensional movement vector and then averaging the R2 values across the six dimensions. We used support vector regression with a Gaussian kernel; the hyperparameters (i.e. the kernel width and the regularization parameter) were optimized on the training data set. We used the Matlab implementation (“fitrsvm”) for support vector regression and hyperparameter optimization. To reduce computation time, the data were downsampled by a factor of 20, from 120 Hz to 6 Hz.
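The cross-validated prediction can be sketched for the linear least-squares case (the study's nonlinear predictions used Matlab's "fitrsvm", for which this is not a substitute). All data below are synthetic:

```python
import numpy as np

# Hypothetical data: predict one finger's 6-D movement vector from the
# 30-D movement vectors of the other five fingers.
rng = np.random.default_rng(4)
T = 2000
others = rng.standard_normal((T, 30))            # movements of the other fingers
W = 0.2 * rng.standard_normal((30, 6))           # toy linear coupling
target = others @ W + 0.5 * rng.standard_normal((T, 6))

def r2(y, yhat):
    """Coefficient of determination per output dimension."""
    ss_res = np.sum((y - yhat) ** 2, axis=0)
    ss_tot = np.sum((y - y.mean(axis=0)) ** 2, axis=0)
    return 1.0 - ss_res / ss_tot

# Twofold cross-validation with a chronological split
half = T // 2
scores = []
for train, test in [(slice(0, half), slice(half, T)),
                    (slice(half, T), slice(0, half))]:
    X_tr = np.hstack([others[train], np.ones((half, 1))])   # add intercept
    coef, *_ = np.linalg.lstsq(X_tr, target[train], rcond=None)
    X_te = np.hstack([others[test], np.ones((half, 1))])
    scores.append(r2(target[test], X_te @ coef).mean())     # mean R2 over 6 dims
cv_r2 = float(np.mean(scores))
```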

Principal component analysis (PCA) of degrees of freedom21,28,29: PCA was performed on the x/y/z-positions measured with the two sensors at each finger during the object manipulation and the common movement tasks. The cumulative amount of variance captured by an increasing number of principal components is plotted in Fig. 3b and Supplementary Fig. 6B. To compute the effective number of degrees of freedom (dof), we applied two algorithms: the cross-validated PCA with the Eigenvector method recommended in ref. 30, and the cross-validated PCA method using expectation maximization for missing values proposed in ref. 31. Both methods use a cross-validation procedure in which the PCA is first computed from training data and then used to predict the samples of the test data, with mutually exclusive training and test data sets30,31. We used tenfold cross-validation, chronologically splitting the movement data of each task into ten parts and using, in each fold, nine parts for training and one part for testing. The first and last 10 s of the test data set were excluded for each task to avoid any influence of the training on the test data due to the auto-correlation of the movement. The mean squared error between prediction and actual data was computed as a function of the number of principal components. The number of principal components that yielded the smallest error was used as an estimate of the effective number of dof and was computed for each subject separately. For each subject we averaged the estimated number of principal components across both methods30,31 and used this as the estimate of the number of degrees of freedom (Fig. 3c, Supplementary Fig. 6C).
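The cumulative-variance curve (as in Fig. 3b) can be sketched as follows; the cross-validated dof estimators of refs. 30 and 31 are considerably more involved and are not reproduced here. The movement data below are synthetic, with a low-dimensional latent structure plus noise:

```python
import numpy as np

# Hypothetical movement data: T samples x 36 coordinates
# (6 fingers x 2 sensors x 3 axes), generated from 5 latent dimensions.
rng = np.random.default_rng(5)
T, D, k = 3000, 36, 5
data = rng.standard_normal((T, k)) @ rng.standard_normal((k, D))
data += 0.1 * rng.standard_normal((T, D))       # small measurement noise

# Variance explained by each principal component, via SVD of centered data
X = data - data.mean(axis=0)
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)
cumulative = np.cumsum(explained)               # curve plotted in Fig. 3b
```

With five latent dimensions, the first five components capture nearly all of the variance and the curve saturates, which is the qualitative signature the dof analysis quantifies.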

Information theoretic analysis of degrees of freedom: In addition to the PCA described in the previous section, we analyzed the degrees of freedom using information entropy. In contrast to PCA, the analysis of information entropy takes into account potential nonlinear relationships between finger movements. It requires, however, an estimate of the joint probability distribution of the finger movements. To compute this joint probability distribution, we discretized the finger movements by classifying the movement state of each finger into one of three states from the set MS = {rest, flexion, extension}, based on the movements of the distal and proximal interphalangeal joints. Spherical coordinates (distance, polar and azimuth angle) of the distal sensor relative to its proximal sensor were computed. PCA was performed on the polar and azimuth angles, and the movement along the first principal component was used to represent the movement of each finger. For each finger, the first derivative v of the first PC was calculated as the difference between two consecutive time bins and used to derive the current movement state based on a threshold μ = 0.3 SD(v): flexion for v < −μ, extension for v > μ, and rest otherwise. Different threshold values (μ = 0.4 SD(v) or μ = 0.1 SD(v)), as well as a different set of states (only two states: flexion for v < 0 and extension for v > 0), did not change our general conclusion regarding the comparison of the information entropy between five- and six-fingered subjects. We computed the information (Shannon) entropy H of the joint probability distribution P of the movement states of all fingers:

$$H = - \mathop {\sum}\limits_{s_1 \in MS} \mathop {\sum}\limits_{s_2 \in MS} \cdots \mathop {\sum}\limits_{s_n \in MS} P\left( {s_1,s_2, \ldots ,s_n} \right)\,{\mathrm{log}}_2\left[ {P\left( {s_1,s_2, \ldots ,s_n} \right)} \right],$$ (5)

where $s_i \in MS$ is the state of finger i. For n fingers the number of different movement states is $3^n$, and the maximum entropy is therefore $n\,{\mathrm{log}}_2(3)$, which is obtained when all possible movement states have equal probability.
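Equation (5) can be computed from the empirical joint distribution of the discretized states. The states below are invented, independent toy data (real finger states are correlated, which lowers the entropy):

```python
import numpy as np
from collections import Counter

# Hypothetical discretized movement states for n fingers over T time bins:
# 0 = rest, 1 = flexion, 2 = extension
rng = np.random.default_rng(6)
n, T = 6, 5000
states = rng.integers(0, 3, size=(T, n))

# Empirical joint probability of each observed state combination, then Eq. (5)
counts = Counter(map(tuple, states))
p = np.array(list(counts.values()), dtype=float) / T
H = float(-np.sum(p * np.log2(p)))        # bits; at most n * log2(3)
```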

Joint movement of thumb, index and supernumerary finger: For each time point we computed the movement speed of each finger as the magnitude of its three-dimensional velocity vector at the fingertip. We then classified the movement state of each finger at each time point as either “rest” or “moving” by comparing the speed to a threshold value, chosen as the 10th, 30th or 50th percentile of the speed distribution across all time points and all fingers. From these data we estimated the conditional probabilities that the thumb and index finger, the thumb alone, or the index finger alone were moving given that the supernumerary finger was moving. These conditional probabilities were estimated for the three speed thresholds (Fig. 3e, Supplementary Fig. 6E).
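A sketch of the conditional-probability estimate, using synthetic fingertip speeds and assuming (hypothetically) finger indices 0 (thumb), 1 (index) and 5 (supernumerary finger):

```python
import numpy as np

# Hypothetical fingertip speeds: T time points x 6 fingers
rng = np.random.default_rng(7)
speed = np.abs(rng.standard_normal((5000, 6)))

# Threshold at e.g. the 30th percentile of all speeds; "moving" = above it
thresh = np.percentile(speed, 30)
moving = speed > thresh
thumb, index, sf = moving[:, 0], moving[:, 1], moving[:, 5]

# Conditional probabilities given that the supernumerary finger is moving
p_both = np.mean(thumb[sf] & index[sf])   # P(thumb and index moving | SF moving)
p_thumb = np.mean(thumb[sf])              # P(thumb moving | SF moving)
p_index = np.mean(index[sf])              # P(index moving | SF moving)
```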

Video game for six fingers

Polydactyly subjects sat in front of a computer monitor (DELL U2713HM) approximately 0.6 m from the screen, on which six target boxes were displayed in the lower centre of a black screen. During the experiment, oscillating cursors passed through the target boxes (Fig. 3g and Supplementary Movie 6). Each of these oscillating cursors had a different frequency within a predefined range. The individual target boxes could be “touched” by pressing a corresponding key on a standard computer keyboard. Keys were chosen to match the hand geometry of individual subjects to ensure that pressing the keys was comfortable. The subjects were instructed to track the oscillating cursors and to press the corresponding button once the cursor was within its associated target box. A press while the cursor was inside the box counted as a correct press; a press at any other time counted as a false press. The numbers of correct and false presses were summed over all fingers and accumulated over the time of the trial.

The performance of the subjects was rated on their accuracy (correct presses/target count) and error rate (false presses/all presses). The aim was to increase accuracy while decreasing the error rate. At the beginning of each trial, the accuracy and error-rate thresholds were set according to the level (Supplementary Table 1); each level was defined by the movement speed of the oscillating cursors and by thresholds on the accuracy and the error rate. Once the subject crossed both thresholds, they had to maintain their performance above the accuracy threshold and below the error threshold for 2 min, at which point the trial ended and the level was increased. For each subsequent level, the accuracy threshold was set 10% higher and the error-rate threshold 10% lower. Once the subject crossed the 70% accuracy threshold and fell below the 30% error-rate threshold, the oscillation frequency range was increased by 0.05 Hz and both thresholds were reset to their original value of 50%. Supplementary Table 1 lists the parameter values associated with the different levels. If the subject was not able to reach the next level within 7 min, the trial was aborted and, after a short break, the subject repeated the same level.

During each trial, the following additional visual feedback was presented to the subject. If no key was pressed, the target boxes were displayed in white. If a key was pressed while no cursor was in the corresponding box (a false press), the target box turned red; if a key was pressed while a cursor was in the corresponding box (a correct press), the target box turned blue. Below the target boxes, two bars gave visual feedback about the subject’s overall performance: the upper bar reflected the accuracy and the lower bar the error rate. If the subject’s accuracy increased, the accuracy bar filled up, and vice versa. Likewise, a decreasing error rate filled the error bar, such that an error rate of 0 resulted in an entirely filled bar, i.e. the value 1 − error rate was presented. Each bar was red until the subject crossed the corresponding threshold, at which point it turned green. The threshold values were shown as gray markers on the bars. As soon as both bars turned green, a red countdown of 120 s appeared in the lower centre of the screen. If one bar turned red again before the time expired, the countdown was reset to 120 s and disappeared until both bars were green again. Furthermore, each cursor was individually colored red (below threshold) or green (above threshold) according to the performance of the corresponding finger, giving the subjects an indication of which finger required improvement.

The evolution of performance is shown in Fig. 3h. Subjects were tested on five consecutive days and again 10 days later, performing the task for 1 h per day. The subjects used two different finger combinations to press the keys: either all six fingers of the right hand, or the right hand with the supernumerary finger (SF) replaced by the index finger of the left hand (Fig. 3h).

Statistical analysis

For comparing two independent samples we used the nonparametric, two-sided Wilcoxon rank-sum test and computed 95% confidence intervals on the effect size (i.e. the difference of the population means) using the two-sample pooled t-interval. For comparing two paired samples we used the nonparametric, two-sided Wilcoxon signed-rank test and computed 95% confidence intervals on the effect size using the paired t-interval. All reported confidence intervals reflect the mean for five-fingered subjects subtracted from the mean for six-fingered subjects, i.e. positive values indicate larger values for six-fingered subjects.
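A sketch of the procedure for two independent samples, combining SciPy's rank-sum test with the pooled t-interval on the difference of means; the samples below are synthetic:

```python
import numpy as np
from scipy import stats

# Hypothetical independent samples: a measure for 13 five-fingered controls
# versus 2 six-fingered subjects (toy values)
rng = np.random.default_rng(8)
five = rng.normal(10.0, 2.0, size=13)
six = rng.normal(12.0, 2.0, size=2)

# Two-sided Wilcoxon rank-sum test
stat, p_value = stats.ranksums(six, five)

# 95% CI on the effect size (six minus five) via the two-sample pooled t-interval
diff = six.mean() - five.mean()
n1, n2 = len(six), len(five)
sp = np.sqrt(((n1 - 1) * six.var(ddof=1) + (n2 - 1) * five.var(ddof=1))
             / (n1 + n2 - 2))                     # pooled standard deviation
se = sp * np.sqrt(1 / n1 + 1 / n2)
tcrit = stats.t.ppf(0.975, n1 + n2 - 2)
ci = (diff - tcrit * se, diff + tcrit * se)
```

Reporting the nonparametric p-value together with a t-based interval on the mean difference mirrors the convention described above: the test makes no normality assumption, while the interval gives an interpretable effect size.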

To assess the correlation between two variables we computed the Pearson correlation coefficient. We did not assess the statistical significance of the Pearson correlation coefficient as the samples across which correlations were computed were not independent.

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.