Principal Component Analysis and Pre-Processing

Principal Component Analysis (PCA) explains the variance-covariance structure of a set of variables through a small number of linear combinations, and is widely used as a dimensionality-reduction technique. We use it here to reduce the number of dimensions present in our data.

Before PCA, the dimensions of our data were 20000 × 9600.

We reduce the number of dimensions to 20, which gives:

After PCA, the dimensions of the data become 20000 × 20.
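As a sketch of this step, scikit-learn's `PCA` can project the flattened image matrix onto its top principal components. The array below is a small synthetic stand-in for the real 20000 × 9600 data, used only to keep the example fast:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for the flattened image matrix
# (the real data is 20000 x 9600; a smaller array keeps the sketch fast).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 96))

# Keep the 20 directions of highest variance, as in the write-up.
pca = PCA(n_components=20)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)  # (200, 20)
```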

We now normalize the data so that different features take on a similar range of values. For this purpose we use StandardScaler().

We split the data into an 80–20 train-test ratio using the scikit-learn package.
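The scaling and splitting steps can be sketched as follows, again on synthetic data. Note that fitting the scaler on the training split only (then applying it to the test split) avoids information leakage:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the PCA-reduced feature matrix and gesture labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = rng.integers(0, 10, size=100)

# 80-20 train-test split, as in the write-up.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Fit the scaler on the training data only, then apply it to both splits.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

print(X_train.shape, X_test.shape)  # (80, 20) (20, 20)
```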

Now that we have the training and testing data which has been normalized we can start training different models to classify the hand gestures.

Stochastic Gradient Descent

Here we pass the ‘log’ loss function as a parameter, which makes the classifier fit a logistic-regression model via stochastic gradient descent.

Decision Tree Classifier

The maximum depth of the decision tree is set as 10 in the parameter.
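A sketch of this configuration on synthetic stand-in data:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic data standing in for the gesture features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Cap the tree at depth 10 to limit overfitting, as in the write-up.
tree = DecisionTreeClassifier(max_depth=10, random_state=0)
tree.fit(X, y)
print(tree.get_depth() <= 10)  # True
```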

Random Forest

The number of trees has been set as 100 and the depth of each tree has been set to 15 in the parameters.
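With the same synthetic stand-in data, the random forest configuration looks like this:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data standing in for the gesture features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# 100 trees, each capped at depth 15, as in the write-up.
forest = RandomForestClassifier(n_estimators=100, max_depth=15,
                                random_state=0)
forest.fit(X, y)
print(len(forest.estimators_))  # 100
```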

Logistic Regression
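A minimal sketch (the `max_iter` value here is an assumption, raised only so the solver converges on this synthetic data):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data standing in for the gesture features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Plain logistic regression with default regularization.
logreg = LogisticRegression(max_iter=1000)
logreg.fit(X, y)
print(logreg.score(X, y))
```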

Naive Bayes

We are using the Gaussian Naive Bayes algorithm; other variants include Multinomial Naive Bayes.
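A sketch of the Gaussian variant on synthetic stand-in data:

```python
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

# Synthetic data standing in for the gesture features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# GaussianNB models each feature as class-conditionally Gaussian.
nb = GaussianNB()
nb.fit(X, y)
print(nb.score(X, y))
```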

Gradient Descent Classifier

RESULTS

Stochastic Gradient Descent : 70.3%

Decision Tree : 95%

Random Forest : 99.925%

Logistic Regression : 72.2%

Gaussian Naive Bayes : 65.6%

Gradient Descent : 23.6%

CONCLUSION

Based on the results presented above, the Random Forest classifier performs best, classifying the gestures with an accuracy of 99.925%.

The accuracy of the model depends on many aspects of our dataset and on the features present in the training data. The dataset was created without any noise, i.e., the gestures presented are reasonably distinct, and the images are clear and have no background. There were also enough samples to make our model robust.

The drawback is that for different problems, we would probably need more data to push the parameters of our model in a better direction. Because of the chaos and noise of real-world scenarios, we would need noisier data that better resembles the real world.

For the full notebook, check out my GitHub repository :D

CITATION

T. Mantecón, C.R. del Blanco, F. Jaureguizar, N. García, “Hand Gesture Recognition using Infrared Imagery Provided by Leap Motion Controller”, Int. Conf. on Advanced Concepts for Intelligent Vision Systems, ACIVS 2016, Lecce, Italy, pp. 47–57, 24–27 Oct. 2016. (doi: 10.1007/978-3-319-48680-2_5)