In September, I won a Leap Motion Controller at a hackathon and started thinking about what I could build with it. After playing around with the SDK a little bit to understand what kind of data I could get, I thought it would be cool to build a translator for sign language.

Deaf people have valuable things to offer society, and it’s a shame when communication is a barrier. After talking with a friend who was excited about the idea, we decided to work on a sign language translator at TAMUHack (a hackathon at Texas A&M). In this post, I’d like to talk about how we approached the challenge.

On the big day, we set up our workstation and got to hacking. Our first goal was to get something working that would transcribe signs of the ASL alphabet as a user signed them above the device. Matt started working on getting data on each joint into a database I had set up so that we could experiment with the data more easily. With data in hand, we set to work getting the actual recognition part working.
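Our actual collection code isn't shown here, but the shape of it was simple: flatten each frame's per-joint positions into one labeled row and store it. Here's a minimal sketch of that idea, assuming the joint positions have already been pulled out of the Leap Motion API as (x, y, z) tuples (the function names and the SQLite schema are just illustrative, not our real ones):

```python
import sqlite3

def flatten_joints(joints):
    """Flatten a list of (x, y, z) joint positions into one feature row."""
    return [coord for joint in joints for coord in joint]

def save_sample(db_path, label, joints):
    """Store one labeled hand snapshot so we can experiment with it later."""
    row = flatten_joints(joints)
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS samples (label TEXT, features TEXT)"
        )
        conn.execute(
            "INSERT INTO samples VALUES (?, ?)",
            (label, ",".join(map(str, row))),
        )

# e.g. a toy two-joint snapshot labeled as the letter "a"
save_sample("signs.db", "a", [(0.0, 1.5, -2.0), (3.1, 0.2, 0.9)])
```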

Machine Learning Basics

For those of you unfamiliar with machine learning, the field is often broken down into a few different kinds of problems. For our project, we were attempting to solve a classification problem. In a classification problem, the input is a set of data points and the output is a label. For example, determining whether a given email is spam or not is a classification problem.

In our case, we’re trying to go from the per-joint position data that the Leap Motion API gives us to a letter of the alphabet. Using the awesome scikit-learn library, I set about getting a classifier up and running while Matt gathered more data.
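To give a feel for the workflow without reproducing our actual code, here's a minimal scikit-learn sketch. The k-nearest-neighbors classifier and the random stand-in data are both assumptions for illustration; the real version trained on the flattened joint positions we collected:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Stand-in data: each row mimics a flattened per-joint position vector,
# and each label is the letter that hand shape was signed as.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 6))   # 40 samples, 6 fake joint coordinates
y_train = np.array(list("abcd") * 10)

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)

# Classify a new frame's feature vector into a letter
new_frame = rng.normal(size=(1, 6))
print(clf.predict(new_frame)[0])
```

The nice thing about scikit-learn is that swapping classifiers is a one-line change, which makes it easy to experiment under hackathon time pressure.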

By around 3:00am, we had a simple transcription device more or less working. Playing around with it, we knew it definitely wasn’t perfect, but it showed promise. At one point, while attempting to solve the issues we were running into, we turned to Markov chains. The idea was that because certain letters commonly precede others, we should be able to figure out that a person signing “q” will probably sign “u” next. In the end, that wasn’t a super helpful idea – we later tore it out. After we input some more training data, the recognition was good enough that we could work with it.
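The bigram idea itself is easy to sketch: count letter-to-letter transitions in a word list and use them to guess what probably comes next. This toy version (with a made-up four-word corpus) shows the shape of what we tried before tearing it out:

```python
from collections import Counter, defaultdict

words = ["queen", "quick", "quite", "queue"]  # stand-in corpus

# Count how often each letter follows each other letter
transitions = defaultdict(Counter)
for word in words:
    for a, b in zip(word, word[1:]):
        transitions[a][b] += 1

def most_likely_next(letter):
    """Return the most common successor of a letter, or None if unseen."""
    counts = transitions[letter]
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("q"))  # in this toy corpus, "u" always follows "q"
```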

We had a little bit of extra time, so we decided to try to make something fun to demonstrate the sign language recognition we were able to implement. We wanted to make a language learning tool, something like Rosetta Stone for sign language. Given the amount of time we had left, we set our sights on a simple game that would reward players for making as many signs as they could in thirty seconds.

By 8:00am, we had something working. Our game would put up a picture of a sign and ask the player to mimic it. Once she had, she would be awarded 100 points and the game would pick another sign for her to make. At the end of the thirty seconds, the game would have her enter her name on the leaderboard:

We were happy with our project about an hour before it was due, which left us time to focus on presentations.

Hackathon Pro-Tip: Presentations

If you’re going to a hackathon, I highly recommend putting some time into planning out your presentation. Having a plan for the best way to demonstrate your project to judges can put that last bit of polish on your project and really make a difference.

After the initial, science-fair style presentations, we (along with five other teams) got to present at closing ceremonies. Awesome! I had to rush a little bit (because the event was running a little late), but talking about our project in front of everyone was very fun.

We ended up getting second place, which I’m very proud of. Learning more about the Leap Motion and how to use it was also a blast. If you want to see any code, it’s all available on GitHub.