Quick Summary:

We designed this project for people with speech impairments. We created a wearable glove that converts American Sign Language gestures into speech output. The glove uses a machine learning algorithm that identifies gestures regardless of hand size. It is also fitted with a Bluetooth module, so two speech-impaired people can talk to each other remotely, within a range of 50 meters, using sign language.

This project was part of our final-semester project at the School of Engineering and Applied Science, Ahmedabad University. We were a team of two students, Rachana Solanki and I, both interested in Interaction Design and Human-Computer Interaction and currently pursuing our Master's studies in those fields.

How it started

Rachana and I both had a keen interest in how humans interact with the real world, the problems they face, and how those problems can be solved. So, when the project announcements were made, we decided to go for a project that would let us work on gestures and their conversion. We searched the internet, did some secondary research, and found several projects that involved gesture interaction. From all those inspiring ideas, we selected this one. Of course, many projects had been developed along these lines before we started, so we went through the available research papers, identified the problems such people face, and tried our best to solve them to the extent we could.

The challenge

After going through a few research papers published by IEEE, we identified three major issues:

1. Designing a glove that can be worn and used with ease.

2. Existing gloves generated different outputs for different hand sizes, because the defined gestures varied with hand size.

3. There was no alternative, apart from video calls, through which two speech-impaired people could communicate remotely (and for video calls they always need to carry a laptop with them!).

This was the basic motivation behind starting the project.

As for roles, I was mainly responsible for defining the gestures and writing the machine learning algorithm that would give us accurate output, while Rachana did a great job building the glove and equipping it with wireless communication.

The Process

1. Approach and the interview

To start the project, we took a User-Centred Design approach (with some concepts from the Genius Design model), based on two UCD courses we had taken during our semester studies. First, we identified the problems (mentioned in the Challenge section) from our secondary research. While discussing them, we decided to talk to people who could guide us in the right direction. So we went to a school for specially abled people in Bhavnagar (my hometown) and interviewed one of its faculty members, Mrs Kavita Shah. Talking to her gave us the insights below:

It is difficult for these students to communicate with the general public on the roads, in stores, and in other public places, as most people don't know sign language. The second point was about emotions. Mrs Shah told us that when the students see other people talking to each other, they naturally feel curiosity as well as jealousy: curious about what is being said, and jealous that they cannot talk the same way. This affects their mental health a lot. Also, most of the students come from rural areas where their parents are not literate and have no knowledge of sign language, so it becomes difficult for these students to communicate even with the people they live with.

All these points motivated us to pursue this project keenly, with a humble intention to solve their problems as much as we could.

2. Design and implementation of the glove:

We decided to use flex sensors to identify the gestures, as they act as bend sensors and report how much each finger is bent. Our system is completely based on the Arduino platform; we used an Arduino Nano for the glove.

Block diagram for the glove with transmitter to send signs to the other system.

As shown in the block diagram, five flex sensors, one per finger, are connected to the micro-controller (an Arduino Nano in this case), and a transmitter (a Bluetooth module) is connected to it to transmit the signs to the other system.
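To give an idea of how the sensor readings can be made robust to hand size, here is a minimal sketch (not our actual firmware; the calibration values and function names are illustrative assumptions) that maps raw flex-sensor ADC readings onto a normalized 0-1 bend scale using a per-user calibration:

```python
# Illustrative sketch: normalizing the five flex-sensor readings so that
# the same gesture produces a similar feature vector across hand sizes.
# The calibration numbers below are made up for demonstration.

def normalize_reading(raw, flat, full_bend):
    """Map one raw ADC reading onto a 0.0-1.0 bend scale using a
    per-user calibration (reading with the finger flat vs. fully bent)."""
    span = full_bend - flat
    if span == 0:
        return 0.0
    value = (raw - flat) / span
    return max(0.0, min(1.0, value))  # clamp to [0, 1]

def gesture_features(raw_readings, calibration):
    """Turn five raw readings into a normalized five-element feature vector."""
    return [normalize_reading(r, flat, bend)
            for r, (flat, bend) in zip(raw_readings, calibration)]

# Example calibration: (flat_value, fully_bent_value) per finger.
calibration = [(200, 600), (180, 620), (210, 640), (190, 610), (220, 580)]
print(gesture_features([400, 620, 210, 400, 400], calibration))
```

With such per-finger normalization, a classifier sees relative bend rather than raw voltages, which is one simple way to reduce sensitivity to hand size.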

System attached to the glove

We implemented the k-NN algorithm and used supervised learning to train our model. As our primary focus was recognizing simple English alphabet letters, we took gesture inputs from 21 people and recorded about 33,000 alphabet gesture instances; we used 66.66% of the instances as our training set and the rest as our test set. To our pleasant surprise, our system worked with 96.9372% accuracy, the highest among all the research papers we had read. We also tried other algorithms such as ZeroR and a neural network, but k-NN gave us the best result, with a model building time of just 0.03 seconds.
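The classification step itself is simple. Here is a minimal pure-Python k-NN sketch on toy data (our real model was trained in Weka on the full 33,000-instance dataset; the toy vectors and labels below are invented for illustration):

```python
# Minimal k-NN sketch: classify a five-finger bend vector by majority vote
# among its k nearest training examples (Euclidean distance).
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs.
    Returns the majority label among the k nearest neighbours."""
    by_distance = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Toy data: normalized bend values for five fingers, labelled with letters.
train = [
    ([0.1, 0.1, 0.1, 0.1, 0.1], "B"),  # open hand
    ([0.2, 0.1, 0.2, 0.1, 0.2], "B"),
    ([0.9, 0.9, 0.9, 0.9, 0.9], "A"),  # closed fist
    ([0.8, 0.9, 0.8, 0.9, 0.8], "A"),
    ([0.9, 0.8, 0.9, 0.8, 0.9], "A"),
]
print(knn_predict(train, [0.85, 0.9, 0.85, 0.9, 0.9], k=3))
```

k-NN has essentially no training phase (it just stores the data), which is consistent with the very short model building time we observed in Weka.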

Weka Analysis of k-NN Algorithm

The third major implementation was the communication protocol. We used an HC-05 Bluetooth module to establish communication between the two systems. As the system is still under development, we have achieved one-sided communication so far.
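Since the HC-05 behaves as a transparent serial link once paired, the transmitter only needs to send bytes the receiver can parse reliably. Here is a hedged sketch of one possible framing scheme (a start byte, the recognized letter, and a checksum); this packet format is our illustration, not the module's own protocol or our exact implementation:

```python
# Illustrative packet framing for a serial link such as the HC-05.
# Frame layout: [START, payload byte, checksum], checksum = (START + payload) & 0xFF.

START = 0x7E  # arbitrary start-of-frame marker

def encode_packet(letter):
    """Wrap one recognized letter in a 3-byte frame."""
    payload = ord(letter)
    checksum = (START + payload) & 0xFF
    return bytes([START, payload, checksum])

def decode_packet(packet):
    """Return the letter if the frame is valid, else None."""
    if len(packet) != 3 or packet[0] != START:
        return None
    if packet[2] != (packet[0] + packet[1]) & 0xFF:
        return None
    return chr(packet[1])

frame = encode_packet("A")
print(decode_packet(frame))  # round-trips back to "A"
```

A start byte plus checksum lets the receiver resynchronize after dropped or corrupted bytes, which matters over a wireless serial link.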

Block diagram for the receiver system

Transmitter and Receiver Systems

And that’s how we designed and implemented our system.

Problems faced

The main problem we faced while working on this project was the design of the glove itself. We could not easily attach the flex sensors to the glove, as they were quite loose and slipped off easily.

The second problem was establishing the communication link. HC-05 modules are hard to work with, especially with Arduino. I still remember that we managed to establish communication and get it working just two weeks before the final submission deadline.

Failures and Successes:

Our biggest failure was that our research paper was not selected for the IHCI 2017 conference. This happened because we could not develop the complete system, with end-to-end communication, by the deadline.

The outcome I love most about the project is its accuracy and its ability to transmit and receive signs between the transmitter and the receiver. Another reason to feel fortunate was the joy and happiness we experienced during and after the project: we could do our bit for people who actually need it, instead of doing yet another repetitive project along the lines of Facebook, BookMyShow, or Uber.

What’s next

In the future, as an extension of our past work, we wish to develop an end-to-end system with two-way communication, and later test it with actual users to gather their feedback and iterate on our product.

You can see the demo videos on YouTube from the links given below:

General Scenario Demo: https://youtu.be/diUHu5csByY

Bluetooth Demo: https://youtu.be/hkx75P8xdxw

If you want to discuss this project further, you can connect with me on LinkedIn! We would love to hear your advice or ideas about future possibilities for this project.

If the idea and outcome touched your heart, we would love your loud applause in appreciation.

Thank you!