Signs of the times: what am I saying? Juice/REX/Shutterstock

Machine translation systems that convert sign language into text and back again are helping people who are deaf or have difficulty hearing to communicate with those who cannot sign.

KinTrans, a start-up based in Dallas, Texas, is trialling its technology in a bank and government offices in the United Arab Emirates, and plans to install it in more places over the next couple of months. SignAll, a company based in Budapest, Hungary, will begin its own trials next year.

KinTrans uses a 3D camera to track the movement of a person’s hands and body as they sign words. A sign language user can approach a bank teller and sign to the KinTrans camera that they’d like assistance, for example. The device then translates these signs into written English or Arabic for the teller to read.


The translation works both ways. Someone who can't use sign language can type a reply and have it converted into signs recreated by an animated avatar on the KinTrans screen. This is useful because it is often more natural for people who use sign language to interact this way than through text, says Sudeep Sarkar at the University of South Florida.

First language

Around 70 million people sign as a first language and there are more than 100 different dialects used around the world. Word order in sentences can differ between these languages as well as from written text. KinTrans’s machine learning algorithm translates each sign as it is made and then a separate algorithm turns those signs into a sentence that makes grammatical sense.
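The two-stage approach described above can be illustrated with a minimal sketch. This is not KinTrans's actual software: the gesture labels, the gloss lookup and the sentence templates below are all invented placeholders, standing in for a trained sign classifier and a grammar model.

```python
def recognise_signs(gestures):
    """Stage 1 (stub): label each tracked gesture with a sign gloss,
    one at a time, as the real system does for each sign as it is made."""
    gloss_lookup = {"gesture_a": "STORE", "gesture_b": "I", "gesture_c": "GO"}
    return [gloss_lookup[g] for g in gestures]

def glosses_to_sentence(glosses):
    """Stage 2 (stub): turn a gloss sequence into a grammatical English
    sentence. Sign languages often use a different word order, e.g. the
    topicalised 'STORE I GO' corresponds to 'I go to the store.'"""
    templates = {
        ("STORE", "I", "GO"): "I go to the store.",
        ("HELP", "I", "NEED"): "I need help.",
    }
    # Fall back to a plain joined sentence if no template matches.
    return templates.get(tuple(glosses), " ".join(glosses).capitalize() + ".")

print(glosses_to_sentence(recognise_signs(["gesture_a", "gesture_b", "gesture_c"])))
# prints "I go to the store."
```

The point of the separation is that the per-sign recogniser never needs to know grammar: it emits glosses in signing order, and a second model is free to reorder and inflect them for the target written language.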

KinTrans founder Mohamed Elwazer says his system can already recognise thousands of signs in both American and Arabic sign language with 98 per cent accuracy. Future versions will include support for Portuguese Sign Language and Indo-Pakistani Sign Language, he says.

Lack of data has made training the system a challenge, however. Machine learning software typically needs to see many examples to become proficient at a particular task, but there are no huge video databases of people signing different words. Elwazer says his system only needs to be trained on 10 examples of each sign before it can recognise that sign in real-world tests.

Sarkar isn’t convinced that a system can be trained sufficiently well on such a small dataset. He suspects that once KinTrans is put to the test in wider public trials, the company will see where the system needs improvement.

Different tack

SignAll is taking a different tack. It has teamed up with Gallaudet University in Washington DC – a university set up specifically for students who are deaf or hard of hearing – to create the largest database of sign language sentences.

SignAll’s system uses four cameras, one of which records in 3D, to capture data from a signer’s face as well as their hands and body. “In sign language, half of the information is in the face,” says CEO Zsolt Robotka. Raising your eyebrows during a sentence turns it into a question rather than a statement, for example.

So far, Robotka’s system is only able to translate around 300 words from American Sign Language into English. But by the time trials start next year, he hopes it will have learned around 1000 signs.

“It’s great to see innovative technology being developed that could really transform the lives of sign language users,” says Jesal Vishnuram at Action on Hearing Loss, the UK’s largest charity representing people who are deaf. “This provides a great baseline for automated sign language translation and we encourage developers to continue working on this concept in order to offer a truly inclusive experience for sign language users.”

