
Microsoft Research uses Kinect to translate sign language

Hearing-impaired people around the world use sign language to communicate, but in many situations an interpreter is needed to translate, and human interpreters are in short supply. Today, Microsoft announced a new project from its Microsoft Research team in Asia that aims to create a better way to translate sign language.

In a blog post, Microsoft states that the team, working with a group at the Institute of Computing Technology at the Chinese Academy of Sciences, has created software built around Kinect for Windows hardware. The Kinect camera tracks the movements of a person using sign language, and the software then tries to "read" what the hand and finger movements are spelling. Microsoft adds:

The words are generated via hand tracking by the Kinect for Windows software and then normalized, and matching scores are computed to identify the most relevant candidates when a signed word is analyzed.
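The quoted pipeline (track, normalize, score, pick candidates) maps naturally onto template matching. Below is a minimal sketch of that idea, assuming normalized 3D joint trajectories and a simple resampled-distance score; the function names and scoring method are illustrative assumptions, not Microsoft's actual implementation.

```python
# Illustrative sketch only: normalize a tracked hand trajectory, then score it
# against per-word templates. The real Microsoft Research system is not public.
import numpy as np

def normalize(trajectory: np.ndarray) -> np.ndarray:
    """Center a (frames x 3) hand trajectory and scale it to unit size,
    so signs compare independently of where or how large they were signed."""
    centered = trajectory - trajectory.mean(axis=0)
    scale = np.linalg.norm(centered, axis=1).max()
    return centered / scale if scale > 0 else centered

def resample(trajectory: np.ndarray, n: int = 32) -> np.ndarray:
    """Resample to a fixed number of frames so trajectories of different
    durations can be compared point by point."""
    idx = np.linspace(0, len(trajectory) - 1, n)
    frames = np.arange(len(trajectory))
    return np.column_stack(
        [np.interp(idx, frames, trajectory[:, d]) for d in range(trajectory.shape[1])]
    )

def match_scores(signed: np.ndarray, templates: dict[str, np.ndarray]) -> list[tuple[str, float]]:
    """Score the observed sign against each word template; lower is better."""
    query = resample(normalize(signed))
    scores = []
    for word, template in templates.items():
        ref = resample(normalize(template))
        scores.append((word, float(np.linalg.norm(query - ref))))
    return sorted(scores, key=lambda pair: pair[1])

# Usage with fake Kinect-style joint trajectories standing in for two signs
rng = np.random.default_rng(0)
templates = {"hello": rng.normal(size=(40, 3)), "thanks": rng.normal(size=(55, 3))}
observed = templates["hello"] + rng.normal(scale=0.05, size=(40, 3))
print(match_scores(observed, templates))  # most relevant candidates first
```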

The software has two modes. The first, Translation Mode, is self-explanatory: it translates a person's sign language into either text or speech. In the second, Communications Mode, a hearing person types words that are then displayed on screen as sign language performed by a 3D avatar. The person on the other end responds in sign language, which is in turn translated into text or speech. A rough sketch of the two modes follows.
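This minimal sketch only illustrates the data flow of the two modes described above; the function names and string-based "translation" are hypothetical stand-ins, since the real system's interfaces are not public.

```python
# Hypothetical illustration of Translation Mode and Communications Mode.

def translate_sign(sign_tokens: list[str]) -> str:
    """Translation Mode: turn recognized sign tokens into a text sentence,
    which could then be fed to a speech synthesizer."""
    return " ".join(sign_tokens).capitalize() + "."

def communicate(typed_text: str, signed_reply_tokens: list[str]) -> tuple[list[str], str]:
    """Communications Mode: the hearing user's typed words become a sequence of
    signs for a 3D avatar to perform, and the signed reply is translated back
    into text (or speech)."""
    avatar_signs = typed_text.lower().split()          # words for the avatar to sign
    reply_text = translate_sign(signed_reply_tokens)   # signed reply -> text
    return avatar_signs, reply_text

signs, reply = communicate("How are you", ["fine", "thank", "you"])
print(signs)   # ['how', 'are', 'you'] -> would drive the avatar's animations
print(reply)   # "Fine thank you."
```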

Microsoft says that while the project only supports American Sign Language at the moment, it could be updated to include sign languages from other countries.

Source: Microsoft
