American Sign Language is making its way to Kinect

Researchers at Georgia Tech are working to move Kinect out of the toy category and into that of "useful device."

The system originally worked by pairing Kinect hardware with a set of knitted gloves containing accelerometers, but the researchers have since managed to ditch the gloves and track body movements directly. By measuring the distances between various body parts and how those distances change over time, the group has achieved accuracy of no lower than 98.8% across tests of increasing difficulty.
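The article doesn't detail the group's implementation, but the idea of turning joint-to-joint distances into recognition features can be sketched roughly as follows. Everything here is illustrative: the joint names, coordinates, and feature layout are assumptions, not the researchers' actual pipeline, and a real system would feed per-frame vectors like these into a sequence classifier rather than print them.

```python
import math

# Hypothetical joint positions (x, y, z in metres) of the kind a Kinect
# skeleton tracker might report for one frame; values are made up.
frame = {
    "head":       (0.4, 1.6, 2.1),
    "left_hand":  (0.2, 1.1, 1.9),
    "right_hand": (0.6, 1.0, 2.0),
    "torso":      (0.4, 1.2, 2.2),
}

def distance(a, b):
    """Euclidean distance between two 3-D joint positions."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def feature_vector(joints):
    """Pairwise distances between all tracked joints, in a fixed order.

    Tracking how this vector changes from frame to frame is one way to
    characterise a sign's motion without instrumented gloves.
    """
    names = sorted(joints)
    return [distance(joints[a], joints[b])
            for i, a in enumerate(names)
            for b in names[i + 1:]]

features = feature_vector(frame)
print(len(features))  # 4 joints -> 6 pairwise distances
```

Because distances are invariant to where the signer stands, features like these sidestep some of the normalisation a raw-coordinate approach would need.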

The researchers are now working to incorporate "hand shape features" rather than relying on gestural movements alone, so that they can expand the vocabulary and create a genuinely useful ASL tool. Their development centres on the CopyCat software, which is designed to teach deaf children how to communicate in ASL.

Capturing the required hand shape features may demand a higher-resolution image than the sensor currently provides, but it is rumoured that Microsoft would only need to push out a firmware update to make this possible.

This is only the first of many possible medical, academic and accessibility uses of the gaming device. It's unknown whether Microsoft originally anticipated that this type of hacking would take place, or that it would grow at such a pace, but so far (perhaps with the exception of some more adult implementations) the development has been nothing but good.


11 Comments


This can be very useful in noisy and dirty places where a keyboard or voice recognition is not an option, e.g. a hotel kitchen. Another place may be an airport. Very good.

Didn't MS themselves say they were working on sign language, but that it wouldn't be supported at launch and would be added later?

If they release an ASL tutor for Kinect, I will absolutely be buying my first-ever educational "game."

This could also be used to assist people who use sign language in the real world, not just the classroom. Two-way telephone devices could convert gestures to words for everyday calls, and most especially for emergencies. It could also find its way into stores, restaurants, doctors' offices, hospitals... any place where someone using sign language might need assistance but not have a translator handy. Instead of writing everything down, (s)he could stand in front of the sensor and have either an audio or text translation output to the person helping them. In turn, the software could show video and/or text translated from the person helping them. I know, speech recognition still has a way to go, but before long I can see this being used at help desks, with telephones, and who knows where else.