Kinect's Sign-Language Recognition & Translation Project - Microsoft Research

The software tracks hand movements and then uses a process of 3-D motion-trajectory alignment to identify the word being signed. It has two modes: Translation Mode and Communication Mode. Translation Mode, as the name suggests, translates sign language into text or speech, while Communication Mode uses a 3D avatar to generate sign-language gestures from typed words. This mode enables two-way communication between a person who knows sign language and someone who doesn't. For the time being, the software can only process American Sign Language, but the researchers are confident it has the potential to work with other languages as well. The team has put up the following video on YouTube demonstrating American and Chinese sign-language translation using Kinect and Bing Translator. Have a look.
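To give a rough idea of what "3-D motion-trajectory alignment" might look like, here is a minimal sketch that matches a captured hand trajectory against stored word templates using dynamic time warping. The template data, the function names, and the choice of DTW are illustrative assumptions for this sketch, not the project's actual algorithm.

```python
import math
from typing import Dict, List, Tuple

Point3D = Tuple[float, float, float]


def euclidean(a: Point3D, b: Point3D) -> float:
    """Distance between two 3-D hand positions."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def dtw_distance(traj_a: List[Point3D], traj_b: List[Point3D]) -> float:
    """Align two trajectories of possibly different lengths and return the alignment cost."""
    n, m = len(traj_a), len(traj_b)
    # cost[i][j] = best alignment cost of traj_a[:i] against traj_b[:j]
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = euclidean(traj_a[i - 1], traj_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a frame in A
                                 cost[i][j - 1],      # skip a frame in B
                                 cost[i - 1][j - 1])  # match frames
    return cost[n][m]


def recognize(trajectory: List[Point3D], templates: Dict[str, List[Point3D]]) -> str:
    """Return the word whose template trajectory aligns best with the input."""
    return min(templates, key=lambda word: dtw_distance(trajectory, templates[word]))


if __name__ == "__main__":
    # Hypothetical templates: each word maps to a recorded 3-D hand trajectory.
    templates = {
        "hello": [(0.0, 0.0, 1.0), (0.1, 0.2, 1.0), (0.2, 0.4, 1.0)],
        "thanks": [(0.0, 0.0, 1.0), (0.0, -0.2, 0.9), (0.0, -0.4, 0.8)],
    }
    captured = [(0.0, 0.0, 1.0), (0.05, 0.1, 1.0), (0.15, 0.3, 1.0), (0.2, 0.4, 1.0)]
    print(recognize(captured, templates))  # prints "hello"
```

In this sketch the hand positions would come from the Kinect skeleton stream; DTW simply tolerates signs performed at different speeds by allowing frames to be skipped or matched one-to-many during alignment.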
Source: #-Link-Snipped-#