Mapping Gestures to Speech Using the Kinect

2014 
Statistics state that approximately one in 1,000 people is born mute. In a worldwide population of 7.046 billion, that amounts to a staggering 7 million people. Of all forms of disability, the mute have it especially hard: the inability of others to comprehend what they wish to express serves as a constant reminder of their misfortune, and it often bars them from finding jobs that match their skills. Commonly used speech synthesizers rely on EEG signals and are not always portable, and cost is a further factor working against them. Hence, most of these people resign themselves to using sign language to communicate. Taking these factors into consideration, the proposed system maps the gestures made by these people to words; a suitable text-to-speech converter then translates the recognized words into speech. To capture the gestures, the Microsoft Kinect, an economical yet reliable depth-sensing camera, is used. The proposed system could be deployed in auditoriums, classrooms, and other addressing environments.
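The pipeline the abstract describes (skeletal frames from the Kinect → gesture-to-word mapping → text-to-speech) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the joint names, template coordinates, and distance threshold are all invented for the example, and the Kinect frame is stubbed out as a plain dictionary. In a real deployment, the recognized word would be handed to a text-to-speech engine rather than printed.

```python
import math

# Hypothetical gesture templates: each maps a word to the (x, y)
# positions of a few skeletal joints, as a Kinect SDK might report them.
GESTURE_TEMPLATES = {
    "hello":  {"right_hand": (0.4, 1.6),  "left_hand": (-0.2, 0.9)},
    "thanks": {"right_hand": (0.1, 1.2),  "left_hand": (-0.1, 1.2)},
}

def distance(frame, template):
    """Sum of Euclidean distances over the joints both poses share."""
    return sum(
        math.dist(frame[joint], template[joint])
        for joint in template
        if joint in frame
    )

def classify(frame, threshold=0.5):
    """Map a skeletal frame to the closest template word, or None
    if no template is within the (assumed) distance threshold."""
    word, best = None, threshold
    for label, template in GESTURE_TEMPLATES.items():
        d = distance(frame, template)
        if d < best:
            word, best = label, d
    return word

# A frame close to the "hello" template; a TTS engine would speak the
# result aloud, but here we simply print it.
frame = {"right_hand": (0.42, 1.58), "left_hand": (-0.18, 0.92)}
print(classify(frame))  # -> hello
```

A nearest-template matcher like this is only one plausible design; the paper's actual gesture-recognition method may differ, and a production system would track joints over time rather than classify single frames.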