Lip-Reading via Deep Neural Network Using Appearance-Based Visual Features

2017 
Lip-reading is the visual interpretation of lip movements in order to understand speech when the normal audio signal is unavailable. Image-processing techniques for lip-reading have been widely applied in a variety of settings; for example, computer-based video systems have been developed to provide lip-reading instruction to hearing-impaired adults and teenagers. Automating the process raises challenges such as the coarticulation phenomenon, the homophone effect, insufficient training data per class, the choice of features, and speaker dependency, and finding a method that overcomes these challenges is desirable. This paper describes a lip-reading model, focusing on its feature-extraction and recognition stages. For feature extraction, a particular arrangement of processing blocks is chosen to obtain effective appearance-based features, while a suitably structured Deep Belief Network (DBN) performs recognition. The challenging CUAVE dataset is used in this study, and visual phone recognition (VPR) accuracies are reported at the phone level. The proposed lip-reading recognizer is distinctive in that a single model is used for all speakers. The suggested method outperforms a conventional Hidden Markov Model (HMM)-based recognizer, achieving a best VPR accuracy of 45.63% with the best DBN configuration.
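To make the pipeline concrete, the following is a minimal sketch of an appearance-based visual-phone classifier in the spirit of the abstract, not the authors' exact method: flattened grayscale mouth-ROI pixels serve as appearance-based features, and the DBN is approximated by greedy stacked Bernoulli RBM pretraining with a logistic-regression output layer (scikit-learn has no full DBN with supervised fine-tuning). All shapes, layer sizes, class counts, and hyperparameters are illustrative assumptions, and the data is a random placeholder standing in for CUAVE frames.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)

# Placeholder data: 500 mouth-ROI frames of 32x32 grayscale pixels in [0, 1]
# (appearance-based features), each labelled with one of 10 hypothetical
# visual-phone classes. Real inputs would be cropped, resized lip regions.
X = rng.random((500, 32 * 32))
y = rng.integers(0, 10, size=500)

# Stacked RBMs give unsupervised layer-wise pretraining, as in a DBN;
# RBM.transform outputs hidden activation probabilities in [0, 1], so the
# layers compose cleanly. A logistic-regression layer maps the final
# hidden representation to visual-phone classes.
model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.05,
                          n_iter=10, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=128, learning_rate=0.05,
                          n_iter=10, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print("train accuracy:", model.score(X, y))
```

Per-frame classification like this would still need temporal smoothing or an HMM-style decoder to produce phone-level sequences; the sketch only illustrates the feature-plus-DBN recognition idea.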