Performance-Driven Facial Expression Real-Time Animation Generation

2015 
In view of the realism of facial expression animation and the efficiency of expression reconstruction, a novel method for real-time facial expression reconstruction is proposed. Our pipeline begins by capturing feature points on an actor's face with a Kinect device. A simple face model is constructed, and 38 feature points are manually chosen as controls. We then track the actor's face in real time and reconstruct the target model with two different deformation algorithms. Experimental results show that our method reconstructs facial expressions efficiently and at low cost; the facial expression of the target model is realistic and stays synchronized with the actor.
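
The abstract does not name the two deformation algorithms, so the following is only a minimal illustrative sketch, assuming an RBF (radial basis function) scattered-data interpolation step: the 38 tracked control points drive a smooth displacement field that is then applied to every vertex of the target face mesh. All names, coordinates, and the kernel choice are assumptions, not the paper's implementation.

```python
"""Sketch: feature-point-driven mesh deformation via RBF interpolation.
Hypothetical stand-in for the paper's (unnamed) deformation algorithms."""
import numpy as np


def rbf_kernel(r, eps=1e-8):
    # Thin-plate-like kernel phi(r) = r^2 * log(r); eps avoids log(0) at r = 0.
    return r * r * np.log(r + eps)


def solve_rbf_weights(control_rest, control_tracked):
    """Solve for RBF weights so the deformation field interpolates the
    displacements of the tracked control (feature) points."""
    diffs = control_rest[:, None, :] - control_rest[None, :, :]
    phi = rbf_kernel(np.linalg.norm(diffs, axis=-1))            # (n, n)
    displacements = control_tracked - control_rest               # (n, 3)
    # Small regularization keeps the linear system well conditioned.
    return np.linalg.solve(phi + 1e-6 * np.eye(len(phi)), displacements)


def deform_mesh(vertices, control_rest, weights):
    """Apply the interpolated displacement field to every mesh vertex."""
    diffs = vertices[:, None, :] - control_rest[None, :, :]
    phi = rbf_kernel(np.linalg.norm(diffs, axis=-1))             # (V, n)
    return vertices + phi @ weights


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 38 manually chosen control points on the neutral face model
    # (coordinates are placeholders for this sketch).
    control_rest = rng.normal(size=(38, 3))
    # Per-frame positions of the same points as tracked by the Kinect.
    control_tracked = control_rest + 0.02 * rng.normal(size=(38, 3))
    mesh_vertices = rng.normal(size=(1000, 3))                   # target face mesh

    w = solve_rbf_weights(control_rest, control_tracked)
    deformed = deform_mesh(mesh_vertices, control_rest, w)
    print(deformed.shape)  # (1000, 3)
```

In a per-frame loop, only the weight solve and the matrix-vector products above need to run, which is what makes this style of control-point deformation practical for real-time, low-cost expression transfer.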