Learning human facial performance: analysis and synthesis
Tuesday, October 30, 2018, 01:00pm
Human faces convey a wide range of semantic meaning through facial expressions, which reflect both actions and affective states. More importantly, in the coming age of artificial intelligence and virtual personas, facial expressions can serve as a two-way communicative interface between humans and machines. In this dissertation, we study two aspects of face and facial expression modeling: the analysis and reconstruction of human facial expressions from different input modalities via an interpretable 3D blendshape representation, and the reverse problem, in which we train a model to hallucinate coherent facial expressions directly from an arbitrary portrait and a set of facial action parameters.
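As a minimal sketch of the interpretable blendshape representation mentioned above (all names, shapes, and values here are illustrative, not the dissertation's actual model): a face mesh is expressed as a neutral shape plus a weighted sum of expression deltas, so the weights directly correspond to facial action intensities.

```python
import numpy as np

def blend(neutral, deltas, weights):
    """Blendshape model: face = neutral + sum_k weights[k] * deltas[k].

    neutral: (V, 3) neutral-pose vertex positions
    deltas:  (K, V, 3) per-blendshape vertex offsets from neutral
    weights: (K,) action intensities, typically in [0, 1]
    """
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy example: 2 vertices, 2 blendshapes (one moves each vertex)
neutral = np.zeros((2, 3))
deltas = np.array([[[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]],
                   [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]]])
weights = np.array([0.5, 1.0])
face = blend(neutral, deltas, weights)  # shape (2, 3)
```

Because each weight has a fixed semantic meaning (e.g. "jaw open"), the same coefficients can be estimated from video or speech and reused to drive animation.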
First, we present a real-time, robust 3D face tracking framework for RGBD videos, driven by an efficient and flexible 3D shape regressor and capable of tracking head pose and facial actions, with on-the-fly identity adaptation, even under extreme conditions.
Second, we introduce a series of recurrent neural networks to predict facial action intensities from speech for real-time animation.
Finally, we present a novel deep generative neural network that directly manipulates the pixels of a portrait to make the unseen subject express various emotions, controlled by continuous facial action unit coefficients, while preserving the subject's personal characteristics. Our model enables flexible, effortless facial expression editing.
Speaker: Hai Pham
Location: CoRE A (301)
Committee: Prof. Vladimir Pavlovic (Chair), Prof. Dimitris Metaxas, Prof. Ahmed Elgammal, Prof. Jiebo Luo (University of Rochester)
Event Type: PhD Defense
Dept. of Computer Science