Rutgers University Talking Head

Doug DeCarlo
Matthew Stone

Animation of a talking face affords interactive agents the opportunity to reproduce the functions and behaviors of natural face-to-face conversation. It also offers a methodological tool for developing and testing psycholinguistic theories of those functions and behaviors. The Rutgers University Talking Head, or RUTH, is a new facial animation system designed to support both of these goals. Here is a sample animation created with RUTH:



[369K Microsoft AVI MPEG 4]
[1744K MPEG 1]
[1.9M Quicktime MJPEG A]

Downloads

RUTH 1.0 is now available for the Solaris, Windows and Linux platforms.

Uses

RUTH takes tagged text and realizes it as animation. It works in conjunction with the Festival speech synthesis system (in our examples, using the OGI CSTR voices and synthesis package) to derive a sound file together with animation instructions for corresponding lip, face and head movement. These specifications then guide a synchronized realization of the animated speech, which updates at rates of at least 30 frames per second on reasonable machines with 3D graphics hardware.
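To make this pipeline concrete, here is a minimal sketch of such a driver in Python. Only Festival's text2wave utility is a real command; the tag syntax, the file formats, and the ruth invocation at the end are placeholders invented for this sketch, not RUTH's actual interface.

    import re
    import subprocess

    def realize(tagged_text, wav_out="utterance.wav", anim_out="utterance.anim"):
        """Hypothetical driver: derive a sound file plus animation
        instructions from tagged text. Tag syntax and file formats here
        are assumptions for illustration, not RUTH's real interface."""
        # Strip the (assumed) tags, e.g. "<brow_raise>new</brow_raise>",
        # to recover plain text for the speech synthesizer.
        plain = re.sub(r"</?\w+>", "", tagged_text)

        # Festival's text2wave script synthesizes plain text to a WAV file.
        with open("utterance.txt", "w") as f:
            f.write(plain)
        subprocess.run(["text2wave", "utterance.txt", "-o", wav_out], check=True)

        # Pair each tag with the words it wrapped; a real system would align
        # these with the phoneme timings that the synthesizer reports.
        with open(anim_out, "w") as f:
            for m in re.finditer(r"<(\w+)>([^<]*)</\1>", tagged_text):
                f.write(f"{m.group(1)}\t{m.group(2)}\n")

        # A renderer then plays the sound and the animation instructions in
        # lockstep ("ruth" is a placeholder executable name).
        subprocess.run(["ruth", wav_out, anim_out], check=True)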

In practical applications, such tagged text can be obtained by using heuristics to elaborate plain text input, as in Justine Cassell and colleagues' BEAT system; a toy version of this route is sketched below. A more challenging and more interesting approach is to derive the tagged text using natural language generation techniques.
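To illustrate the heuristic route, here is a toy tagger in the same spirit. Its rules and tag names are invented for the example and are far simpler than what BEAT actually does.

    def tag_text(plain):
        """Toy heuristic tagger: wrap words in (assumed) animation tags
        based on shallow surface cues. BEAT's real rules draw on lexical
        and discourse analysis; these are illustrative only."""
        words = plain.split()
        tagged = []
        for i, w in enumerate(words):
            if w.isupper() and len(w) > 1:
                # Treat fully capitalized words as emphatic: raise the brows.
                tagged.append(f"<brow_raise>{w.lower()}</brow_raise>")
            elif w.rstrip(".,;!?") != w and i < len(words) - 1:
                # Nod at clause boundaries marked by punctuation.
                tagged.append(f"<nod>{w}</nod>")
            else:
                tagged.append(w)
        return " ".join(tagged)

    print(tag_text("This is a NEW talking head, and it runs in real time."))
    # -> This is a <brow_raise>new</brow_raise> talking <nod>head,</nod>
    #    and it runs in real time.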

A variety of illustrations of RUTH in action are available here.

Research

We're actively using RUTH to explore the distribution and function of facial movements in human conversation and in virtual conversational agents.

To get a feel for the kinds of research results that RUTH makes possible, look here.

Publications

Credits

Support

Data preparation and analysis

Implementations and prototypes

Initial artistic design



Last updated: January 6, 2003