The next generation of interactive virtual world applications demands functional, purposeful, heterogeneous autonomous virtual humans that exhibit rich, believable interactions with their environment and with other agents, with the far-reaching goal of complete immersion for end users. However, several underlying assumptions in virtual human simulation are holding us back from entering this new age of interactive virtual world applications.
In this talk, I will identify key limitations in the representation, control, locomotion, and authoring of autonomous virtual humans. These include simplified particle representations of agents, which decouple control from locomotion; the lack of multi-modal perception; the need for multiple levels of control granularity; homogeneity in character animation; and monolithic agent architectures that cannot scale to complex multi-agent interactions and global narrative constraints. I will present three potential solutions that address these limitations, with the objective of providing the stimulus for an exciting new era of virtual human research. These are: (1) a biomechanically based footstep locomotion model that provides a tighter coupling between control and locomotion for richer, human-like navigation behaviors, (2) a sound propagation and perception framework for autonomous agents in dynamic virtual environments, and (3) an event-centric approach to authoring complex multi-actor behaviors using parameterized behavior trees.
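To make the third item concrete, the sketch below shows the general idea behind a parameterized behavior tree: a reusable behavior template (an "event") whose leaf actions are bound to concrete actors and objects at instantiation time. This is a minimal illustration of the technique in general, not the talk's actual system; all class names, actions, and the "handover" event are hypothetical.

```python
# Minimal sketch of a parameterized behavior tree (illustrative only).
SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Composite node: runs children in order, fails on the first failure."""
    def __init__(self, *children):
        self.children = children

    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == FAILURE:
                return FAILURE
        return SUCCESS

class Action:
    """Leaf node whose behavior is parameterized when the tree is built."""
    def __init__(self, fn, **params):
        self.fn, self.params = fn, params

    def tick(self, blackboard):
        return self.fn(blackboard, **self.params)

# Hypothetical parameterized actions shared by many events.
def walk_to(blackboard, actor, target):
    blackboard.setdefault("log", []).append(f"{actor} walks to {target}")
    return SUCCESS

def give(blackboard, giver, receiver, item):
    blackboard.setdefault("log", []).append(f"{giver} gives {item} to {receiver}")
    return SUCCESS

def make_handover_event(a, b, item):
    """An 'event': a behavior template bound to concrete actors and props."""
    return Sequence(
        Action(walk_to, actor=a, target=b),
        Action(give, giver=a, receiver=b, item=item),
    )

if __name__ == "__main__":
    bb = {}
    event = make_handover_event("Alice", "Bob", "book")
    print(event.tick(bb))  # -> success
    print(bb["log"])       # -> ['Alice walks to Bob', 'Alice gives book to Bob']
```

The same `make_handover_event` template can be instantiated for any pair of actors, which is what lets an event-centric authoring layer coordinate many actors without writing per-actor monolithic controllers.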