
Research Projects

Active Projects

We develop computational tools that help end users create and experience compelling, interactive digital stories. To this end, we have revisited standard representations of interactive narratives and proposed new formalisms that scale independently of story complexity and user interaction. Our computational tools mitigate the complexity of authoring digital stories without sacrificing authorial precision.

Our research aims to develop integrated solutions for full-body character animation, planning-based control, behavior authoring, and statistical analysis of autonomous virtual human simulations. The far-reaching goal is to provide functional, purposeful embodied virtual humans that act and interact in meaningful ways to simulate complex, dynamic, narrative-driven, interactive virtual worlds.

We develop algorithms for multi-agent motion planning in real-time dynamic environments. Our research investigates novel discrete representations of the environment and anytime dynamic planners that harness multiple domains of representation. We port these algorithms to the GPU to exploit the benefits of massive parallelization while preserving the original properties of the approach.
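The anytime planning idea can be illustrated with a minimal sketch: run weighted A* with a shrinking heuristic inflation, so that a suboptimal path is available quickly and is refined as time allows. This is a generic illustration of the anytime principle, not the group's actual planner; the grid world, `weighted_astar`, and `anytime_plan` are hypothetical names for this example.

```python
import heapq

def weighted_astar(grid, start, goal, epsilon):
    """Weighted A* on a 4-connected grid (0 = free, 1 = blocked).
    The heuristic is inflated by epsilon, trading optimality for speed."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance
    open_heap = [(epsilon * h(start), 0, start)]
    g = {start: 0}
    parent = {start: None}
    while open_heap:
        _, cost, node = heapq.heappop(open_heap)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        if cost > g.get(node, float("inf")):
            continue  # stale heap entry
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < rows and 0 <= ny < cols and grid[nx][ny] == 0:
                ng = cost + 1
                if ng < g.get((nx, ny), float("inf")):
                    g[(nx, ny)] = ng
                    parent[(nx, ny)] = node
                    heapq.heappush(open_heap, (ng + epsilon * h((nx, ny)), ng, (nx, ny)))
    return None

def anytime_plan(grid, start, goal, epsilons=(2.5, 1.5, 1.0)):
    """Anytime scheme: replan with decreasing inflation, keeping the
    shortest path found so far; epsilon = 1.0 recovers optimal A*."""
    best = None
    for eps in epsilons:
        path = weighted_astar(grid, start, goal, eps)
        if path is not None and (best is None or len(path) < len(best)):
            best = path
    return best
```

A real anytime dynamic planner (e.g. in the AD* family) would additionally reuse search effort between iterations and repair the plan when the environment changes, rather than replanning from scratch.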

To rigorously evaluate and compare crowd models, we have developed tools for quantifying the coverage, quality, and failure modes of a crowd simulator, as well as its ability to emulate real crowd data. Furthermore, we have developed a framework for automatically optimizing the parameters of a crowd simulation algorithm to meet any user-defined criteria.
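The parameter-optimization idea can be sketched as a search over simulator parameters that minimizes a user-defined objective, such as the discrepancy from reference crowd data. The sketch below uses exhaustive grid search for clarity; the `simulate` function and its `speed`/`radius` parameters are hypothetical stand-ins for a real crowd simulator, and the actual framework may use far more sophisticated optimizers.

```python
import itertools
import math

def simulate(speed, radius):
    """Hypothetical stand-in for running a crowd simulator and scoring
    it against reference data; lower is better. A real objective would
    compare simulated and recorded trajectories."""
    return (speed - 1.3) ** 2 + (radius - 0.4) ** 2

def optimize(param_grid, objective):
    """Exhaustive search over a parameter grid, minimizing a
    user-defined objective function."""
    best_params, best_score = None, math.inf
    keys = list(param_grid)
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = objective(**params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

best_params, best_score = optimize(
    {"speed": [1.0, 1.3, 1.6], "radius": [0.3, 0.4, 0.5]},
    simulate,
)
```

Because the objective is supplied by the user, the same loop can target any criterion: matching real data, minimizing collisions, or maximizing flow through a bottleneck.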

We develop models for simulating realistic, believable crowds. To this end, we identify and address fundamental limitations in how individuals in a crowd are represented and controlled. Our research results have widespread application in visual effects, games, urban planning, architectural design, and disaster and security simulation.