Human physical interaction is inherently multisensory. The Haptic, Auditory, and Visual Environment (HAVEN) is a novel physical space that supports multisensory human interaction and measurement. It consists of a specially constructed and instrumented chamber in which humans can interact with other humans, physical objects, and computer simulations. The HAVEN provides multisensory display (including 3D projection displays, multichannel auditory displays, and haptics) as well as several sensors for interactive measurement (including a Vicon motion capture system and force sensors). Funded by NSF.
Neuro-musculo-skeletal simulation with contact
Understanding the neural control of movement requires realistic computational models of the underlying musculo-skeletal system. We have developed a physical model based on a new computational primitive called the "muscle strand". It is well suited for interactive simulation of large neuro-musculo-skeletal systems with contact.
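As a rough illustration of the idea, the toy below simulates a 1-D "strand" as a chain of point masses joined by spring-damper segments, with a contractile force proportional to an activation level. All parameters (stiffness, damping, activation, maximum force) are made-up values for the sketch, not the actual muscle-strand primitive.

```python
import numpy as np

def simulate_strand(n=5, steps=1000, dt=1e-3, activation=0.5,
                    k=100.0, c=2.0, rest_len=0.1, f_max=10.0):
    """Toy 1-D 'strand': point masses joined by spring-damper segments
    plus an active contractile tension proportional to activation.
    Parameters are illustrative, not physiological."""
    x = np.linspace(0.0, n * rest_len, n + 1)   # node positions (m)
    v = np.zeros(n + 1)                         # node velocities
    m = 0.01                                    # node mass (kg)
    for _ in range(steps):
        f = np.zeros(n + 1)
        for i in range(n):
            stretch = (x[i + 1] - x[i]) - rest_len
            rate = v[i + 1] - v[i]
            # passive spring-damper plus active contractile tension
            tension = k * stretch + c * rate + activation * f_max
            f[i] += tension
            f[i + 1] -= tension
        f[0] = 0.0                              # pin the first node
        v += dt * f / m                         # symplectic Euler step
        v[0] = 0.0
        x += dt * v
    return x

x = simulate_strand()   # activated strand contracts below rest length
```

With nonzero activation the free end is drawn in until the active tension balances the passive springs, so the strand settles shorter than its rest length.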
We plan to use this model to investigate the neural control of movement in the spinal cord, in collaboration with Prof. Simon Giszter (Drexel University College of Medicine) and Prof. Matthew Tresch (Northwestern University).
Interaction Capture and Synthesis
Traditional motion capture techniques in computer animation do not adequately model contact with a physical environment. We have developed a new technique for simultaneously measuring contact forces and movement at a high rate (500 Hz), a technique we call "interaction capture." We can use this information to estimate joint impedance during contact, and resynthesize new motions by physical simulation.
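A minimal sketch of the impedance-estimation step, on synthetic data: given synchronized displacement, velocity, and force samples, the stiffness K and damping B of a linear model f = K x + B v can be recovered by least squares. The signal shape, noise level, and "true" impedance values below are illustrative assumptions, not measured data.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 2.0, 1.0 / 500.0)        # 500 Hz sample times (s)
x = 0.01 * np.sin(2 * np.pi * 3 * t)        # joint displacement
v = np.gradient(x, t)                       # joint velocity
K_true, B_true = 50.0, 1.5                  # "unknown" impedance
f = K_true * x + B_true * v + 0.01 * rng.standard_normal(t.size)

A = np.column_stack([x, v])                 # regressor matrix [x v]
(K_est, B_est), *_ = np.linalg.lstsq(A, f, rcond=None)
# K_est and B_est recover the impedance up to the noise level
```

In practice the same regression would be run on the captured force and motion channels over a contact interval; this sketch only shows the algebra.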
Fast Frictional Dynamics of Rigid Bodies
We are developing a simple, self-consistent model for plausible simulation of rigid body dynamics with friction using the principles of non-smooth mechanics. Our approach is robust and produces many complex behaviors such as rolling, sliding, stacking, tumbling, and shock propagation, as well as the appropriate transitions between them. We are also developing a model for generalized friction in the configuration space of a rigid body which unifies rolling and sliding friction in a natural manner, while producing expected behaviors.
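A one-dimensional Coulomb-friction toy (a naive explicit scheme, not the non-smooth formulation itself) illustrates the stick/slip transitions: a block sticks while the applied force stays inside the static friction cone, then slides under kinetic friction. All coefficients and the force ramp are made up for the sketch.

```python
# 1-D Coulomb stick-slip demo with a slowly ramping applied force.
mu_s, mu_k = 0.6, 0.4          # static / kinetic friction (illustrative)
m, g, dt = 1.0, 9.8, 1e-3      # mass (kg), gravity (m/s^2), step (s)
v, x = 0.0, 0.0
history = []
for step in range(2000):
    f_applied = 0.005 * step    # slowly ramping push (N)
    if v == 0.0 and abs(f_applied) <= mu_s * m * g:
        f_net = 0.0             # stick: friction exactly cancels the push
    else:
        # slip: kinetic friction opposes (impending) motion
        direction = v if v != 0.0 else f_applied
        f_fric = -mu_k * m * g * (1 if direction > 0 else -1)
        f_net = f_applied + f_fric
    v += dt * f_net / m
    if v < 0.0 and f_applied >= 0.0:
        v = 0.0                 # a decelerating block re-sticks at v = 0
    x += dt * v
    history.append((f_applied, v))
```

The block stays at rest until the push exceeds mu_s * m * g, then accelerates; the explicit re-sticking check stands in for the transition handling that the non-smooth solver performs consistently.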
Neural information processing for guiding eye movements to sounds
Directing gaze to the location of a sound stimulus is a complex information processing task requiring the conversion of auditory signals into motor commands to move the eyes. With Prof. Jennifer Groh of Dartmouth, we are investigating how information about sound location is encoded in the spike trains of neurons in the inferior colliculus, auditory cortex, lateral intraparietal cortex, and superior colliculus during saccades to sounds. Funded by NIH.
Interactive Multisensory Character Animation
We have developed a life-sized animated human avatar, based on Rutgers' Scarlet Knight, which observes the audience by using stereo vision and responds in real time. The Knight's shining armour reflects the actual environment, including the audience, as captured by a separate camera-and-mirror system. Our initial experiments at public events (e.g., SCA 04 conference, outreach to high school students) showed the system to be both robust and engaging.
Recent Projects
BD-Tree: Output-Sensitive Collision Detection for Reduced Deformable Models
We introduce the Bounded Deformation Tree, or BD-Tree, which can perform collision detection with reduced deformable models at costs comparable to collision detection with rigid objects.
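The key invariant can be sketched as follows: each bounding sphere of the rest shape is grown by a conservative amount computed from the reduced coordinates alone, so every deformed vertex provably stays inside without touching per-vertex data. The modal displacements below are random stand-ins, not a real reduced model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_verts, n_modes = 200, 4
rest = rng.standard_normal((n_verts, 3))          # rest positions
U = rng.standard_normal((n_modes, n_verts, 3)) * 0.1  # mode shapes

center = rest.mean(axis=0)
r0 = np.linalg.norm(rest - center, axis=1).max()  # rest-shape radius
# per-mode bound on the displacement of any vertex in this sphere
delta = np.linalg.norm(U, axis=2).max(axis=1)

q = rng.standard_normal(n_modes)                  # reduced coordinates
r = r0 + np.abs(q) @ delta                        # conservatively grown radius

# check the invariant: every deformed vertex lies inside the sphere
deformed = rest + np.tensordot(q, U, axes=1)
assert np.linalg.norm(deformed - center, axis=1).max() <= r
```

Because the grown radius depends only on q, a whole hierarchy of such spheres can be updated in output-sensitive fashion, descending only where spheres actually overlap.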
Quasi-Rigid Objects in Contact
We develop techniques for modeling contact between quasi-rigid objects: solids that undergo modest deformation in the vicinity of a contact while preserving their basic overall shape. Our method computes consistent and realistic contact surfaces and traction distributions, which are useful in many applications. We also show how to use point cloud surface representations, for instance obtained from raw laser scans, for modeling rapidly varying, wide-area contacts.
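To give a flavor of computing consistent contact tractions, the toy below solves a complementarity condition between nodal pressures and surface gaps with projected Gauss-Seidel: pressures are nonnegative, final gaps are nonnegative, and pressure acts only where the gap closes. The compliance matrix here is a made-up SPD matrix, not a real elastic response.

```python
import numpy as np

def solve_contact(C, g0, iters=500):
    """Projected Gauss-Seidel for p >= 0, g0 + C p >= 0, p (g0 + C p) = 0,
    where C maps nodal pressures to surface displacements."""
    n = len(g0)
    p = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            resid = g0[i] + C[i] @ p - C[i, i] * p[i]
            p[i] = max(0.0, -resid / C[i, i])
    return p

rng = np.random.default_rng(2)
n = 8
A = rng.standard_normal((n, n))
C = A @ A.T + n * np.eye(n)        # illustrative SPD compliance matrix
g0 = rng.standard_normal(n) - 0.5  # initial gaps; negative = overlap
p = solve_contact(C, g0)
gap = g0 + C @ p
# separated nodes carry no pressure; loaded nodes have zero gap
```

The real method couples such a solve to an elastic response derived from the object geometry; this sketch only shows the complementarity structure of the pressure solve.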