The HAVEN is a facility for multisensory modeling and simulation, developed by the Multisensory Computation Laboratory with the support of NSF grants EIA-0215887 and IIS-0308157. In the HAVEN one can interact with other humans, physical objects, and computer simulations. You can read more about the importance of multisensory simulation and interaction in [Pai 03].

Related Sites:
Interaction Capture data acquired using the HAVEN.
Data Repository for Haptic Algorithm Evaluation. Haptic data acquired using the HAVEN.

Window View
The HAVEN consists of a specially constructed chamber. The chamber is designed to provide acoustical as well as optical isolation from ambient noise. The walls are arranged to avoid parallel surfaces that would promote standing sound waves, and the walls and ceiling are baffled with acoustic panels. Two windows (one in the wall and one in the ceiling) allow projectors to illuminate the environment without polluting the interior of the chamber with fan noise.
The HAVEN is designed to be a densely-sensed environment. As human subjects interact in the HAVEN, their position, motion, applied forces, and appearance (as well as the readings from hand-held tools) can be simultaneously measured by a multitude of sensors located on the walls, the ceiling, and the floor, and even attached to the subject's hand.
Vicon Camera
Vicon Motion Capture System
Objects and people in the HAVEN can be tracked by a six-camera, 1 kHz optical tracking system.
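At the heart of marker-based optical tracking is the problem of recovering a rigid-body pose from a set of tracked marker positions. The sketch below shows one standard solution, the least-squares (Kabsch) fit of a rotation and translation; it is purely illustrative and not a description of Vicon's own software.

```python
import numpy as np

def rigid_transform(markers_ref, markers_obs):
    """Estimate the rotation R and translation t that map reference
    marker positions onto observed positions (least-squares Kabsch fit).
    markers_ref, markers_obs: (N, 3) arrays of corresponding points."""
    c_ref = markers_ref.mean(axis=0)
    c_obs = markers_obs.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (markers_ref - c_ref).T @ (markers_obs - c_obs)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_obs - R @ c_ref
    return R, t
```

Given at least three non-collinear markers on a rigid object, this recovers its pose in each camera frame; real systems add marker labeling and outlier rejection on top.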
Bumblebee Camera
A ceiling-mounted stereo-vision camera provides location information about subjects in the chamber without the use of the Vicon optical markers.
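Stereo vision recovers location by triangulation: for a rectified camera pair, depth is focal length times baseline divided by disparity. A minimal sketch of that relation (the focal length and baseline used in the example are made-up values, not the Bumblebee's actual calibration):

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth of a point seen by a rectified stereo pair: Z = f * B / d.
    disparity_px: horizontal pixel offset between the two views.
    focal_px: focal length in pixels; baseline_m: camera separation in meters."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, with a (hypothetical) 400-pixel focal length and 12 cm baseline, a 24-pixel disparity corresponds to a point 2 m below the camera.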
A pressure-sensor pad with 10,000 capacitive tactels measures the position and pressure of a person's footsteps, allowing reconstruction of the subject's actions.
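From a grid of tactel readings, a basic quantity such as the center of pressure of a footstep can be computed as a pressure-weighted average of tactel positions. A small sketch, assuming a uniform tactel pitch (the 1 cm spacing is illustrative, not the pad's specification):

```python
import numpy as np

def center_of_pressure(pressure, pitch_m=0.01):
    """Center of pressure on a grid of tactels.
    pressure: (rows, cols) array of per-tactel pressure readings.
    pitch_m: tactel spacing, assumed uniform (value is illustrative)."""
    total = pressure.sum()
    if total <= 0:
        raise ValueError("no load on the pad")
    rows, cols = np.indices(pressure.shape)
    # Pressure-weighted mean tactel index, converted to meters.
    x = (cols * pressure).sum() / total * pitch_m
    y = (rows * pressure).sum() / total * pitch_m
    return x, y
```

Tracking this point over time gives a trajectory of the subject's weight shift, one ingredient in reconstructing footstep actions.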
A hand-held Wireless Haptic Texture sensor (the WHaT) can measure probe force and acceleration at locations chosen by the user.
Polhemus FastSCAN
The FastSCAN laser scanner allows the user to acquire high-resolution 3-D models of real objects.
Mascaro Asada fingernail force sensors
Finger-pad pressure can be estimated by sensors mounted on the backs of fingernails, allowing force measurement without interfering with grasping.
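Such sensors are typically calibrated by fitting a map from raw readings to forces measured against a reference force sensor. As an illustration only (not the actual Mascaro-Asada calibration procedure), a first-order affine fit by least squares might look like:

```python
import numpy as np

def calibrate_force_map(readings, forces):
    """Fit an affine map from fingernail-sensor readings to fingertip force
    by least squares. readings: (N, S) sensor samples taken while a
    reference sensor records the (N,) true forces. Illustrative only."""
    A = np.hstack([readings, np.ones((readings.shape[0], 1))])  # affine terms
    coeffs, *_ = np.linalg.lstsq(A, forces, rcond=None)
    return coeffs

def predict_force(coeffs, reading):
    """Apply the calibrated map to a single (S,) reading."""
    return float(np.append(reading, 1.0) @ coeffs)
```

Once calibrated, the map lets the sensors report force estimates during unconstrained grasping, when no reference sensor is present.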
Mag-Lev Haptic device
We will be acquiring a new high-fidelity magnetically-levitated haptic device (under NSF instrumentation grant EIA-0321057).
Microphone Array
We plan to install an array of sensitive microphones to record spatial acoustic properties of interaction with objects.
Artistic Impression
As well as being densely-sensed, the HAVEN is designed to be a rich multisensory display environment.
For visual display, the chamber contains a rear-projection screen sufficiently large to display human-sized avatars; the screen is illuminated through a window by projectors in the adjacent room. Likewise, projectors located above the ceiling treat the table-top (or the floor) as a front-projection display. Polarized filters on the projectors and worn by users allow interactive stereo visualization of 3-D objects and environments.
An array of loudspeakers, combined with the chamber's acoustic isolation, makes the HAVEN a powerful platform for auditory display. Sounds can be rendered at a wide range of volumes without interference from ambient noise, and the distribution of the speakers around the chamber allows sounds to be spatialized with respect to the subject interacting in the environment.
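A simple way to spatialize a sound between a pair of speakers is constant-power amplitude panning, which keeps total acoustic power steady as a virtual source moves. The sketch below assumes a speaker pair at ±45°; it is illustrative, not the HAVEN's actual audio rendering pipeline:

```python
import math

def pan_gains(angle_deg):
    """Constant-power panning between a speaker pair at +/-45 degrees.
    angle_deg: source direction, from -45 (hard left) to +45 (hard right).
    Returns (left_gain, right_gain) with left^2 + right^2 == 1."""
    theta = math.radians(angle_deg)
    base = math.pi / 4  # speaker positions at +/-45 degrees
    # Map the angle into [0, pi/2] and take cosine/sine gains.
    p = (theta + base) / (2 * base) * (math.pi / 2)
    return math.cos(p), math.sin(p)
```

Extending this to a ring of speakers amounts to panning between whichever adjacent pair brackets the desired source direction.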
In addition to visual and auditory display, the HAVEN is designed to provide haptic feedback during interaction with the environment. The mag-lev haptic device (mentioned above) will provide force feedback in six degrees of freedom.
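A standard way such a device renders contact is penalty-based rendering: when the probe penetrates a virtual surface, the device pushes back with a spring-damper force proportional to the penetration depth. A one-dimensional sketch (the stiffness and damping values are illustrative, not the device's parameters):

```python
def virtual_wall_force(pos, vel=0.0, wall=0.0, k=2000.0, b=5.0):
    """Penalty-based haptic rendering of a virtual wall at `wall`.
    pos: probe position (m); vel: probe velocity (m/s);
    k: stiffness (N/m); b: damping (N*s/m).
    Returns the restoring force: zero in free space, a spring-damper
    push-back when the probe penetrates the wall (pos < wall)."""
    penetration = wall - pos
    if penetration <= 0:
        return 0.0
    return k * penetration - b * vel
```

This force law runs inside the device's high-rate control loop; a 1 mm penetration at the stiffness above yields a 2 N restoring force.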
Server Room
The sensors and display devices that make the HAVEN an interactive environment are driven by a cluster of computers interconnected by a gigabit network.
The development of the HAVEN was supported in part by NSF grants EIA-0215887 and IIS-0308157. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.