Qualifying Exam
1/21/2016 11:00 am
Hill 482

Body Signal Analysis for Development of Co-adaptive Multisensory Interactions

Vilelmini Kalampratsidou, Rutgers University

Examination Committee: Elizabeth B. Torres (advisor), Dimitris Metaxas, Manish Singh, and Naftaly Minsky

Abstract

A factor of crucial importance in the design of Body-Brain Machine Interfaces (BBMIs) is the acquisition and analysis of body and sensor signals. Bodily signals include time series of parameters capturing fluctuations in physiological functions such as motion, temperature, heart rate, and electroencephalography, among others. Sensor signals include those registered from the body but also instrumentation noise. Separating noise from the physiological signal is important for driving interfaces that depend on the user's output, particularly those driven by motion. In this talk, I review the use of various such signals in the literature, along with the current analytics used to process them for BBMIs. I then introduce a new data type and statistical platform for individualized analyses of natural behaviors, with several examples of their use in basic research and patient care.
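As a minimal illustration of the separation step described above, a moving-average decomposition can split a raw motion trace into a smooth physiological component and a residual noise term. This is a generic sketch, not the talk's actual analytics; the synthetic data, window size, and function name are all hypothetical:

```python
import numpy as np

def separate_signal_noise(series, window=5):
    """Split a 1-D time series into a smoothed component and a residual
    (noise) term using a centered moving average. A simple stand-in for
    more sophisticated physiological-signal denoising."""
    kernel = np.ones(window) / window
    smooth = np.convolve(series, kernel, mode="same")  # zero-padded at edges
    residual = series - smooth
    return smooth, residual

# Synthetic example: a slow sinusoid (the "physiological" trend)
# plus small random jitter (the "noise").
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
raw = np.sin(t) + 0.1 * rng.standard_normal(200)
smooth, residual = separate_signal_noise(raw)
```

By construction the two components add back to the raw trace, and the residual carries far less variance than the raw signal, which is the property a noise-separation step is meant to achieve.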

The new data type integrates motion and temperature signals from wearables, while the analytics parameterize the statistical signatures of the individual wearing the sensors. Specifically, using temperature-dependent motions, defined as the moment-by-moment fluctuations in motor performance for each unit (degree) interval captured by the wearables, we can automatically define boundaries of motor noise, isolating spontaneous random noise from well-structured systematic noise with high predictive power. The motion signal with the lowest noise-to-signal ratio can then be selectively used in the context of stochastic feedback control, closing the loop between the motions of the human and those of an anthropomorphic avatar endowed with the human's veridical motions and with their noisy variants. The main goal of my future work is to uncover critical aspects of the co-adaptation process between the avatar and the human that may enable accelerated learning of a particular task while making the human's motor signatures more predictive and reliable than at the start of the interaction. I suggest possible strategies to use the avatar as a proxy to drive the human beneath awareness and explore ways to attain the end goal of lowering the physiological noise. Finally, I discuss possible clinical applications of this multi-modal co-adaptive setting and of the objective outcome measures assessing efficacy and risks in early intervention programs for children on the autism spectrum.
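The per-degree binning idea above can be sketched as follows, assuming a simple variance-over-squared-mean definition of the noise-to-signal ratio and one-degree temperature bins. The function names, data, and NSR definition are illustrative assumptions, not the platform's actual implementation:

```python
import numpy as np

def nsr_by_temperature(temps, fluctuations):
    """Group motion fluctuations into unit (one-degree) temperature bins
    and return a noise-to-signal ratio (variance / squared mean) per bin.
    Bins with fewer than two samples or zero mean are skipped."""
    bins = np.floor(temps).astype(int)
    nsr = {}
    for b in np.unique(bins):
        vals = fluctuations[bins == b]
        if len(vals) > 1 and np.mean(vals) != 0:
            nsr[b] = np.var(vals) / np.mean(vals) ** 2
    return nsr

def best_bin(nsr):
    """Temperature bin whose motion signal has the lowest
    noise-to-signal ratio (the candidate for driving the avatar loop)."""
    return min(nsr, key=nsr.get)

# Hypothetical wearable readings: temperatures and motion fluctuations.
temps = np.array([30.1, 30.5, 30.9, 31.2, 31.6, 31.8])
fluct = np.array([1.0, 1.2, 0.8, 1.0, 1.01, 0.99])
ratios = nsr_by_temperature(temps, fluct)
```

Here the 31-degree bin has much tighter fluctuations than the 30-degree bin, so it would be the one selected as the lowest-noise signal.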