Robots are increasingly being used in diverse environments for a variety of tasks, including but not limited to exploration, search-and-rescue, rehabilitation, and personal applications. For safe, reliable robot operations, close collaboration between humans and robots is of utmost importance, as are robust, intuitive, and natural communication methods.
This talk will present insights into research on sensory human-robot interaction, along with findings from quantitative and qualitative studies as well as from various robot field trials. My research looks at algorithms and interfaces for human-robot interaction and control of autonomous robots in arbitrary environments. In particular, I have investigated vision-based approaches to human-robot interaction, human-motion detection, and robust tracking for human-robot collaborative missions, especially in underwater exploration.

Recent work has looked at a quantitative model of human-robot dialog incorporating task cost and communication uncertainty, with the goal of preventing a robot from carrying out potentially dangerous and unsafe tasks unless confirmed by its human partner. A network of smart devices, sensors, and robots contributes to a distributed model of task cost. This human-robot dialog framework has been evaluated on board a number of different robotic platforms, including the Aqua amphibious robot and the Willow Garage PR2 robot. Currently, under the umbrella of the CanWheel project, this mechanism is being evaluated for risk assessment in collaborative control of robotic wheelchairs by older, cognitively impaired adults.
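To make the confirmation idea concrete, here is a minimal sketch of such a decision rule. This is an illustrative assumption, not the actual model presented in the talk: it assumes the robot weighs an estimated task cost against its uncertainty about having understood the command, and asks the human for confirmation when the expected risk crosses a threshold. All function names and parameters here are hypothetical.

```python
# Hedged sketch (not the speaker's actual model): a confirmation
# rule for a human-robot dialog framework. The robot combines a
# task-cost estimate with the probability that it misrecognized
# the command; high expected risk triggers a confirmation dialog.

def expected_cost(task_cost: float, p_misunderstood: float) -> float:
    """Expected cost of acting immediately: the task's cost
    weighted by the chance the command was misunderstood."""
    return task_cost * p_misunderstood

def decide(task_cost: float, p_misunderstood: float,
           threshold: float = 1.0) -> str:
    """Return 'execute' when the risk is acceptable, otherwise
    'confirm' to ask the human partner before proceeding."""
    if expected_cost(task_cost, p_misunderstood) > threshold:
        return "confirm"
    return "execute"

# A cheap task with a clearly heard command proceeds directly;
# a costly task heard over a noisy channel asks for confirmation.
print(decide(task_cost=0.5, p_misunderstood=0.1))
print(decide(task_cost=10.0, p_misunderstood=0.4))
```

In a distributed setting, the `task_cost` estimate could itself be aggregated from the network of smart devices and sensors mentioned above, rather than computed by the robot alone.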