CS Events

Computer Science Department Colloquium

Model Identification for Robotic Manipulation



Tuesday, September 15, 2020, 10:30am


Speaker: Dr. Abdeslam Boularias


Abdeslam Boularias is an Assistant Professor of computer science at Rutgers, The State University of New Jersey, where he works on robot learning. Previously, he was a Project Scientist in the Robotics Institute of Carnegie Mellon University and a Research Scientist in the Department of Empirical Inference at the Max Planck Institute for Intelligent Systems in Tübingen, Germany. He received a Ph.D. in computer science from Laval University in Canada, an M.S. in computer science from Paris-Sud University in France, and a B.S. in computer science from the École Nationale Supérieure d’Informatique (ESI) in Algeria. He is broadly interested in the areas of reinforcement learning, robotic manipulation, and robot vision. He is a recipient of the NSF CAREER award.

Location: Via YouTube

Event Type: Computer Science Department Colloquium

Abstract: A popular approach in robot learning is model-free reinforcement learning (RL), where a control policy is learned directly from sensory inputs by trial and error, without explicitly modeling the effects of the robot’s actions on the controlled objects or system. While this approach has proved very effective for learning motor skills, it suffers from several drawbacks in the context of object manipulation, because the types of objects and their arrangements vary significantly across tasks and environments. An alternative approach that may address these issues more efficiently is model-based RL. A model in RL generally refers to a transition function that maps a state and an action to a probability distribution over all possible future states. In this talk, I will present my recent work on data-efficient, physics-driven techniques for identifying models of manipulated objects. To perform a new task in an environment with unknown objects, a robot first identifies, from sequences of images, the 3D mesh models of the objects as well as their physical properties, such as their mass distributions, moments of inertia, and friction coefficients. The robot then reconstructs the observed scene in a physics simulation and imagines the future motions of the objects when manipulated. The predicted motions are used to select a sequence of appropriate actions to apply to the real objects. The proposed techniques are evaluated on a wide range of applications in robotics, such as grasping, nonprehensile manipulation, rearrangement, and warehouse automation.
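The identify-then-plan pipeline described in the abstract can be illustrated with a toy sketch: a robot pushes a block, fits an unknown friction parameter from observed (force, displacement) pairs, and then uses the identified model to "imagine" outcomes and choose an action. Everything here (the 1-D dynamics, the parameter names, the candidate forces) is an illustrative assumption for exposition, not the speaker's actual method.

```python
import random

# Illustrative toy dynamics (assumed, not from the talk):
# displacement = force / friction, plus small sensor noise.
TRUE_FRICTION = 2.0  # unknown to the robot

def observe_push(force, rng):
    """Simulated noisy sensor reading of how far the block moved."""
    return force / TRUE_FRICTION + rng.gauss(0.0, 0.01)

def identify_friction(observations):
    """Least-squares fit of the friction coefficient.

    Model d = f / mu; fitting a = 1/mu minimizes sum (d - a*f)^2,
    giving a = sum(f*d) / sum(f^2), hence mu = sum(f^2) / sum(f*d).
    """
    num = sum(f * f for f, _ in observations)
    den = sum(f * d for f, d in observations)
    return num / den

def plan_push(friction, target):
    """Use the identified model to predict motions for candidate
    forces and pick the one whose predicted displacement is closest
    to the target."""
    candidates = [0.5 * k for k in range(1, 21)]
    return min(candidates, key=lambda f: abs(f / friction - target))

rng = random.Random(0)
data = [(f, observe_push(f, rng)) for f in (1.0, 2.0, 3.0)]
friction = identify_friction(data)           # close to 2.0
best_force = plan_push(friction, target=1.5)  # close to 3.0
```

The same structure scales up in the work being presented: the "model" becomes a full 3D mesh with mass distribution, inertia, and friction, and the planning step becomes rollouts in a physics simulator rather than a closed-form prediction.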

Contact Host: Dr. Matthew Stone

Link to video recording: