Qualifying Exam
10/25/2018 11:00 am
1 Spring Street, Room 403, New Brunswick

Physics-Based Scene-Level Reasoning for Object Pose Estimation in Clutter

Chaitanya Mitash, Dept. of Computer Science

Examination Committee: Prof. Abdeslam Boularias (Co-advisor), Prof. Kostas Bekris (Co-advisor), Prof. Ahmed Elgammal, Prof. Sudarsun Kannan

Abstract

Advances in deep learning have recently driven significant progress in object recognition. Nevertheless, such tools typically require large amounts of training data and considerable manual effort to label objects. This limits their applicability in robotics, where solutions must scale to a large number of objects and a variety of conditions. Moreover, the combinatorial nature of the scenes that can arise from the placement of multiple objects is hard to capture in a training dataset, so the learned models may not achieve the level of precision required for tasks such as robotic manipulation. In this talk, I will present an autonomous process for pose estimation that spans automated data generation, time-efficient scene-level reasoning, and lifelong self-learning. In particular, the proposed framework first generates a labeled dataset for training a Convolutional Neural Network (CNN) for object detection in clutter. These detections then guide a scene-level optimization process that reasons about the physical interactions between the objects in the clutter to produce high-precision pose estimates. Finally, confident estimates are used to label real images captured online from multiple views and to re-train the system in a lifelong self-learning pipeline.
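
To make the scene-level reasoning step more concrete, the following is a minimal, illustrative Python sketch of the general idea rather than the actual framework: given a few candidate poses per object produced by a detector, it scores every joint assignment by how well each pose explains a toy observation while penalizing physically implausible overlap between objects. The data term, the interaction penalty, the exhaustive search, and all function names (render_score, penetration_penalty, scene_level_search) are assumptions made purely for illustration.

    """Illustrative sketch of scene-level pose selection (not the author's
    implementation): pick the jointly most consistent combination of
    per-object pose candidates, trading off fit to the observation against
    a simple object-interaction penalty."""
    import itertools
    import numpy as np

    def render_score(pose, observed_points):
        # Placeholder data term: how well a candidate translation (x, y, z)
        # explains the observation; here, negative distance to the nearest
        # observed point. A real system would render the object model and
        # compare against the observed depth image.
        d = np.linalg.norm(observed_points - np.asarray(pose), axis=1)
        return -float(d.min())

    def penetration_penalty(pose_a, pose_b, min_sep=0.05):
        # Placeholder interaction term: penalize object centers that come
        # closer than a minimum separation. A real system would use a
        # physics engine or mesh-level collision checks.
        d = np.linalg.norm(np.asarray(pose_a) - np.asarray(pose_b))
        return max(0.0, min_sep - d)

    def scene_level_search(candidates, observed_points, penalty_weight=10.0):
        # Exhaustively score every combination of per-object candidates and
        # return the highest-scoring joint assignment.
        best_combo, best_score = None, -np.inf
        for combo in itertools.product(*candidates):
            score = sum(render_score(p, observed_points) for p in combo)
            for pa, pb in itertools.combinations(combo, 2):
                score -= penalty_weight * penetration_penalty(pa, pb)
            if score > best_score:
                best_combo, best_score = combo, score
        return best_combo, best_score

    if __name__ == "__main__":
        # Two objects, each with candidate translations from a detector.
        candidates = [
            [(0.00, 0.00, 0.0), (0.02, 0.01, 0.0)],  # object 1
            [(0.01, 0.00, 0.0), (0.10, 0.10, 0.0)],  # object 2
        ]
        # Toy "observation": points near where the objects actually sit.
        observed = np.array([[0.0, 0.0, 0.0], [0.1, 0.1, 0.0]])
        poses, score = scene_level_search(candidates, observed)
        print("selected poses:", poses, "score:", round(score, 3))

In this toy setting the search rejects the overlapping assignment for object 2 and selects the physically plausible pair of poses; the actual framework replaces each placeholder with model rendering, physics-based consistency checks, and a time-efficient search strategy.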