Tensegrity Robots: Control via Model-based RL and Modeling via a Differentiable Engine
Friday, April 03, 2020, 03:00pm - 04:00pm
Speaker: Kun Wang
Location: Remote via Webex
Committee: Prof. Kostas Bekris (Chair), Prof. Mridul Aanjaneya, Prof. Sungjin Ahn, Prof. Sudarsun Kannan
Event Type: Qualifying Exam
Abstract: Tensegrity robots are hybrid soft-rigid systems that exhibit adaptability and robustness. For instance, NASA has developed a promising exploration rover in the form of a cable-driven tensegrity called SUPERball, which has 6 rods and 24 cable actuators. At the same time, these platforms introduce significant locomotion challenges, as they involve high-dimensional control, oscillatory behavior, and complex, difficult-to-model dynamics. The first component of this work focuses on reinforcement learning (RL) based on the framework of Guided Policy Search (GPS). A key contribution is symmetry reduction during controller design, which moderates data requirements. Adaptations of the GPS framework have resulted in sample-efficient RL that allows any-axis rolling and adaptation to different terrain types. Simulated experiments demonstrate smooth, efficient trajectories. Transferring the results to the real system, however, highlights the reality gap of the available simulation tools. This motivated the second component of the work: a differentiable physics engine for cable-driven systems. The objective is to develop an engine that builds on first principles but mitigates the reality gap by learning directly from data. Our current evaluation shows the promise of a solution that does not have significant data requirements and paves the way for a differentiable engine that can accurately model a physical platform.
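To illustrate the idea behind the second component, the following is a minimal sketch of how a differentiable, first-principles cable model can be fit to data. It assumes a single cable modeled as a unilateral spring (tension only when stretched) and uses a hand-derived gradient in place of the automatic differentiation a full engine would provide; all names (`cable_force`, `fit_stiffness`) and parameter values are illustrative, not taken from the actual engine.

```python
def cable_force(k, length, rest_length):
    """Tension of an ideal cable: pulls when stretched, never pushes."""
    stretch = max(0.0, length - rest_length)
    return k * stretch

def fit_stiffness(data, rest_length, k0=1.0, lr=0.01, steps=2000):
    """Fit cable stiffness k by gradient descent on squared force error.

    The analytic gradient below stands in for what a differentiable
    engine computes automatically through its dynamics.
    """
    k = k0
    for _ in range(steps):
        grad = 0.0
        for length, observed in data:
            stretch = max(0.0, length - rest_length)
            residual = cable_force(k, length, rest_length) - observed
            grad += 2.0 * residual * stretch  # d(residual^2)/dk
        k -= lr * grad / len(data)
    return k

# Synthetic "real platform" observations from a cable with stiffness 5.0
true_k, rest = 5.0, 1.0
data = [(L, cable_force(true_k, L, rest)) for L in (1.1, 1.3, 1.6, 2.0)]
k_est = fit_stiffness(data, rest)
```

Even this toy version recovers the stiffness from a handful of observations, which is the sense in which a differentiable engine built on first principles can close the reality gap without significant data requirements.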