CS Events
Research Talk: Certifiable Controller Synthesis for Underactuated Robotic Systems: From Convex Optimization to Learning-based Control
Tuesday, June 25, 2024, 04:00pm - 05:30pm
Speaker: Lujie Yang
Bio: Lujie Yang (https://lujieyang.github.io/) is a 4th-year PhD student at MIT EECS, advised by Prof. Russ Tedrake. Her research seeks to enable robots to operate robustly and intelligently, with theoretical guarantees, by combining optimization, control theory, and machine learning. Lujie was the recipient of the Presidential Graduate Fellowship, the Richard C. Dupont Fellowship, and the Tung/Lewis Fellowship at MIT. She won the best presentation award at the 2023 MIT LIDS student conference and received an honorable mention for the 2023 T-RO King-Sun Fu Memorial Best Paper Award.
Location: Online talk and SPR-403
Event Type: Research Talk
Abstract: Many interesting robotic tasks, such as running, flying, and manipulation, require certifiable controllers to operate robustly and intelligently. In this presentation, I will talk about how to synthesize controllers with optimality and stability guarantees using both convex optimization and learning-based methods.
1. Approximate optimal controller synthesis with SOS: I will discuss the potential of sums-of-squares (SOS) optimization for synthesizing certifiable controllers for nonlinear dynamical systems. We demonstrate that SOS optimization can generate dynamic controllers with bounded suboptimal performance for various underactuated robotic systems by finding good approximations of the value function.
2. Certifiable Lyapunov-stable neural controller synthesis with learning: I will address recent advances in neural-network-based controllers driven by deep learning. Despite their progress, many of these controllers lack the stability guarantees essential for safety-critical applications. We introduce a novel framework for learning neural-network controllers together with Lyapunov stability certificates, bypassing the need for costly SOS, MIP, or SMT solvers. Our framework's flexibility and efficiency enable, for the first time in the literature, the synthesis of Lyapunov-stable output feedback control using NN-based controllers and observers with formal stability guarantees.
3. Contact-rich manipulation: I will also provide a brief overview of our recent work on global planning for contact-rich manipulation. By interpreting the empirical success of reinforcement learning from a model-based perspective, we have developed efficient model-based motion planning algorithms that achieve results comparable to reinforcement learning, but with dramatically less computation.
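To make the notion of a Lyapunov stability certificate concrete, here is a minimal illustrative sketch (not from the talk itself) for a toy linearized pendulum: the system matrix, feedback gain, and cost matrix below are hypothetical choices. For a stable closed-loop matrix A, solving the Lyapunov equation A^T P + P A = -Q with Q positive definite yields a positive definite P, and V(x) = x^T P x certifies stability.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy system: pendulum linearized about the upright equilibrium,
# stabilized by a hand-picked state-feedback gain K (illustrative
# values, not the systems or controllers from the talk).
g, l, b = 9.81, 1.0, 0.1
A_open = np.array([[0.0, 1.0],
                   [g / l, -b]])      # upright linearization (unstable)
B = np.array([[0.0], [1.0]])
K = np.array([[30.0, 10.0]])          # stabilizing gain (assumed)
A = A_open - B @ K                    # closed-loop dynamics matrix

# Solve the Lyapunov equation A^T P + P A = -Q for Q = I.
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)

# Certificate check: V(x) = x^T P x is a Lyapunov function because
# P is positive definite and V_dot(x) = -x^T Q x < 0 for x != 0.
assert np.all(np.linalg.eigvalsh(P) > 0), "P must be positive definite"
print("Lyapunov certificate P =\n", P)
```

The SOS and neural-network approaches described in the abstract generalize this idea to nonlinear dynamics, where V is a polynomial or a learned function rather than a fixed quadratic form.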
Contact: Professor Kostas Bekris