Stability of Transferred Learned Models for Planning and Reinforcement Learning
Friday, April 24, 2020, 11:00am - 12:00pm
Speaker: Liam Schramm
Location: Remote via Webex
Committee: Prof. Abdeslam Boularias, Prof. Kostas Bekris, Prof. Jingjin Yu, Prof. Jie Gao
Event Type: Qualifying Exam
Abstract: Dynamics models are useful for task planning, but are often not available when systems become sufficiently complex. In such situations, it is common instead to learn an approximate model, preferably by fine-tuning an existing model with data from the target system. We explore the behavior of dynamics models transferred from one system to another. In particular, we find that: 1) models of non-chaotic Markov systems may become chaotic when transferred to different systems, 2) the Lyapunov exponent (a measure of how chaotic a system is) of a learned model converges to that of the true system in the high-data limit, and 3) the Lyapunov exponent of a transferred model that is fine-tuned on a target dataset depends on the size of the target dataset, not the size of the source dataset. Thus, fine-tuning a model with a small amount of data may make it chaotic even if it was not chaotic before. Together, these findings imply that fine-tuning a model on a target dataset may actually worsen long-horizon predictions (which depend primarily on how chaotic the system is) even as it improves single-step predictions (which depend only on the single-step error). To remedy this, we propose two training methods that reduce or eliminate the problem, and evaluate their effectiveness against traditional approaches on a robotic grasping task. We find that both methods significantly outperform counterparts that account only for single-step errors.
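For readers unfamiliar with the Lyapunov exponent invoked in the abstract, a minimal numerical sketch may help (this example is not from the talk; the logistic map and its parameter values are standard textbook choices, not the robotic systems studied). For a one-dimensional map x → f(x), the exponent is the long-run average of log|f'(x)| along a trajectory: positive means nearby trajectories diverge exponentially (chaos, and long-horizon prediction degrades), negative means they converge.

```python
import math

def lyapunov_logistic(r, x0=0.2, n_transient=1000, n_steps=100_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    by averaging log|f'(x)| = log|r*(1 - 2x)| along a trajectory."""
    x = x0
    # Discard transient iterations so the trajectory settles onto the attractor.
    for _ in range(n_transient):
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_steps):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))
    return total / n_steps

# r = 3.2: periodic (non-chaotic) regime, exponent is negative.
# r = 4.0: fully chaotic regime, exponent is positive (analytically ln 2).
print(lyapunov_logistic(3.2), lyapunov_logistic(4.0))
```

The abstract's point can be read through this lens: two models with near-identical single-step error (small per-step deviation in f) can still have Lyapunov exponents of opposite sign, and it is the sign of the exponent, not the per-step error, that governs long-horizon prediction quality.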