Pre-Defense
6/19/2014 10:00 am
Hill 482

Improved Empirical Methods in Reinforcement-Learning Evaluation

Vukosi Marivate, Rutgers University

Defense Committee: Michael Littman (Chair), Tina Eliassi-Rad and Amélie Marian

Abstract

The central question addressed in this research is "can we define evaluation methodologies that better connect reinforcement-learning (RL) algorithms with real-life data?" First, we address the problem of overfitting. RL algorithms are often tweaked and tuned to specific environments when applied, calling into question whether learning can truly be considered autonomous in these cases. We propose a methodology to evaluate algorithms on distributions of environments, as opposed to a single environment. We also develop a formal framework for characterizing the "capacity" of a space of parameterized RL algorithms and bound the number of test environments that need to be used to find a learner that works best for the entire distribution. Second, we develop a method for evaluating RL algorithms offline using a static collection of data. Our motivation is that real-life applications of RL often have properties that make online evaluation expensive (such as driving a robot car), unethical (such as treating a disease), or simply impractical (such as challenging a human chess master). We compare several offline evaluation metrics and find that our new metric ("relative Bellman update error") addresses shortcomings in more standard approaches. Third, we examine the problem of evaluating behavior policies for individuals using observational data. Our focus is on quantifying the uncertainty that arises from multiple sources: population mismatch, data sparsity, and intrinsic stochasticity. We have applied our method to a collection of HIV treatment data and are seeking out educational data as a second application.
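To make the first idea concrete, the sketch below illustrates evaluating a parameterized learner against a distribution of environments rather than a single hand-picked one. It is a minimal toy example, not the framework from the thesis: the bandit environments, the epsilon-greedy learner, and all parameter values are illustrative assumptions chosen for brevity.

```python
import random

def sample_bandit(rng, n_arms=5):
    """Sample a Bernoulli bandit environment from a simple distribution."""
    return [rng.random() for _ in range(n_arms)]  # success probability per arm

def run_epsilon_greedy(arm_probs, epsilon, steps, rng):
    """Run epsilon-greedy on one bandit and return the total reward."""
    counts = [0] * len(arm_probs)
    values = [0.0] * len(arm_probs)
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(arm_probs))          # explore
        else:
            arm = max(range(len(arm_probs)), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < arm_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total += reward
    return total

def evaluate_on_distribution(epsilon, n_envs=50, steps=200, seed=0):
    """Average one parameter setting's performance over sampled environments."""
    rng = random.Random(seed)
    returns = [run_epsilon_greedy(sample_bandit(rng), epsilon, steps, rng)
               for _ in range(n_envs)]
    return sum(returns) / len(returns)

if __name__ == "__main__":
    # Choose the parameter that works best for the whole distribution of
    # environments, rather than tuning it to any single environment.
    for eps in (0.01, 0.1, 0.3):
        print(f"epsilon={eps}: mean return {evaluate_on_distribution(eps):.1f}")
```

The point of the sketch is the outer loop: each candidate parameter setting is scored by its average performance across freshly sampled environments, which is the kind of evaluation the proposed methodology formalizes (including bounds on how many sampled environments suffice).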