PhD Defense
12/22/2014 01:00 pm
CoRE A (Room 301)

Improved Empirical Methods in Reinforcement-Learning Evaluation

Vukosi Marivate, Rutgers University

Defense Committee: Michael L. Littman (Chair), Tina Eliassi-Rad, Amélie Marian, Susan Murphy (University of Michigan)

Abstract

The central question addressed in this research is "can we define evaluation methodologies that encourage reinforcement-learning (RL) algorithms to work effectively with real-life data?"

First, we address the problem of overfitting. RL algorithms are often tweaked and tuned to specific environments when applied, calling into question whether a learning algorithm that works for one environment will work for others. We propose a methodology for evaluating algorithms on distributions of environments, as opposed to a single environment (sketched below). We also develop a formal framework for characterizing the "capacity" of a space of parameterized RL algorithms, and we bound the generalization error of a set of algorithms on a distribution of RL environments given a sample of environments.

Second, we develop a method for evaluating RL algorithms offline using a static collection of data. Our motivation is that real-life applications of RL often have properties that make online evaluation expensive (such as driving a robot car), ethically questionable (such as treating a disease), or simply impractical (such as challenging a human chess master). We compare several offline evaluation metrics and find that our new metric, the "relative Bellman update error," addresses shortcomings of more standard approaches (see the illustrative sketch below).

Third, we examine the problem of evaluating behavior policies for individuals using observational data. Our focus is on quantifying the uncertainty that arises from multiple sources: population mismatch, data sparsity, and intrinsic stochasticity (a bootstrap-style sketch follows). We apply our method to HIV treatment data and to non-profit fund-raising appeal data.
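As a rough illustration of the first contribution, the sketch below scores an algorithm by averaging its performance over environments drawn from a distribution rather than over a single fixed environment. This is a minimal Python sketch under stated assumptions, not the dissertation's protocol: the `sample_env` and `train` callables and the `env.reset()`/`env.step()` interface are hypothetical stand-ins.

```python
def rollout(env, policy, horizon=100):
    """Run one episode under `policy` and return the total reward.
    Assumes env.step(action) returns (state, reward, done)."""
    state = env.reset()
    total = 0.0
    for _ in range(horizon):
        state, reward, done = env.step(policy(state))
        total += reward
        if done:
            break
    return total

def evaluate_on_distribution(sample_env, train, n_envs=20, n_episodes=50):
    """Average an algorithm's performance over environments drawn from a
    distribution, so it cannot be tuned to any single fixed environment."""
    per_env_means = []
    for _ in range(n_envs):
        env = sample_env()       # draw one environment from the distribution
        policy = train(env)      # fit the RL algorithm to this draw
        scores = [rollout(env, policy) for _ in range(n_episodes)]
        per_env_means.append(sum(scores) / n_episodes)
    return sum(per_env_means) / n_envs  # sample estimate of distributional performance
```

Averaging over a sample of environments is what lets the generalization-error bound mentioned above apply: the sample mean stands in for the true expectation over the distribution.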
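The exact definition of the relative Bellman update error appears in the dissertation; the sketch below only illustrates the general family of Bellman-residual metrics it belongs to, scoring a learned value function against logged transitions with no online interaction. The tabular `Q` indexing and the `(s, a, r, s_next, done)` tuple format are assumptions for the sketch.

```python
import numpy as np

def mean_bellman_residual(Q, transitions, gamma=0.99):
    """Score a tabular Q-function against a static set of logged
    (state, action, reward, next_state, done) transitions by averaging
    the squared Bellman update error, entirely offline."""
    errors = []
    for s, a, r, s_next, done in transitions:
        target = r if done else r + gamma * np.max(Q[s_next])
        errors.append((Q[s, a] - target) ** 2)
    return float(np.mean(errors))
```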
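For the third contribution, one standard way to quantify the uncertainty that comes from data sparsity is to bootstrap over observed per-trajectory returns; the sketch below does exactly that. It is a generic illustration, not the thesis method, which additionally accounts for population mismatch and intrinsic stochasticity; the function name and interface are hypothetical.

```python
import random
import statistics

def bootstrap_value_interval(returns, n_boot=1000, alpha=0.05, seed=0):
    """Estimate a policy's value from per-trajectory returns and attach a
    bootstrap confidence interval reflecting sampling (data-sparsity) error."""
    rng = random.Random(seed)
    n = len(returns)
    boot_means = []
    for _ in range(n_boot):
        # Resample trajectories with replacement and record the mean return.
        resample = [returns[rng.randrange(n)] for _ in range(n)]
        boot_means.append(statistics.mean(resample))
    boot_means.sort()
    lower = boot_means[int(n_boot * alpha / 2)]
    upper = boot_means[int(n_boot * (1 - alpha / 2)) - 1]
    return statistics.mean(returns), (lower, upper)
```

Resampling whole trajectories (rather than individual transitions) keeps the within-trajectory correlation intact, which matters when estimating uncertainty for sequential data such as treatment histories.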