Machine learning is increasingly being explored as a vital component in addressing challenges in multi-agent systems. For example, many application domains envision teams of software agents or robots that learn to cooperate with one another, and with human beings, to achieve global objectives. Learning may also be essential in non-cooperative domains such as economics and finance, where classical game-theoretic solutions are either infeasible or inappropriate.
At the same time, multi-agent learning poses significant theoretical challenges, particularly in understanding how agents can learn and adapt in the presence of other agents that are simultaneously learning and adapting. This is a fertile area of research that seems ripe for progress: the numerous and significant theoretical developments of the 1990s, in fields such as Bayesian, game-theoretic, decision-theoretic, and evolutionary learning, can now be extended to more challenging multi-agent scenarios.
This workshop on theory and practice in multi-agent learning is intended to be broad in scope and informal in style. The goal is to bring together researchers with a variety of perspectives who would likely benefit from open communication: comparing methodologies and sharing insights into recent results, common challenges, and future opportunities for progress in the field.
In keeping with our desire to have a loose and lively workshop, there will be no formal paper submissions or published proceedings. There will be a number of short talks---mainly by invited speakers, with some slots for contributed talks---and plenty of time for open discussion.
Abstracts for the invited and contributed talks will be posted on the web site as they become available. Our invited speakers are drawn from the leading AI/ML researchers in this field, as well as several social scientists (psychology, economics, etc.) who study learning in multi-player human-subject experiments.
Craig Boutilier, University of Toronto: On the Risks and Rewards of Coordination in Multiagent Reinforcement Learning (abstract, pdf)
Michael Bowling, Carnegie Mellon University: Learning, equilibria, limitations, and robots (abstract, pdf, link to movies)
Colin Camerer, Caltech: Empirical estimation of hybrid strategic learning models on experimental game data (abstract, powerpoint)
Yu-Han Chang, MIT: Learning in networks (abstract, powerpoint)
Amy Greenwald, Brown University: The case for learning correlated equilibrium policies in Markov games (abstract, postscript)
Carlos Guestrin, Stanford University: Generalizing multiagent plans to new environments in relational MDPs (abstract, powerpoint, avi demo, avi demo2)
Jeff Kephart, IBM: Applications of multi-agent learning in e-commerce and autonomic computing (abstract, powerpoint)
David Leslie, University of Bristol: Multiple timescales for multiagent learning (abstract, powerpoint)
Yoav Shoham, Stanford University: If multi-agent learning is the answer, what is the question? (abstract, pdf)
Peter Stone, University of Texas at Austin: Scaling reinforcement learning toward RoboCup soccer (abstract, postscript)
Gerry Tesauro, IBM: Multi-agent learning mini-tutorial (abstract, powerpoint)
Rakesh V. Vohra, Northwestern University: Learning in games (abstract, pdf)
Xiaofeng Wang, Carnegie Mellon University: Reinforcement learning to play an optimal Nash equilibrium in coordination Markov games (abstract, powerpoint)
John Langford, IBM: Correlated equilibria in graphical games (postscript)
David Leslie, University of Bristol: Multiple timescales for multiagent learning (powerpoint)
Michael Littman, Rutgers University: Collaboration in repeated games (powerpoint)
Martin Riedmiller, Universität Dortmund: The Brainstormers---Learning on a tactical level (pdf)
Christian Shelton, Stanford University: Continuation methods for structured games (powerpoint)
Gerry Tesauro, IBM: Pseudo-convergent Q-learning by competitive pricebots (html)