Scalable Bayesian Reinforcement Learning for Multiagent POMDPs
| Field | Value |
|---|---|
| Content Provider | Semantic Scholar |
| Author | Amato, Christopher; Oliehoek, Frans A. |
| Copyright Year | 2013 |
| Abstract | Bayesian methods for reinforcement learning (RL) allow model uncertainty to be considered explicitly and offer a principled way of dealing with the exploration/exploitation tradeoff. However, for multiagent systems there have been few such approaches, and none of them apply to problems with state uncertainty. In this paper, we fill this gap by proposing a Bayesian RL framework for multiagent partially observable Markov decision processes that is able to take advantage of structure present in many problems. In this framework, a team of agents operates in a centralized fashion, but has uncertainty about the model of the environment. Fitting many real-world situations, we consider the case where agents learn the appropriate models while acting in an online fashion. Because it can quickly become intractable to choose the optimal action in naïve versions of this online learning problem, we propose a more scalable approach based on sample-based search and factored value functions for the set of agents. Experimental results show that we are able to provide high quality solutions to large problems even with a large amount of initial model uncertainty. *(An illustrative sketch of these ideas follows the table.)* |
| File Format | PDF; HTM/HTML |
| Alternate Webpage(s) | https://www.fransoliehoek.net/docs/Amato13RLDM.pdf |
| Alternate Webpage(s) | http://people.csail.mit.edu/fao/docs/Amato13RLDM.pdf |
| Alternate Webpage(s) | http://www.fransoliehoek.net/docs/Amato13RLDM.pdf |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |
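The abstract names two scalability ingredients: sample-based search over models drawn from a Bayesian posterior, and factored value functions over the set of agents. Below is a minimal, hypothetical Python sketch of that general idea only, not the authors' algorithm. It makes heavy simplifications the paper does not: the state is treated as fully observed (a Bayes-adaptive MDP rather than a POMDP), there are just two agents with binary states and actions, and every name here (`DirichletModel`, `choose_joint_action`, the toy reward) is invented for illustration.

```python
import itertools
import random
from collections import defaultdict

# Toy problem size (assumptions): 2 agents, binary states and actions.
STATES = [0, 1]
ACTIONS = [0, 1]
GAMMA = 0.95

class DirichletModel:
    """Dirichlet posterior over transition dynamics.

    counts[(s, joint_a, s_next)] stores pseudo-counts; sample_model() draws
    one categorical next-state distribution per (state, joint action) pair.
    """
    def __init__(self, prior=1.0):
        self.prior = prior
        self.counts = defaultdict(float)

    def update(self, s, joint_a, s_next):
        self.counts[(s, joint_a, s_next)] += 1.0

    def sample_model(self):
        model = {}
        for s, joint_a in itertools.product(STATES, itertools.product(ACTIONS, ACTIONS)):
            # Dirichlet sample via normalized Gamma draws.
            g = [random.gammavariate(self.prior + self.counts[(s, joint_a, s2)], 1.0)
                 for s2 in STATES]
            total = sum(g)
            model[(s, joint_a)] = [x / total for x in g]
        return model

def reward(s, joint_a):
    # Toy reward (assumption): each agent is paid for matching the state.
    return float(sum(1 for a in joint_a if a == s))

def rollout(model, s, joint_a, depth=10):
    """Discounted return of taking joint_a in s, then acting uniformly."""
    total, discount, a = 0.0, 1.0, joint_a
    for _ in range(depth):
        total += discount * reward(s, a)
        s = random.choices(STATES, weights=model[(s, a)])[0]
        a = (random.choice(ACTIONS), random.choice(ACTIONS))
        discount *= GAMMA
    return total

def choose_joint_action(posterior, s, n_samples=20):
    """Sample models from the posterior and average rollout returns.

    The joint value is approximated by an additive, per-agent factored
    component, so each agent's action is maximized independently instead
    of enumerating all joint actions at selection time.
    """
    q = defaultdict(float)
    for _ in range(n_samples):
        model = posterior.sample_model()
        for joint_a in itertools.product(ACTIONS, ACTIONS):
            ret = rollout(model, s, joint_a) / n_samples
            for i, a in enumerate(joint_a):  # split credit across agents
                q[(i, a)] += ret / len(joint_a)
    return tuple(max(ACTIONS, key=lambda a: q[(i, a)]) for i in range(2))

if __name__ == "__main__":
    posterior = DirichletModel()
    s = random.choice(STATES)
    for _ in range(50):
        joint_a = choose_joint_action(posterior, s)
        s_next = random.choice(STATES)  # stand-in for the unknown true dynamics
        posterior.update(s, joint_a, s_next)
        s = s_next
    print("learned pseudo-counts:", dict(posterior.counts))
```

The additive credit split is what makes the factored approach attractive in general: each agent's component is maximized on its own, so action selection scales linearly with the number of agents rather than exponentially with the joint action space.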