Finite-sample analysis of least-squares policy iteration (Technical Report, 2010)
| Content Provider | CiteSeerX |
|---|---|
| Author | Lazaric, Alessandro; Ghavamzadeh, Mohammad |
| Abstract | In this paper, we report a performance bound for the widely used least-squares policy iteration (LSPI) algorithm. We first consider the problem of policy evaluation in reinforcement learning, i.e., learning the value function of a fixed policy, using the least-squares temporal-difference (LSTD) learning method, and report a finite-sample analysis for this algorithm. To do so, we first derive a bound on the performance of the LSTD solution evaluated at the states generated by the Markov chain and used by the algorithm to learn an estimate of the value function. This result is general in the sense that no assumption is made on the existence of a stationary distribution for the Markov chain. We then derive generalization bounds in the case when the Markov chain possesses a stationary distribution and is β-mixing. Finally, we analyze how the error at each policy evaluation step is propagated through the iterations of a policy iteration method, and derive a performance bound for the LSPI algorithm. |
| Journal | Technical Report, SEQUEL (INRIA) Lille–Nord Europe |
| Language | English |
| Publisher Date | 2010-01-01 |
| Access Restriction | Open |
| Subject Keyword | Finite-sample Analysis, Least-squares Policy Iteration, Markov Chain, Performance Bound, Stationary Distribution, Value Function, Least-squares Temporal-difference, Policy Evaluation Step, Fixed Policy, Reinforcement Learning, Policy Evaluation, LSPI Algorithm, LSTD Solution, Generalization Bound, Policy Iteration Method |
| Content Type | Text |
| Resource Type | Article |
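
The abstract centers on LSTD, the policy-evaluation routine whose finite-sample behaviour the report analyzes. As a point of reference, below is a minimal sketch of LSTD with linear value-function approximation, assuming samples of the form (s, r, s') drawn along a single trajectory; the function name, the ridge regularization term, and the synthetic data are illustrative assumptions and are not taken from the report.

```python
# Minimal LSTD sketch: estimate theta with V(s) ~= phi(s) @ theta from
# transition samples (s_t, r_t, s_{t+1}). Illustrative only.
import numpy as np

def lstd(phi_s, rewards, phi_next, gamma=0.95, reg=1e-6):
    """phi_s    : (n, d) features of visited states s_t
       rewards  : (n,)   rewards r_t
       phi_next : (n, d) features of successor states s_{t+1}"""
    d = phi_s.shape[1]
    # A = sum_t phi(s_t) (phi(s_t) - gamma * phi(s_{t+1}))^T
    # b = sum_t phi(s_t) r_t
    A = phi_s.T @ (phi_s - gamma * phi_next)
    b = phi_s.T @ rewards
    # A small ridge term keeps A invertible on short sample runs (assumption).
    return np.linalg.solve(A + reg * np.eye(d), b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 500, 8
    phi_s = rng.normal(size=(n, d))      # synthetic features, for illustration
    phi_next = rng.normal(size=(n, d))
    rewards = rng.normal(size=n)
    theta = lstd(phi_s, rewards, phi_next)
    print("estimated value-function weights:", theta)
```

In this formulation the weights solve Aθ = b with A = Σₜ φ(sₜ)(φ(sₜ) − γφ(sₜ₊₁))ᵀ and b = Σₜ φ(sₜ)rₜ; the report's first bound concerns the error of this solution at the states visited by the Markov chain, before any stationarity or mixing assumptions are introduced.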