Transfer of task representation in reinforcement learning using policy-based proto-value functions
| Field | Value |
|---|---|
| Content Provider | CiteSeerX |
| Author | Ferrante, Eliseo |
| Abstract | Reinforcement Learning research has traditionally been devoted to solving single-task problems; whenever a new task is faced, learning must therefore be restarted from scratch. Recently, several studies have addressed the issue of reusing the knowledge acquired in solving previous related tasks by transferring information about policies and value functions. In this paper, we analyze the use of proto-value functions from a transfer learning perspective. Proto-value functions are effective basis functions for the approximation of value functions, defined over the graph obtained by a random walk on the environment. The definition of this graph is a key aspect in transfer problems in which both the reward function and the dynamics change. We therefore introduce policy-based proto-value functions, which are obtained by considering the graph generated by a random walk guided by the optimal policy of one of the tasks at hand. We compare the effectiveness of policy-based and standard proto-value functions on different transfer problems defined on a simple grid-world environment. |
| Access Restriction | Open |
| Subject Keyword | Policy-based Proto-value Function, Several Study, Task Representation, Transfer Problem, Reinforcement Learning, New Task, Proto-value Function, Effective Basis Function, Standard Proto-value Function, Optimal Policy, Single-task Problem, Simple Grid-world Environment, Dynamic Change, Random Walk, Key Aspect, Different Transfer Problem, Reward Function, Value Function, Reinforcement Learning Research |
| Content Type | Text |
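As the abstract describes, standard proto-value functions are basis functions derived from the graph of a random walk on the environment. The following is a minimal sketch, not code from the paper, of the usual construction: the smoothest eigenvectors of the normalized graph Laplacian of an undirected grid-world graph. The function names (`grid_adjacency`, `proto_value_functions`) and the 5x5 grid size are illustrative assumptions.

```python
# A minimal sketch (illustrative, not the paper's code): standard
# proto-value functions as the smoothest eigenvectors of the normalized
# graph Laplacian of a random-walk graph over a grid world.
import numpy as np

def grid_adjacency(rows, cols):
    """Adjacency matrix of a 4-connected grid world (undirected graph)."""
    n = rows * cols
    A = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr, dc in ((1, 0), (0, 1)):  # link right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols:
                    j = rr * cols + cc
                    A[i, j] = A[j, i] = 1.0   # undirected edge
    return A

def proto_value_functions(A, k):
    """k eigenvectors of L = I - D^{-1/2} A D^{-1/2} with smallest eigenvalues."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt
    _, eigvecs = np.linalg.eigh(L)  # eigh sorts eigenvalues in ascending order
    return eigvecs[:, :k]           # the k smoothest basis functions

# A value function is then approximated as a linear combination of the basis,
# V(s) ~ sum_i w_i * Phi[s, i], with the weights w_i learned on the new task.
Phi = proto_value_functions(grid_adjacency(5, 5), k=10)
print(Phi.shape)  # (25, 10)
```

Under the same sketch, the policy-based variant introduced in the abstract would replace `A` with a graph whose edges follow the transitions taken by the optimal policy of a source task (a directed, weighted graph), so the resulting basis reflects that policy's dynamics rather than a uniform random walk.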