Continuous-domain reinforcement learning using a learned qualitative state representation (2008)
| Content Provider | CiteSeerX |
|---|---|
| Author | Mugan, Jonathan; Kuipers, Benjamin |
| Description | We present a method that allows an agent to learn a qualitative state representation that can be applied to reinforcement learning. By exploring the environment, the agent learns an abstraction consisting of landmarks that break the space into qualitative regions, and rules that predict changes in qualitative state. For each predictive rule the agent learns a context, consisting of qualitative variables, that predicts when the rule will be successful. The regions of this context in which the rule is likely to succeed serve as natural goals for reinforcement learning. The reinforcement learning problems created by the agent are simple because the learned abstraction provides a mapping from the continuous input and motor variables to discrete states that aligns with the dynamics of the environment. (A toy sketch of such a landmark-based discretization follows the table.) Appeared in the 22nd International Workshop on Qualitative Reasoning. |
| Language | English |
| Publisher Date | 2008-01-01 |
| Access Restriction | Open |
| Subject Keyword | Context Consisting; Learned Abstraction; Natural Goal; Qualitative State; Predictive Rule; Motor Variable; Reinforcement Learning; Learned Qualitative State Representation; Qualitative Variable; Qualitative Region; Continuous-domain Reinforcement; Continuous Input; Qualitative State Representation |
| Content Type | Text |
| Resource Type | Article |
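
The abstract describes mapping continuous input and motor variables to discrete qualitative states via learned landmarks. The sketch below is an illustrative assumption of what such a landmark-based discretization might look like; the function name, interval convention, and example values are not taken from the paper, which should be consulted for the authors' actual formulation.

```python
def qualitative_state(value, landmarks):
    """Map a continuous value to a discrete qualitative region index.

    Illustrative sketch only (not the authors' code). Sorted landmarks split
    the real line into 2 * len(landmarks) + 1 qualitative regions: an open
    interval below each landmark, the landmark itself, and the interval
    above all landmarks. E.g. landmarks [0.0] give regions
    (-inf, 0), {0}, (0, +inf) with indices 0, 1, 2.
    """
    for i, lm in enumerate(sorted(landmarks)):
        if value < lm:
            return 2 * i          # open interval just below landmark i
        if value == lm:
            return 2 * i + 1      # exactly at landmark i
    return 2 * len(landmarks)     # above all landmarks

# Hypothetical example: a joint angle with landmarks at -0.5 and 0.5 radians
landmarks = [-0.5, 0.5]
print(qualitative_state(-1.0, landmarks))  # 0: below -0.5
print(qualitative_state(0.5, landmarks))   # 3: at 0.5
print(qualitative_state(0.7, landmarks))   # 4: above 0.5
```

Applying such a function to each continuous variable yields a small discrete state space, which is what makes the reinforcement learning problems created by the agent simple, as the abstract notes.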