Two Competing Models of How People Learn in Games
| Content Provider | Semantic Scholar |
|---|---|
| Author | Hopkins, Ed |
| Copyright Year | 1999 |
| Abstract | Reinforcement learning and stochastic fictitious play are apparent rivals as models of human learning. They embody quite different assumptions about the processing of information and optimization. This paper compares their properties and finds that they are far more similar than previously thought. In particular, the expected motion of stochastic fictitious play and reinforcement learning with experimentation can both be written as a perturbed form of the evolutionary replicator dynamics. Therefore they will in many cases have the same asymptotic behavior. In particular, local stability of mixed equilibria under stochastic fictitious play implies local stability under perturbed reinforcement learning. The main identifiable difference between the two models is speed: stochastic fictitious play gives rise to faster learning. |
| File Format | PDF; HTM / HTML |
| Alternate Webpage(s) | http://homepages.ed.ac.uk/hopkinse/twocom.pdf |
| Alternate Webpage(s) | https://www.research.ed.ac.uk/portal/files/5889655/stim7.pdf |
| Language | English |
| Access Restriction | Open |
| Subject Keyword | Information; Mathematical optimization; Reinforcement learning |
| Content Type | Text |
| Resource Type | Article |
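
The abstract compares reinforcement learning with stochastic fictitious play as models of learning in games. As a purely illustrative sketch (not taken from the paper, and with arbitrary parameter choices such as the payoff shift and the logit precision `beta`), the Python snippet below simulates a cumulative reinforcement rule and a logit fictitious-play rule in Matching Pennies, a game whose only equilibrium is mixed, and reports the time-average probability each rule assigns to the first action.

```python
import numpy as np

# Matching Pennies payoffs for the row player; the column player receives the negative.
# The unique Nash equilibrium is mixed: each player plays each action with probability 1/2.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

rng = np.random.default_rng(0)

def reinforcement_learning(T=20000, q0=1.0, shift=2.0):
    """Cumulative (Erev-Roth-style) reinforcement learning with a positive payoff shift."""
    q_row = np.full(2, q0)   # action propensities, row player
    q_col = np.full(2, q0)   # action propensities, column player
    avg = np.zeros(2)
    for _ in range(T):
        p_row = q_row / q_row.sum()
        p_col = q_col / q_col.sum()
        a_row = rng.choice(2, p=p_row)
        a_col = rng.choice(2, p=p_col)
        # Realized payoffs are shifted to stay positive so propensities never decrease.
        q_row[a_row] += A[a_row, a_col] + shift
        q_col[a_col] += -A[a_row, a_col] + shift
        avg += p_row
    return avg / T

def stochastic_fictitious_play(T=20000, beta=5.0):
    """Logit (stochastic) fictitious play: smoothed best reply to empirical opponent frequencies."""
    counts_col = np.ones(2)  # row player's counts of the column player's past actions
    counts_row = np.ones(2)  # column player's counts of the row player's past actions
    avg = np.zeros(2)
    for _ in range(T):
        belief_col = counts_col / counts_col.sum()
        belief_row = counts_row / counts_row.sum()
        u_row = A @ belief_col            # row player's expected payoffs
        u_col = -(A.T @ belief_row)       # column player's expected payoffs
        p_row = np.exp(beta * u_row); p_row /= p_row.sum()
        p_col = np.exp(beta * u_col); p_col /= p_col.sum()
        a_row = rng.choice(2, p=p_row)
        a_col = rng.choice(2, p=p_col)
        counts_col[a_col] += 1
        counts_row[a_row] += 1
        avg += p_row
    return avg / T

print("Reinforcement learning, time-average P(action 0):", reinforcement_learning()[0])
print("Stochastic fictitious play, time-average P(action 0):", stochastic_fictitious_play()[0])
```

By the symmetry of the game and the symmetric initialization, both time averages should end up close to the mixed-equilibrium value of 1/2. Rerunning with shorter horizons `T` is one informal way to look at the speed difference the abstract highlights, though the paper's formal comparison rests on the perturbed replicator approximation rather than on simulation.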