CS364A: Algorithmic Game Theory Lecture #16: Best-Response Dynamics∗ (2013)
| Field | Value |
|---|---|
| Content Provider | CiteSeerX |
| Author | Roughgarden, Tim |
| Abstract | In this lecture we segue into the third part of the course, which studies the following questions. 1. Do we expect strategic players to reach an equilibrium of a game, even in principle? 2. If so, will it happen quickly? As we’ll see, theoretical computer science is well suited to contribute both positive and negative answers to this question. 3. If so, how does it happen? Which learning algorithms quickly converge to an equilibrium? Affirmative answers to these questions are important because they justify equilibrium analysis. Properties of equilibria, such as a near-optimal objective function value, are not obviously relevant when players fail to find one. More generally, proving that natural learning algorithms converge quickly to an equilibrium lends plausibility to the predictive power of an equilibrium concept. To reason about the three questions above, we require a behavioral model (“dynamics”) for players when not at an equilibrium. Thus far, we’ve just assumed that equilibria persist and that non-equilibria don’t. This lecture focuses on variations of “best-response dynamics,” while the next two lectures study dynamics based on regret-minimization. No concrete model of learning will be fully convincing. We aspire toward simple and natural learning algorithms that lead to concrete and non-trivial predictions. Ideally, we would like to reach the same conclusion via multiple different learning processes. Then, even though we might not literally believe in any of the specific learning algorithms, we can have some confidence that the conclusion is robust, and not an artifact of the details of a particular learning process. © 2013, Tim Roughgarden. |
| File Format | |
| Publisher Date | 2013-01-01 |
| Access Restriction | Open |
| Content Type | Text |
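
For readers skimming the record, the following is a minimal sketch of one common round-robin variant of best-response dynamics on a finite two-player game; it is not taken from the lecture notes, and the function name, payoff matrices, and round budget are illustrative assumptions. Each pass lets any player who is not currently best-responding switch to a best response; the process stops at a pure Nash equilibrium, and in general games it need not converge at all.

```python
# Sketch of best-response dynamics (round-robin variant) on a two-player game.
# Payoff matrices and names are illustrative, not from the lecture notes.
import numpy as np

def best_response_dynamics(payoffs, max_rounds=100):
    """Repeatedly let a player who is not best-responding switch to a best
    response; stop when a pure Nash equilibrium is reached.

    payoffs: list of two matrices, payoffs[i][a, b] = payoff to player i
             when player 0 plays action a and player 1 plays action b.
    """
    profile = [0, 0]  # start from an arbitrary pure strategy profile
    for _ in range(max_rounds):
        improved = False
        for i in (0, 1):
            other = profile[1 - i]
            # player i's available payoffs against the opponent's current action
            row = payoffs[i][:, other] if i == 0 else payoffs[i][other, :]
            best = int(np.argmax(row))
            if row[best] > row[profile[i]]:
                profile[i] = best  # strict improvement: switch to a best response
                improved = True
        if not improved:
            return profile  # pure Nash equilibrium: no player can improve
    return None  # no convergence within the round budget (dynamics may cycle)

# Example: a simple coordination game, where the dynamics converge immediately.
A = np.array([[2, 0], [0, 1]])  # payoffs to player 0
B = np.array([[2, 0], [0, 1]])  # payoffs to player 1
print(best_response_dynamics([A, B]))  # -> [0, 0]
```

In potential games (a central example in the lecture series), every such improvement step strictly increases a potential function, so the dynamics are guaranteed to terminate at a pure Nash equilibrium; the interesting questions are how quickly, and for which classes of games.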