Learning with minimal information in continuous games
While payoff-based learning models are almost exclusively devised for finite action games, where players can test every action, it is harder to design such learning processes for continuous games. We construct a stochastic learning rule, designed for games with continuous action sets, which requires no sophistication from the players and is simple to implement: players update their actions according to variations in their own payoff between the current and previous action. We then analyze its behavior in several classes of continuous games and show that convergence to a stable Nash equilibrium is guaranteed in all games with strategic complements as well as in concave games, while convergence to Nash equilibrium occurs in all locally ordinal potential games as soon as Nash equilibria are isolated.
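The update rule described in the abstract can be sketched in a few lines. The game below (two players with quadratic payoffs), the step-size schedule, and the noise scale are illustrative assumptions, not the authors' specification: each player observes only their own payoff, keeps moving the same way if the last move raised it, and reverses otherwise.

```python
import random

def payoff(i, actions):
    # Illustrative concave payoff (an assumption, not the paper's example):
    # player i wants to match half of the opponent's action.
    # Unique Nash equilibrium at (0, 0).
    return -(actions[i] - 0.5 * actions[1 - i]) ** 2

def payoff_based_learning(T=20000, seed=0):
    rng = random.Random(seed)
    x = [rng.uniform(-1.0, 1.0) for _ in range(2)]
    prev_x = [xi + 0.01 for xi in x]   # arbitrary previous actions to seed the rule
    prev_u = [payoff(i, prev_x) for i in range(2)]
    for t in range(1, T + 1):
        step = t ** -0.7               # decreasing step sizes, as in stochastic approximation
        u = [payoff(i, x) for i in range(2)]
        new_x = []
        for i in range(2):
            # Move in the direction given by sign(payoff change * action change):
            # if the last move improved the payoff, continue; otherwise reverse.
            d = (u[i] - prev_u[i]) * (x[i] - prev_x[i])
            direction = 1.0 if d >= 0 else -1.0
            xi = x[i] + step * direction + step * rng.gauss(0.0, 0.1)
            new_x.append(max(-1.0, min(1.0, xi)))  # clamp to a compact action set
        prev_x, prev_u, x = x, u, new_x
    return x
```

Each player here needs no knowledge of the opponent's action or of the payoff function itself, only two successive realized payoffs, which is the "minimal information" setting the abstract refers to; the specific sign-based step is one plausible reading of the rule, not the paper's exact construction.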
Year of publication: 2020
Authors: Bervoets, Sebastian; Bravo, Mario; Faure, Mathieu
Published in: Theoretical Economics. - The Econometric Society, ISSN 1933-6837, ZDB-ID 2220447-7. - Vol. 15.2020, 4, p. 1471-1508
Publisher: The Econometric Society
Similar items by person
- Learning with minimal information in continuous games, Bervoets, Sebastian (2020)
- Reinforcement Learning with Restrictions on the Action Set, Bravo, Mario (2013)
- More ...