Time to absorption in discounted reinforcement models
Reinforcement schemes are a class of non-Markovian stochastic processes. Their non-Markovian nature allows them to model a form of memory of the past. One subclass of such models is that in which the past is exponentially discounted or forgotten. Often, models in this subclass have the property of becoming trapped with probability 1 in some degenerate state. While previous work has concentrated on such limit results, we concentrate here on a contrary effect, namely that the time to become trapped may increase exponentially in 1/x as the discount rate, 1-x, approaches 1. As a result, the time to become trapped may easily exceed the lifetime of the simulation or of the physical data being modeled. In such a case, the quasi-stationary behavior is more germane. We apply our results to a model of social network formation based on ternary (three-person) interactions with uniform positive reinforcement.
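The trapping-time effect described in the abstract can be illustrated with a small simulation. The sketch below is not the paper's exact model: it assumes a minimal two-option discounted reinforcement scheme in which both weights are multiplied by the discount rate 1-x each step and the chosen option is then reinforced by 1; the threshold `eps` used as a proxy for "trapped", the function name `trap_time`, and the parameter values are illustrative choices. Even under these assumptions, the mean hitting time grows steeply as x shrinks, consistent with the exponential-in-1/x behavior the paper analyzes.

```python
import random

def trap_time(x, eps=0.01, max_steps=10**6, seed=None):
    """Steps until one option's choice probability leaves (eps, 1 - eps).

    Two-option reinforcement scheme with exponential discounting:
    each step both weights are multiplied by the discount rate 1 - x,
    then the chosen option's weight is reinforced by 1.
    """
    rng = random.Random(seed)
    w = [1.0, 1.0]  # initial weights on the two options
    for t in range(1, max_steps + 1):
        p = w[0] / (w[0] + w[1])       # probability of choosing option 0
        choice = 0 if rng.random() < p else 1
        w[0] *= 1.0 - x                # discount the past ...
        w[1] *= 1.0 - x
        w[choice] += 1.0               # ... and reinforce the choice made
        p = w[0] / (w[0] + w[1])
        if p < eps or p > 1.0 - eps:   # proxy for being trapped
            return t
    return max_steps                   # not trapped within the budget

if __name__ == "__main__":
    # As x -> 0 (discount rate 1 - x -> 1), trapping times blow up.
    for x in (0.4, 0.2, 0.1, 0.05):
        times = [trap_time(x, seed=s) for s in range(20)]
        print(f"x = {x:<5} mean steps to trap ~ {sum(times) / len(times):.0f}")
```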
Year of publication: | 2004
Authors: | Pemantle, Robin ; Skyrms, Brian
Published in: | Stochastic Processes and their Applications. - Elsevier, ISSN 0304-4149. - Vol. 109.2004, 1, p. 1-12
Publisher: | Elsevier
Keywords: | Network ; Social network ; Urn model ; Friedman urn ; Stochastic approximation ; Meta-stable ; Trap ; Three-player game ; Potential well ; Exponential time ; Quasi-stationary
Similar items by person
- Network formation by reinforcement learning: the long and medium run
  Pemantle, Robin, (2004)
- Learning to signal: Analysis of a micro-level reinforcement model
  Argiento, Raffaele, (2009)
- Skyrms, Brian, (2004)