A further remark on dynamic programming for partially observed Markov processes
In (Stochastic Process. Appl. 103 (2003) 293), a pair of dynamic programming inequalities were derived for the 'separated' ergodic control problem for partially observed Markov processes, using the 'vanishing discount' argument. In this note, we strengthen these results to derive a single dynamic programming equation for the same.
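For orientation, a generic average-cost (ergodic) dynamic programming equation for a separated, filter-based control problem can be written as below. This is a minimal sketch in hypothetical notation (filter π, control u, running cost c, optimal ergodic cost ρ, relative value function V, filter transition kernel P), not the precise equation, state space, or assumptions of the paper.

```latex
% Sketch only: a generic discrete-time average-cost DP equation for the
% separated problem, in hypothetical notation (not the paper's statement).
%   \pi  : filter (conditional law of the hidden state given observations)
%   u    : control in the action set U
%   c(\pi,u) : one-step cost under the filter state
%   \rho : optimal ergodic (long-run average) cost
%   V    : relative value function
%   P(d\pi' \mid \pi, u) : transition kernel of the controlled filter
\begin{equation*}
  \rho + V(\pi) \;=\; \inf_{u \in U}
  \Bigl[\, c(\pi, u) + \int V(\pi')\, P(d\pi' \mid \pi, u) \Bigr].
\end{equation*}
% The 2003 reference obtains such a relation only as a pair of inequalities
% ("\le" and "\ge" in place of "="); the present note strengthens them to a
% single equation of this type, under the paper's own hypotheses.
```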
Year of publication: 2004
Authors: Borkar, Vivek S.; Budhiraja, Amarjit
Published in: Stochastic Processes and their Applications. - Elsevier, ISSN 0304-4149. - Vol. 112.2004, 1, p. 79-93
Publisher: Elsevier
Keywords: Controlled Markov processes; Dynamic programming; Partial observations; Ergodic cost; Vanishing discount; Pseudo-atom
Similar items by person
- On the multi-dimensional skew Brownian motion. Atar, Rami (2015)
- Exit time and invariant measure asymptotics for small noise constrained diffusions. Biswas, Anup (2011)
- Multiscale diffusion approximations for stochastic networks in heavy traffic. Budhiraja, Amarjit (2011)