Markov-achievable payoffs for finite-horizon decision models
Consider the class of n-stage decision models with state space S, action space A, and payoff function g : (S × A)^n × S → R. The function g is Markov-achievable if, for every set of available randomized actions and every transition law, each plan has a corresponding Markov plan whose value is at least as good. A condition on g, called the "non-forking linear sections property", is necessary and sufficient for g to be Markov-achievable. If g satisfies the slightly stronger "general linear sections property", then g can be written as a sum of products of certain simple neighboring-stage payoffs.
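As a minimal illustration of the definition, the sketch below brute-forces a hypothetical two-stage model in which g is additive (a sum of stage rewards plus a terminal reward), a classical special case where Markov plans are known to lose nothing. All numbers, names, and the additive form of g are illustrative assumptions, not taken from the paper, and the paper's "linear sections" characterization is far more general than this example.

```python
import itertools

# Hypothetical toy model; every number below is illustrative, not from the paper.
S = [0, 1]   # state space
A = [0, 1]   # action space
n = 2        # number of stages

# Transition law: P[s][a] is a distribution over the next state.
P = {0: {0: [0.7, 0.3], 1: [0.2, 0.8]},
     1: {0: [0.5, 0.5], 1: [0.9, 0.1]}}

# An additive payoff g: stage rewards r(s, a) plus a terminal reward.
r = {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.5, (1, 1): 2.0}
terminal = {0: 0.0, 1: 1.0}

def g(path):
    """path = (s0, a0, s1, a1, s2); g maps (S x A)^n x S to R."""
    return sum(r[(path[2*t], path[2*t + 1])] for t in range(n)) + terminal[path[-1]]

def value(plan, s0):
    """Expected payoff of a deterministic plan: (stage, state history) -> action."""
    total = 0.0
    for tail in itertools.product(S, repeat=n):   # all state trajectories
        seq = (s0,) + tail
        prob, path = 1.0, [s0]
        for t in range(n):
            a = plan(t, seq[:t + 1])
            prob *= P[seq[t]][a][seq[t + 1]]
            path += [a, seq[t + 1]]
        total += prob * g(tuple(path))
    return total

# Enumerate every deterministic history-dependent plan: one action per
# (stage, state history) pair.
hists = [(t, h) for t in range(n) for h in itertools.product(S, repeat=t + 1)]
best_general = max(
    value(lambda t, h, tab=dict(zip(hists, c)): tab[(t, h)], 0)
    for c in itertools.product(A, repeat=len(hists)))

# Markov plans: the action depends only on the stage and the current state.
keys = [(t, s) for t in range(n) for s in S]
best_markov = max(
    value(lambda t, h, tab=dict(zip(keys, c)): tab[(t, h[-1])], 0)
    for c in itertools.product(A, repeat=len(keys)))

# For this additive g, the best Markov plan matches the best general plan.
assert abs(best_markov - best_general) < 1e-9
```

Since Markov plans are a subset of all plans, best_general can never be smaller than best_markov; for additive payoffs, backward induction shows a Markov plan attains the general optimum, which is exactly what the brute-force check confirms here.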
Year of publication: 1998
Authors: Pestien, Victor; Wang, Xiaobo
Published in: Stochastic Processes and their Applications. - Elsevier, ISSN 0304-4149. - Vol. 73.1998, 1, p. 101-118
Publisher: Elsevier
Keywords: Markov decision model; Payoff function; Markov plan
Similar items by person
- Finite-stage reward functions having the Markov adequacy property. Pestien, Victor (1993)
- Throughput limits from the asymptotic profile of cyclic networks with state-dependent service rates. Daduna, Hans (2008)
- Jacobsen, Joyce (2006)