In this paper, we study constrained continuous-time Markov decision processes with a denumerable state space and unbounded reward/cost and transition rates. The criterion to be maximized is the expected average reward, and a constraint is imposed on an expected average cost. We give suitable...
Persistent link: https://www.econbiz.de/10010949960
In this paper, we consider nonstationary Markov decision processes (MDPs, for short) with the average variance criterion on a countable state space, with finite action spaces and bounded one-step rewards. From the optimality equations which are provided in this paper, we translate the average...
Persistent link: https://www.econbiz.de/10010950146
This paper is devoted to studying continuous-time Markov decision processes with general state and action spaces, under the long-run expected average reward criterion. The transition rates of the underlying continuous-time Markov processes are allowed to be unbounded, and the reward rates may...
Persistent link: https://www.econbiz.de/10010950284
This paper deals with denumerable discrete-time Markov decision processes with unbounded costs. The criteria to be minimized are both the limsup and the liminf average criteria, instead of only the limsup average criterion widely used in the previous literature. We give another set of conditions...
Persistent link: https://www.econbiz.de/10010950287
This paper deals with a new optimality criterion consisting of the usual three average criteria and the canonical triplet (together called the strong average-canonical optimality criterion) and introduces the concept of a strong average-canonical policy for nonstationary Markov decision processes,...
Persistent link: https://www.econbiz.de/10010999641
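All five papers above concern the long-run expected average-reward (or average-cost) criterion for Markov decision processes. As a minimal, self-contained illustration of that criterion only, the sketch below runs relative value iteration on an invented two-state, two-action finite MDP. It is not an implementation of any of the listed papers' methods (those treat denumerable or general state spaces, unbounded rates, constraints, and nonstationarity); all states, actions, transition probabilities, and rewards are made up for the example.

```python
# Relative value iteration for the long-run average-reward criterion
# on a tiny finite MDP. P[a][s][t] is the probability of moving from
# state s to state t under action a; R[a][s] is the one-step reward.
# All numbers below are invented for illustration.
P = {
    0: [[0.9, 0.1], [0.2, 0.8]],
    1: [[0.5, 0.5], [0.7, 0.3]],
}
R = {
    0: [1.0, 0.0],
    1: [2.0, 0.5],
}

def relative_value_iteration(P, R, n_states=2, tol=1e-9, max_iter=10_000):
    """Return (gain, bias, policy) for the average-reward criterion."""
    h = [0.0] * n_states  # relative value (bias) estimates, h[0] pinned to 0
    for _ in range(max_iter):
        # Bellman backup: (Th)(s) = max_a [ R[a][s] + sum_t P[a][s][t] * h(t) ]
        Th = [
            max(R[a][s] + sum(P[a][s][t] * h[t] for t in range(n_states))
                for a in P)
            for s in range(n_states)
        ]
        gain = Th[0] - h[0]  # gain estimate from the reference state
        new_h = [Th[s] - Th[0] for s in range(n_states)]  # renormalize
        done = max(abs(new_h[s] - h[s]) for s in range(n_states)) < tol
        h = new_h
        if done:
            break
    # Greedy policy with respect to the converged bias values.
    policy = [
        max(P, key=lambda a: R[a][s] + sum(P[a][s][t] * h[t]
                                           for t in range(n_states)))
        for s in range(n_states)
    ]
    return gain, h, policy

gain, bias, policy = relative_value_iteration(P, R)
print(gain, bias, policy)  # optimal average reward, bias vector, greedy policy
```

For this toy chain the iteration converges geometrically; `gain` solves the Poisson (average-reward optimality) equation g + h(s) = max_a [R(a,s) + Σ_t P(a,s,t) h(t)] with h(0) = 0, which is the finite-state analogue of the optimality equations discussed in the abstracts above.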