Dynamic Programming for a Stochastic Markovian Process with an Application to the Mean Variance Models
This paper presents a fresh perspective on the Markov reward process. In order to bring Howard's [Howard, R. A. 1969. Dynamic Programming and Markov Processes. The M.I.T. Press, 5th printing.] model closer to practical applicability, two very important aspects of the model are restated: (a) the rewards are made random variables instead of known constants, and (b) any decision rule over the moment set of the portfolio distribution is allowed, rather than assuming maximization of the expected value of the portfolio outcome. These modifications provide a natural setting for the rewards to be normally distributed, and thus make it possible to apply the mean variance models. An algorithm for solution is presented, and a special case, the mean-variability decision rule of maximizing (\mu /\sigma), is worked out in detail.
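The abstract's two modifications, random (normally distributed) rewards and an arbitrary decision rule over the moments of the outcome distribution, can be illustrated with a small sketch. The example below is not the paper's algorithm; it is a minimal Monte-Carlo illustration, with a hypothetical two-state process and made-up transition and reward parameters, of ranking stationary policies by the \mu/\sigma criterion worked out in the paper.

```python
import itertools
import random

# Hypothetical two-state Markov reward process. Per modification (a),
# each transition's reward is a normal random variable (mean, std)
# rather than a known constant. All numbers are illustrative.
P = {0: {0: [0.5, 0.5], 1: [0.8, 0.2]},   # P[state][action] -> [Pr(next=0), Pr(next=1)]
     1: {0: [0.4, 0.6], 1: [0.7, 0.3]}}
R = {0: {0: (10.0, 4.0), 1: (6.0, 1.0)},  # R[state][action] -> (mean, std) of reward
     1: {0: (2.0, 3.0), 1: (4.0, 0.5)}}

def simulate(policy, horizon=20, runs=2000, seed=1):
    """Monte-Carlo estimate of the mean and std of total reward under a
    stationary policy (a dict mapping state -> action)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(runs):
        s, total = 0, 0.0
        for _ in range(horizon):
            a = policy[s]
            mu, sigma = R[s][a]
            total += rng.gauss(mu, sigma)          # random reward draw
            s = 0 if rng.random() < P[s][a][0] else 1
        totals.append(total)
    n = len(totals)
    mean = sum(totals) / n
    var = sum((x - mean) ** 2 for x in totals) / (n - 1)
    return mean, var ** 0.5

def mu_over_sigma(policy):
    """Decision rule (b): score a policy by mu/sigma of its total reward."""
    m, s = simulate(policy)
    return m / s

# Enumerate every stationary policy and pick the mu/sigma-maximizing one.
policies = [dict(enumerate(p)) for p in itertools.product([0, 1], repeat=2)]
best = max(policies, key=mu_over_sigma)
```

Because the rewards are normal, the total reward's distribution is summarized by its first two moments, which is what makes a mean-variance (or \mu/\sigma) ranking of policies well defined; the paper's contribution is a dynamic-programming algorithm for this, whereas the sketch above just enumerates and simulates.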
Year of publication: 1977
Authors: Goldwerger, Juval
Published in: Management Science. - Institute for Operations Research and the Management Sciences - INFORMS, ISSN 0025-1909. - Vol. 23.1977, 6, p. 612-620
Publisher: Institute for Operations Research and the Management Sciences - INFORMS
Similar items by person
- Goldwerger, Juval, (1977)
- Toward a new approach to portfolio selection theory : some notes on mean variance
  Goldwerger, Juval, (1972)
- Note--Capital Budgeting of Interdependent Projects: Activity Analysis Approach
  Goldwerger, Juval, (1977)