Persistent link: https://www.econbiz.de/10012194805
Learning customer preferences from observed behaviour is an important topic in the marketing literature. Structural models typically treat forward-looking customers or firms as utility-maximizing agents whose utility is estimated using methods of Stochastic Optimal Control. We suggest an...
Persistent link: https://www.econbiz.de/10014117817
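To make the forward-looking, utility-maximizing agent behind such structural models concrete, here is a minimal backward-induction (dynamic programming) sketch of a customer deciding whether to buy a stockpileable good. The horizon, utility function, price and state grid are illustrative assumptions, not the model proposed in the paper.

```python
import numpy as np

# Minimal backward-induction sketch of a forward-looking, utility-maximizing customer.
# Horizon, utility, price and state grid are illustrative assumptions, not the paper's model.

T = 10                      # planning horizon (periods)
beta = 0.95                 # discount factor
price = 1.0                 # unit price of the good
states = np.arange(0, 6)    # units of the good the customer has in stock (0..5)
actions = (0, 1)            # 0 = do not buy, 1 = buy one unit

def flow_utility(inventory, buy):
    consumed = min(inventory + buy, 1)          # consume at most one unit per period
    return np.log1p(consumed) - price * buy     # concave consumption utility minus expenditure

def next_state(inventory, buy):
    return int(min(max(inventory + buy - 1, 0), states[-1]))  # stock left after consumption

V = np.zeros((T + 1, len(states)))              # value function; terminal value is zero
policy = np.zeros((T, len(states)), dtype=int)  # optimal purchase decision per (t, state)

for t in range(T - 1, -1, -1):                  # Bellman backward recursion
    for s in states:
        q = [flow_utility(s, a) + beta * V[t + 1, next_state(s, a)] for a in actions]
        V[t, s] = max(q)
        policy[t, s] = int(np.argmax(q))

print("Optimal first-period purchase decision by inventory level:", policy[0])
```

The backward loop is the discrete-time counterpart of the stochastic-optimal-control formulation: each period's decision trades off current utility against the discounted continuation value.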
We propose a simple non-equilibrium model of a financial market as an open system with a possible exchange of money with the outside world, and with market frictions (trade impacts) incorporated into asset price dynamics via a feedback mechanism. Using a linear market impact model, this produces a...
Persistent link: https://www.econbiz.de/10012898637
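As a rough illustration of how a linear price-impact feedback can be wired into discrete-time price dynamics, here is a short simulation sketch; the trading rule, impact coefficient and noise scale are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

# Short simulation of log-price dynamics with a linear market-impact feedback loop.
# The trading rule, impact coefficient and noise scale are illustrative assumptions.

rng = np.random.default_rng(0)
n_steps = 1_000
sigma = 0.01      # volatility of exogenous ("fundamental") shocks
impact = 0.5      # linear price impact per unit of net order flow
kappa = 0.2       # feedback: how strongly order flow chases the most recent return

log_price = np.zeros(n_steps)
for t in range(1, n_steps):
    last_return = log_price[t - 1] - log_price[t - 2] if t > 1 else 0.0
    flow = kappa * last_return + 0.001 * rng.standard_normal()   # net order flow (feedback term)
    # linear impact: the price moves in proportion to net flow, on top of exogenous noise
    log_price[t] = log_price[t - 1] + impact * flow + sigma * rng.standard_normal()

returns = np.diff(log_price)
print(f"realized return std {returns.std():.4f} vs exogenous sigma {sigma}")
```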
This paper presents a discrete-time option pricing model that is rooted in Reinforcement Learning (RL), and more specifically in the famous Q-Learning method. We construct a risk-adjusted Markov Decision Process for a discrete-time version of the classical Black-Scholes-Merton (BSM) model,...
Persistent link: https://www.econbiz.de/10012900426
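The sketch below illustrates the generic Q-Learning machinery such a model builds on: tabular Q-Learning on a binomial stock tree, with a mean-variance style (risk-adjusted) reward for holding a hedge position. The reward, action grid and parameters are assumptions made for illustration, not the paper's risk-adjusted MDP construction.

```python
import numpy as np

# Illustrative tabular Q-Learning on a binomial stock tree with a mean-variance style
# (risk-adjusted) reward for holding a hedge position. This sketches only the generic
# Q-Learning machinery; the reward, grids and parameters are assumptions.

rng = np.random.default_rng(1)
T, u, d, p = 5, 1.05, 0.95, 0.5              # time steps, up/down factors, up-move probability
S0, lam, gamma, alpha, eps = 100.0, 0.1, 1.0, 0.1, 0.2
hedges = np.linspace(-1.0, 1.0, 5)           # admissible hedge positions (shares held)

def stock(t, ups):                           # stock price after `ups` up-moves out of t steps
    return S0 * (u ** ups) * (d ** (t - ups))

# Q[t, ups, a]: value of taking hedge `a` at node (t, ups); row t = T is the terminal boundary
Q = np.zeros((T + 1, T + 1, len(hedges)))

for episode in range(20_000):
    ups = 0
    for t in range(T):
        a = rng.integers(len(hedges)) if rng.random() < eps else int(np.argmax(Q[t, ups]))
        ups_next = ups + int(rng.random() < p)
        pnl = hedges[a] * (stock(t + 1, ups_next) - stock(t, ups))
        reward = pnl - lam * pnl ** 2                       # risk-adjusted (mean-variance) reward
        target = reward + gamma * Q[t + 1, ups_next].max()  # Q-Learning: max over next actions
        Q[t, ups, a] += alpha * (target - Q[t, ups, a])
        ups = ups_next

print("Greedy hedge at t=0:", hedges[int(np.argmax(Q[0, 0]))])
```

The key step is the off-policy update Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)], which lets the hedging policy be learned directly from sampled transitions rather than from a known model.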
Persistent link: https://www.econbiz.de/10009534630
The QLBS model is a discrete-time option hedging and pricing model based on Dynamic Programming (DP) and Reinforcement Learning (RL). It combines the famous Q-Learning method of RL with the Black-Scholes(-Merton) model's idea of reducing the problem of option pricing and hedging to the...
Persistent link: https://www.econbiz.de/10012930216
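Complementing the Q-Learning sketch above, the following one-period Monte-Carlo example shows the kind of variance-minimizing (sequential risk minimization) hedge that discrete-time models of this family use in place of the continuous-time BSM delta. The lognormal dynamics, strike and parameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# One-period Monte-Carlo sketch of a variance-minimizing hedge for a European call.
# Lognormal dynamics, strike and parameters are illustrative assumptions.

rng = np.random.default_rng(42)
S0, K, mu, sigma, dt, n_paths = 100.0, 100.0, 0.05, 0.2, 1.0 / 12, 100_000

# simulate one hedging period of discrete-time lognormal stock moves
S1 = S0 * np.exp((mu - 0.5 * sigma ** 2) * dt
                 + sigma * np.sqrt(dt) * rng.standard_normal(n_paths))
payoff = np.maximum(S1 - K, 0.0)        # option value one step ahead (here: terminal payoff)
dS = S1 - S0

cov = np.cov(payoff, dS)
hedge = cov[0, 1] / cov[1, 1]           # a* = Cov(payoff, dS) / Var(dS) minimizes hedged P&L variance
residual_risk = (payoff - hedge * dS).std()
print(f"variance-minimizing hedge ratio: {hedge:.3f}, residual P&L std: {residual_risk:.3f}")
```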
Crowding is widely regarded as one of the most important risk factors in designing portfolio strategies. In this paper, we analyze stock crowding using network analysis of fund holdings, from which we compute crowding scores for stocks. These scores are then used to construct costless long-short...
Persistent link: https://www.econbiz.de/10014350047
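As a toy illustration of the pipeline from fund holdings to crowding scores to a costless long-short portfolio, here is a small sketch. The score used is a simple ownership-weighted average and the holdings data are made up; the paper's network-based crowding scores are more elaborate.

```python
import numpy as np
import pandas as pd

# Toy pipeline: fund holdings -> per-stock crowding score -> costless (dollar-neutral) long-short.
# The score is a crude ownership-weighted average; data and tickers are made up for illustration.

holdings = pd.DataFrame(
    # portfolio weight of each fund (rows) in each stock (columns)
    [[0.30, 0.00, 0.40, 0.30],
     [0.25, 0.25, 0.50, 0.00],
     [0.00, 0.10, 0.60, 0.30],
     [0.20, 0.00, 0.50, 0.30]],
    index=["fund_A", "fund_B", "fund_C", "fund_D"],
    columns=["AAA", "BBB", "CCC", "DDD"],
)

# crude crowding score: average weight funds allocate to the stock (higher = more crowded)
crowding = holdings.mean(axis=0)

# costless long-short: long the least crowded half, short the most crowded half, equal gross legs
ranked = crowding.sort_values()
longs, shorts = ranked.index[: len(ranked) // 2], ranked.index[len(ranked) // 2 :]
weights = pd.Series(0.0, index=crowding.index)
weights[longs] = 1.0 / len(longs)
weights[shorts] = -1.0 / len(shorts)

print("crowding scores:", crowding.round(3).to_dict())
print("net exposure:", round(weights.sum(), 6), "| weights:", weights.to_dict())
```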
We suggest a simple, practical method to combine human and artificial intelligence to both learn the best investment practices of fund managers and provide recommendations to improve them. Our approach is based on a combination of Inverse Reinforcement Learning (IRL) and RL. First, the IRL...
Persistent link: https://www.econbiz.de/10014351666
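The following is a drastically simplified stand-in for the IRL-then-RL flow described above: a linear reward is inferred from a manager's observed choices among candidate allocations (the IRL-flavored step), and a new allocation is then recommended by maximizing the inferred reward (the RL-flavored improvement step). The features, synthetic data and choice model are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simplified stand-in for an IRL-then-RL workflow:
# (1) infer a linear "reward" from a manager's observed choices among candidate allocations,
# (2) recommend the allocation that maximizes the inferred reward.
# Features, synthetic data and the choice model are illustrative assumptions.

rng = np.random.default_rng(7)
n_obs, n_candidates, n_features = 200, 5, 3

true_w = np.array([1.0, -0.5, 0.3])                           # unobserved preference weights
X = rng.standard_normal((n_obs, n_candidates, n_features))    # features of candidate allocations
chosen = (X @ true_w + 0.1 * rng.standard_normal((n_obs, n_candidates))).argmax(axis=1)

# Step 1 (IRL-flavored): fit a linear reward so chosen candidates score higher than the rest
labels = np.zeros((n_obs, n_candidates), dtype=int)
labels[np.arange(n_obs), chosen] = 1
clf = LogisticRegression().fit(X.reshape(-1, n_features), labels.ravel())
learned_w = clf.coef_.ravel()

# Step 2 (RL-flavored improvement): recommend the candidate maximizing the learned reward
new_candidates = rng.standard_normal((n_candidates, n_features))
recommendation = int((new_candidates @ learned_w).argmax())
print("learned preference weights:", learned_w.round(2), "| recommended candidate:", recommendation)
```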