A reinforcement learning extension to the Almgren-Chriss model for optimal trade execution
Reinforcement learning is explored as a candidate machine learning technique to enhance existing analytical solutions for optimal trade execution with elements of market microstructure. Given a volume-to-trade, a fixed time horizon and discrete trading periods, the aim is to adapt a given volume trajectory so that it is dynamic with respect to favourable/unfavourable conditions during real-time execution, thereby reducing the overall cost of trading. We consider the standard Almgren-Chriss model with linear price impact as a candidate base model. This model is popular amongst sell-side institutions as a basis for arrival price benchmark execution algorithms. By training a learning agent to modify a volume trajectory based on the market's prevailing spread and volume dynamics, we are able to improve post-trade implementation shortfall by up to 10.3% on average compared to the base model, based on a sample of stocks and trade sizes in the South African equity market.
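The abstract describes an agent that adjusts an Almgren-Chriss volume trajectory in response to prevailing spread and volume conditions, but does not specify the learning algorithm or state-action design. The sketch below is one plausible, illustrative realisation only: it computes the standard Almgren-Chriss trajectory under linear price impact (closed form x_k = X sinh(kappa (T - t_k)) / sinh(kappa T)) and wraps it with a hypothetical tabular Q-learning agent whose actions scale each child order. All class names, bucket counts and action multipliers are assumptions, not the authors' method.

```python
import numpy as np

# --- Almgren-Chriss static trajectory (linear price impact, risk-averse closed form) ---
def ac_remaining_inventory(X, n_periods, kappa=0.5):
    """Remaining shares at each period boundary: X * sinh(kappa*(T - t)) / sinh(kappa*T)."""
    t = np.arange(n_periods + 1)
    return X * np.sinh(kappa * (n_periods - t)) / np.sinh(kappa * n_periods)

def ac_child_orders(X, n_periods, kappa=0.5):
    """Per-period child-order volumes implied by the Almgren-Chriss trajectory."""
    x = ac_remaining_inventory(X, n_periods, kappa)
    return -np.diff(x)  # positive volumes to trade in each period

# --- Hypothetical tabular Q-learning wrapper (illustrative assumption) ---
# State: (period, spread bucket, volume bucket); action: multiplier applied to the AC slice.
ACTIONS = np.array([0.5, 0.75, 1.0, 1.25, 1.5])

class TrajectoryAgent:
    def __init__(self, n_periods, n_spread=3, n_volume=3,
                 alpha=0.1, gamma=1.0, eps=0.1, seed=0):
        self.q = np.zeros((n_periods, n_spread, n_volume, len(ACTIONS)))
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.rng = np.random.default_rng(seed)

    def act(self, state):
        # Epsilon-greedy choice of trajectory scaling action.
        if self.rng.random() < self.eps:
            return int(self.rng.integers(len(ACTIONS)))
        return int(np.argmax(self.q[state]))

    def update(self, state, action, reward, next_state, done):
        # Standard one-step Q-learning update; reward would be negative slippage vs. arrival price.
        target = reward if done else reward + self.gamma * np.max(self.q[next_state])
        self.q[state + (action,)] += self.alpha * (target - self.q[state + (action,)])

# Usage sketch: scale the first AC child order by the learned multiplier; in a full
# implementation the unfilled residual would be carried into later periods.
base = ac_child_orders(X=100_000, n_periods=10)
agent = TrajectoryAgent(n_periods=10)
state = (0, 1, 1)  # (period, spread bucket, volume bucket)
adapted_first_slice = ACTIONS[agent.act(state)] * base[0]
```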
Year of publication: 2014-03
Authors: Hendricks, Dieter ; Wilcox, Diane
Institutions: arXiv.org
Similar items by person
-
High-speed detection of emergent market clustering via an unsupervised parallel genetic algorithm
Hendricks, Dieter, (2014)
-
Hierarchical causality in financial economics
Wilcox, Diane, (2014)
-
Serial Correlation, Periodicity and Scaling of Eigenmodes in an Emerging Market
Wilcox, Diane, (2004)