Deep reinforcement learning for the optimal placement of cryptocurrency limit orders
European Journal of Operational Research (IF 6.0), Pub Date: 2021-05-06, DOI: 10.1016/j.ejor.2021.04.050
Matthias Schnaubelt

This paper presents the first large-scale application of deep reinforcement learning to optimize execution at cryptocurrency exchanges by learning optimal limit order placement strategies. Execution optimization is highly relevant for both professional asset managers and private investors as execution quality affects portfolio performance at economically significant levels and is the target of regulatory supervision. To optimize execution with deep reinforcement learning, we design a problem-specific training environment that introduces a purpose-built reward function, hand-crafted market state features and a virtual limit order exchange. We empirically compare state-of-the-art deep reinforcement learning algorithms to several benchmarks with market data from major cryptocurrency exchanges, which represent an ideal test bed for our study as liquidity costs are relatively high. In total, we leverage 18 months of high-frequency data for several currency pairs with 300 million trades and more than 3.5 million order book states. We find proximal policy optimization to reliably learn superior order placement strategies. By interacting with our simulated limit order exchange, it learns cryptocurrency execution strategies that are empirically known from established markets. Order placement becomes more aggressive in anticipation of lower execution probabilities, which is indicated by trade and order imbalances.
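The abstract's core ingredients — a purpose-built reward, hand-crafted state features such as trade and order imbalances, and a virtual limit order exchange — can be illustrated with a toy environment. The sketch below is hypothetical (the paper's actual reward shape, fill model, and feature set are not reproduced here): it computes a standard order-book imbalance feature and simulates placing a single sell limit order whose fill probability falls with distance from the touch and with ask-heavy imbalance, mirroring the empirical link between imbalance and execution probability that the abstract describes.

```python
import random


def order_imbalance(bid_volume, ask_volume):
    """Order-book imbalance in [-1, 1]; positive means more bid-side depth."""
    total = bid_volume + ask_volume
    return 0.0 if total == 0 else (bid_volume - ask_volume) / total


class LimitOrderPlacementEnv:
    """Toy single-step environment for placing one sell limit order.

    Action: tick offset above the best ask (0 = at the touch, larger = more passive).
    Reward (a common shape for execution problems, assumed here, not the paper's):
      - if filled: price improvement over an immediate market sell at the best bid;
      - if unfilled: the cost of crossing the spread with a forced market order.
    """

    def __init__(self, tick=0.5, spread_ticks=2, seed=0):
        self.tick = tick
        self.spread_ticks = spread_ticks
        self.rng = random.Random(seed)

    def reset(self):
        self.best_bid = 100.0
        self.best_ask = self.best_bid + self.spread_ticks * self.tick
        bid_vol = self.rng.uniform(1.0, 10.0)
        ask_vol = self.rng.uniform(1.0, 10.0)
        self.imbalance = order_imbalance(bid_vol, ask_vol)
        return (self.imbalance,)  # hand-crafted market state feature

    def step(self, offset_ticks):
        limit_price = self.best_ask + offset_ticks * self.tick
        # Fill probability decays with passivity and with ask-heavy imbalance,
        # so an agent should place more aggressively when imbalance signals
        # lower execution probability -- the behavior the paper reports.
        p_fill = max(0.0, 0.8 * (1.0 + self.imbalance) / 2.0 - 0.1 * offset_ticks)
        filled = self.rng.random() < p_fill
        if filled:
            reward = limit_price - self.best_bid  # improvement over market sell
        else:
            reward = -self.spread_ticks * self.tick  # forced spread-crossing cost
        return reward, filled
```

In a full setup, a PPO agent would observe the state features, choose the tick offset, and maximize the expected reward over many simulated episodes; this sketch only fixes the interface of such an environment.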




Updated: 2021-05-06