Using Reinforcement Learning in the Algorithmic Trading Problem
arXiv - CS - Computational Engineering, Finance, and Science. Pub Date: 2020-02-26. DOI: arxiv-2002.11523
Evgeny Ponomarev, Ivan Oseledets, Andrzej Cichocki

The development of reinforcement learning methods has extended their application to many areas, including algorithmic trading. In this paper, trading on the stock exchange is interpreted as a game with the Markov property, consisting of states, actions, and rewards. A system for trading a fixed volume of a financial instrument is proposed and experimentally tested; it is based on the asynchronous advantage actor-critic (A3C) method and uses several neural network architectures. The use of recurrent layers in this approach is investigated. The experiments were performed on real anonymized data. The best architecture demonstrated a trading strategy for the RTS Index futures (MOEX:RTSI) with a profitability of 66% per annum, accounting for commission. The project source code is available at http://github.com/evgps/a3c_trading.
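To make the setup concrete, the sketch below shows a minimal actor-critic network with a recurrent (LSTM) layer of the general kind the abstract describes: it maps a window of market features to a policy over a small discrete action set and a state-value estimate. This is an illustrative PyTorch sketch, not the authors' implementation (which is in the linked repository); the feature count, window length, hidden size, and the {hold, buy, sell} action space are all assumptions.

```python
# Illustrative sketch of a recurrent actor-critic for a trading MDP.
# Not the paper's code; names and hyperparameters are hypothetical.
import torch
import torch.nn as nn

class RecurrentActorCritic(nn.Module):
    """Maps a window of market features to a policy over
    {hold, buy, sell} and a state-value estimate."""

    def __init__(self, n_features: int, hidden_size: int = 64, n_actions: int = 3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.policy_head = nn.Linear(hidden_size, n_actions)  # actor: action logits
        self.value_head = nn.Linear(hidden_size, 1)           # critic: V(s)

    def forward(self, x):
        # x: (batch, time, n_features) window of price/volume features
        out, _ = self.lstm(x)
        h = out[:, -1]                        # last hidden state summarizes the window
        logits = self.policy_head(h)
        value = self.value_head(h).squeeze(-1)
        return torch.distributions.Categorical(logits=logits), value

# Example: score a batch of 8 windows of 100 ticks with 5 features each.
net = RecurrentActorCritic(n_features=5)
dist, value = net(torch.randn(8, 100, 5))
action = dist.sample()                        # 0 = hold, 1 = buy, 2 = sell
```

In an A3C setup, several worker copies of such a network would interact with the environment in parallel and send gradients to a shared model; the recurrent layer lets the policy condition on recent market history rather than a single tick.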

Updated: 2020-02-28