Algorithmic Trading Using Double Deep Q-Networks and Sentiment Analysis

Information. Published: 2024-08-09. DOI: 10.3390/info15080473
Leon Tabaro, J. M. V. Kinani, A. J. Rosales-Silva, J. Salgado-Ramírez, Dante Mújica-Vargas, P. J. Escamilla-Ambrosio, Eduardo Ramos-Díaz
{"title":"Algorithmic Trading Using Double Deep Q-Networks and Sentiment Analysis","authors":"Leon Tabaro, J. M. V. Kinani, A. J. Rosales-Silva, J. Salgado-Ramírez, Dante Mújica-Vargas, P. J. Escamilla-Ambrosio, Eduardo Ramos-Díaz","doi":"10.3390/info15080473","DOIUrl":null,"url":null,"abstract":"In this work, we explore the application of deep reinforcement learning (DRL) to algorithmic trading. While algorithmic trading is focused on using computer algorithms to automate a predefined trading strategy, in this work, we train a Double Deep Q-Network (DDQN) agent to learn its own optimal trading policy, with the goal of maximising returns whilst managing risk. In this study, we extended our approach by augmenting the Markov Decision Process (MDP) states with sentiment analysis of financial statements, through which the agent achieved up to a 70% increase in the cumulative reward over the testing period and an increase in the Calmar ratio from 0.9 to 1.3. The experimental results also showed that the DDQN agent’s trading strategy was able to consistently outperform the benchmark set by the buy-and-hold strategy. Additionally, we further investigated the impact of the length of the window of past market data that the agent considers when deciding on the best trading action to take. The results of this study have validated DRL’s ability to find effective solutions and its importance in studying the behaviour of agents in markets. This work serves to provide future researchers with a foundation to develop more advanced and adaptive DRL-based trading systems.","PeriodicalId":510156,"journal":{"name":"Information","volume":"32 8","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/info15080473","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

In this work, we explore the application of deep reinforcement learning (DRL) to algorithmic trading. Whereas algorithmic trading conventionally uses computer algorithms to automate a predefined trading strategy, here we train a Double Deep Q-Network (DDQN) agent to learn its own optimal trading policy, with the goal of maximising returns whilst managing risk. We extended this approach by augmenting the Markov Decision Process (MDP) states with sentiment analysis of financial statements, through which the agent achieved up to a 70% increase in cumulative reward over the testing period and an improvement in the Calmar ratio from 0.9 to 1.3. The experimental results also showed that the DDQN agent's trading strategy consistently outperformed the buy-and-hold benchmark. Additionally, we investigated how the length of the window of past market data that the agent considers affects its choice of trading action. These results validate DRL's ability to find effective trading policies and its value for studying the behaviour of agents in markets. This work provides future researchers with a foundation for developing more advanced and adaptive DRL-based trading systems.
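The abstract does not include implementation details, so the following Python sketch is only a rough illustration of the two ideas it names: a market-data window augmented with a sentiment score as the MDP state, and the Double DQN update target (online network selects the next action, target network evaluates it). All specifics here, the log-return features, the 64-unit layers, the three-action space, and γ = 0.99, are assumptions for illustration, not values from the paper.

```python
# Illustrative sketch (not the authors' code) of a sentiment-augmented
# state and the Double DQN target; all sizes/features are assumptions.
import numpy as np
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Maps a state (window of past returns + sentiment score) to
    Q-values for three assumed trading actions: hold, buy, sell."""

    def __init__(self, state_dim: int, n_actions: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def make_state(prices: np.ndarray, t: int, window: int,
               sentiment: float) -> torch.Tensor:
    """Build the MDP state at time t: a window of log-returns with a
    sentiment score appended (the exact features are assumptions)."""
    returns = np.diff(np.log(prices[t - window: t + 1]))  # `window` log-returns
    return torch.tensor(np.append(returns, sentiment), dtype=torch.float32)


def ddqn_target(online: QNetwork, target: QNetwork,
                reward: torch.Tensor, next_state: torch.Tensor,
                done: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Double DQN target (Van Hasselt et al., 2016): the online network
    chooses the next action; the target network evaluates it."""
    with torch.no_grad():
        next_action = online(next_state).argmax(dim=-1, keepdim=True)
        next_q = target(next_state).gather(-1, next_action).squeeze(-1)
        # `done` is 0/1; terminal transitions keep only the reward.
        return reward + gamma * (1.0 - done) * next_q
```

In training, the temporal-difference loss would be taken between the online network's Q-value for the executed action and this target, with the target network's weights periodically copied from the online network, the standard Double DQN recipe the paper builds on.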
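For context on the reported improvement from 0.9 to 1.3, the Calmar ratio is conventionally defined as annualised return divided by maximum drawdown. The sketch below assumes that conventional definition and a 252-trading-day year; neither detail is confirmed by the abstract.

```python
# Minimal sketch of the Calmar ratio, assuming the conventional
# definition (annualised return / maximum drawdown) and 252 trading
# days per year; not taken from the paper.
import numpy as np


def calmar_ratio(equity: np.ndarray, periods_per_year: int = 252) -> float:
    """Calmar ratio of an equity curve of positive portfolio values."""
    years = len(equity) / periods_per_year
    annualised = (equity[-1] / equity[0]) ** (1.0 / years) - 1.0
    running_max = np.maximum.accumulate(equity)      # peak so far
    max_drawdown = np.max(1.0 - equity / running_max)  # worst peak-to-trough loss
    return annualised / max_drawdown if max_drawdown > 0 else float("inf")
```

A higher Calmar ratio indicates more return earned per unit of drawdown risk, which is why the reported rise from 0.9 to 1.3 supports the claim that sentiment-augmented states improved risk-adjusted performance.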