High-Frequency Quantitative Trading of Digital Currencies Based on Fusion of Deep Reinforcement Learning Models with Evolutionary Strategies

Q4 Computer Science
Yijun He, Bo Xu, Xinpu Su
{"title":"基于深度强化学习模型与进化策略融合的数字货币高频量化交易","authors":"Yijun He, Bo Xu, Xinpu Su","doi":"10.20532/cit.2024.1005825","DOIUrl":null,"url":null,"abstract":"High-frequency quantitative trading in the emerging digital currency market poses unique challenges due to the lack of established methods for extracting trading information. This paper proposes a deep evolutionary reinforcement learning (DERL) model that combines deep reinforcement learning with evolutionary strategies to address these challenges. Reinforcement learning is applied to data cleaning and factor extraction from a high-frequency, microscopic viewpoint to quantitatively explain the supply and demand imbalance and to create trading strategies. In order to determine whether the algorithm can successfully extract the significant hidden features in the factors when faced with large and complex high-frequency factors, this paper trains the agent in reinforcement learning using three different learning algorithms, including Q-learning, evolutionary strategies, and policy gradient. The experimental dataset, which contains data on sharp up, sharp down, and continuous oscillation situations, was chosen to test Bitcoin in January-February, September, and November of 2022. According to the experimental results, the evolutionary strategies algorithm achieved returns of 59.18%, 25.14%, and 22.72%, respectively. The results demonstrate that deep reinforcement learning based on the evolutionary strategies outperforms Q-learning and policy gradient concerning risk resistance and return capability. The proposed approach offers a robust and adaptive solution for high-frequency trading in the digital currency market, contributing to the development of effective quantitative trading strategies.","PeriodicalId":38688,"journal":{"name":"Journal of Computing and Information Technology","volume":"51 41","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"High-Frequency Quantitative Trading of Digital Currencies Based on Fusion of Deep Reinforcement Learning Models with Evolutionary Strategies\",\"authors\":\"Yijun He, Bo Xu, Xinpu Su\",\"doi\":\"10.20532/cit.2024.1005825\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"High-frequency quantitative trading in the emerging digital currency market poses unique challenges due to the lack of established methods for extracting trading information. This paper proposes a deep evolutionary reinforcement learning (DERL) model that combines deep reinforcement learning with evolutionary strategies to address these challenges. Reinforcement learning is applied to data cleaning and factor extraction from a high-frequency, microscopic viewpoint to quantitatively explain the supply and demand imbalance and to create trading strategies. In order to determine whether the algorithm can successfully extract the significant hidden features in the factors when faced with large and complex high-frequency factors, this paper trains the agent in reinforcement learning using three different learning algorithms, including Q-learning, evolutionary strategies, and policy gradient. The experimental dataset, which contains data on sharp up, sharp down, and continuous oscillation situations, was chosen to test Bitcoin in January-February, September, and November of 2022. 
According to the experimental results, the evolutionary strategies algorithm achieved returns of 59.18%, 25.14%, and 22.72%, respectively. The results demonstrate that deep reinforcement learning based on the evolutionary strategies outperforms Q-learning and policy gradient concerning risk resistance and return capability. The proposed approach offers a robust and adaptive solution for high-frequency trading in the digital currency market, contributing to the development of effective quantitative trading strategies.\",\"PeriodicalId\":38688,\"journal\":{\"name\":\"Journal of Computing and Information Technology\",\"volume\":\"51 41\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Computing and Information Technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.20532/cit.2024.1005825\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"Computer Science\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Computing and Information Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.20532/cit.2024.1005825","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"Computer Science","Score":null,"Total":0}
Citations: 0

Abstract

High-frequency quantitative trading in the emerging digital currency market poses unique challenges due to the lack of established methods for extracting trading information. This paper proposes a deep evolutionary reinforcement learning (DERL) model that combines deep reinforcement learning with evolutionary strategies to address these challenges. Reinforcement learning is applied to data cleaning and factor extraction from a high-frequency, microscopic viewpoint to quantitatively explain supply-and-demand imbalance and to construct trading strategies. To determine whether the algorithm can extract the significant hidden features from large and complex high-frequency factors, the agent is trained with three different learning algorithms: Q-learning, evolutionary strategies, and policy gradient. The experimental dataset covers sharp-rise, sharp-fall, and continuous-oscillation regimes, drawn from Bitcoin data for January-February, September, and November of 2022. Over these three periods, the evolutionary strategies algorithm achieved returns of 59.18%, 25.14%, and 22.72%, respectively. The results demonstrate that deep reinforcement learning based on evolutionary strategies outperforms Q-learning and policy gradient in both risk resistance and return capability. The proposed approach offers a robust and adaptive solution for high-frequency trading in the digital currency market, contributing to the development of effective quantitative trading strategies.
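As a rough illustration of the evolutionary-strategies component described in the abstract, the sketch below trains a linear long/flat/short trading policy with an OpenAI-style ES update on synthetic return data. The factor window, policy form, and data are hypothetical stand-ins; the paper's deep network, its order-book factor extraction, and the Q-learning and policy-gradient baselines are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic minute-level log returns standing in for cleaned Bitcoin data (assumption).
log_rets = rng.normal(0.0, 0.002, size=2000)

WINDOW = 10  # hypothetical factor window: the last 10 log returns
# Factor matrix: row t holds the WINDOW returns observed before step t;
# next_rets[t] is the return the chosen position is then exposed to.
X = np.stack([log_rets[t - WINDOW:t] for t in range(WINDOW, len(log_rets))])
next_rets = log_rets[WINDOW:]

def episode_return(theta):
    """Cumulative log return of a long/flat/short policy parameterized by theta."""
    positions = np.tanh(X @ theta)  # position in [-1, 1] at every step
    return float(positions @ next_rets)

def train_es(dim=WINDOW, pop=40, sigma=0.1, lr=0.05, iters=200):
    """OpenAI-style evolutionary strategies: score Gaussian perturbations of theta
    and step along the return-weighted average noise direction (gradient-free)."""
    theta = np.zeros(dim)
    for _ in range(iters):
        noise = rng.normal(size=(pop, dim))
        rewards = np.array([episode_return(theta + sigma * n) for n in noise])
        rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # rank-free normalization
        theta += lr / (pop * sigma) * noise.T @ rewards
    return theta

if __name__ == "__main__":
    theta = train_es()
    print(f"in-sample cumulative log return: {episode_return(theta):.4f}")
```

One reason ES fits this setting, consistent with the abstract's robustness claim, is that it only needs an episode-level return as a fitness signal, so noisy per-step rewards and non-differentiable trading frictions do not complicate the parameter update.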
Source journal
Journal of Computing and Information Technology (Computer Science, general)
CiteScore: 0.60
Self-citation rate: 0.00%
Annual article count: 16
Review time: 26 weeks
Journal description: CIT. Journal of Computing and Information Technology is an international peer-reviewed journal covering the area of computing and information technology, i.e. computer science, computer engineering, software engineering, information systems, and information technology. CIT endeavors to publish stimulating accounts of original scientific work, primarily including research papers on both theoretical and practical issues, as well as case studies describing the application and critical evaluation of theory. Surveys and state-of-the-art reports will be considered only exceptionally; proposals for such submissions should be sent to the Editorial Board for scrutiny. Specific areas of interest comprise, but are not restricted to, the following topics: theory of computing, design and analysis of algorithms, numerical and symbolic computing, scientific computing, artificial intelligence, image processing, pattern recognition, computer vision, embedded and real-time systems, operating systems, computer networking, Web technologies, distributed systems, human-computer interaction, technology-enhanced learning, multimedia, database systems, data mining, machine learning, knowledge engineering, soft computing systems and network security, computational statistics, computational linguistics, and natural language processing. Special attention is paid to educational, social, legal and managerial aspects of computing and information technology. In this respect CIT fosters the exchange of ideas, experience and knowledge between regions with different technological and cultural backgrounds, in particular developed and developing ones.