Deep Reinforcement Learning with Function Properties in Mean Reversion Strategies

Sophia Gu

The Journal of Financial Data Science
DOI: 10.3905/fjds.2022.1.094
Published: 2021-01-09
Citations: 1

Abstract

Over the past decades, researchers have been pushing the limits of deep reinforcement learning (DRL). Although DRL has attracted substantial interest from practitioners, many are blocked by having to search through a plethora of seemingly similar methodologies, while others are still building RL agents from scratch based on classical theories. To address these gaps in adopting the latest DRL methods, the author tests whether recent technology developed by leaders in the field can be readily applied to a class of optimal trading problems. Unsurprisingly, many prominent breakthroughs in DRL were investigated and tested on strategic games, from AlphaGo to AlphaStar and, at about the same time, OpenAI Five. In this article, the author shows precisely how to use a DRL library that was initially built for games in a commonly used trading strategy: mean reversion. By introducing a framework that incorporates economically motivated function properties, the author also demonstrates, through the library, a highly performant and convergent DRL solution to decision-making financial problems in general.
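The abstract names neither the DRL library nor the trading environment used in the paper. As a purely illustrative sketch of the "from scratch based on classical theories" approach it contrasts with, the code below applies tabular Q-learning to a mean-reverting (Ornstein-Uhlenbeck) price series: the state is the discretized deviation of price from its long-run mean, the actions are short/flat/long, and the reward is the position times the next price change. All function names, parameters, and process settings here are hypothetical assumptions, not the paper's method.

```python
import numpy as np

def simulate_ou_step(p, mu=100.0, theta=0.1, sigma=1.0, rng=None):
    """One Euler step of an Ornstein-Uhlenbeck (mean-reverting) price process."""
    if rng is None:
        rng = np.random.default_rng()
    return p + theta * (mu - p) + sigma * rng.normal()

def train_q_learning(n_episodes=200, n_steps=250, seed=0):
    """Tabular Q-learning on a discretized mean-reversion trading problem."""
    rng = np.random.default_rng(seed)
    bins = np.linspace(-5.0, 5.0, 9)        # deviation-from-mean buckets
    n_states = len(bins) + 1                # np.digitize yields 0..len(bins)
    n_actions = 3                           # 0 = short, 1 = flat, 2 = long
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, eps = 0.1, 0.95, 0.1      # step size, discount, exploration
    for _ in range(n_episodes):
        p = 100.0
        s = int(np.digitize(p - 100.0, bins))
        for _ in range(n_steps):
            # epsilon-greedy action selection
            a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
            p_next = simulate_ou_step(p, rng=rng)
            reward = (a - 1) * (p_next - p)  # position in {-1, 0, +1} times price change
            s_next = int(np.digitize(p_next - 100.0, bins))
            # standard Q-learning update toward the bootstrapped target
            Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
            p, s = p_next, s_next
    return Q
```

The greedy policy is read off as `Q.argmax(axis=1)`. A deep RL library of the kind the article discusses would, in effect, replace this Q-table with a neural network and this hand-written loop with the library's training machinery.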