Optimizing Portfolio with Two-Sided Transactions and Lending: A Reinforcement Learning Framework

Ali Habibnia, Mahdi Soltanzadeh
{"title":"Optimizing Portfolio with Two-Sided Transactions and Lending: A Reinforcement Learning Framework","authors":"Ali Habibnia, Mahdi Soltanzadeh","doi":"arxiv-2408.05382","DOIUrl":null,"url":null,"abstract":"This study presents a Reinforcement Learning (RL)-based portfolio management\nmodel tailored for high-risk environments, addressing the limitations of\ntraditional RL models and exploiting market opportunities through two-sided\ntransactions and lending. Our approach integrates a new environmental\nformulation with a Profit and Loss (PnL)-based reward function, enhancing the\nRL agent's ability in downside risk management and capital optimization. We\nimplemented the model using the Soft Actor-Critic (SAC) agent with a\nConvolutional Neural Network with Multi-Head Attention (CNN-MHA). This setup\neffectively manages a diversified 12-crypto asset portfolio in the Binance\nperpetual futures market, leveraging USDT for both granting and receiving loans\nand rebalancing every 4 hours, utilizing market data from the preceding 48\nhours. Tested over two 16-month periods of varying market volatility, the model\nsignificantly outperformed benchmarks, particularly in high-volatility\nscenarios, achieving higher return-to-risk ratios and demonstrating robust\nprofitability. These results confirm the model's effectiveness in leveraging\nmarket dynamics and managing risks in volatile environments like the\ncryptocurrency market.","PeriodicalId":501045,"journal":{"name":"arXiv - QuantFin - Portfolio Management","volume":"177 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuantFin - Portfolio Management","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.05382","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This study presents a Reinforcement Learning (RL)-based portfolio management model tailored for high-risk environments, addressing the limitations of traditional RL models and exploiting market opportunities through two-sided transactions and lending. Our approach integrates a new environmental formulation with a Profit and Loss (PnL)-based reward function, enhancing the RL agent's ability in downside risk management and capital optimization. We implemented the model using the Soft Actor-Critic (SAC) agent with a Convolutional Neural Network with Multi-Head Attention (CNN-MHA). This setup effectively manages a diversified 12-crypto asset portfolio in the Binance perpetual futures market, leveraging USDT for both granting and receiving loans and rebalancing every 4 hours, utilizing market data from the preceding 48 hours. Tested over two 16-month periods of varying market volatility, the model significantly outperformed benchmarks, particularly in high-volatility scenarios, achieving higher return-to-risk ratios and demonstrating robust profitability. These results confirm the model's effectiveness in leveraging market dynamics and managing risks in volatile environments like the cryptocurrency market.
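The abstract describes the agent's interfaces only at a high level: a CNN-MHA encoder over a 48-hour observation window of 12 assets, an action that allocates signed (long/short) weights plus a USDT lending bucket, and a PnL-based reward. The sketch below illustrates one way those pieces could fit together; it is not the authors' implementation, and the feature count, layer sizes, weight parameterization, and cost term in the reward are all assumptions made for illustration.

```python
# Illustrative sketch only (not the paper's code). Assumed: 4 input features per
# asset, d_model=64 embeddings, and a simplified long/short/lend parameterization.
import torch
import torch.nn as nn

N_ASSETS, LOOKBACK, N_FEATURES = 12, 48, 4


class CNNMHAEncoder(nn.Module):
    """CNN over each asset's 48-step window, then multi-head attention across assets."""

    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(N_FEATURES, d_model, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time dimension per asset
        )
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, N_ASSETS, N_FEATURES, LOOKBACK)
        b = x.shape[0]
        h = self.conv(x.reshape(b * N_ASSETS, N_FEATURES, LOOKBACK))
        h = h.squeeze(-1).reshape(b, N_ASSETS, -1)   # (batch, assets, d_model)
        h, _ = self.attn(h, h, h)                    # share information across assets
        return h


class PolicyHead(nn.Module):
    """Maps asset embeddings to signed asset weights plus a USDT lending weight."""

    def __init__(self, d_model: int = 64):
        super().__init__()
        self.score = nn.Linear(d_model, 1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        s = self.score(h).squeeze(-1)                # (batch, assets) raw scores
        lend = torch.zeros_like(s[:, :1])            # score for the USDT lending bucket
        mag = torch.softmax(torch.cat([s.abs(), lend], dim=1), dim=1)  # magnitudes sum to 1
        sign = torch.tanh(s)                         # soft long/short sign (also scales
                                                     # magnitudes; a sketch simplification)
        return torch.cat([mag[:, :N_ASSETS] * sign, mag[:, N_ASSETS:]], dim=1)


def pnl_reward(weights: torch.Tensor, period_returns: torch.Tensor,
               costs: float = 0.0) -> torch.Tensor:
    """Simplified PnL-style reward for one 4-hour rebalancing period: the portfolio's
    realized return under the chosen signed weights, net of a lump-sum cost term.
    Short positions profit when period_returns are negative."""
    return (weights[:, :N_ASSETS] * period_returns).sum(dim=1) - costs
```

In a full SAC setup this encoder and head would feed the actor and critic networks, and a realistic reward would account for trading fees, funding payments, and interest on the lent or borrowed USDT, which the placeholder `costs` term only gestures at.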