{"title":"Optimizing Portfolio with Two-Sided Transactions and Lending: A Reinforcement Learning Framework","authors":"Ali Habibnia, Mahdi Soltanzadeh","doi":"arxiv-2408.05382","DOIUrl":null,"url":null,"abstract":"This study presents a Reinforcement Learning (RL)-based portfolio management\nmodel tailored for high-risk environments, addressing the limitations of\ntraditional RL models and exploiting market opportunities through two-sided\ntransactions and lending. Our approach integrates a new environmental\nformulation with a Profit and Loss (PnL)-based reward function, enhancing the\nRL agent's ability in downside risk management and capital optimization. We\nimplemented the model using the Soft Actor-Critic (SAC) agent with a\nConvolutional Neural Network with Multi-Head Attention (CNN-MHA). This setup\neffectively manages a diversified 12-crypto asset portfolio in the Binance\nperpetual futures market, leveraging USDT for both granting and receiving loans\nand rebalancing every 4 hours, utilizing market data from the preceding 48\nhours. Tested over two 16-month periods of varying market volatility, the model\nsignificantly outperformed benchmarks, particularly in high-volatility\nscenarios, achieving higher return-to-risk ratios and demonstrating robust\nprofitability. These results confirm the model's effectiveness in leveraging\nmarket dynamics and managing risks in volatile environments like the\ncryptocurrency market.","PeriodicalId":501045,"journal":{"name":"arXiv - QuantFin - Portfolio Management","volume":"177 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuantFin - Portfolio Management","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.05382","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This study presents a Reinforcement Learning (RL)-based portfolio management model tailored for high-risk environments. It addresses the limitations of traditional RL models and exploits market opportunities through two-sided transactions and lending. Our approach combines a new environment formulation with a Profit and Loss (PnL)-based reward function, improving the RL agent's ability to manage downside risk and optimize capital. We implemented the model using a Soft Actor-Critic (SAC) agent with a Convolutional Neural Network with Multi-Head Attention (CNN-MHA) architecture. This setup manages a diversified portfolio of 12 crypto assets in the Binance perpetual futures market, using USDT both to grant and to receive loans, and rebalances every 4 hours on market data from the preceding 48 hours. Tested over two 16-month periods of differing market volatility, the model significantly outperformed benchmarks, particularly in high-volatility scenarios, achieving higher return-to-risk ratios and robust profitability. These results confirm the model's effectiveness in exploiting market dynamics and managing risk in volatile environments such as the cryptocurrency market.
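
To make the described setup concrete, the sketch below shows one way the environment could be framed from the details given in the abstract alone: 12 perpetual-futures assets with signed (long/short) position weights, an extra USDT lending slot, a 48-hour observation window at a 4-hour rebalancing step, and a step reward equal to the realized PnL. This is a minimal illustrative assumption, not the authors' implementation; the class name, feature layout, and lending-rate parameter are all hypothetical.

```python
# Minimal sketch (assumptions, not the paper's code) of a two-sided portfolio
# environment with a lending slot and a PnL-based reward, using gymnasium.
import numpy as np
import gymnasium as gym
from gymnasium import spaces

N_ASSETS = 12      # crypto perpetual futures, per the abstract
LOOKBACK = 12      # 48 h of history at 4 h bars = 12 bars (assumption)
N_FEATURES = 4     # per-asset features, e.g. OHLC-derived (assumption)


class TwoSidedLendingPortfolioEnv(gym.Env):
    """Toy environment: signed weights in [-1, 1] per asset (short/long)
    plus one USDT lending weight; the step reward is the portfolio PnL."""

    def __init__(self, prices: np.ndarray, lending_rate: float = 1e-4):
        super().__init__()
        self.prices = prices              # shape (T, N_ASSETS)
        self.lending_rate = lending_rate  # per-step interest (assumption)
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf,
            shape=(LOOKBACK, N_ASSETS, N_FEATURES), dtype=np.float32)
        # N_ASSETS signed position weights + 1 lending weight
        self.action_space = spaces.Box(
            low=-1.0, high=1.0, shape=(N_ASSETS + 1,), dtype=np.float32)
        self.t = LOOKBACK

    def _observe(self):
        # Stack the trailing LOOKBACK price bars; real features would be
        # richer (returns, volumes, etc.) -- repeated here only for shape.
        window = self.prices[self.t - LOOKBACK:self.t]      # (LOOKBACK, N)
        feats = np.repeat(window[..., None], N_FEATURES, axis=-1)
        return feats.astype(np.float32)

    def step(self, action):
        # Split the action into signed asset weights and a lending weight,
        # then normalize gross exposure so the allocation sums to at most 1.
        w_assets, w_lend = action[:-1], abs(action[-1])
        gross = np.abs(w_assets).sum() + w_lend + 1e-8
        w_assets, w_lend = w_assets / gross, w_lend / gross

        # PnL over the next 4 h bar: asset returns (long or short) plus
        # interest earned on the lent USDT portion.
        ret = self.prices[self.t + 1] / self.prices[self.t] - 1.0
        pnl = float(w_assets @ ret + w_lend * self.lending_rate)

        self.t += 1
        terminated = self.t + 1 >= len(self.prices)
        return self._observe(), pnl, terminated, False, {"pnl": pnl}

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = LOOKBACK
        return self._observe(), {}
```

Under these assumptions, such an environment could be plugged directly into an off-the-shelf SAC implementation; the CNN-MHA feature extractor described in the abstract would sit between the (LOOKBACK, N_ASSETS, N_FEATURES) observation and the actor/critic heads.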