{"title":"What Teaches Robots to Walk, Teaches Them to Trade too -- Regime Adaptive Execution using Informed Data and LLMs","authors":"Raeid Saqur","doi":"arxiv-2406.15508","DOIUrl":null,"url":null,"abstract":"Machine learning techniques applied to the problem of financial market\nforecasting struggle with dynamic regime switching, or underlying correlation\nand covariance shifts in true (hidden) market variables. Drawing inspiration\nfrom the success of reinforcement learning in robotics, particularly in agile\nlocomotion adaptation of quadruped robots to unseen terrains, we introduce an\ninnovative approach that leverages world knowledge of pretrained LLMs (aka.\n'privileged information' in robotics) and dynamically adapts them using\nintrinsic, natural market rewards using LLM alignment technique we dub as\n\"Reinforcement Learning from Market Feedback\" (**RLMF**). Strong empirical\nresults demonstrate the efficacy of our method in adapting to regime shifts in\nfinancial markets, a challenge that has long plagued predictive models in this\ndomain. The proposed algorithmic framework outperforms best-performing SOTA LLM\nmodels on the existing (FLARE) benchmark stock-movement (SM) tasks by more than\n15\\% improved accuracy. On the recently proposed NIFTY SM task, our adaptive\npolicy outperforms the SOTA best performing trillion parameter models like\nGPT-4. The paper details the dual-phase, teacher-student architecture and\nimplementation of our model, the empirical results obtained, and an analysis of\nthe role of language embeddings in terms of Information Gain.","PeriodicalId":501294,"journal":{"name":"arXiv - QuantFin - Computational Finance","volume":"2012 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuantFin - Computational Finance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2406.15508","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Machine learning techniques applied to financial market forecasting struggle with dynamic regime switching, i.e., shifts in the underlying correlation and covariance of the true (hidden) market variables. Drawing inspiration from the success of reinforcement learning in robotics, particularly the agile locomotion adaptation of quadruped robots to unseen terrains, we introduce an approach that leverages the world knowledge of pretrained LLMs (the analogue of 'privileged information' in robotics) and dynamically adapts them with intrinsic, natural market rewards via an LLM alignment technique we dub "Reinforcement Learning from Market Feedback" (**RLMF**). Strong empirical results demonstrate the efficacy of our method in adapting to regime shifts in financial markets, a challenge that has long plagued predictive models in this domain. The proposed algorithmic framework outperforms the best-performing SOTA LLMs on the existing FLARE benchmark stock-movement (SM) tasks by more than 15% in accuracy. On the recently proposed NIFTY SM task, our adaptive policy outperforms SOTA trillion-parameter models such as GPT-4. The paper details the dual-phase, teacher-student architecture and implementation of our model, the empirical results obtained, and an analysis of the role of language embeddings in terms of Information Gain.
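To make the RLMF idea concrete, below is a minimal sketch of what an intrinsic market-feedback reward could look like for a stock-movement task. It assumes the reward is simply the agreement between the LLM policy's predicted direction and the realized next-day return; the names (`MarketStep`, `rlmf_reward`) and the exact reward shape are illustrative assumptions, not the paper's published interface.

```python
# Hypothetical sketch of an RLMF-style reward. The paper does not publish
# this exact interface; the class/function names and reward values are
# assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class MarketStep:
    """One trading-day observation for a single ticker."""
    news_text: str          # market context fed to the LLM policy
    realized_return: float  # next-day return observed after the prediction


def rlmf_reward(predicted_move: str, step: MarketStep) -> float:
    """Intrinsic market reward: +1 when the predicted direction
    ('rise' or 'fall') agrees with the realized next-day return,
    -1 otherwise; near-flat days yield zero reward."""
    if abs(step.realized_return) < 1e-4:
        return 0.0
    realized_move = "rise" if step.realized_return > 0 else "fall"
    return 1.0 if predicted_move == realized_move else -1.0


# Example: the policy predicted "rise" and the stock closed up 0.8%.
print(rlmf_reward("rise", MarketStep("earnings beat estimates", 0.008)))  # 1.0
```

Such a reward would then drive a standard RLHF-style policy update (e.g., PPO), with realized market outcomes standing in for a learned human-preference reward model.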