AI’s predictable memory in financial analysis

Impact Factor: 1.8 · JCR Q2 (Economics) · CAS Tier 4 (Economics)
Antoine Didisheim, Martina Fraschini, Luciano Somoza
Journal: Economics Letters, Volume 256, Article 112602
DOI: 10.1016/j.econlet.2025.112602
Published: 2025-10-01
URL: https://www.sciencedirect.com/science/article/pii/S0165176525004392
Citations: 0

Abstract

Look-ahead bias in Large Language Models (LLMs) arises when information that would not have been available at the time of prediction is included in the training data and inflates prediction performance. This paper proposes a practical methodology to quantify look-ahead bias in financial applications. By prompting LLMs to retrieve historical stock returns without context, we construct a proxy to estimate memorization-driven predictability. We show that the bias varies predictably with data frequency, model size, and aggregation level: smaller models and finer data granularity exhibit negligible bias. Our results help researchers navigate the trade-off between statistical power and bias in LLMs.
Source journal: Economics Letters
CiteScore: 3.20
Self-citation rate: 5.00%
Articles per year: 348
Review time: 30 days
Journal description: Many economists today are concerned by the proliferation of journals and the concomitant labyrinth of research that must be navigated to reach the specific information they require. To counter this tendency, Economics Letters was conceived and designed outside the realm of the traditional economics journal. As a letters journal, it consists of concise communications (letters) that provide rapid and efficient dissemination of new results, models, and methods in all fields of economic research.