Simulating lexical decision times with large language models to supplement megastudies and crowdsourcing.

Impact factor: 3.9 · CAS Tier 2 (Psychology) · JCR Q1 (Psychology, Experimental)
Gonzalo Martínez, Javier Conde, Pedro Reviriego, Marc Brysbaert
{"title":"Simulating lexical decision times with large language models to supplement megastudies and crowdsourcing.","authors":"Gonzalo Martínez, Javier Conde, Pedro Reviriego, Marc Brysbaert","doi":"10.3758/s13428-025-02829-6","DOIUrl":null,"url":null,"abstract":"<p><p>Megastudies and crowdsourcing studies are a rich source of information for word recognition research because they provide processing times for thousands of words. However, the high cost makes it impossible to include all words of interest and all relevant participant groups. This study explores the potential of fine-tuned large language models (LLMs) to generate lexical decision times (RTs) similar to those of humans. Building on recent findings that LLMs can accurately estimate word features, we fine-tuned GPT-4o mini with 3000 words from a megastudy. We then gave the model the task of generating RT estimates for the remaining words in the dataset. Our findings showed a high correlation between AI-generated and observed RTs. We discuss three applications: (1) estimating missing RT data, where AI can fill in gaps for words missing in some megastudies, (2) verifying results of virtual experiments, where AI-generated data can provide an additional layer of validation for results of virtual experiments, and (3) optimizing human data collection, as researchers can run simulations before conducting studies with humans. While AI-generated RTs are not a replacement for human data, they have the potential to increase the flexibility and efficiency of megastudy research.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 10","pages":"294"},"PeriodicalIF":3.9000,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Behavior Research Methods","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.3758/s13428-025-02829-6","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0

Abstract

Megastudies and crowdsourcing studies are a rich source of information for word recognition research because they provide processing times for thousands of words. However, the high cost makes it impossible to include all words of interest and all relevant participant groups. This study explores the potential of fine-tuned large language models (LLMs) to generate lexical decision times (RTs) similar to those of humans. Building on recent findings that LLMs can accurately estimate word features, we fine-tuned GPT-4o mini with 3000 words from a megastudy. We then gave the model the task of generating RT estimates for the remaining words in the dataset. Our findings showed a high correlation between AI-generated and observed RTs. We discuss three applications: (1) estimating missing RT data, where AI can fill in gaps for words missing in some megastudies, (2) verifying results of virtual experiments, where AI-generated data can provide an additional layer of validation for results of virtual experiments, and (3) optimizing human data collection, as researchers can run simulations before conducting studies with humans. While AI-generated RTs are not a replacement for human data, they have the potential to increase the flexibility and efficiency of megastudy research.
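To make the pipeline described in the abstract concrete, below is a minimal sketch (not the authors' code) of how word-to-RT pairs could be formatted for chat-style fine-tuning of GPT-4o mini and how AI-generated RTs could then be compared with observed ones. The file name, column names, prompt wording, and model snapshot identifier are illustrative assumptions; the abstract does not specify them.

```python
# Minimal sketch, assuming a megastudy CSV with "word" and "rt_ms" columns,
# the OpenAI Python SDK (>= 1.0), and a simple word -> RT prompt format.
# This is NOT the authors' implementation; it only illustrates the workflow
# described in the abstract (fine-tune on 3000 words, estimate the rest,
# then correlate estimated with observed RTs).
import csv
import json

from scipy.stats import pearsonr

SYSTEM = "You estimate human lexical decision times in milliseconds."


def make_jsonl(rows, path):
    """Write (word, rt) pairs in the chat fine-tuning JSONL format."""
    with open(path, "w", encoding="utf-8") as f:
        for word, rt in rows:
            record = {
                "messages": [
                    {"role": "system", "content": SYSTEM},
                    {"role": "user", "content": f"Word: {word}"},
                    {"role": "assistant", "content": str(round(rt))},
                ]
            }
            f.write(json.dumps(record) + "\n")


# Load the megastudy and split: 3000 training words, the rest held out,
# mirroring the design described in the abstract.
with open("megastudy.csv", newline="", encoding="utf-8") as f:
    data = [(r["word"], float(r["rt_ms"])) for r in csv.DictReader(f)]
train, held_out = data[:3000], data[3000:]
make_jsonl(train, "train.jsonl")

# Fine-tuning call (requires OPENAI_API_KEY); the snapshot name is an assumption.
# from openai import OpenAI
# client = OpenAI()
# file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
# job = client.fine_tuning.jobs.create(training_file=file.id,
#                                      model="gpt-4o-mini-2024-07-18")


def evaluate(observed, estimated):
    """Pearson correlation between observed and AI-generated RTs on held-out words."""
    r, p = pearsonr(observed, estimated)
    return r, p
```

In this sketch, the 3000 training words play the role of the megastudy subset used for fine-tuning, and the Pearson correlation on the held-out words corresponds to the agreement between AI-generated and observed RTs that the abstract reports.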

Source journal: Behavior Research Methods
CiteScore: 10.30 · Self-citation rate: 9.30% · Articles published per year: 266
Journal description: Behavior Research Methods publishes articles concerned with the methods, techniques, and instrumentation of research in experimental psychology. The journal focuses particularly on the use of computer technology in psychological research. An annual special issue is devoted to this field.