Large-scale study of human memory for meaningful narratives.

IF 1.8 | CAS Tier 4 (Medicine) | JCR Q4 (Neurosciences)
Learning & Memory | Pub Date: 2025-02-21 | Print Date: 2025-02-01 | DOI: 10.1101/lm.054043.124
Antonios Georgiou, Tankut Can, Mikhail Katkov, Misha Tsodyks
{"title":"Large-scale study of human memory for meaningful narratives.","authors":"Antonios Georgiou, Tankut Can, Mikhail Katkov, Misha Tsodyks","doi":"10.1101/lm.054043.124","DOIUrl":null,"url":null,"abstract":"<p><p>The statistical study of human memory requires large-scale experiments, involving many stimulus conditions and test subjects. While this approach has proven to be quite fruitful for meaningless material such as random lists of words, naturalistic stimuli, like narratives, have until now resisted such a large-scale study, due to the quantity of manual labor required to design and analyze such experiments. In this work, we develop a pipeline that uses large language models (LLMs) both to design naturalistic narrative stimuli for large-scale recall and recognition memory experiments, as well as to analyze the results. We performed online memory experiments with a large number of participants and collected recognition and recall data for narratives of different sizes. We found that both recall and recognition performance scale linearly with narrative length; however, for longer narratives, people tend to summarize the content rather than recalling precise details. To investigate the role of narrative comprehension in memory, we repeated these experiments using scrambled versions of the narratives. Although recall performance declined significantly, recognition remained largely unaffected. Recalls in this condition seem to follow the original narrative order rather than the actual scrambled presentation, pointing to a contextual reconstruction of the story in memory. Finally, using LLM text embeddings, we construct a simple measure for each clause based on semantic similarity to the whole narrative, that shows a strong correlation with recall probability. Overall, our work demonstrates the power of LLMs in accessing new regimes in the study of human memory, as well as suggesting novel psychologically informed benchmarks for LLM performance.</p>","PeriodicalId":18003,"journal":{"name":"Learning & memory","volume":"32 2","pages":""},"PeriodicalIF":1.8000,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11852912/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Learning & memory","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1101/lm.054043.124","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/2/1 0:00:00","PubModel":"Print","JCR":"Q4","JCRName":"NEUROSCIENCES","Score":null,"Total":0}
引用次数: 0

Abstract

The statistical study of human memory requires large-scale experiments, involving many stimulus conditions and test subjects. While this approach has proven to be quite fruitful for meaningless material such as random lists of words, naturalistic stimuli, like narratives, have until now resisted such a large-scale study, due to the quantity of manual labor required to design and analyze such experiments. In this work, we develop a pipeline that uses large language models (LLMs) both to design naturalistic narrative stimuli for large-scale recall and recognition memory experiments, as well as to analyze the results. We performed online memory experiments with a large number of participants and collected recognition and recall data for narratives of different sizes. We found that both recall and recognition performance scale linearly with narrative length; however, for longer narratives, people tend to summarize the content rather than recalling precise details. To investigate the role of narrative comprehension in memory, we repeated these experiments using scrambled versions of the narratives. Although recall performance declined significantly, recognition remained largely unaffected. Recalls in this condition seem to follow the original narrative order rather than the actual scrambled presentation, pointing to a contextual reconstruction of the story in memory. Finally, using LLM text embeddings, we construct a simple measure for each clause based on semantic similarity to the whole narrative, that shows a strong correlation with recall probability. Overall, our work demonstrates the power of LLMs in accessing new regimes in the study of human memory, as well as suggesting novel psychologically informed benchmarks for LLM performance.
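The clause-level measure mentioned at the end of the abstract can be pictured with a short sketch. The snippet below is illustrative only: it assumes a generic sentence-embedding model (all-MiniLM-L6-v2 via sentence-transformers), a toy narrative, and hypothetical recall probabilities, not the authors' actual embedding model, clause segmentation, or data. The idea is simply to score each clause by the cosine similarity of its embedding to the embedding of the whole narrative, then correlate those scores with per-clause recall probability.

```python
# Illustrative sketch of a clause-level "semantic centrality" measure:
# embed each clause and the whole narrative, then score each clause by its
# cosine similarity to the narrative embedding. Model choice and example
# data are assumptions, not the authors' pipeline.
import numpy as np
from sentence_transformers import SentenceTransformer


def clause_similarity_scores(clauses: list[str]) -> np.ndarray:
    """Return one score per clause: cosine similarity to the whole narrative."""
    model = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative model choice
    narrative = " ".join(clauses)                     # treat the full text as one document
    embeddings = model.encode(clauses + [narrative])  # embed clauses and whole narrative
    clause_vecs, narrative_vec = embeddings[:-1], embeddings[-1]
    norms = np.linalg.norm(clause_vecs, axis=1) * np.linalg.norm(narrative_vec)
    return clause_vecs @ narrative_vec / norms


if __name__ == "__main__":
    clauses = [
        "The old lighthouse keeper climbed the stairs every evening.",
        "He lit the lamp just before sunset.",
        "A storm had been gathering over the bay all afternoon.",
        "By midnight the waves were crashing against the rocks.",
    ]
    scores = clause_similarity_scores(clauses)
    # Hypothetical per-clause recall probabilities (in the study these would be
    # estimated from participants' recalls, e.g. scored with an LLM).
    recall_prob = np.array([0.8, 0.9, 0.5, 0.6])
    r = np.corrcoef(scores, recall_prob)[0, 1]
    print(f"clause scores: {np.round(scores, 3)}  correlation with recall: {r:.2f}")
```

Any sentence-embedding backend would do here; the only design choice the sketch commits to is comparing each clause against a single embedding of the full narrative rather than against neighboring clauses.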

Source journal
Learning & Memory (Medicine - Neurosciences)
CiteScore: 3.60
Self-citation rate: 5.00%
Articles per year: 45
Review time: 6-12 weeks
Journal description: The neurobiology of learning and memory is entering a new interdisciplinary era. Advances in neuropsychology have identified regions of brain tissue that are critical for certain types of function. Electrophysiological techniques have revealed behavioral correlates of neuronal activity. Studies of synaptic plasticity suggest that some mechanisms of memory formation may resemble those of neural development. And molecular approaches have identified genes with patterns of expression that influence behavior. It is clear that future progress depends on interdisciplinary investigations. The current literature of learning and memory is large but fragmented. Until now, there has been no single journal devoted to this area of study and no dominant journal that demands attention by serious workers in the area, regardless of specialty. Learning & Memory provides a forum for these investigations in the form of research papers and review articles.