Large-scale study of human memory for meaningful narratives.

IF 1.8 · Q4 (Neurosciences) · Medicine, CAS Tier 4
Learning & Memory · Pub Date: 2025-02-21 · Print Date: 2025-02-01 · DOI: 10.1101/lm.054043.124
Antonios Georgiou, Tankut Can, Mikhail Katkov, Misha Tsodyks
{"title":"Large-scale study of human memory for meaningful narratives.","authors":"Antonios Georgiou, Tankut Can, Mikhail Katkov, Misha Tsodyks","doi":"10.1101/lm.054043.124","DOIUrl":null,"url":null,"abstract":"<p><p>The statistical study of human memory requires large-scale experiments, involving many stimulus conditions and test subjects. While this approach has proven to be quite fruitful for meaningless material such as random lists of words, naturalistic stimuli, like narratives, have until now resisted such a large-scale study, due to the quantity of manual labor required to design and analyze such experiments. In this work, we develop a pipeline that uses large language models (LLMs) both to design naturalistic narrative stimuli for large-scale recall and recognition memory experiments, as well as to analyze the results. We performed online memory experiments with a large number of participants and collected recognition and recall data for narratives of different sizes. We found that both recall and recognition performance scale linearly with narrative length; however, for longer narratives, people tend to summarize the content rather than recalling precise details. To investigate the role of narrative comprehension in memory, we repeated these experiments using scrambled versions of the narratives. Although recall performance declined significantly, recognition remained largely unaffected. Recalls in this condition seem to follow the original narrative order rather than the actual scrambled presentation, pointing to a contextual reconstruction of the story in memory. Finally, using LLM text embeddings, we construct a simple measure for each clause based on semantic similarity to the whole narrative, that shows a strong correlation with recall probability. Overall, our work demonstrates the power of LLMs in accessing new regimes in the study of human memory, as well as suggesting novel psychologically informed benchmarks for LLM performance.</p>","PeriodicalId":18003,"journal":{"name":"Learning & memory","volume":"32 2","pages":""},"PeriodicalIF":1.8000,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11852912/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Learning & memory","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1101/lm.054043.124","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/2/1 0:00:00","PubModel":"Print","JCR":"Q4","JCRName":"NEUROSCIENCES","Score":null,"Total":0}
Citations: 0

Abstract

The statistical study of human memory requires large-scale experiments involving many stimulus conditions and test subjects. While this approach has proven quite fruitful for meaningless material such as random lists of words, naturalistic stimuli like narratives have until now resisted such large-scale study because of the manual labor required to design and analyze the experiments. In this work, we develop a pipeline that uses large language models (LLMs) both to design naturalistic narrative stimuli for large-scale recall and recognition memory experiments and to analyze the results. We performed online memory experiments with a large number of participants and collected recognition and recall data for narratives of different lengths. We found that both recall and recognition performance scale linearly with narrative length; however, for longer narratives, people tend to summarize the content rather than recall precise details. To investigate the role of narrative comprehension in memory, we repeated these experiments using scrambled versions of the narratives. Although recall performance declined significantly, recognition remained largely unaffected. Recalls in this condition seem to follow the original narrative order rather than the actual scrambled presentation, pointing to a contextual reconstruction of the story in memory. Finally, using LLM text embeddings, we construct a simple measure for each clause, based on its semantic similarity to the whole narrative, which shows a strong correlation with recall probability. Overall, our work demonstrates the power of LLMs in accessing new regimes in the study of human memory and suggests novel, psychologically informed benchmarks for LLM performance.
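
The final methodological step in the abstract (a per-clause score derived from LLM text embeddings) is concrete enough to sketch. Below is a minimal illustration, not the authors' implementation: it assumes the sentence-transformers library and the all-MiniLM-L6-v2 model as stand-ins (the paper's embedding model and clause segmentation are not specified here), and scores each clause by the cosine similarity between its embedding and the embedding of the whole narrative.

```python
# Sketch of a clause-to-narrative semantic-similarity measure, in the spirit
# of the abstract. The embedding model below is an illustrative stand-in,
# not the one used in the paper.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

def clause_narrative_similarity(clauses: list[str]) -> np.ndarray:
    """Return one score per clause: cosine similarity to the full narrative."""
    narrative = " ".join(clauses)
    # With normalize_embeddings=True the vectors are unit length, so the
    # dot products below are cosine similarities.
    clause_vecs = model.encode(clauses, normalize_embeddings=True)
    narrative_vec = model.encode([narrative], normalize_embeddings=True)[0]
    return clause_vecs @ narrative_vec

scores = clause_narrative_similarity([
    "The storm rolled in just after midnight.",
    "By morning the harbor was littered with broken masts.",
    "The baker, unrelated to any of this, preferred rye.",
])
print(scores)  # clauses more central to the narrative score higher
```

Per the abstract, a measure of this kind correlates strongly with recall probability: clauses semantically central to the narrative's gist are the ones people tend to recall.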

Source journal: Learning & Memory (Medicine, Neuroscience)
CiteScore: 3.60
Self-citation rate: 5.00%
Annual article count: 45
Review time: 6-12 weeks
Journal description: The neurobiology of learning and memory is entering a new interdisciplinary era. Advances in neuropsychology have identified regions of brain tissue that are critical for certain types of function. Electrophysiological techniques have revealed behavioral correlates of neuronal activity. Studies of synaptic plasticity suggest that some mechanisms of memory formation may resemble those of neural development. And molecular approaches have identified genes with patterns of expression that influence behavior. It is clear that future progress depends on interdisciplinary investigations. The current literature of learning and memory is large but fragmented. Until now, there has been no single journal devoted to this area of study and no dominant journal that demands attention by serious workers in the area, regardless of specialty. Learning & Memory provides a forum for these investigations in the form of research papers and review articles.