Working memory capacity limit is dependent on encoding granularity: Evidence from Mandarin Chinese

IF 2.9 · CAS Zone 1 (Psychology) · JCR Q1 (Linguistics)
Yunsong Li , Ming Xiang , Suiping Wang
DOI: 10.1016/j.jml.2025.104618
Journal: Journal of Memory and Language, Volume 142, Article 104618
Publication date: 2025-01-20 (Journal Article)
Citations: 0

Abstract

It is well documented that grouping smaller units of information into larger units (i.e., chunking) can help offset the working memory (WM) capacity limit. A long-standing question is whether the WM capacity limit is determined by the sheer number of chunk-based units, or whether memory resources are flexibly distributed over multiple chunks depending on the task and on the resolution level at which memory representations are encoded. Results from previous work on visual WM have shown evidence for both positions. The current work explores the effect of rational resource distribution for storing linguistic information in verbal WM. Two experiments were conducted using a change detection task with Mandarin Chinese words as stimuli. The task in Experiment 1 encouraged encoding of lexical representations at a relatively low granularity level, while the task in Experiment 2 targeted a more detailed representation of each word (higher granularity). Two findings are noteworthy. First, compared with the memory performance for non-words, the memory performance for words showed clear benefits of chunking. Second, memory performance under more precise encoding (Experiment 2) was worse than under less precise encoding (Experiment 1), but only when memory load was also high. Our findings lend support to rational resource models that allow dynamic distribution of limited memory resources based on the demand for representational granularity.
Source journal
CiteScore: 8.70
Self-citation rate: 14.00%
Annual articles: 49
Review time: 12.7 weeks
About the journal: Articles in the Journal of Memory and Language contribute to the formulation of scientific issues and theories in the areas of memory, language comprehension and production, and cognitive processes. Special emphasis is given to research articles that provide new theoretical insights based on a carefully laid empirical foundation. The journal generally favors articles that provide multiple experiments. In addition, significant theoretical papers without new experimental findings may be published. The Journal of Memory and Language is a valuable tool for cognitive scientists, including psychologists, linguists, and others interested in memory and learning, language, reading, and speech. Research areas include: topics that illuminate aspects of memory or language processing; linguistics; neuropsychology.