Do Humans Use Push-Down Stacks When Learning or Producing Center-Embedded Sequences?

IF 2.4 | Region 2 (Psychology) | JCR Q2, Psychology, Experimental
Stephen Ferrigno, Samuel J. Cheyette, Susan Carey
{"title":"人类在学习或生成中心嵌入序列时是否使用下推堆栈?","authors":"Stephen Ferrigno,&nbsp;Samuel J. Cheyette,&nbsp;Susan Carey","doi":"10.1111/cogs.70112","DOIUrl":null,"url":null,"abstract":"<p>Complex sequences are ubiquitous in human mental life, structuring representations within many different cognitive domains—natural language, music, mathematics, and logic, to name a few. However, the representational and computational machinery used to learn abstract grammars and process complex sequences is unknown. Here, we used an artificial grammar learning task to study how adults abstract center-embedded and cross-serial grammars that generalize beyond the level of embedding of the training sequences. We tested untrained generalizations to longer sequence lengths and used error patterns, item-to-item response times, and a Bayesian mixture model to test two possible memory architectures that might underlie the sequence representations of each grammar: stacks and queues. We find that adults learned both grammars, that the cross-serial grammar was easier to learn and produce than the matched center-embedded grammar, and that item-to-item touch times during sequence generation differed systematically between the two types of sequences. Contrary to widely held assumptions, we find no evidence that a stack architecture is used to generate center-embedded sequences in an indexed A<sup>n</sup>B<sup>n</sup> artificial grammar. Instead, the data and modeling converged on the conclusion that both center-embedded and cross-serial sequences are generated using a queue memory architecture. In this study, participants stored items in a first-in-first-out memory architecture and then accessed them via an iterative search over the stored list to generate the matched base pairs of center-embedded or cross-serial sequences.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 9","pages":""},"PeriodicalIF":2.4000,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70112","citationCount":"0","resultStr":"{\"title\":\"Do Humans Use Push-Down Stacks When Learning or Producing Center-Embedded Sequences?\",\"authors\":\"Stephen Ferrigno,&nbsp;Samuel J. Cheyette,&nbsp;Susan Carey\",\"doi\":\"10.1111/cogs.70112\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Complex sequences are ubiquitous in human mental life, structuring representations within many different cognitive domains—natural language, music, mathematics, and logic, to name a few. However, the representational and computational machinery used to learn abstract grammars and process complex sequences is unknown. Here, we used an artificial grammar learning task to study how adults abstract center-embedded and cross-serial grammars that generalize beyond the level of embedding of the training sequences. We tested untrained generalizations to longer sequence lengths and used error patterns, item-to-item response times, and a Bayesian mixture model to test two possible memory architectures that might underlie the sequence representations of each grammar: stacks and queues. We find that adults learned both grammars, that the cross-serial grammar was easier to learn and produce than the matched center-embedded grammar, and that item-to-item touch times during sequence generation differed systematically between the two types of sequences. 
Contrary to widely held assumptions, we find no evidence that a stack architecture is used to generate center-embedded sequences in an indexed A<sup>n</sup>B<sup>n</sup> artificial grammar. Instead, the data and modeling converged on the conclusion that both center-embedded and cross-serial sequences are generated using a queue memory architecture. In this study, participants stored items in a first-in-first-out memory architecture and then accessed them via an iterative search over the stored list to generate the matched base pairs of center-embedded or cross-serial sequences.</p>\",\"PeriodicalId\":48349,\"journal\":{\"name\":\"Cognitive Science\",\"volume\":\"49 9\",\"pages\":\"\"},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2025-09-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70112\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cognitive Science\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/cogs.70112\",\"RegionNum\":2,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"PSYCHOLOGY, EXPERIMENTAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Science","FirstCategoryId":"102","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/cogs.70112","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0

Abstract



Complex sequences are ubiquitous in human mental life, structuring representations within many different cognitive domains—natural language, music, mathematics, and logic, to name a few. However, the representational and computational machinery used to learn abstract grammars and process complex sequences is unknown. Here, we used an artificial grammar learning task to study how adults abstract center-embedded and cross-serial grammars that generalize beyond the level of embedding of the training sequences. We tested untrained generalizations to longer sequence lengths and used error patterns, item-to-item response times, and a Bayesian mixture model to test two possible memory architectures that might underlie the sequence representations of each grammar: stacks and queues. We find that adults learned both grammars, that the cross-serial grammar was easier to learn and produce than the matched center-embedded grammar, and that item-to-item touch times during sequence generation differed systematically between the two types of sequences. Contrary to widely held assumptions, we find no evidence that a stack architecture is used to generate center-embedded sequences in an indexed AnBn artificial grammar. Instead, the data and modeling converged on the conclusion that both center-embedded and cross-serial sequences are generated using a queue memory architecture. In this study, participants stored items in a first-in-first-out memory architecture and then accessed them via an iterative search over the stored list to generate the matched base pairs of center-embedded or cross-serial sequences.
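
To make the stack/queue contrast concrete, the following is a minimal illustrative sketch in Python, not the authors' experimental code or fitted model; the toy items (A1, A2, A3) and function names are hypothetical. A push-down stack pairs items in mirror order, the structure of a center-embedded sequence (A1 A2 A3 B3 B2 B1), while a first-in-first-out queue pairs items in storage order, the structure of a cross-serial sequence (A1 A2 A3 B1 B2 B3). The third function sketches the account the abstract argues for: a first-in-first-out store combined with an iterative search over the stored list, which can still produce the mirror-order pairing of a center-embedded sequence.

```python
from collections import deque

# Toy indexed A items; each A_i must later be matched by its B_i.
A_ITEMS = ["A1", "A2", "A3"]


def center_embedded_with_stack(a_items):
    """Push each A onto a stack, then pop to emit the Bs.
    Last-in-first-out retrieval yields the mirror order of a
    center-embedded sequence: A1 A2 A3 B3 B2 B1."""
    stack, sequence = [], []
    for a in a_items:
        stack.append(a)               # push
        sequence.append(a)
    while stack:
        a = stack.pop()               # pop the most recently stored item
        sequence.append(a.replace("A", "B"))
    return sequence


def cross_serial_with_queue(a_items):
    """Enqueue each A, then dequeue to emit the Bs.
    First-in-first-out retrieval yields the same-order pairing of a
    cross-serial sequence: A1 A2 A3 B1 B2 B3."""
    queue, sequence = deque(), []
    for a in a_items:
        queue.append(a)               # enqueue
        sequence.append(a)
    while queue:
        a = queue.popleft()           # dequeue the earliest stored item
        sequence.append(a.replace("A", "B"))
    return sequence


def center_embedded_with_queue_and_search(a_items):
    """Sketch of a queue-plus-search account: As are stored in
    first-in-first-out order, and each B is produced by scanning the
    stored list to find the last unmatched A. The per-item search-step
    counts are one illustrative way such a model could predict
    item-to-item response times (an assumption, not the paper's model)."""
    stored = list(a_items)            # storage preserves first-in-first-out order
    sequence = list(a_items)
    search_steps = []
    while stored:
        steps = 0
        for item in stored:           # iterative search, front to back
            steps += 1
            if item == stored[-1]:    # target: the last unmatched A
                break
        search_steps.append(steps)
        a = stored.pop()              # remove the found item
        sequence.append(a.replace("A", "B"))
    return sequence, search_steps


if __name__ == "__main__":
    print(center_embedded_with_stack(A_ITEMS))             # ['A1', 'A2', 'A3', 'B3', 'B2', 'B1']
    print(cross_serial_with_queue(A_ITEMS))                # ['A1', 'A2', 'A3', 'B1', 'B2', 'B3']
    print(center_embedded_with_queue_and_search(A_ITEMS))  # (['A1', 'A2', 'A3', 'B3', 'B2', 'B1'], [3, 2, 1])
```

In this sketch, the number of search steps for each B shrinks as the stored list empties, whereas popping a stack takes the same single step every time; differences of this kind in item-to-item timing are the sort of behavioral signature the study used to distinguish the two memory architectures.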

Source Journal
Cognitive Science (Psychology, Experimental)
CiteScore: 4.10
Self-citation rate: 8.00%
Articles published: 139
About the journal: Cognitive Science publishes articles in all areas of cognitive science, covering such topics as knowledge representation, inference, memory processes, learning, problem solving, planning, perception, natural language understanding, connectionism, brain theory, motor control, intentional systems, and other areas of interdisciplinary concern. Highest priority is given to research reports that are specifically written for a multidisciplinary audience. The audience is primarily researchers in cognitive science and its associated fields, including anthropologists, education researchers, psychologists, philosophers, linguists, computer scientists, neuroscientists, and roboticists.