Cross-Modal Contributions to Episodic Memory for Voices

IF 1.8 | CAS Tier 4 (Psychology) | JCR Q3 (BIOPHYSICS)
Joshua R Tatz, Zehra F Peynircioğlu
{"title":"对声音外显记忆的跨模态贡献","authors":"Joshua R Tatz, Zehra F Peynircioğlu","doi":"10.1163/22134808-bja10116","DOIUrl":null,"url":null,"abstract":"<p><p>Multisensory context often facilitates perception and memory. In fact, encoding items within a multisensory context can improve memory even on strictly unisensory tests (i.e., when the multisensory context is absent). Prior studies that have consistently found these multisensory facilitation effects have largely employed multisensory contexts in which the stimuli were meaningfully related to the items targeting for remembering (e.g., pairing canonical sounds and images). Other studies have used unrelated stimuli as multisensory context. A third possible type of multisensory context is one that is environmentally related simply because the stimuli are often encountered together in the real world. We predicted that encountering such a multisensory context would also enhance memory through cross-modal associations, or representations relating to one's prior multisensory experience with that sort of stimuli in general. In two memory experiments, we used faces and voices of unfamiliar people as everyday stimuli individuals have substantial experience integrating the perceptual features of. We assigned participants to face- or voice-recognition groups and ensured that, during the study phase, half of the face or voice targets were encountered also with information in the other modality. Voices initially encoded along with faces were consistently remembered better, providing evidence that cross-modal associations could explain the observed multisensory facilitation.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":null,"pages":null},"PeriodicalIF":1.8000,"publicationDate":"2023-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cross-Modal Contributions to Episodic Memory for Voices.\",\"authors\":\"Joshua R Tatz, Zehra F Peynircioğlu\",\"doi\":\"10.1163/22134808-bja10116\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Multisensory context often facilitates perception and memory. In fact, encoding items within a multisensory context can improve memory even on strictly unisensory tests (i.e., when the multisensory context is absent). Prior studies that have consistently found these multisensory facilitation effects have largely employed multisensory contexts in which the stimuli were meaningfully related to the items targeting for remembering (e.g., pairing canonical sounds and images). Other studies have used unrelated stimuli as multisensory context. A third possible type of multisensory context is one that is environmentally related simply because the stimuli are often encountered together in the real world. We predicted that encountering such a multisensory context would also enhance memory through cross-modal associations, or representations relating to one's prior multisensory experience with that sort of stimuli in general. In two memory experiments, we used faces and voices of unfamiliar people as everyday stimuli individuals have substantial experience integrating the perceptual features of. We assigned participants to face- or voice-recognition groups and ensured that, during the study phase, half of the face or voice targets were encountered also with information in the other modality. 
Voices initially encoded along with faces were consistently remembered better, providing evidence that cross-modal associations could explain the observed multisensory facilitation.</p>\",\"PeriodicalId\":51298,\"journal\":{\"name\":\"Multisensory Research\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2023-12-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Multisensory Research\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1163/22134808-bja10116\",\"RegionNum\":4,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"BIOPHYSICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Multisensory Research","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1163/22134808-bja10116","RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"BIOPHYSICS","Score":null,"Total":0}
Citations: 0

Abstract


Multisensory context often facilitates perception and memory. In fact, encoding items within a multisensory context can improve memory even on strictly unisensory tests (i.e., when the multisensory context is absent). Prior studies that have consistently found these multisensory facilitation effects have largely employed multisensory contexts in which the stimuli were meaningfully related to the items targeted for remembering (e.g., pairing canonical sounds and images). Other studies have used unrelated stimuli as multisensory context. A third possible type of multisensory context is one that is environmentally related simply because the stimuli are often encountered together in the real world. We predicted that encountering such a multisensory context would also enhance memory through cross-modal associations, or representations relating to one's prior multisensory experience with that sort of stimuli in general. In two memory experiments, we used faces and voices of unfamiliar people as everyday stimuli whose perceptual features individuals have substantial experience integrating. We assigned participants to face- or voice-recognition groups and ensured that, during the study phase, half of the face or voice targets were also encountered with information in the other modality. Voices initially encoded along with faces were consistently remembered better, providing evidence that cross-modal associations could explain the observed multisensory facilitation.

Source journal
Multisensory Research (BIOPHYSICS-PSYCHOLOGY)
CiteScore: 3.50
Self-citation rate: 12.50%
Articles published: 15
Journal description: Multisensory Research is an interdisciplinary archival journal covering all aspects of multisensory processing including the control of action, cognition and attention. Research using any approach to increase our understanding of multisensory perceptual, behavioural, neural and computational mechanisms is encouraged. Empirical, neurophysiological, psychophysical, brain imaging, clinical, developmental, mathematical and computational analyses are welcome. Research will also be considered covering multisensory applications such as sensory substitution, crossmodal methods for delivering sensory information or multisensory approaches to robotics and engineering. Short communications and technical notes that draw attention to new developments will be included, as will reviews and commentaries on current issues. Special issues dealing with specific topics will be announced from time to time. Multisensory Research is a continuation of Seeing and Perceiving, and of Spatial Vision.