Title: Attentional misguidance from contextual learning after target location changes in natural scenes
Authors: Markus Conci, Feifei Zhao
Journal: Vision Research, Volume 230, Article 108591 (Journal Article)
DOI: 10.1016/j.visres.2025.108591
Publication date: 2025-03-26
URL: https://www.sciencedirect.com/science/article/pii/S0042698925000525
Citations: 0
Abstract
Attentional orienting in complex visual environments is supported by statistical learning of regularities. For instance, visual search for a target is faster when a distractor layout is repeatedly encountered, illustrating that learned contextual invariances improve attentional guidance (contextual cueing). Although contextual learning is usually relatively efficient, relocating the target (within an otherwise unchanged layout) typically abolishes contextual cueing, with learning recovering only slowly thereafter. However, such a "lack of adaptation" has usually been demonstrated only with artificial displays of target/distractor letters. The current study instead used more realistic natural scene images to determine whether a comparable cost would also be evident in real-life contexts. Two experiments compared initial contextual cueing and the subsequent updating after a change in displays that presented either artificial letters or natural scenes as contexts. With letter displays, an initial cueing effect was found that was associated with non-explicit, incidental learning and that vanished after the change. Natural scene displays revealed either a rather large cueing effect that was related to explicit memory (Experiment 1) or a weaker cueing effect based on incidental learning (Experiment 2), with the size of cueing and the explicitness of the memory representation depending on the variability of the presented scene images. However, these variable initial benefits in scene displays always led to a substantial reduction after the change, comparable to the pattern in letter displays. Together, these findings show that the "richness" of natural scene contexts does not facilitate flexible contextual updating.
Journal description:
Vision Research is a journal devoted to the functional aspects of human, vertebrate and invertebrate vision and publishes experimental and observational studies, reviews, and theoretical and computational analyses. Vision Research also publishes clinical studies relevant to normal visual function and basic research relevant to visual dysfunction or its clinical investigation. Functional aspects of vision are interpreted broadly, ranging from molecular and cellular function to perception and behavior. Detailed descriptions are encouraged, but enough introductory background should be included for non-specialists. Theoretical and computational papers should give a sense of order to the facts or point to new verifiable observations. Papers dealing with questions in the history of vision science should stress the development of ideas in the field.