Visual search is relational without prior context learning
Stefanie I. Becker, Zachary Hamblin-Frohman, Koralalage Don Raveen Amarasekera
Cognition, Volume 260, Article 106132 (published 2025-04-04)
DOI: 10.1016/j.cognition.2025.106132
Citations: 0
Abstract
The most prominent models of visual attention assume that we tune attention to the specific feature value of a sought-after object (e.g., a specific colour or orientation) to aid search. However, subsequent research has shown that attention is often tuned to the relative feature that the target has in relation to other items in the surround (e.g., redder/greener, darker/lighter, larger/smaller), in line with a Relational Account of Attention. Previous research is still limited, though, as it used repeated-target designs and relatively sparse displays. It therefore remains unknown whether attention can be tuned to relative features prior to the first eye movement, or whether this requires context knowledge gained from experience. Moreover, it is unclear how search progresses from one item to the next. The present study tested these questions with a 36-item search display containing multiple distractors and variable target and non-target colours. The first fixations on a trial showed that these displays still reliably evoked relational search, even when observers had no knowledge of the context. Moreover, the first five fixations within a trial showed that observers tend to select the most extreme items first, followed by the next-most extreme, until the target is found, in line with the relational account. These findings show that information about the relative target feature can be rapidly extracted and used to guide attention in the first fixation(s) of search, with attention homing in on the target colour only after multiple fixations on relatively more extreme distractors.
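The fixation-order prediction described above can be made concrete with a small simulation. The following is a minimal sketch (not the authors' code or stimuli), assuming each item's colour can be reduced to a single "redness" value on an arbitrary 0-1 scale; all item names and numbers are hypothetical. It contrasts the fixation order predicted by relational guidance (most extreme item first, then the next-most extreme) with the order predicted by classic feature-specific guidance (items most similar to a memorised target value first).

```python
# A minimal sketch (illustrative only, not the study's method) of how
# relational vs. feature-specific guidance would rank items for fixation.

def relational_rank(items, direction="redder"):
    """Relational guidance: fixate the most extreme item in the target's
    relative direction first, then the next-most extreme, and so on
    until the target is reached."""
    reverse = (direction == "redder")  # "redder" = larger redness values first
    return sorted(items, key=lambda i: i["redness"], reverse=reverse)

def feature_specific_rank(items, target_value):
    """Feature-specific guidance: fixate items in order of similarity
    to a memorised target feature value, as classic feature-based
    models would predict."""
    return sorted(items, key=lambda i: abs(i["redness"] - target_value))

# Toy display: an orange target among redder and yellower distractors,
# so some distractors are relatively more extreme than the target.
display = [
    {"name": "target",            "redness": 0.60},
    {"name": "distractor_red",    "redness": 0.90},  # more extreme than target
    {"name": "distractor_mid",    "redness": 0.75},  # more extreme than target
    {"name": "distractor_yellow", "redness": 0.30},
]

# Relational search: extreme distractors are fixated before the target.
print([i["name"] for i in relational_rank(display, "redder")])
# -> ['distractor_red', 'distractor_mid', 'target', 'distractor_yellow']

# Feature-specific search: the target would be fixated first.
print([i["name"] for i in feature_specific_rank(display, target_value=0.60)])
# -> ['target', 'distractor_mid', 'distractor_red', 'distractor_yellow']
```

Under the relational ranking, fixations progress from the most extreme item toward the target, matching the pattern reported for the first five fixations; a purely feature-specific ranking would instead select the target first.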
About the journal:
Cognition is an international journal that publishes theoretical and experimental papers on the study of the mind. It covers a wide variety of subjects spanning all aspects of cognition, ranging from biological and experimental studies to formal analysis. Contributions from the fields of psychology, neuroscience, linguistics, computer science, mathematics, ethology and philosophy are welcome, provided that they have some bearing on the functioning of the mind. In addition, the journal serves as a forum for discussion of social and political aspects of cognitive science.