Visual search is relational without prior context learning

Impact Factor: 2.8 · CAS Tier 1 (Psychology) · JCR Q1, Psychology, Experimental
Stefanie I. Becker, Zachary Hamblin-Frohman, Koralalage Don Raveen Amarasekera
DOI: 10.1016/j.cognition.2025.106132
Journal: Cognition, Volume 260, Article 106132
Published: 2025-04-04 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0010027725000721
Citations: 0

Abstract

The most prominent models of visual attention assume that we tune attention to the specific feature value of a sought-after object (e.g., a specific colour or orientation) to aid search. However, subsequent research has shown that attention is often tuned to the relative feature that the target has in relation to other items in the surround (e.g., redder/greener, darker/lighter, larger/smaller), in line with a Relational Account of Attention. Previous research is limited, however, as it used repeated-target designs and relatively sparse displays. It therefore remains unknown whether we can indeed tune attention to relative features prior to the first eye movement, or whether this requires context knowledge gained from experience. Moreover, it is unclear how search progresses from one item to the next. The present study tested these questions in a 36-item search display with multiple distractors and variable target and non-target colours. The first fixations on a trial showed that these displays still reliably evoked relational search, even when observers had no knowledge of the context. Moreover, the first five fixations within a trial showed that we tend to select the most extreme items first, followed by the next-most-extreme, until the target is found, in line with the relational account. These findings show that information about the relative target feature can be rapidly extracted and used to guide attention in the first fixation(s) of search, whereby attention only homes in on the target colour after multiple fixations on relatively more extreme distractors.
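The fixation pattern the abstract describes — selecting the most extreme item first, then the next-most-extreme, until the target is reached — can be sketched as a simple ranking procedure. The following is a hypothetical illustration (not the authors' code or model); the function name, feature values, and indices are invented for the example:

```python
# Hypothetical sketch of the fixation order predicted by the Relational
# Account: items are ranked by how extreme their feature value is in the
# target's relative direction (e.g., "reddest first"), and fixations proceed
# from most to least extreme until the target is found.

def relational_fixation_order(feature_values, target_index, direction=+1):
    """Return item indices in predicted fixation order, ending at the target.

    feature_values: feature value of each display item (e.g., redness).
    target_index:   index of the target item.
    direction:      +1 if the target is 'more' on the dimension (e.g., redder
                    than the nontargets), -1 if it is 'less' (e.g., greener).
    """
    # Rank all items from most to least extreme in the target's direction.
    ranked = sorted(range(len(feature_values)),
                    key=lambda i: direction * feature_values[i],
                    reverse=True)
    # Fixate items in that order; search ends when the target is reached.
    fixations = []
    for i in ranked:
        fixations.append(i)
        if i == target_index:
            break
    return fixations

# Example: the target (index 2) is redder than most nontargets, but two
# distractors (indices 4 and 0) are even more extreme, so under this account
# they are fixated first.
redness = [0.9, 0.3, 0.8, 0.2, 1.0]
print(relational_fixation_order(redness, target_index=2))  # [4, 0, 2]
```

Note that when the target itself is the most extreme item in its direction, the sketch predicts it is fixated first — matching the abstract's finding that the first fixation can already be relationally guided.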
Source journal: Cognition (Psychology, Experimental)
CiteScore: 6.40
Self-citation rate: 5.90%
Articles per year: 283
Journal description: Cognition is an international journal that publishes theoretical and experimental papers on the study of the mind. It covers a wide variety of subjects concerning all aspects of cognition, ranging from biological and experimental studies to formal analysis. Contributions from the fields of psychology, neuroscience, linguistics, computer science, mathematics, ethology and philosophy are welcome, provided that they have some bearing on the functioning of the mind. In addition, the journal serves as a forum for discussion of social and political aspects of cognitive science.