Template-based attentional guidance and generic procedural learning in contextual guided visual search: Evidence from reduced response time variability.
Authors: Hongyu Yang, Shasha Zhu, Senlin Liu, Lixia Yuan, Xiaowei Xie, Xuelian Zang
Journal: Journal of Vision, 25(4), 1
DOI: 10.1167/jov.25.4.1
Published: 2025-04-01
Citations: 0
Abstract
The contextual cueing effect, whereby participants search repeated displays faster than novel ones, is often explained by the "attention guidance" account, which posits that repeated exposure helps individuals learn the context and attend to likely target locations. Alternatively, the "generic procedural learning" account suggests that a general search strategy is developed across all displays, with repeated contexts carrying greater weight in optimizing that strategy because of their higher presentation frequency; this likewise makes responses faster for repeated than for novel displays. The current study examined these two mechanisms using a varied contextual cueing paradigm, analyzing response time (RT) variability with the coefficient of variation (CV) and a time-frequency analysis of RTs. Experiment 1 involved uninterrupted training with repeated and novel displays presented separately, followed by a test with randomly interleaved repeated and novel displays. Experiment 2 used interleaved displays for training before an uninterrupted test phase. Both experiments revealed faster RTs and reduced template-based variability for repeated displays early in training, supporting attentional guidance. Generic procedural learning, by contrast, indicated by a late-onset reduction in cross-display variability for repeated displays, required more time and training to contribute to the cueing effect. These findings suggest that attentional guidance dominates early learning, but both mechanisms contribute to the contextual cueing effect overall.
About the journal:
The Journal of Vision explores all aspects of biological visual function, including spatial vision, perception, low vision, color vision, and more, spanning the fields of neuroscience, psychology, and psychophysics.