Title: A statistical foundation for derived attention
Authors: Samuel Paskewitz, Matt Jones
DOI: 10.1016/j.jmp.2022.102728
Journal: Journal of Mathematical Psychology, Volume 112, Article 102728
Publication date: 2023-02-01
Publication type: Journal Article
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10004174/pdf/
Publisher page: https://www.sciencedirect.com/science/article/pii/S0022249622000669
Citation count: 0
Abstract
According to the theory of derived attention, organisms attend to cues with strong associations. Prior work has shown that, combined with a Rescorla–Wagner style learning mechanism, derived attention explains phenomena such as learned predictiveness, inattention to blocked cues, and value-based salience. We introduce a Bayesian derived attention model that explains a wider array of results than previous models and gives further insight into the principle of derived attention. Our approach combines Bayesian linear regression with the assumption that the associations of any cue with various outcomes share the same prior variance, which can be thought of as the inherent importance of that cue. The new model simultaneously estimates cue–outcome associations and prior variance through approximate Bayesian learning. A significant cue will develop large associations, leading the model to estimate a high prior variance and hence develop larger associations from that cue to novel outcomes. This provides a normative, statistical explanation for derived attention. Through simulation, we show that this Bayesian derived attention model explains not only the same phenomena as previous versions but also retrospective revaluation. It also makes a novel prediction: inattention after backward blocking. We hope that further development of the Bayesian derived attention model will shed light on the complex relationship between uncertainty and predictiveness effects on attention.
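The core mechanism described in the abstract, Bayesian linear regression in which all of a cue's outcome associations share one prior variance that is itself learned, can be illustrated with an empirical-Bayes sketch. This is a hypothetical implementation for intuition only, not the authors' exact update rule: the function name, the EM-style variance update, and the simulated data are all assumptions. A cue whose learned weights are large earns a large estimated prior variance, which is the statistical analogue of derived attention.

```python
import numpy as np

def fit_derived_attention(X, Y, n_iters=20, noise_var=1.0):
    """Empirical-Bayes linear regression sketch: each cue (column of X)
    has a single prior variance tau_i shared across all outcomes
    (columns of Y). Cues that acquire large associations are assigned a
    large tau_i, mimicking the derived-attention principle.
    Hypothetical illustration, not the paper's exact algorithm."""
    n, d = X.shape
    _, m = Y.shape
    tau = np.ones(d)  # per-cue prior variance ("inherent importance")
    for _ in range(n_iters):
        # Posterior over weights for all outcomes at once:
        # precision = X'X / sigma^2 + diag(1/tau)
        A = X.T @ X / noise_var + np.diag(1.0 / tau)
        S = np.linalg.inv(A)               # posterior covariance (shared)
        W = S @ X.T @ Y / noise_var        # posterior mean, shape (d, m)
        # EM-style update: tau_i = average posterior second moment of
        # cue i's weights across the m outcomes.
        tau = (W ** 2).sum(axis=1) / m + np.diag(S)
    return W, tau

# Toy simulation: cue 0 predicts both outcomes, cue 1 is irrelevant.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
Y = np.column_stack([2.0 * X[:, 0], -2.0 * X[:, 0]])
Y += rng.normal(scale=0.5, size=(200, 2))
W, tau = fit_derived_attention(X, Y)
# The predictive cue ends up with the larger estimated prior variance,
# so new associations from it would grow faster: tau[0] > tau[1].
```

A larger `tau` acts like higher associability in classic attention models: it weakens shrinkage toward zero for that cue, so associations from a historically predictive cue to novel outcomes are acquired more readily.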
Journal description:
The Journal of Mathematical Psychology includes articles, monographs and reviews, notes and commentaries, and book reviews in all areas of mathematical psychology. Empirical and theoretical contributions are equally welcome.
Areas of special interest include, but are not limited to, fundamental measurement and psychological process models, such as those based on neural network or information processing concepts. A partial listing of substantive areas covered includes sensation and perception, psychophysics, learning and memory, problem solving, judgment and decision-making, and motivation.
The Journal of Mathematical Psychology is affiliated with the Society for Mathematical Psychology.
Research Areas include:
• Models for sensation and perception, learning, memory and thinking
• Fundamental measurement and scaling
• Decision making
• Neural modeling and networks
• Psychophysics and signal detection
• Neuropsychological theories
• Psycholinguistics
• Motivational dynamics
• Animal behavior
• Psychometric theory