{"title":"Learning to be confident: How agents learn confidence based on prediction errors","authors":"Pierre Le Denmat , Kobe Desender , Tom Verguts","doi":"10.1016/j.cognition.2025.106332","DOIUrl":null,"url":null,"abstract":"<div><div>Decision confidence should normatively reflect the posterior probability of making a correct choice, conditional on relevant information. However, how individuals learn to calibrate their sense of confidence to that probability remains unknown. The standard approach to estimate any quantity is to use trial-by trial samples of that quantity to train a function approximator (such as a neural network) based on the prediction errors (quantity minus prediction of the quantity). We tested whether humans learn about confidence using this principle in a perceptual decision-making experiment where participants repeatedly alternated between two manipulated feedback regimes (negative vs positive) every few blocks of trials. As anticipated, confidence ratings tracked feedback, with confidence gradually increasing when participants received overall positive feedback (and thus positive prediction errors), and decreasing when receiving negative feedback (and thus negative prediction errors). These feedback-induced dynamic changes were specific to confidence, as objective performance was unaffected by the manipulation. We propose a single-layer neural network model for confidence which updates the computation of confidence based on trial-level prediction errors, and demonstrate that it better fits the behavioral data compared to a purely valence-based model. Taken together, these results show that the computation of confidence is dynamic: humans constantly update how they compute confidence based on prediction errors (feedback minus prediction), in a statistically efficient manner.</div></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":"266 ","pages":"Article 106332"},"PeriodicalIF":2.8000,"publicationDate":"2025-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognition","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0010027725002732","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0
Abstract
Decision confidence should normatively reflect the posterior probability of making a correct choice, conditional on relevant information. However, how individuals learn to calibrate their sense of confidence to that probability remains unknown. The standard approach to estimating any quantity is to use trial-by-trial samples of that quantity to train a function approximator (such as a neural network) based on the prediction errors (quantity minus prediction of the quantity). We tested whether humans learn about confidence using this principle in a perceptual decision-making experiment where participants repeatedly alternated between two manipulated feedback regimes (negative vs. positive) every few blocks of trials. As anticipated, confidence ratings tracked feedback, with confidence gradually increasing when participants received overall positive feedback (and thus positive prediction errors), and decreasing when receiving negative feedback (and thus negative prediction errors). These feedback-induced dynamic changes were specific to confidence, as objective performance was unaffected by the manipulation. We propose a single-layer neural network model for confidence which updates the computation of confidence based on trial-level prediction errors, and demonstrate that it better fits the behavioral data compared to a purely valence-based model. Taken together, these results show that the computation of confidence is dynamic: humans constantly update how they compute confidence based on prediction errors (feedback minus prediction), in a statistically efficient manner.
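The abstract does not spell out the model, but the learning principle it describes (updating the confidence computation from trial-level prediction errors, i.e., feedback minus predicted confidence) can be sketched as a delta-rule update on the weights of a single-layer network. The Python snippet below is a minimal illustration under assumptions of my own: the sigmoid readout, the learning rate, and the evidence encoding are not taken from the paper, so this is a sketch of the general principle rather than the authors' fitted model.

```python
import numpy as np


def sigmoid(x):
    """Squash a real-valued readout into a [0, 1] confidence-like range."""
    return 1.0 / (1.0 + np.exp(-x))


class ConfidenceLearner:
    """Single-layer sketch: confidence is read out from trial evidence through
    weights that are updated with a delta rule on the prediction error
    (feedback minus predicted confidence)."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)  # readout weights (assumed zero initialization)
        self.lr = lr                   # learning rate (assumed value)

    def predict(self, evidence):
        # Predicted confidence as a probability-like value in [0, 1].
        return sigmoid(self.w @ evidence)

    def update(self, evidence, feedback):
        # feedback: 1 for "correct" feedback, 0 for "error" feedback.
        conf = self.predict(evidence)
        pe = feedback - conf           # prediction error drives the weight update
        self.w += self.lr * pe * evidence
        return conf, pe


# Toy usage: a mostly positive feedback regime yields mostly positive prediction
# errors, so predicted confidence drifts upward; a negative regime does the opposite.
rng = np.random.default_rng(0)
learner = ConfidenceLearner(n_features=2)
for trial in range(200):
    evidence = np.abs(rng.normal(size=2))       # stand-in for trial evidence strength
    feedback = 1 if rng.random() < 0.8 else 0   # manipulated (mostly positive) feedback
    conf, pe = learner.update(evidence, feedback)
```

In this toy run, blocks of mostly positive feedback produce mostly positive prediction errors and hence a gradual rise in predicted confidence, mirroring the qualitative pattern the abstract reports for the manipulated feedback regimes.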
Journal introduction:
Cognition is an international journal that publishes theoretical and experimental papers on the study of the mind. It covers a wide variety of subjects concerning all the different aspects of cognition, ranging from biological and experimental studies to formal analysis. Contributions from the fields of psychology, neuroscience, linguistics, computer science, mathematics, ethology and philosophy are welcome in this journal provided that they have some bearing on the functioning of the mind. In addition, the journal serves as a forum for discussion of social and political aspects of cognitive science.