Toward 'Computational-Rationality' Approaches to Arbitrating Models of Cognition: A Case Study Using Perceptual Metacognition.

Q1 Social Sciences
Open Mind Pub Date : 2023-09-20 eCollection Date: 2023-01-01 DOI:10.1162/opmi_a_00100
Yingqi Rong, Megan A K Peters
{"title":"认知仲裁模型的“计算理性”方法:一个使用感知元认知的案例研究。","authors":"Yingqi Rong, Megan A K Peters","doi":"10.1162/opmi_a_00100","DOIUrl":null,"url":null,"abstract":"<p><p>Perceptual confidence results from a metacognitive process which evaluates how likely our percepts are to be correct. Many competing models of perceptual metacognition enjoy strong empirical support. Arbitrating these models traditionally proceeds via researchers conducting experiments and then fitting several models to the data collected. However, such a process often includes conditions or paradigms that may not best arbitrate competing models: Many models make similar predictions under typical experimental conditions. Consequently, many experiments are needed, collectively (sub-optimally) sampling the space of conditions to compare models. Here, instead, we introduce a variant of optimal experimental design which we call a <i>computational-rationality</i> approach to generative models of cognition, using perceptual metacognition as a case study. Instead of designing experiments and post-hoc specifying models, we <i>began</i> with comprehensive model comparison among four competing generative models for perceptual metacognition, drawn from literature. By simulating a simple experiment under each model, we identified conditions where these models made <i>maximally diverging predictions</i> for confidence. We then presented these conditions to human observers, and compared the models' capacity to predict choices and confidence. Results revealed two surprising findings: (1) two models previously reported to differently predict confidence to different degrees, with one predicting better than the other, appeared to predict confidence in a direction <i>opposite</i> to previous findings; and (2) two other models previously reported to equivalently predict confidence showed stark differences in the conditions tested here. Although preliminary with regards to which model is actually 'correct' for perceptual metacognition, our findings reveal the promise of this <i>computational-rationality</i> approach to maximizing experimental utility in model arbitration while minimizing the number of experiments necessary to reveal the winning model, both for perceptual metacognition and in other domains.</p>","PeriodicalId":32558,"journal":{"name":"Open Mind","volume":"7 ","pages":"652-674"},"PeriodicalIF":0.0000,"publicationDate":"2023-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10575558/pdf/","citationCount":"0","resultStr":"{\"title\":\"Toward 'Computational-Rationality' Approaches to Arbitrating Models of Cognition: A Case Study Using Perceptual Metacognition.\",\"authors\":\"Yingqi Rong, Megan A K Peters\",\"doi\":\"10.1162/opmi_a_00100\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Perceptual confidence results from a metacognitive process which evaluates how likely our percepts are to be correct. Many competing models of perceptual metacognition enjoy strong empirical support. Arbitrating these models traditionally proceeds via researchers conducting experiments and then fitting several models to the data collected. However, such a process often includes conditions or paradigms that may not best arbitrate competing models: Many models make similar predictions under typical experimental conditions. Consequently, many experiments are needed, collectively (sub-optimally) sampling the space of conditions to compare models. 
Here, instead, we introduce a variant of optimal experimental design which we call a <i>computational-rationality</i> approach to generative models of cognition, using perceptual metacognition as a case study. Instead of designing experiments and post-hoc specifying models, we <i>began</i> with comprehensive model comparison among four competing generative models for perceptual metacognition, drawn from literature. By simulating a simple experiment under each model, we identified conditions where these models made <i>maximally diverging predictions</i> for confidence. We then presented these conditions to human observers, and compared the models' capacity to predict choices and confidence. Results revealed two surprising findings: (1) two models previously reported to differently predict confidence to different degrees, with one predicting better than the other, appeared to predict confidence in a direction <i>opposite</i> to previous findings; and (2) two other models previously reported to equivalently predict confidence showed stark differences in the conditions tested here. Although preliminary with regards to which model is actually 'correct' for perceptual metacognition, our findings reveal the promise of this <i>computational-rationality</i> approach to maximizing experimental utility in model arbitration while minimizing the number of experiments necessary to reveal the winning model, both for perceptual metacognition and in other domains.</p>\",\"PeriodicalId\":32558,\"journal\":{\"name\":\"Open Mind\",\"volume\":\"7 \",\"pages\":\"652-674\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-09-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10575558/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Open Mind\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1162/opmi_a_00100\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2023/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q1\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Open Mind","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1162/opmi_a_00100","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/1/1 0:00:00","PubModel":"eCollection","JCR":"Q1","JCRName":"Social Sciences","Score":null,"Total":0}
Citations: 0

Abstract

Perceptual confidence results from a metacognitive process which evaluates how likely our percepts are to be correct. Many competing models of perceptual metacognition enjoy strong empirical support. Arbitrating these models traditionally proceeds via researchers conducting experiments and then fitting several models to the data collected. However, such a process often includes conditions or paradigms that may not best arbitrate competing models: Many models make similar predictions under typical experimental conditions. Consequently, many experiments are needed, collectively (sub-optimally) sampling the space of conditions to compare models. Here, instead, we introduce a variant of optimal experimental design which we call a computational-rationality approach to generative models of cognition, using perceptual metacognition as a case study. Instead of designing experiments and post-hoc specifying models, we began with comprehensive model comparison among four competing generative models for perceptual metacognition, drawn from the literature. By simulating a simple experiment under each model, we identified conditions where these models made maximally diverging predictions for confidence. We then presented these conditions to human observers and compared the models' capacity to predict choices and confidence. Results revealed two surprising findings: (1) two models previously reported to predict confidence to different degrees, with one predicting better than the other, appeared to predict confidence in a direction opposite to previous findings; and (2) two other models previously reported to equivalently predict confidence showed stark differences in the conditions tested here. Although preliminary with regard to which model is actually 'correct' for perceptual metacognition, our findings reveal the promise of this computational-rationality approach to maximizing experimental utility in model arbitration while minimizing the number of experiments necessary to reveal the winning model, both for perceptual metacognition and in other domains.
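The selection logic described in the abstract can be sketched in a few lines of code. The snippet below is a minimal, hypothetical illustration, not the paper's four generative models or its simulation code: three toy confidence models are evaluated over a grid of candidate stimulus conditions, and the condition where their predicted confidence diverges most is taken as the most diagnostic one to present to human observers. All function names and parameter settings are illustrative assumptions.

```python
import numpy as np

def confidence_bayesian(signal, noise_sd):
    # Toy "ideal" observer: confidence is a logistic function of sensitivity (d').
    d_prime = signal / noise_sd
    return 1.0 / (1.0 + np.exp(-d_prime))

def confidence_heuristic(signal, noise_sd):
    # Toy heuristic observer: confidence tracks raw signal strength and ignores noise.
    return float(np.clip(0.5 + 0.1 * signal, 0.5, 1.0))

def confidence_noisy_readout(signal, noise_sd):
    # Toy observer with a lossy metacognitive readout: extra noise at the confidence stage.
    d_prime = signal / (noise_sd + 1.0)
    return 1.0 / (1.0 + np.exp(-d_prime))

models = {
    "bayesian": confidence_bayesian,
    "heuristic": confidence_heuristic,
    "noisy_readout": confidence_noisy_readout,
}

# Candidate experimental conditions: a grid of signal strengths and noise levels.
signals = np.linspace(0.5, 4.0, 8)
noise_levels = np.linspace(0.5, 3.0, 6)

best_condition, best_divergence = None, -np.inf
for s in signals:
    for sd in noise_levels:
        preds = np.array([m(s, sd) for m in models.values()])
        divergence = preds.std()  # spread of predicted confidence across the models
        if divergence > best_divergence:
            best_condition, best_divergence = (float(s), float(sd)), divergence

print(f"Most diagnostic condition (signal, noise): {best_condition}, "
      f"model divergence = {best_divergence:.3f}")
```

In the paper the generative models and the divergence criterion are of course richer, but the design principle is the same: simulate the competing models first, then run only those conditions that can actually tell the models apart.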

Source journal: Open Mind (Social Sciences - Linguistics and Language)
CiteScore: 3.20
Self-citation rate: 0.00%
Articles published: 15
Review time: 53 weeks