Data-driven studies in face identity processing rely on the quality of the tests and data sets

IF 3.2 · JCR Q1 (Behavioral Sciences) · CAS Tier 2 (Psychology)
Anna K. Bobak, Alex L. Jones, Zoe Hilker, Natalie Mestry, Sarah Bate, Peter J.B. Hancock
DOI: 10.1016/j.cortex.2023.05.018
Journal: Cortex, Volume 166, Pages 348–364
Published: 2023-09-01 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0010945223001557
Citations: 0

Abstract

There is growing interest in how data-driven approaches can help understand individual differences in face identity processing (FIP). However, researchers employ various FIP tests interchangeably, and it is unclear whether these tests 1) measure the same underlying ability or abilities and processes (e.g., confirmation of an identity match or elimination of an identity match), 2) are reliable, and 3) yield consistent performance for individuals across tests taken online and in the laboratory. Together, these factors would influence the outcomes of data-driven analyses. Here, we asked 211 participants to perform eight tests frequently reported in the literature. We used Principal Component Analysis and Agglomerative Clustering to determine the factors underpinning performance. Importantly, we examined the reliability of these tests and the relationships between them, and quantified participant consistency across tests. Our findings show that participants' performance can be split into two factors (called here confirmation and elimination of an identity match) and that participants cluster according to whether they are strong on one of the factors or equally strong on both. We found that the reliability of these tests is at best moderate, the correlations between them are weak, and the consistency in participant performance across tests is low. Developing reliable and valid measures of FIP and consistently scrutinising existing ones will be key for drawing meaningful conclusions from data-driven studies.
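The analysis pipeline the abstract describes — reducing scores on eight FIP tests to underlying factors via Principal Component Analysis, then grouping participants by agglomerative clustering on those factors — can be sketched in outline. The code below is a minimal illustration on synthetic data, not the authors' analysis: the two-factor loading structure, sample size, noise level, and cluster count are assumptions made for demonstration.

```python
# Illustrative sketch (synthetic data): PCA to extract factors underpinning
# performance on 8 FIP tests, then agglomerative (Ward) clustering of
# participants on their factor scores. Structure and parameters are assumed.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_participants, n_tests = 211, 8

# Simulate two latent abilities (stand-ins for "confirmation" and
# "elimination"), each driving four of the eight tests, plus noise.
latent = rng.normal(size=(n_participants, 2))
loadings = np.zeros((2, n_tests))
loadings[0, :4] = 0.7   # first four tests load on factor 1
loadings[1, 4:] = 0.7   # last four tests load on factor 2
scores = latent @ loadings + rng.normal(scale=0.7, size=(n_participants, n_tests))

# PCA via SVD on standardized scores
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
u, s, vt = np.linalg.svd(z, full_matrices=False)
explained = s**2 / np.sum(s**2)       # variance ratio per component
factor_scores = z @ vt[:2].T          # project onto first two components

# Agglomerative clustering of participants on the two factor scores
tree = linkage(factor_scores, method="ward")
clusters = fcluster(tree, t=3, criterion="maxclust")

print("variance explained by first two PCs:", explained[:2].round(2))
print("cluster sizes:", np.bincount(clusters)[1:])
```

With a genuine two-factor structure, the first two components should absorb most of the variance, and the cluster solution separates participants who score high on one factor from those balanced across both — the qualitative pattern the abstract reports.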

Source journal
Cortex (Medicine – Behavioral Sciences)
CiteScore: 7.00
Self-citation rate: 5.60%
Articles per year: 250
Review time: 74 days
Journal description: CORTEX is an international journal devoted to the study of cognition and of the relationship between the nervous system and mental processes, particularly as these are reflected in the behaviour of patients with acquired brain lesions, normal volunteers, children with typical and atypical development, and in the activation of brain regions and systems as recorded by functional neuroimaging techniques. It was founded in 1964 by Ennio De Renzi.