Human Performance in Deepfake Detection: A Systematic Review

Klaire Somoray, Dan J. Miller, Mary Holmes
Human Behavior and Emerging Technologies, 2025(1). DOI: 10.1155/hbe2/1833228. Published 2025-08-03.
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/hbe2/1833228

Abstract
Deepfakes refer to a wide range of computer-generated synthetic media in which a person's appearance or likeness is altered to resemble that of another. This systematic review provides an overview of the existing research into people's ability to detect deepfakes. Five databases (IEEE, ProQuest, PubMed, Web of Science, and Scopus) were searched up to December 2023. Studies were included if they (1) were an original study; (2) were reported in English; (3) examined people's detection of deepfakes; (4) examined the influence of an intervention, strategy, or variable on deepfake detection; and (5) reported the data needed to evaluate detection accuracy. Forty independent studies from 30 unique records were included in the review. Results were narratively summarized, with key findings organized according to the review's research questions. Studies used different performance measures, making it difficult to compare results across the literature. Detection accuracy varies widely, with some studies showing humans outperforming AI models and others indicating the opposite. Detection performance is also influenced by person-level factors (e.g., cognitive ability, analytical thinking) and stimuli-level factors (e.g., quality of the deepfake, familiarity with the subject). Interventions to improve people's deepfake detection yielded mixed results. Humans and AI-based detection models attend to different cues when detecting deepfakes, suggesting a potential for human–AI collaboration. The findings highlight the complex interplay of factors influencing human deepfake detection and the need for further research to develop effective detection strategies.
Citations: 0
About the journal:
Human Behavior and Emerging Technologies is an interdisciplinary journal dedicated to publishing high-impact research that enhances understanding of the complex interactions between diverse human behavior and emerging digital technologies.