{"title":"A (Mid)journey Through Reality: Assessing Accuracy, Impostor Bias, and Automation Bias in Human Detection of AI-Generated Images","authors":"Mirko Casu, Luca Guarnera, Ignazio Zangara, Pasquale Caponnetto, Sebastiano Battiato","doi":"10.1155/hbe2/9977058","DOIUrl":null,"url":null,"abstract":"<p>While the challenge of distinguishing AI-generated from real images is widely acknowledged, the specific cognitive biases that systematically shape human judgment in this domain remain poorly understood. It is particularly unclear how a general awareness of AI capabilities fosters novel biases, like a pervasive skepticism (“impostor bias”), and how this interacts with established phenomena like “automation bias”. This study addresses this gap by providing the first quantitative analysis of how these two biases operate across five distinct experimental variants designed to test the context-dependency of human perception. Through a mixed-methods study with 746 participants, we demonstrate that human authentication accuracy hovered around chance levels (ranging from 47.0% to 55.5%). However, our analysis provides robust evidence for the systematic operation of cognitive biases. We validate the presence of “impostor bias” through a consistent pattern of higher doubt for AI-generated images and confirm “automation bias” through significant opinion changes following algorithmic suggestions. Our findings reveal that these biases are not uniform across populations: gender was a consistent predictor of automation bias, with males in all five variants showing a significantly stronger and more consistent tendency (Cohen’s <i>d</i> = 0.254–0.683) to be influenced by algorithmic suggestions. In contrast, age and academic background had minimal and highly localized effects. Furthermore, we identified a significant interaction between experimental stimuli and performance over time, isolating a pronounced fatigue effect to a single questionnaire variant where accuracy progressively declined (by approximately 1.7% per trial). By integrating human feedback with Grad-CAM visualizations, we confirm a divergence between human holistic evaluation and the localized focus of machine learning models. These findings carry direct implications for policy, as discussed within the context of the European AI Act, and inform the design of human–AI systems and media literacy programs aimed at mitigating these critical cognitive vulnerabilities.</p>","PeriodicalId":36408,"journal":{"name":"Human Behavior and Emerging Technologies","volume":"2025 1","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/hbe2/9977058","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Human Behavior and Emerging Technologies","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1155/hbe2/9977058","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0
Abstract
While the challenge of distinguishing AI-generated from real images is widely acknowledged, the specific cognitive biases that systematically shape human judgment in this domain remain poorly understood. It is particularly unclear how a general awareness of AI capabilities fosters novel biases, such as a pervasive skepticism (“impostor bias”), and how this interacts with established phenomena such as “automation bias”. This study addresses this gap by providing the first quantitative analysis of how these two biases operate across five distinct experimental variants designed to test the context-dependency of human perception. Through a mixed-methods study with 746 participants, we demonstrate that human authentication accuracy hovered around chance levels (ranging from 47.0% to 55.5%). However, our analysis provides robust evidence for the systematic operation of cognitive biases. We validate the presence of “impostor bias” through a consistent pattern of greater doubt toward AI-generated images and confirm “automation bias” through significant opinion changes following algorithmic suggestions. Our findings reveal that these biases are not uniform across populations: gender was a consistent predictor of automation bias, with males in all five variants showing a significantly stronger and more consistent tendency (Cohen’s d = 0.254–0.683) to be influenced by algorithmic suggestions. In contrast, age and academic background had minimal and highly localized effects. Furthermore, we identified a significant interaction between experimental stimuli and performance over time, isolating a pronounced fatigue effect to a single questionnaire variant in which accuracy progressively declined (by approximately 1.7% per trial). By integrating human feedback with Grad-CAM visualizations, we confirm a divergence between human holistic evaluation and the localized focus of machine learning models. These findings carry direct implications for policy, as discussed within the context of the European AI Act, and inform the design of human–AI systems and media literacy programs aimed at mitigating these critical cognitive vulnerabilities.
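For readers unfamiliar with the effect-size metric reported above, the following is a minimal sketch of how Cohen's d is computed with a pooled standard deviation. The `cohens_d` helper and the per-participant opinion-change scores are hypothetical illustrations only, not the paper's data or analysis code.

```python
import numpy as np

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Cohen's d: difference in group means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = group_a.var(ddof=1), group_b.var(ddof=1)
    # Pooled standard deviation, weighting each group's variance
    # by its degrees of freedom (n - 1).
    s_pooled = np.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (group_a.mean() - group_b.mean()) / s_pooled

# Hypothetical opinion-change rates (how often a participant switched their
# judgment to match the algorithmic suggestion), split by gender.
# Values are illustrative only and do not come from the study.
male_shift = np.array([0.42, 0.55, 0.61, 0.38, 0.50])
female_shift = np.array([0.30, 0.41, 0.28, 0.35, 0.33])
print(f"Cohen's d = {cohens_d(male_shift, female_shift):.3f}")
```

By the conventional benchmarks, the reported range (d = 0.254–0.683) spans small-to-medium effects, which is why the authors describe the gender difference in automation bias as consistent rather than dramatic.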
About the Journal:
Human Behavior and Emerging Technologies is an interdisciplinary journal dedicated to publishing high-impact research that enhances understanding of the complex interactions between diverse human behavior and emerging digital technologies.