Do truthfulness notifications influence perceptions of AI-generated political images? A cognitive investigation with EEG
Colin Conrad, Anika Nissen, Kya Masoumi, Mayank Ramchandani, Rafael Fecury Braga, Aaron J. Newman
Computers in Human Behavior: Artificial Humans, Volume 5, Article 100185 (published 2025-07-22)
DOI: 10.1016/j.chbah.2025.100185
URL: https://www.sciencedirect.com/science/article/pii/S2949882125000696
Abstract
Political misinformation is a growing problem for democracies, partly due to the rise of widely accessible artificial intelligence-generated content (AIGC). In response, social media platforms are increasingly considering explicit AI content labeling, though the evidence to support the effectiveness of this approach has been mixed. In this paper, we discuss two studies which shed light on antecedent cognitive processes that help explain why and how AIGC labeling impacts user evaluations in the specific context of AI-generated political images. In the first study, we conducted a neurophysiological experiment with 26 participants using EEG event-related potentials (ERPs) and self-report measures to gain deeper insights into the brain processes associated with the evaluations of artificially generated political images and AIGC labels. In the second study, we embedded some of the stimuli from the EEG study into replica YouTube recommendations and administered them to 276 participants online. The results from the two studies suggest that AI-generated political images are associated with heightened attentional and emotional processing. These responses are linked to perceptions of humanness and trustworthiness. Importantly, trustworthiness perceptions can be impacted by effective AIGC labels. We found effects traceable to the brain’s late-stage executive network activity, as reflected by patterns of the P300 and late positive potential (LPP) components. Our findings suggest that AIGC labeling can be an effective approach for addressing online misinformation when the design is carefully considered. Future research could extend these results by pairing more photorealistic stimuli with ecologically valid social-media tasks and multimodal observation techniques to refine label design and personalize interventions across demographic segments.