Do truthfulness notifications influence perceptions of AI-generated political images? A cognitive investigation with EEG

Colin Conrad, Anika Nissen, Kya Masoumi, Mayank Ramchandani, Rafael Fecury Braga, Aaron J. Newman
{"title":"真实性通知是否会影响人们对人工智能生成的政治图像的看法?脑电认知调查","authors":"Colin Conrad ,&nbsp;Anika Nissen ,&nbsp;Kya Masoumi ,&nbsp;Mayank Ramchandani ,&nbsp;Rafael Fecury Braga ,&nbsp;Aaron J. Newman","doi":"10.1016/j.chbah.2025.100185","DOIUrl":null,"url":null,"abstract":"<div><div>Political misinformation is a growing problem for democracies, partly due to the rise of widely accessible artificial intelligence-generated content (AIGC). In response, social media platforms are increasingly considering explicit AI content labeling, though the evidence to support the effectiveness of this approach has been mixed. In this paper, we discuss two studies which shed light on antecedent cognitive processes that help explain why and how AIGC labeling impacts user evaluations in the specific context of AI-generated political images. In the first study, we conducted a neurophysiological experiment with 26 participants using EEG event-related potentials (ERPs) and self-report measures to gain deeper insights into the brain processes associated with the evaluations of artificially generated political images and AIGC labels. In the second study, we embedded some of the stimuli from the EEG study into replica YouTube recommendations and administered them to 276 participants online. The results from the two studies suggest that AI-generated political images are associated with heightened attentional and emotional processing. These responses are linked to perceptions of humanness and trustworthiness. Importantly, trustworthiness perceptions can be impacted by effective AIGC labels. We found effects traceable to the brain’s late-stage executive network activity, as reflected by patterns of the P300 and late positive potential (LPP) components. Our findings suggest that AIGC labeling can be an effective approach for addressing online misinformation when the design is carefully considered. Future research could extend these results by pairing more photorealistic stimuli with ecologically valid social-media tasks and multimodal observation techniques to refine label design and personalize interventions across demographic segments.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"5 ","pages":"Article 100185"},"PeriodicalIF":0.0000,"publicationDate":"2025-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Do truthfulness notifications influence perceptions of AI-generated political images? A cognitive investigation with EEG\",\"authors\":\"Colin Conrad ,&nbsp;Anika Nissen ,&nbsp;Kya Masoumi ,&nbsp;Mayank Ramchandani ,&nbsp;Rafael Fecury Braga ,&nbsp;Aaron J. Newman\",\"doi\":\"10.1016/j.chbah.2025.100185\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Political misinformation is a growing problem for democracies, partly due to the rise of widely accessible artificial intelligence-generated content (AIGC). In response, social media platforms are increasingly considering explicit AI content labeling, though the evidence to support the effectiveness of this approach has been mixed. In this paper, we discuss two studies which shed light on antecedent cognitive processes that help explain why and how AIGC labeling impacts user evaluations in the specific context of AI-generated political images. 
In the first study, we conducted a neurophysiological experiment with 26 participants using EEG event-related potentials (ERPs) and self-report measures to gain deeper insights into the brain processes associated with the evaluations of artificially generated political images and AIGC labels. In the second study, we embedded some of the stimuli from the EEG study into replica YouTube recommendations and administered them to 276 participants online. The results from the two studies suggest that AI-generated political images are associated with heightened attentional and emotional processing. These responses are linked to perceptions of humanness and trustworthiness. Importantly, trustworthiness perceptions can be impacted by effective AIGC labels. We found effects traceable to the brain’s late-stage executive network activity, as reflected by patterns of the P300 and late positive potential (LPP) components. Our findings suggest that AIGC labeling can be an effective approach for addressing online misinformation when the design is carefully considered. Future research could extend these results by pairing more photorealistic stimuli with ecologically valid social-media tasks and multimodal observation techniques to refine label design and personalize interventions across demographic segments.</div></div>\",\"PeriodicalId\":100324,\"journal\":{\"name\":\"Computers in Human Behavior: Artificial Humans\",\"volume\":\"5 \",\"pages\":\"Article 100185\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-07-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers in Human Behavior: Artificial Humans\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2949882125000696\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882125000696","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Political misinformation is a growing problem for democracies, partly due to the rise of widely accessible artificial intelligence-generated content (AIGC). In response, social media platforms are increasingly considering explicit AI content labeling, though the evidence to support the effectiveness of this approach has been mixed. In this paper, we discuss two studies which shed light on antecedent cognitive processes that help explain why and how AIGC labeling impacts user evaluations in the specific context of AI-generated political images. In the first study, we conducted a neurophysiological experiment with 26 participants using EEG event-related potentials (ERPs) and self-report measures to gain deeper insights into the brain processes associated with the evaluations of artificially generated political images and AIGC labels. In the second study, we embedded some of the stimuli from the EEG study into replica YouTube recommendations and administered them to 276 participants online. The results from the two studies suggest that AI-generated political images are associated with heightened attentional and emotional processing. These responses are linked to perceptions of humanness and trustworthiness. Importantly, trustworthiness perceptions can be impacted by effective AIGC labels. We found effects traceable to the brain’s late-stage executive network activity, as reflected by patterns of the P300 and late positive potential (LPP) components. Our findings suggest that AIGC labeling can be an effective approach for addressing online misinformation when the design is carefully considered. Future research could extend these results by pairing more photorealistic stimuli with ecologically valid social-media tasks and multimodal observation techniques to refine label design and personalize interventions across demographic segments.
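The abstract attributes the labeling effects to late-stage ERP components (the P300 and the late positive potential, LPP). As a rough illustration of how such component amplitudes are typically quantified, the sketch below uses MNE-Python on simulated data; the electrode sites (Cz, Pz), time windows, sampling rate, and data are assumptions for illustration only and do not reflect the authors' actual preprocessing or analysis choices.

```python
# Minimal, illustrative sketch (NOT the authors' analysis pipeline): extracting mean ERP
# amplitudes in assumed P300 (300-500 ms) and LPP (500-800 ms) windows with MNE-Python.
# The channels, time windows, sampling rate, and simulated data are assumptions.
import numpy as np
import mne

rng = np.random.default_rng(42)
sfreq = 250.0  # sampling rate in Hz (assumed)
ch_names = ["Fz", "Cz", "Pz", "Oz"]
info = mne.create_info(ch_names, sfreq, ch_types="eeg")

# Simulate 40 epochs spanning -0.2 to 1.0 s, values in volts as MNE expects.
n_epochs = 40
n_times = int(1.2 * sfreq) + 1
data = rng.normal(0.0, 5e-6, size=(n_epochs, len(ch_names), n_times))
epochs = mne.EpochsArray(data, info, tmin=-0.2)

evoked = epochs.average()  # average across epochs to obtain the ERP

def mean_amplitude(evk, picks, tmin, tmax):
    """Mean amplitude (in microvolts) over the given channels and time window."""
    cropped = evk.copy().pick(picks).crop(tmin=tmin, tmax=tmax)
    return cropped.data.mean() * 1e6  # convert volts to microvolts

p300 = mean_amplitude(evoked, ["Cz", "Pz"], 0.30, 0.50)
lpp = mean_amplitude(evoked, ["Cz", "Pz"], 0.50, 0.80)
print(f"P300 window mean: {p300:.2f} uV | LPP window mean: {lpp:.2f} uV")
```

In a study like the one described, amplitudes of this kind would typically be computed per condition (for example, labeled versus unlabeled AI-generated images) and compared statistically across participants.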