How do people evaluate the accuracy of video posts when a warning indicates they were generated by AI?

Yuya Shibuya, Tomoka Nakazato, Soichiro Takagi

International Journal of Human-Computer Studies, Volume 199, Article 103485 (published 2025-03-13)
DOI: 10.1016/j.ijhcs.2025.103485
https://www.sciencedirect.com/science/article/pii/S1071581925000424
Impact Factor: 5.1 · JCR Q1 (Computer Science, Cybernetics) · CAS Region 2 (Computer Science)
Citations: 0

Abstract

Given rising concerns about misinformation powered by Generative Artificial Intelligence (GenAI), major platforms such as Google, Meta, and TikTok have introduced policies that warn users about AI-generated content. However, the impact of user interface designs that disclose AI-generated content on user perceptions is not yet fully understood. This study investigates how people assess the accuracy of video content when warned that it was created by GenAI. We conducted an online experiment in the U.S. (14,930 observations): half of the participants saw warning messages about AI before and after viewing mockups of true and false video posts on social media, while the other half viewed the same videos without the warning message. The results indicated that the warning message affected the ability to discern true from false content only among participants with a positive perception of AI. By contrast, participants with a negative perception of AI tended to rate all AI-made video posts, including those containing no false information, as less accurate once they knew that GenAI had created the videos. These results point to the limitations of relying on simple warnings alone to mitigate GenAI-based misinformation. Future research should continue to investigate interface designs that go beyond simple warnings.
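The outcome the abstract describes can be made concrete as a "discernment" measure: the mean accuracy rating given to true posts minus the mean given to false posts, compared between the warning and no-warning conditions within each AI-attitude group. The sketch below is purely illustrative, assuming a 1–7 accuracy-rating scale and hypothetical field names and numbers; it is not the paper's actual dataset or analysis.

```python
# Hypothetical sketch of the discernment measure implied by the abstract.
# All data values, field names, and the 1-7 rating scale are illustrative
# assumptions, not taken from the paper.
from statistics import mean

# Each observation: (ai_attitude, warned, post_is_true, accuracy_rating 1-7)
observations = [
    ("positive", True,  True,  6), ("positive", True,  False, 2),
    ("positive", False, True,  5), ("positive", False, False, 4),
    ("negative", True,  True,  3), ("negative", True,  False, 2),
    ("negative", False, True,  5), ("negative", False, False, 4),
]

def discernment(obs, attitude, warned):
    """Mean rating of true posts minus mean rating of false posts
    for one attitude group in one warning condition."""
    true_r  = [r for a, w, t, r in obs if a == attitude and w == warned and t]
    false_r = [r for a, w, t, r in obs if a == attitude and w == warned and not t]
    return mean(true_r) - mean(false_r)

# The warning's effect on discernment within each attitude group:
for attitude in ("positive", "negative"):
    effect = (discernment(observations, attitude, True)
              - discernment(observations, attitude, False))
    print(attitude, effect)
```

In this toy data the warning raises discernment only in the positive-attitude group, mirroring the abstract's reported pattern; the real study would estimate such an interaction with appropriate inferential statistics.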
Source journal

International Journal of Human-Computer Studies
CiteScore: 11.50
Self-citation rate: 5.60%
Articles per year: 108
Review time: 3 months
Aims and scope: The International Journal of Human-Computer Studies publishes original research over the whole spectrum of work relevant to the theory and practice of innovative interactive systems. The journal is inherently interdisciplinary, covering research in computing, artificial intelligence, psychology, linguistics, communication, design, engineering, and social organization, which is relevant to the design, analysis, evaluation and application of innovative interactive systems. Papers at the boundaries of these disciplines are especially welcome, as it is our view that interdisciplinary approaches are needed for producing theoretical insights in this complex area and for effective deployment of innovative technologies in concrete user communities. Research areas relevant to the journal include, but are not limited to:

• Innovative interaction techniques
• Multimodal interaction
• Speech interaction
• Graphic interaction
• Natural language interaction
• Interaction in mobile and embedded systems
• Interface design and evaluation methodologies
• Design and evaluation of innovative interactive systems
• User interface prototyping and management systems
• Ubiquitous computing
• Wearable computers
• Pervasive computing
• Affective computing
• Empirical studies of user behaviour
• Empirical studies of programming and software engineering
• Computer supported cooperative work
• Computer mediated communication
• Virtual reality
• Mixed and augmented reality
• Intelligent user interfaces
• Presence
...