Title: How do people evaluate the accuracy of video posts when a warning indicates they were generated by AI?
Authors: Yuya Shibuya, Tomoka Nakazato, Soichiro Takagi
DOI: 10.1016/j.ijhcs.2025.103485
Journal: International Journal of Human-Computer Studies, Volume 199, Article 103485
Publication date: 2025-03-13 (Journal Article)
Impact factor: 5.1; JCR: Q1 (Computer Science, Cybernetics)
URL: https://www.sciencedirect.com/science/article/pii/S1071581925000424
Citations: 0
Abstract
Given rising concerns about misinformation powered by Generative Artificial Intelligence (GenAI), major platforms such as Google, Meta, and TikTok have implemented policies to warn users about AI-generated content. However, the impact of user interface designs that disclose AI-made content on user perceptions is not yet fully understood. This study investigates how people assess the accuracy of video content when they are warned that it was created by GenAI. We conducted an online experiment in the U.S. (14,930 observations), showing half of the participants warning messages about AI before and after they viewed mockups of true and false video posts on social media, while the other half viewed the same videos without the warning messages. The results indicated that the warning message affected the ability to discern true from false content only among participants with a positive perception of AI. In contrast, those with a negative perception of AI tended to perceive all AI-made video posts, including those that contained no false information, as less accurate once they knew that GenAI had created the videos. These results indicate the limitations of relying on simple warnings alone to mitigate GenAI-based misinformation. Future research should investigate interface designs that go beyond simple warnings.
Journal introduction:
The International Journal of Human-Computer Studies publishes original research over the whole spectrum of work relevant to the theory and practice of innovative interactive systems. The journal is inherently interdisciplinary, covering research in computing, artificial intelligence, psychology, linguistics, communication, design, engineering, and social organization, which is relevant to the design, analysis, evaluation and application of innovative interactive systems. Papers at the boundaries of these disciplines are especially welcome, as it is our view that interdisciplinary approaches are needed for producing theoretical insights in this complex area and for effective deployment of innovative technologies in concrete user communities.
Research areas relevant to the journal include, but are not limited to:
• Innovative interaction techniques
• Multimodal interaction
• Speech interaction
• Graphic interaction
• Natural language interaction
• Interaction in mobile and embedded systems
• Interface design and evaluation methodologies
• Design and evaluation of innovative interactive systems
• User interface prototyping and management systems
• Ubiquitous computing
• Wearable computers
• Pervasive computing
• Affective computing
• Empirical studies of user behaviour
• Empirical studies of programming and software engineering
• Computer supported cooperative work
• Computer mediated communication
• Virtual reality
• Mixed and augmented reality
• Intelligent user interfaces
• Presence
...