Diverse misinformation: impacts of human biases on detection of deepfakes on networks

Juniper Lovato, Jonathan St-Onge, Randall Harp, Gabriela Salazar Lopez, Sean P. Rogers, Ijaz Ul Haq, Laurent Hébert-Dufresne, Jeremiah Onaolapo
{"title":"Diverse misinformation: impacts of human biases on detection of deepfakes on networks","authors":"Juniper Lovato, Jonathan St-Onge, Randall Harp, Gabriela Salazar Lopez, Sean P. Rogers, Ijaz Ul Haq, Laurent Hébert-Dufresne, Jeremiah Onaolapo","doi":"10.1038/s44260-024-00006-y","DOIUrl":null,"url":null,"abstract":"Social media platforms often assume that users can self-correct against misinformation. However, social media users are not equally susceptible to all misinformation as their biases influence what types of misinformation might thrive and who might be at risk. We call “diverse misinformation” the complex relationships between human biases and demographics represented in misinformation. To investigate how users’ biases impact their susceptibility and their ability to correct each other, we analyze classification of deepfakes as a type of diverse misinformation. We chose deepfakes as a case study for three reasons: (1) their classification as misinformation is more objective; (2) we can control the demographics of the personas presented; (3) deepfakes are a real-world concern with associated harms that must be better understood. Our paper presents an observational survey (N = 2016) where participants are exposed to videos and asked questions about their attributes, not knowing some might be deepfakes. Our analysis investigates the extent to which different users are duped and which perceived demographics of deepfake personas tend to mislead. We find that accuracy varies by demographics, and participants are generally better at classifying videos that match them. We extrapolate from these results to understand the potential population-level impacts of these biases using a mathematical model of the interplay between diverse misinformation and crowd correction. Our model suggests that diverse contacts might provide “herd correction” where friends can protect each other. 
Altogether, human biases and the attributes of misinformation matter greatly, but having a diverse social group may help reduce susceptibility to misinformation.","PeriodicalId":501707,"journal":{"name":"npj Complexity","volume":" ","pages":"1-13"},"PeriodicalIF":0.0000,"publicationDate":"2024-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.nature.com/articles/s44260-024-00006-y.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"npj Complexity","FirstCategoryId":"1085","ListUrlMain":"https://www.nature.com/articles/s44260-024-00006-y","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Social media platforms often assume that users can self-correct against misinformation. However, social media users are not equally susceptible to all misinformation as their biases influence what types of misinformation might thrive and who might be at risk. We call “diverse misinformation” the complex relationships between human biases and demographics represented in misinformation. To investigate how users’ biases impact their susceptibility and their ability to correct each other, we analyze classification of deepfakes as a type of diverse misinformation. We chose deepfakes as a case study for three reasons: (1) their classification as misinformation is more objective; (2) we can control the demographics of the personas presented; (3) deepfakes are a real-world concern with associated harms that must be better understood. Our paper presents an observational survey (N = 2016) where participants are exposed to videos and asked questions about their attributes, not knowing some might be deepfakes. Our analysis investigates the extent to which different users are duped and which perceived demographics of deepfake personas tend to mislead. We find that accuracy varies by demographics, and participants are generally better at classifying videos that match them. We extrapolate from these results to understand the potential population-level impacts of these biases using a mathematical model of the interplay between diverse misinformation and crowd correction. Our model suggests that diverse contacts might provide “herd correction” where friends can protect each other. Altogether, human biases and the attributes of misinformation matter greatly, but having a diverse social group may help reduce susceptibility to misinformation.
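The abstract describes, but does not reproduce, the paper's mathematical model of crowd correction. To make the "herd correction" intuition concrete, here is a toy probability sketch (not the authors' model): viewers are assumed to classify deepfakes of personas matching their own demographic more accurately, and a friend group "corrects" a deepfake if at least one member flags it. The accuracy values and demographic labels are illustrative assumptions, not survey results.

```python
# Toy sketch of "herd correction" (illustrative assumptions, not the
# paper's model or its measured accuracies).

P_MATCH = 0.60     # assumed accuracy when viewer demographic matches the persona
P_MISMATCH = 0.45  # assumed accuracy when it does not

def group_detects(viewers, persona):
    """P(at least one viewer in the group correctly flags a deepfake
    of `persona`), assuming independent classifications."""
    p_all_miss = 1.0
    for v in viewers:
        p_hit = P_MATCH if v == persona else P_MISMATCH
        p_all_miss *= (1.0 - p_hit)
    return 1.0 - p_all_miss

homogeneous = ["A", "A", "A", "A"]  # friend group of one demographic
diverse = ["A", "B", "C", "D"]      # one member of each demographic

# A deepfake persona of demographic "B": only the diverse group
# contains a matching (more accurate) viewer.
print(round(group_detects(homogeneous, "B"), 3))  # ≈0.908
print(round(group_detects(diverse, "B"), 3))      # ≈0.933

# Averaged over personas A-D, the diverse group still detects more
# deepfakes, illustrating how diverse contacts can protect each other.
personas = ["A", "B", "C", "D"]
avg_homog = sum(group_detects(homogeneous, p) for p in personas) / 4
avg_diverse = sum(group_detects(diverse, p) for p in personas) / 4
print(avg_diverse > avg_homog)  # True
```

Under these assumptions the diverse group outperforms the homogeneous one on average because every persona type has at least one well-matched detector among the friends, which is the qualitative effect the abstract calls "herd correction."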

