“The Word Real Is No Longer Real”: Deepfakes, Gender, and the Challenges of AI-Altered Video

Travis L. Wagner, Ashley Blewer
DOI: 10.1515/opis-2019-0003
Journal: Open Information Science 10(1), pp. 32–46, 2019 (JCR Q2, Social Sciences)
Citations: 37

Abstract

It is near-impossible for casual consumers of images to authenticate digitally altered images without a keen understanding of how to “read” the digital image. As Photoshop did for photographic alteration, so too have advances in artificial intelligence and computer graphics made seamless video alteration appear real to the untrained eye. The colloquialism for these videos is “deepfakes”: a portmanteau of deep-learning AI and faked imagery. The implications of these videos serving as authentic representations matter, especially in rhetorics around “fake news.” Yet this alteration software, deployable both through high-end editing software and free mobile apps, remains critically underexamined. One troubling example of deepfakes is the superimposition of women’s faces onto pornographic videos, a reification of women’s bodies as things to be visually consumed, circumventing consent. This use is confounding considering that the very bodies used to perfect deepfakes were men’s. This paper explores how the emergence and distribution of deepfakes continue to enforce gendered disparities within visual information. The paper, however, rejects the inevitability of deepfakes, arguing that feminist-oriented approaches to building artificial intelligence and a critical approach to visual information literacy can stifle the distribution of violently sexist deepfakes.