Sexualized Deepfake Abuse: Perpetrator and Victim Perspectives on the Motivations and Forms of Non-Consensually Created and Shared Sexualized Deepfake Imagery.
Asher Flynn, Anastasia Powell, Asia Eaton, Adrian J. Scott
DOI: 10.1177/08862605251368834 (https://doi.org/10.1177/08862605251368834)
Journal: Journal of Interpersonal Violence (JCR Q1, Criminology & Penology; Impact Factor 2.3)
Publication date: 2025-09-09
Citations: 0
Abstract
Advances in digital technologies provide new opportunities for harm, including sexualized deepfake abuse: the non-consensual creation, distribution, or threat to create or distribute an image or video of another person that has been altered in a nude or sexual way. Since 2017, there has been a proliferation of shared open-source technologies that facilitate deepfake creation and dissemination, and a corresponding increase in cases of sexualized deepfake abuse. There is a substantial risk that the increased accessibility of easy-to-use tools, the normalization of non-consensually sexualizing others, and the minimization of the harms experienced by those whose images are created and/or shared may undermine prevention and response efforts. This article reports findings from 25 qualitative interviews conducted with perpetrators (n = 10) and victims (n = 15) of sexualized deepfake abuse in Australia. It provides insights into sexualized deepfake abuse and patterns in perpetration and motivations, and explores theoretical explanations that may shed light on how perpetrators justify and minimize their behavior. Ultimately, the study finds some similarities with other forms of technology-facilitated sexual violence, but identifies a need for responses that recognize the accessibility and ease with which deepfakes can be created, and that capture the diversity of experiences, motivations, and consequences. The article argues that responses should expand beyond criminalization to include cross-national collaborations to regulate deepfake tool availability, searches, and advertisements.
Journal introduction:
The Journal of Interpersonal Violence is devoted to the study and treatment of victims and perpetrators of interpersonal violence. It provides a forum for discussion of the concerns and activities of professionals and researchers working in domestic violence, child sexual abuse, rape and sexual assault, physical child abuse, and violent crime. With its dual focus on victims and victimizers, the journal publishes material that addresses the causes, effects, treatment, and prevention of all types of violence. JIV publishes only reports on individual studies in which the scientific method is applied to the study of some aspect of interpersonal violence. Research may use qualitative or quantitative methods. JIV does not publish reviews of research, individual case studies, or conceptual analyses of aspects of interpersonal violence. Outcome data for program or intervention evaluations must include a comparison or control group.