Proceedings of the 1st Workshop on Security Implications of Deepfakes and Cheapfakes: Latest Articles

A Face Pre-Processing Approach to Evade Deepfake Detector
Taejune Kim, Jeongho Kim, J. Kim, Simon S. Woo
{"title":"A Face Pre-Processing Approach to Evade Deepfake Detector","authors":"Taejune Kim, Jeongho Kim, J. Kim, Simon S. Woo","doi":"10.1145/3494109.3527190","DOIUrl":"https://doi.org/10.1145/3494109.3527190","url":null,"abstract":"Recently, various image synthesis technologies have increased the prevalence of impersonation attacks. With the development of such technologies, damages to people such as defamation or fake news have also increased. Deepfakes have already evolved to the point, where people cannot easily distinguish fake from real. This leads to an urgent need for developing detection methods. Currently, in order to detect deepfakes, many deepfake datasets are widely used in deep neural networks. And several methods have been proposed and demonstrated to be effective in detecting deepfakes. In this work, we present pre-processing techniques such as face restoration, edge smoothing, face beautification to mitigate the artifacts of deepfakes and makes them appear more natural to humans, while lowering the deepfake detection performance. Through extensive experiments, our method can significantly lower the performance of the state-of-the-art deepfake detectors and expose the vulnerability of deployed detectors.","PeriodicalId":140739,"journal":{"name":"Proceedings of the 1st Workshop on Security Implications of Deepfakes and Cheapfakes","volume":"211 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115124103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
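As a concrete illustration of one of the listed pre-processing steps, here is a minimal edge-smoothing sketch: it composites a manipulated face back into the frame through a Gaussian-feathered mask so the hard blending boundary, a common detector cue, is softened. This is an assumed implementation for illustration, not the authors' pipeline; `smooth_face_boundary` and its `feather_px` parameter are hypothetical names.

```python
# Illustrative edge-smoothing sketch, assuming OpenCV and NumPy; not the
# authors' implementation.
import cv2
import numpy as np

def smooth_face_boundary(frame, fake_face, face_mask, feather_px=15):
    """Composite `fake_face` into `frame` through a feathered `face_mask`.

    frame, fake_face: HxWx3 uint8 images of equal size.
    face_mask: HxW uint8, 255 inside the face region, 0 outside.
    """
    k = 2 * feather_px + 1  # Gaussian kernel size must be odd
    soft = cv2.GaussianBlur(face_mask.astype(np.float32) / 255.0, (k, k), 0)
    soft = soft[..., None]  # HxWx1, broadcasts over the color channels
    # Alpha-blend: a sharp compositing seam is a common detector cue.
    out = soft * fake_face + (1.0 - soft) * frame
    return out.clip(0, 255).astype(np.uint8)
```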
Discussion Paper: The Integrity of Medical AI
Yisroel Mirsky
{"title":"Discussion Paper: The Integrity of Medical AI","authors":"Yisroel Mirsky","doi":"10.1145/3494109.3527191","DOIUrl":"https://doi.org/10.1145/3494109.3527191","url":null,"abstract":"Deep learning has proven itself to be an incredible asset to the medical community. However, with offensive AI, the technology can be turned against medical community; adversarial samples can be used to cause misdiagnosis and medical deepfakes can be used fool both radiologists and machines alike. In this short discussion paper, we talk about the issue of offensive AI and from the perspective of healthcare. We discuss how defense researchers in this domain have responded to the threat and their the current challenges. We conclude by arguing that conventional security mechanisms are a better approach towards mitigating these threats over algorithm based solutions.","PeriodicalId":140739,"journal":{"name":"Proceedings of the 1st Workshop on Security Implications of Deepfakes and Cheapfakes","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115774984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
Zoom-DF: A Dataset for Video Conferencing Deepfake
Geon-Woo Park, Eun-Ju Park, Simon S. Woo
{"title":"Zoom-DF: A Dataset for Video Conferencing Deepfake","authors":"Geon-Woo Park, Eun-Ju Park, Simon S. Woo","doi":"10.1145/3494109.3527195","DOIUrl":"https://doi.org/10.1145/3494109.3527195","url":null,"abstract":"With the rapid growth of deep learning methods, AI technologies for generating deepfake videos also have been significantly advanced. Nowadays, the manipulated videos such as deepfakes are so sophisticated that one cannot easily differentiate between real and fake, and one can create such videos with little effort. However, such technologies can be likely to be abused by people with malicious intents. To address this issue, approaches and efforts to detect deepfakes have been researched significantly. However, the performances of the detectors in general depends on the quantity and quality of the training data. In this paper, we introduce a new deepfake dataset, Zoom-DF, which can be injected during the remote meeting and video conferencing, to create a sequence of fake participant images. While most deepfake datasets focus on the face area, our dataset primarily targets for the remote meeting, and manipulates movements of the participants. We evaluate existing deepfake detectors on our new Zoom-DF dataset and present the performance results.","PeriodicalId":140739,"journal":{"name":"Proceedings of the 1st Workshop on Security Implications of Deepfakes and Cheapfakes","volume":"189 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114525209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
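The evaluation step in the abstract can be pictured with a generic frame-level scoring loop. The sketch below assumes a `detector` callable mapping a BGR frame to a fake probability; the Zoom-DF layout and the paper's actual protocol are not reproduced here.

```python
# Generic video-scoring sketch, not the paper's protocol: average a
# frame-level detector's fake probability over sampled frames.
import cv2
import numpy as np

def video_fake_score(detector, video_path, max_frames=32):
    cap = cv2.VideoCapture(video_path)
    scores = []
    while len(scores) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break  # end of video
        scores.append(detector(frame))  # detector: frame -> P(fake), assumed
    cap.release()
    return float(np.mean(scores)) if scores else 0.0

# A video is flagged fake when its mean score crosses a chosen threshold,
# e.g. video_fake_score(detector, "meeting_clip.mp4") > 0.5.
```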
Extracting a Minimal Trigger for an Efficient Backdoor Poisoning Attack Using the Activation Values of a Deep Neural Network
Hyunsik Na, D. Choi
{"title":"Extracting a Minimal Trigger for an Efficient Backdoor Poisoning Attack Using the Activation Values of a Deep Neural Network","authors":"Hyunsik Na, D. Choi","doi":"10.1145/3494109.3527192","DOIUrl":"https://doi.org/10.1145/3494109.3527192","url":null,"abstract":"A backdoor poisoning attack is an approach that threatens the security of artificial intelligence by injecting a predefined backdoor trigger into a training dataset to induce misbehavior in the classification model. In this paper, we discuss an approach to extract the critical regions of the backdoor trigger for a more efficient backdoor poisoning attack. Through this approach, an attacker can attempt a more invisible and powerful attack and minimize image falsification. We first describe how to detect neurons, which are more affected by the backdoor trigger in the classification model. Then, we discuss how to iteratively update the noises using the activation values of the neurons. Here, the difference between the activation values of the critical neurons affected by the backdoor trigger and those by the fully updated noises is minimized and the noises may be used as a minimal trigger. In the future, we will draw new insights for the backdoor poisoning attack using our proposed approach.","PeriodicalId":140739,"journal":{"name":"Proceedings of the 1st Workshop on Security Implications of Deepfakes and Cheapfakes","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134399340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
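The two-step procedure in the abstract (find trigger-sensitive neurons, then fit a noise patch to their activations) can be sketched in PyTorch as below. The layer choice, the top-k selection, and the squared-error objective are illustrative assumptions, not the authors' exact method.

```python
# Hedged PyTorch sketch: rank one layer's neurons by how much the trigger
# shifts their activations, then optimize a noise patch so those neurons
# respond as they do to the full trigger.
import torch

def critical_neuron_noise(model, layer, clean_x, triggered_x, k=64, steps=200):
    for p in model.parameters():
        p.requires_grad_(False)  # only the noise is optimized
    acts = {}
    hook = layer.register_forward_hook(lambda m, i, o: acts.__setitem__("a", o))

    def layer_act(x):
        model(x)
        return acts["a"].flatten(1)  # (batch, neurons)

    with torch.no_grad():
        diff = (layer_act(triggered_x) - layer_act(clean_x)).abs().mean(0)
        top = diff.topk(k).indices               # trigger-sensitive neurons
        target = layer_act(triggered_x)[:, top]  # their triggered activations

    noise = torch.zeros_like(clean_x, requires_grad=True)
    opt = torch.optim.Adam([noise], lr=0.01)
    for _ in range(steps):
        opt.zero_grad()
        # Match the critical neurons' response to (clean + noise) to `target`.
        loss = ((layer_act(clean_x + noise)[:, top] - target) ** 2).mean()
        loss.backward()
        opt.step()
    hook.remove()
    return noise.detach()  # candidate minimal trigger
```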
Deepfake Detection: State-of-the-art and Future Directions
L. Verdoliva
{"title":"Deepfake Detection: State-of-the-art and Future Directions","authors":"L. Verdoliva","doi":"10.1145/3494109.3527197","DOIUrl":"https://doi.org/10.1145/3494109.3527197","url":null,"abstract":"In recent years there have been astonishing advances in AI-based synthetic media generation. Thanks to deep learning-based approaches it is now possible to generate data with a high level of realism. While this opens up new opportunities for the entertainment industry, it simultaneously undermines the reliability of multimedia content and supports the spread of false or manipulated information on the Internet. This is especially true for human faces, allowing to easily create new identities or change only some specific attributes of a real face in a video, so-called deepfakes. In this context, it is important to develop automated tools to detect manipulated media in a reliable and timely manner. This talk will describe the most reliable deep learning-based approaches for detecting deepfakes, with a focus on those that enable domain generalization [1]. The results will be presented on challenging datasets [2,3] with reference to realistic scenarios, such as the dissemination of manipulated images and videos on social networks. Finally, new possible directions will be outlined.","PeriodicalId":140739,"journal":{"name":"Proceedings of the 1st Workshop on Security Implications of Deepfakes and Cheapfakes","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127677108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Advanced Machine Learning Techniques to Detect Various Types of Deepfakes
Simon S. Woo
{"title":"Advanced Machine Learning Techniques to Detect Various Types of Deepfakes","authors":"Simon S. Woo","doi":"10.1145/3494109.3527196","DOIUrl":"https://doi.org/10.1145/3494109.3527196","url":null,"abstract":"Despite significant advancements of deep learning-based forgery detectors for distinguishing manipulated deepfake images, most detection approaches suffer from moderate to significant performance degradation with low-quality compressed deepfake images. Also, it is challenging to detect different types of deepfake images simultaneously. In this work, we apply frequency domain learning and optimal transport theory in knowledge distillation (KD) to specifically improve the detection of low-quality compressed deepfake images. We explore transfer learning capability in KD to enable a student network to learn discriminative features from low-quality images effectively. In addition, we also discuss the continual learning and domain adaptation methods to detect various types of deepfakes simultaneously.","PeriodicalId":140739,"journal":{"name":"Proceedings of the 1st Workshop on Security Implications of Deepfakes and Cheapfakes","volume":"123 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124010705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
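For reference, the knowledge-distillation component can be pictured with the standard soft-label KD loss, where a teacher scores high-quality frames and the student scores their compressed counterparts. This is the common Hinton-style formulation, a stand-in only; the paper's frequency-domain and optimal-transport terms are not reproduced, and `T` and `alpha` are assumed hyperparameters.

```python
# Standard soft-label knowledge-distillation loss (Hinton et al.), used here
# as an illustrative stand-in for the paper's distillation component.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients match the hard-label term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Per batch: the teacher sees high-quality frames, the student the
# compressed ones:
# loss = kd_loss(student(lq_images), teacher(hq_images).detach(), labels)
```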
Evaluating Robustness of Sequence-based Deepfake Detector Models by Adversarial Perturbation
S. A. Shahriyar, M. Wright
{"title":"Evaluating Robustness of Sequence-based Deepfake Detector Models by Adversarial Perturbation","authors":"S. A. Shahriyar, M. Wright","doi":"10.1145/3494109.3527194","DOIUrl":"https://doi.org/10.1145/3494109.3527194","url":null,"abstract":"Deepfake videos are getting better in quality and can be used for dangerous disinformation campaigns. The pressing need to detect these videos has motivated researchers to develop different types of detection models. Among them, the models that utilize temporal information (i.e., sequence-based models) are more effective at detection than the ones that only detect intra-frame discrepancies. Recent work has shown that the latter detection models can be fooled with adversarial examples, leveraging the rich literature on crafting adversarial (still) images. It is less clear, however, how well these attacks will work on sequence-based models that operate on information taken over multiple frames. In this paper, we explore the effectiveness of the Fast Gradient Sign Method (FGSM) and the Carlini-Wagner L2-norm attack to fool sequence-based deepfake detector models in both the white-box and black-box settings. The experimental results show that the attacks are effective with a maximum success rate of 99.72% and 67.14% in the white-box and black-box attack scenarios, respectively. This highlights the importance of developing more robust sequence-based deepfake detectors and opens up directions for future research.","PeriodicalId":140739,"journal":{"name":"Proceedings of the 1st Workshop on Security Implications of Deepfakes and Cheapfakes","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115407807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
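FGSM, the first of the two attacks studied, has a standard one-step formulation: x' = clip(x + eps * sign(grad_x L(f(x), y))). A minimal per-clip sketch follows, assuming a sequence-based detector that takes a (1, T, C, H, W) tensor of frames in the [0, 1] pixel range; the shape and label convention are assumptions, not the paper's exact setup.

```python
# Minimal single-step FGSM sketch against a sequence-based detector.
import torch
import torch.nn.functional as F

def fgsm_clip(detector, clip, label, eps=2 / 255):
    """clip: (1, T, C, H, W) frame sequence; label: (1,) long, 1 = fake (assumed)."""
    clip = clip.clone().requires_grad_(True)
    loss = F.cross_entropy(detector(clip), label)
    loss.backward()
    # One signed-gradient step, then project back to the valid pixel range.
    return (clip + eps * clip.grad.sign()).clamp(0.0, 1.0).detach()
```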
Negative Adversarial Example Generation Against Naver's Celebrity Recognition API
Keeyoung Kim, Simon S. Woo
{"title":"Negative Adversarial Example Generation Against Naver's Celebrity Recognition API","authors":"Keeyoung Kim, Simon S. Woo","doi":"10.1145/3494109.3527193","DOIUrl":"https://doi.org/10.1145/3494109.3527193","url":null,"abstract":"Deep Neural Networks (DNNs) are very effective in image classification, detection and recognition due to a large number of available data. However, they can be easily fooled by adversarial examples and produce incorrect results, which can cause problems for many applications. In this work, we focus on generating adversarial images and exploring and assessing possible negative impacts caused by these examples. As a case study, we create adversarial images against Naver's celebrity recognition (NCR) API, as Naver is the leading machine learning APIs service provider in South Korea. We demonstrate that it is extremely easy to fool the online DNN-based APIs using adversarial examples and discuss possible negative impacts resulting from these adversarial examples.","PeriodicalId":140739,"journal":{"name":"Proceedings of the 1st Workshop on Security Implications of Deepfakes and Cheapfakes","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131110849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
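Attacks on a remote service of this kind are often run as transfer attacks: craft the perturbation on a local surrogate model, then query the remote API. The sketch below assumes a hypothetical `query_api` callable that returns the service's predicted label; the real NCR API and its request format are not reproduced here.

```python
# Hedged transfer-attack outline: signed-gradient direction from a local
# surrogate, grown until the (hypothetical) remote service mislabels the image.
import torch
import torch.nn.functional as F

def smallest_fooling_eps(surrogate, image, label, query_api,
                         eps_grid=(1, 2, 4, 8)):
    """Grow the perturbation until the remote service mislabels the image."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(surrogate(image), label)
    loss.backward()
    direction = image.grad.sign()
    for eps in eps_grid:
        adv = (image + (eps / 255.0) * direction).clamp(0.0, 1.0).detach()
        if query_api(adv) != int(label):  # hypothetical black-box query
            return eps, adv
    return None, None  # service was robust across the tested budgets
```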
Deepfake Detection for Fake Images with Facemasks
Sangjun Lee, D. Ko, Jin-Yeol Park, Saebyeol Shin, Do-Soon Hong, Simon S. Woo
{"title":"Deepfake Detection for Fake Images with Facemasks","authors":"Sangjun Lee, D. Ko, Jin-Yeol Park, Saebyeol Shin, Do-Soon Hong, Simon S. Woo","doi":"10.1145/3494109.3527189","DOIUrl":"https://doi.org/10.1145/3494109.3527189","url":null,"abstract":"Hyper-realistic face image generation and manipulation have given rise to numerous unethical social issues, e.g., invasion of privacy, threat of security, and malicious political maneuvering, which resulted in the development of recent deepfake detection methods with the rising demands of deepfake forensics. Proposed deepfake detection methods to date have shown remarkable detection performance and robustness. However, none of the suggested deepfake detection methods assessed the performance of deepfakes with the facemask during the pandemic crisis after the outbreak of the COVID-19. In this paper, we thoroughly evaluate the performance of state-of-the-art deepfake detection models on the deepfakes with the facemask. Our result shows that fake facial images with facemask can deceive well-known deepfake detection models, thereby evading the real-world security systems.","PeriodicalId":140739,"journal":{"name":"Proceedings of the 1st Workshop on Security Implications of Deepfakes and Cheapfakes","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126640994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
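A crude version of this test can be scripted by occluding the lower face region of each crop before re-scoring the detector. The patch below is a stand-in for a proper facemask composite, which the paper uses; it only illustrates the evaluation idea.

```python
# Crude stand-in for a facemask composite: occlude the lower half of an
# aligned face crop, then compare the detector's score before and after.
import numpy as np

def occlude_lower_face(face_crop, color=(200, 200, 200)):
    """face_crop: HxWx3 uint8 aligned face; covers the mouth/nose region."""
    out = face_crop.copy()
    out[out.shape[0] // 2:, :, :] = color  # flat patch over the lower half
    return out
```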