Explainable Deep-Fake Detection Using Visual Interpretability Methods

Badhrinarayan Malolan, Ankit Parekh, F. Kazi
{"title":"使用视觉可解释性方法的可解释深度假检测","authors":"Badhrinarayan Malolan, Ankit Parekh, F. Kazi","doi":"10.1109/ICICT50521.2020.00051","DOIUrl":null,"url":null,"abstract":"Deep-Fakes have sparked concerns throughout the world because of their potentially explosive consequences. A dystopian future where all forms of digital media are potentially compromised and public trust in Government is scarce doesn't seem far off. If not dealt with the requisite seriousness, the situation could easily spiral out of control. Current methods of Deep-Fake detection aim to accurately solve the issue at hand but may fail to convince a lay-person of its reliability and thus, lack the trust of the general public. Since the fundamental issue revolves around earning the trust of human agents, the construction of interpretable and also easily explainable models is imperative. We propose a framework to detect these Deep-Fake videos using a Deep Learning Approach: we have trained a Convolutional Neural Network architecture on a database of extracted faces from FaceForensics' DeepFakeDetection Dataset. Furthermore, we have tested the model on various Explainable AI techniques such as LRP and LIME to provide crisp visualizations of the salient regions of the image focused on by the model. The prospective and elusive goal is to localize the facial manipulations caused by Faceswaps. We hope to use this approach to build trust between AI and Human agents and to demonstrate the applicability of XAI in various real-life scenarios.","PeriodicalId":445000,"journal":{"name":"2020 3rd International Conference on Information and Computer Technologies (ICICT)","volume":"199 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"27","resultStr":"{\"title\":\"Explainable Deep-Fake Detection Using Visual Interpretability Methods\",\"authors\":\"Badhrinarayan Malolan, Ankit Parekh, F. Kazi\",\"doi\":\"10.1109/ICICT50521.2020.00051\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep-Fakes have sparked concerns throughout the world because of their potentially explosive consequences. A dystopian future where all forms of digital media are potentially compromised and public trust in Government is scarce doesn't seem far off. If not dealt with the requisite seriousness, the situation could easily spiral out of control. Current methods of Deep-Fake detection aim to accurately solve the issue at hand but may fail to convince a lay-person of its reliability and thus, lack the trust of the general public. Since the fundamental issue revolves around earning the trust of human agents, the construction of interpretable and also easily explainable models is imperative. We propose a framework to detect these Deep-Fake videos using a Deep Learning Approach: we have trained a Convolutional Neural Network architecture on a database of extracted faces from FaceForensics' DeepFakeDetection Dataset. Furthermore, we have tested the model on various Explainable AI techniques such as LRP and LIME to provide crisp visualizations of the salient regions of the image focused on by the model. The prospective and elusive goal is to localize the facial manipulations caused by Faceswaps. 
We hope to use this approach to build trust between AI and Human agents and to demonstrate the applicability of XAI in various real-life scenarios.\",\"PeriodicalId\":445000,\"journal\":{\"name\":\"2020 3rd International Conference on Information and Computer Technologies (ICICT)\",\"volume\":\"199 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"27\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 3rd International Conference on Information and Computer Technologies (ICICT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICICT50521.2020.00051\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 3rd International Conference on Information and Computer Technologies (ICICT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICICT50521.2020.00051","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 27

Abstract

Deep-Fakes have sparked concern throughout the world because of their potentially explosive consequences. A dystopian future in which all forms of digital media are potentially compromised and public trust in government is scarce does not seem far off. If not treated with the requisite seriousness, the situation could easily spiral out of control. Current methods of Deep-Fake detection aim to solve the problem accurately but may fail to convince a lay person of their reliability, and thus lack the trust of the general public. Since the fundamental issue revolves around earning the trust of human agents, building models that are interpretable and easily explainable is imperative. We propose a framework to detect these Deep-Fake videos using a deep learning approach: we have trained a Convolutional Neural Network architecture on a database of faces extracted from FaceForensics' DeepFakeDetection Dataset. Furthermore, we have applied explainable AI techniques such as LRP and LIME to the model to provide crisp visualizations of the salient image regions the model focuses on. The prospective and elusive goal is to localize the facial manipulations caused by face swaps. We hope to use this approach to build trust between AI and human agents and to demonstrate the applicability of XAI in various real-life scenarios.
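The abstract does not include code, so the following is only a minimal sketch of the kind of pipeline it describes: a face-level real/fake CNN explained with LIME's public image-explainer API. The checkpoint name deepfake_cnn.h5, the 128x128 input size, the [0, 1] preprocessing, and the file face_crop.png are all assumptions for illustration, not the authors' released implementation.

```python
import numpy as np
from PIL import Image
from tensorflow import keras
from lime import lime_image
from skimage.segmentation import mark_boundaries
import matplotlib.pyplot as plt

# Hypothetical trained checkpoint: a CNN mapping 128x128 RGB face crops
# to two probabilities [P(real), P(fake)]. Not the authors' released model.
model = keras.models.load_model("deepfake_cnn.h5")

def predict_fn(images):
    # LIME passes a batch of perturbed copies of the face (values 0-255);
    # rescale to the [0, 1] range the model is assumed to expect.
    return model.predict(np.asarray(images, dtype=np.float32) / 255.0)

# One face crop extracted from a video frame, resized to the assumed input size.
face = np.array(Image.open("face_crop.png").convert("RGB").resize((128, 128)))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    face.astype("double"), predict_fn,
    top_labels=2, hide_color=0, num_samples=1000)

# Keep the superpixels that most strongly support the predicted class.
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False)

plt.imshow(mark_boundaries(img / 255.0, mask))
plt.title(f"LIME evidence for class {label}")
plt.axis("off")
plt.show()
```

The highlighted superpixels approximate, rather than guarantee, localization of the swapped facial region, which matches the abstract's framing of localization as a "prospective and elusive" goal. An analogous LRP heat map would require a dedicated relevance-propagation toolkit (for example, iNNvestigate for Keras models) and is not sketched here.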