{"title":"ExplaNET:利用可解释原型检测深度伪造的描述性框架","authors":"Fatima Khalid;Ali Javed;Khalid Mahmood Malik;Aun Irtaza","doi":"10.1109/TBIOM.2024.3407650","DOIUrl":null,"url":null,"abstract":"The emergence of deepfake videos presents a significant challenge to the integrity of visual content, with potential implications for public opinion manipulation, deception of individuals or groups, and defamation, among other concerns. Traditional methods for detecting deepfakes rely on deep learning models, lacking transparency and interpretability. To instill confidence in AI-based deepfake detection among forensic experts, we introduce a novel method called ExplaNET, which utilizes interpretable and explainable prototypes to detect deepfakes. By employing prototype-based learning, we generate a collection of representative images that encapsulate the essential characteristics of both real and deepfake images. These prototypes are then used to explain the decision-making process of our model, offering insights into the key features crucial for deepfake detection. Subsequently, we utilize these prototypes to train a classification model that achieves both accuracy and interpretability in deepfake detection. We also employ the Grad-CAM technique to generate heatmaps, highlighting the image regions contributing most significantly to the decision-making process. Through experiments conducted on datasets like FaceForensics++, Celeb-DF, and DFDC-P, our method demonstrates superior performance compared to state-of-the-art techniques in deepfake detection. Furthermore, the interpretability and explainability intrinsic to our method enhance its trustworthiness among forensic experts, owing to the transparency of our model.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 4","pages":"486-497"},"PeriodicalIF":0.0000,"publicationDate":"2024-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ExplaNET: A Descriptive Framework for Detecting Deepfakes With Interpretable Prototypes\",\"authors\":\"Fatima Khalid;Ali Javed;Khalid Mahmood Malik;Aun Irtaza\",\"doi\":\"10.1109/TBIOM.2024.3407650\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The emergence of deepfake videos presents a significant challenge to the integrity of visual content, with potential implications for public opinion manipulation, deception of individuals or groups, and defamation, among other concerns. Traditional methods for detecting deepfakes rely on deep learning models, lacking transparency and interpretability. To instill confidence in AI-based deepfake detection among forensic experts, we introduce a novel method called ExplaNET, which utilizes interpretable and explainable prototypes to detect deepfakes. By employing prototype-based learning, we generate a collection of representative images that encapsulate the essential characteristics of both real and deepfake images. These prototypes are then used to explain the decision-making process of our model, offering insights into the key features crucial for deepfake detection. Subsequently, we utilize these prototypes to train a classification model that achieves both accuracy and interpretability in deepfake detection. We also employ the Grad-CAM technique to generate heatmaps, highlighting the image regions contributing most significantly to the decision-making process. 
Through experiments conducted on datasets like FaceForensics++, Celeb-DF, and DFDC-P, our method demonstrates superior performance compared to state-of-the-art techniques in deepfake detection. Furthermore, the interpretability and explainability intrinsic to our method enhance its trustworthiness among forensic experts, owing to the transparency of our model.\",\"PeriodicalId\":73307,\"journal\":{\"name\":\"IEEE transactions on biometrics, behavior, and identity science\",\"volume\":\"6 4\",\"pages\":\"486-497\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-03-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on biometrics, behavior, and identity science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10542403/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on biometrics, behavior, and identity science","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10542403/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
ExplaNET: A Descriptive Framework for Detecting Deepfakes With Interpretable Prototypes
The emergence of deepfake videos presents a significant challenge to the integrity of visual content, with potential implications for public opinion manipulation, deception of individuals or groups, and defamation, among other concerns. Traditional methods for detecting deepfakes rely on deep learning models that lack transparency and interpretability. To instill confidence in AI-based deepfake detection among forensic experts, we introduce a novel method called ExplaNET, which utilizes interpretable and explainable prototypes to detect deepfakes. Through prototype-based learning, we generate a collection of representative images that encapsulate the essential characteristics of both real and deepfake images. These prototypes are then used to explain the decision-making process of our model, offering insights into the key features crucial for deepfake detection. We then use these prototypes to train a classification model that achieves both accuracy and interpretability in deepfake detection. We also employ the Grad-CAM technique to generate heatmaps that highlight the image regions contributing most significantly to the decision-making process. In experiments on the FaceForensics++, Celeb-DF, and DFDC-P datasets, our method outperforms state-of-the-art deepfake detection techniques. Furthermore, the interpretability and explainability intrinsic to our method, together with the transparency of our model, enhance its trustworthiness among forensic experts.
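The abstract does not spell out how prototype-based scoring works. The sketch below is a minimal, hypothetical illustration in the spirit of prototype learning (ProtoPNet-style), not the authors' ExplaNET implementation; the class name PrototypeScorer, the ResNet-18 backbone, and all hyperparameters are assumptions. Each learned prototype vector is compared against every spatial patch of the feature map, and the best-match similarities feed a linear real/fake classifier, so a prediction can be traced back to the prototypes (and the training patches they resemble) that drove it.

```python
# Minimal, hypothetical sketch of prototype-based scoring (ProtoPNet-style);
# this is NOT the authors' ExplaNET code. A CNN backbone yields a feature map,
# every learned prototype is compared against every spatial patch, and the
# best-match similarity per prototype feeds a linear real/fake classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class PrototypeScorer(nn.Module):
    def __init__(self, num_prototypes=20, proto_dim=128, num_classes=2):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # B x 512 x 7 x 7
        self.add_on = nn.Conv2d(512, proto_dim, kernel_size=1)          # project into prototype space
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, proto_dim))  # learnable patch signatures
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, x):
        fmap = F.relu(self.add_on(self.features(x)))                    # B x D x H x W
        b, d, h, w = fmap.shape
        patches = fmap.permute(0, 2, 3, 1).reshape(b, h * w, d)         # B x HW x D
        # Squared L2 distance between every patch and every prototype: B x HW x P.
        dists = ((patches.unsqueeze(2) - self.prototypes.view(1, 1, -1, d)) ** 2).sum(-1)
        min_d = dists.min(dim=1).values                                 # closest patch per prototype
        sims = torch.log((min_d + 1) / (min_d + 1e-4))                  # distance -> similarity
        return self.classifier(sims), sims                              # logits + per-prototype evidence


model = PrototypeScorer()
logits, evidence = model(torch.randn(2, 3, 224, 224))
print(logits.shape, evidence.shape)  # torch.Size([2, 2]) torch.Size([2, 20])
```

The per-prototype evidence vector is what makes such a classifier inspectable: each entry says how strongly some region of the input resembles one of the learned real or fake prototypes.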
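The abstract also names Grad-CAM for the heatmaps. Below is a minimal, self-contained Grad-CAM sketch over a generic torchvision ResNet-18 (an assumed stand-in for the backbone, which the abstract does not specify): the gradients of the predicted-class score with respect to the last convolutional feature map are global-average-pooled into channel weights, combined with the activations, passed through a ReLU, and upsampled into a heatmap over the input.

```python
# Minimal Grad-CAM sketch over a generic torchvision ResNet-18 (an assumed
# stand-in backbone, not the authors' network): heatmap = ReLU(sum_c w_c * A_c),
# where w_c is the global-average-pooled gradient of the top-class score
# with respect to activation channel A_c of the target layer.
import torch
import torch.nn.functional as F
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
store = {}

def save_activation(module, inputs, output):
    store["act"] = output                                        # feature map of the target layer
    output.register_hook(lambda grad: store.update(grad=grad))   # gradient w.r.t. that map

model.layer4.register_forward_hook(save_activation)              # last conv stage as target layer

x = torch.randn(1, 3, 224, 224)                                  # stand-in for a face crop
logits = model(x)
cls = logits.argmax(dim=1).item()
logits[0, cls].backward()                                        # backprop the predicted-class score

weights = store["grad"].mean(dim=(2, 3), keepdim=True)           # 1 x C x 1 x 1 channel weights
cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)         # normalized heatmap in [0, 1]
print(cam.shape)  # torch.Size([1, 1, 224, 224])
```

In the setting the abstract describes, such a heatmap would be overlaid on the input face to show which regions contributed most to the real/fake decision; here a random tensor stands in for an actual frame.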