{"title":"集成可解释的人工智能与合成生物特征数据,以增强计算机视觉系统中的图像合成和隐私","authors":"Hamad Aldawsari , Saad Alammar","doi":"10.1016/j.imavis.2025.105726","DOIUrl":null,"url":null,"abstract":"<div><div>Integrating Explainable AI (XAI) with synthetic biometric data improves image synthesis and privacy in computer vision systems by generating high-quality images while ensuring interpretability. This integration enhances trust and transparency in AI-driven biometric applications. However, traditional biometric data collection methods face challenges such as privacy risks, data scarcity, biases, and regulatory constraints, limiting their effectiveness in authentication and identity verification. To address these limitations, we propose a Generative Adversarial Networks with Explainable AI (GAN-EAI) framework for privacy-preserving biometric image synthesis. This framework utilizes GANs to generate high-fidelity synthetic biometric images while incorporating XAI techniques to interpret and validate the generated outputs, ensuring fairness, robustness, and bias mitigation. The proposed method enables secure, privacy-conscious biometric image synthesis, making it suitable for applications in authentication, healthcare, and identity verification. By leveraging explainability, it ensures that the model's decision-making process is interpretable, reducing the risk of biased or adversarial outputs. Experimental results demonstrate that GAN-EAI achieves superior image quality, enhances privacy protection, and reduces bias in synthetic biometric datasets, making it a reliable solution for real-world biometric applications. This research highlights the potential of integrating explainability with generative models to advance privacy-preserving AI in computer vision.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"162 ","pages":"Article 105726"},"PeriodicalIF":4.2000,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Integrating explainable AI with synthetic biometric data for enhanced image synthesis and privacy in computer vision systems\",\"authors\":\"Hamad Aldawsari , Saad Alammar\",\"doi\":\"10.1016/j.imavis.2025.105726\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Integrating Explainable AI (XAI) with synthetic biometric data improves image synthesis and privacy in computer vision systems by generating high-quality images while ensuring interpretability. This integration enhances trust and transparency in AI-driven biometric applications. However, traditional biometric data collection methods face challenges such as privacy risks, data scarcity, biases, and regulatory constraints, limiting their effectiveness in authentication and identity verification. To address these limitations, we propose a Generative Adversarial Networks with Explainable AI (GAN-EAI) framework for privacy-preserving biometric image synthesis. This framework utilizes GANs to generate high-fidelity synthetic biometric images while incorporating XAI techniques to interpret and validate the generated outputs, ensuring fairness, robustness, and bias mitigation. The proposed method enables secure, privacy-conscious biometric image synthesis, making it suitable for applications in authentication, healthcare, and identity verification. 
By leveraging explainability, it ensures that the model's decision-making process is interpretable, reducing the risk of biased or adversarial outputs. Experimental results demonstrate that GAN-EAI achieves superior image quality, enhances privacy protection, and reduces bias in synthetic biometric datasets, making it a reliable solution for real-world biometric applications. This research highlights the potential of integrating explainability with generative models to advance privacy-preserving AI in computer vision.</div></div>\",\"PeriodicalId\":50374,\"journal\":{\"name\":\"Image and Vision Computing\",\"volume\":\"162 \",\"pages\":\"Article 105726\"},\"PeriodicalIF\":4.2000,\"publicationDate\":\"2025-09-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Image and Vision Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0262885625003142\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Image and Vision Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0262885625003142","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract:
Integrating Explainable AI (XAI) with synthetic biometric data improves image synthesis and privacy in computer vision systems by generating high-quality images while ensuring interpretability. This integration enhances trust and transparency in AI-driven biometric applications. However, traditional biometric data collection methods face challenges such as privacy risks, data scarcity, biases, and regulatory constraints, limiting their effectiveness in authentication and identity verification. To address these limitations, we propose a Generative Adversarial Networks with Explainable AI (GAN-EAI) framework for privacy-preserving biometric image synthesis. This framework utilizes GANs to generate high-fidelity synthetic biometric images while incorporating XAI techniques to interpret and validate the generated outputs, ensuring fairness, robustness, and bias mitigation. The proposed method enables secure, privacy-conscious biometric image synthesis, making it suitable for applications in authentication, healthcare, and identity verification. By leveraging explainability, it ensures that the model's decision-making process is interpretable, reducing the risk of biased or adversarial outputs. Experimental results demonstrate that GAN-EAI achieves superior image quality, enhances privacy protection, and reduces bias in synthetic biometric datasets, making it a reliable solution for real-world biometric applications. This research highlights the potential of integrating explainability with generative models to advance privacy-preserving AI in computer vision.
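The abstract describes the GAN-EAI framework only at a high level and publishes no implementation details, so the following is a minimal, illustrative Python sketch of the general wiring it implies: a GAN generator producing synthetic images paired with a simple gradient-based saliency map that "explains" which pixels drive the discriminator's real/fake decision. All names here (Generator, Discriminator, saliency, LATENT_DIM) and the toy fully connected architecture are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only: a toy GAN plus input-gradient saliency as a stand-in
# for the XAI component. This is NOT the paper's GAN-EAI implementation.
import torch
import torch.nn as nn

LATENT_DIM, IMG = 64, 32  # hypothetical latent size and image resolution


class Generator(nn.Module):
    """Maps a random latent vector to a synthetic single-channel image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG * IMG), nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, IMG, IMG)


class Discriminator(nn.Module):
    """Scores an image with a single real/fake logit."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(IMG * IMG, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

    def forward(self, x):
        return self.net(x)


def saliency(disc, images):
    """Gradient of the real/fake logit w.r.t. input pixels: a crude post-hoc
    attribution map showing which regions influence the decision."""
    images = images.clone().detach().requires_grad_(True)
    disc(images).sum().backward()
    return images.grad.abs()


if __name__ == "__main__":
    gen, disc = Generator(), Discriminator()
    z = torch.randn(4, LATENT_DIM)
    fake = gen(z)                 # synthetic images; no real identities involved
    expl = saliency(disc, fake)   # one attribution map per synthetic image
    print(fake.shape, expl.shape)
```

In a full system one would presumably replace the toy fully connected networks with convolutional ones, train them adversarially on (or in place of) real biometric data, and use a more robust attribution method than raw input gradients; the sketch only fixes the overall generator-plus-explanation structure suggested by the abstract.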
Journal description:
Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.