Muhammad Mustafa Ali Usmani, Muhammad Atif Tahir, Humna Faisal, Muhammad Rafi
Information Fusion, Volume 127, Article 103796. DOI: 10.1016/j.inffus.2025.103796. Published 2025-10-02. Available at: https://www.sciencedirect.com/science/article/pii/S1566253525008589
Federated unlearning using diffusive noise injection
Recent advances in machine learning and deep learning have transformed daily life by using user data to extract patterns and insights. As data privacy concerns rise, the “right to be forgotten” has become increasingly important, driving the development of machine unlearning, a technique for removing specific data contributions from trained models. Most existing unlearning research assumes a centralized setting where the data resides on a central server. However, this assumption breaks down in federated learning (FL), where data remains decentralized across clients who train a shared model without exposing raw data. This decentralized architecture introduces significant challenges for unlearning, such as identifying and removing specific data contributions, preserving global model performance, and ensuring privacy. To address these issues, we propose a client-level machine unlearning framework based on Diffusive Noise Injection (DNI). DNI gradually perturbs training inputs with structured noise to steer the model away from memorizing specific samples or classes, followed by a global model healing phase to restore accuracy and stability. The proposed approach is evaluated using Convolutional Neural Networks (CNNs) and Vision Transformers on standard FL benchmarks including CIFAR-10, CIFAR-100, and MNIST, as well as the KVASIR medical image dataset. Experimental results show that our method effectively unlearns target data while maintaining high accuracy, achieving performance comparable to state-of-the-art unlearning techniques across all datasets and model architectures.
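The core perturbation step the abstract describes, gradually injecting structured noise into training inputs, can be sketched as a forward-diffusion-style update. This is a hypothetical illustration only: the function name, the Gaussian noise choice, and the `beta`/`num_steps` parameters are assumptions for exposition, not the paper's actual algorithm.

```python
import numpy as np

def diffusive_noise_injection(x, num_steps=10, beta=0.05, rng=None):
    """Gradually perturb an input with Gaussian noise over several steps.

    Illustrative sketch (not the authors' implementation): structured noise
    is injected incrementally, so samples targeted for unlearning drift away
    from their original distribution while retaining overall scale.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    x_t = np.asarray(x, dtype=np.float64)
    for _ in range(num_steps):
        noise = rng.standard_normal(x_t.shape)
        # Variance-preserving mix of signal and noise, as in one forward
        # diffusion step: more steps push x_t closer to pure noise.
        x_t = np.sqrt(1.0 - beta) * x_t + np.sqrt(beta) * noise
    return x_t
```

In an FL unlearning loop of this kind, the target client would retrain briefly on such perturbed inputs before the global healing phase restores accuracy on the retained data.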
Journal description:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating application to real-world problems, are welcome.