Xiaohan Yuan; Jiqiang Liu; Bin Wang; Guorong Chen; Xiangrui Xu; Junyong Wang; Tao Li; Wei Wang

IEEE Transactions on Information Forensics and Security, vol. 20, pp. 6560-6575. DOI: 10.1109/TIFS.2025.3583231. Published 2025-06-25. https://ieeexplore.ieee.org/document/11050972/
FedEditor: Efficient and Effective Federated Unlearning in Cooperative Intelligent Transportation Systems
In cooperative intelligent transportation systems (CITS), federated learning enables vehicles to train a global model without sharing private data. However, the lack of an unlearning mechanism to remove the influence of vehicle-specified data from the global model potentially violates data protection regulations concerning the right to be forgotten. While existing federated unlearning (FU) methods exhibit promising unlearning effects, their practicality in CITS is hindered by the time-consuming retraining steps required of other vehicles and the non-negligible performance sacrifice on the un-forgotten data. Therefore, achieving effective unlearning without extensive retraining, while minimizing performance degradation on the un-forgotten data, remains a challenge. In this work, we propose FedEditor, an efficient and effective FU framework for CITS that addresses this challenge by reconfiguring the global model's representation space to remove critical classification-related knowledge of the unlearned data. First, FedEditor enables vehicles to perform the unlearning process locally on the global model, eliminating the need for other vehicles to participate and improving efficiency. Second, FedEditor captures and aligns the representations of the unlearned data with those of the nearest incorrect class centroid derived from non-training data, ensuring effective unlearning while keeping the un-forgotten data's knowledge relatively intact for competitive model performance. Finally, FedEditor refines the global model's output distributions using the vehicles' remaining data and incorporates a drift-mitigating regularization term, minimizing the negative impact of unlearning operations on model performance.
Experimental results show that FedEditor reduces the unlearning rate by up to 99.64% without time-consuming retraining, while limiting the predictive performance loss of the resulting global model to less than 3.88% across five models and seven datasets.
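The two core ideas of the abstract can be illustrated with a minimal sketch. This is not the authors' code: the function names, the plain L2 form of the alignment loss, and the L2-anchor form of the drift-mitigating regularizer are assumptions made only to show the mechanism of pulling an unlearned sample's representation toward the nearest incorrect class centroid.

```python
# Hypothetical sketch of FedEditor's alignment idea (assumed formulation).
import numpy as np

def nearest_incorrect_centroid(rep, true_label, centroids):
    """Return the centroid of the closest class OTHER than the true class.

    rep:        (d,) representation of one to-be-unlearned sample
    true_label: its ground-truth class index
    centroids:  (num_classes, d) class centroids estimated from non-training data
    """
    dists = np.linalg.norm(centroids - rep, axis=1)
    dists[true_label] = np.inf  # exclude the correct class as a target
    return centroids[np.argmin(dists)]

def alignment_loss(reps, labels, centroids):
    """Mean squared distance between each representation and its target
    (nearest incorrect) centroid; minimizing this 'forgets' the samples."""
    targets = np.stack([nearest_incorrect_centroid(r, y, centroids)
                        for r, y in zip(reps, labels)])
    return float(np.mean(np.sum((reps - targets) ** 2, axis=1)))

def drift_penalty(params, anchor_params, lam=0.1):
    """Assumed form of a drift-mitigating regularizer: an L2 anchor that
    keeps refined parameters close to the pre-refinement global model."""
    return lam * sum(float(np.sum((p - a) ** 2))
                     for p, a in zip(params, anchor_params))
```

During local refinement on the remaining data, the task loss would be combined with `drift_penalty` so that restoring output distributions does not drag the model far from the just-unlearned state.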
About the journal:
The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.