{"title":"联邦学习扭曲微调特征和鲁棒性不佳的风险。","authors":"Mengyao Du,Miao Zhang,Yuwen Pu,Qingming Li,Shouling Ji,Quanjun Yin","doi":"10.1109/tnnls.2025.3585063","DOIUrl":null,"url":null,"abstract":"To tackle the scarcity and privacy issues associated with domain-specific datasets, the integration of federated learning in conjunction with fine-tuning (FT) has emerged as a practical solution. However, our findings reveal that federated learning has the risk of skewing FT features and compromising the out-of-distribution (OOD) robustness of pretrained models. By introducing three robustness indicators and conducting experiments across diverse robust datasets, we elucidate these phenomena by scrutinizing the ability of data representations, transferability, and deviations within the model. To mitigate the negative impact of practical federated learning on model robustness, we introduce a general noisy projection (GNP)-based robust algorithm, ensuring no deterioration of accuracy on the target distribution. Specifically, the key strategy for enhancing model robustness entails the transfer of robustness from the pretrained model to the fine-tuned model, coupled with adding a small amount of Gaussian noise to augment the representative capacity of the model. 
The comprehensive experimental results demonstrate that our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient FT (PEFT) methods and confronting different levels of label distribution skew and quantity distribution skew.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"48 1","pages":""},"PeriodicalIF":10.2000,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Robustness.\",\"authors\":\"Mengyao Du,Miao Zhang,Yuwen Pu,Qingming Li,Shouling Ji,Quanjun Yin\",\"doi\":\"10.1109/tnnls.2025.3585063\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"To tackle the scarcity and privacy issues associated with domain-specific datasets, the integration of federated learning in conjunction with fine-tuning (FT) has emerged as a practical solution. However, our findings reveal that federated learning has the risk of skewing FT features and compromising the out-of-distribution (OOD) robustness of pretrained models. By introducing three robustness indicators and conducting experiments across diverse robust datasets, we elucidate these phenomena by scrutinizing the ability of data representations, transferability, and deviations within the model. To mitigate the negative impact of practical federated learning on model robustness, we introduce a general noisy projection (GNP)-based robust algorithm, ensuring no deterioration of accuracy on the target distribution. Specifically, the key strategy for enhancing model robustness entails the transfer of robustness from the pretrained model to the fine-tuned model, coupled with adding a small amount of Gaussian noise to augment the representative capacity of the model. 
The comprehensive experimental results demonstrate that our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient FT (PEFT) methods and confronting different levels of label distribution skew and quantity distribution skew.\",\"PeriodicalId\":13303,\"journal\":{\"name\":\"IEEE transactions on neural networks and learning systems\",\"volume\":\"48 1\",\"pages\":\"\"},\"PeriodicalIF\":10.2000,\"publicationDate\":\"2025-07-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on neural networks and learning systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1109/tnnls.2025.3585063\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/tnnls.2025.3585063","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Robustness.
To tackle the scarcity and privacy issues associated with domain-specific datasets, combining federated learning with fine-tuning (FT) has emerged as a practical solution. However, our findings reveal that federated learning risks skewing FT features and compromising the out-of-distribution (OOD) robustness of pretrained models. By introducing three robustness indicators and conducting experiments across diverse robustness datasets, we explain these phenomena by examining the model's representational capacity, its transferability, and the deviations within it. To mitigate the negative impact of practical federated learning on model robustness, we introduce a general noisy projection (GNP)-based robust algorithm that does not degrade accuracy on the target distribution. Specifically, the key strategy for enhancing model robustness is to transfer robustness from the pretrained model to the fine-tuned model, coupled with adding a small amount of Gaussian noise to enhance the representational capacity of the model. The comprehensive experimental results demonstrate that our approach markedly enhances robustness across diverse scenarios, encompassing various parameter-efficient FT (PEFT) methods and confronting different levels of label distribution skew and quantity distribution skew.
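The abstract describes two ingredients: transferring robustness from the pretrained model to the fine-tuned one, and perturbing the result with a small amount of Gaussian noise. The paper's actual GNP algorithm is not reproduced here; the following is only a minimal illustrative sketch of that general idea, in which the function name `robust_merge`, the per-parameter linear interpolation, and the `alpha` and `noise_std` values are all assumptions, not the authors' method.

```python
import numpy as np

def robust_merge(pretrained, finetuned, alpha=0.5, noise_std=1e-3, seed=0):
    """Blend pretrained and fine-tuned weights, then add small Gaussian noise.

    Illustrative sketch only: the interpolation coefficient `alpha` and the
    noise scale `noise_std` are hypothetical hyperparameters, not values
    from the paper.
    """
    rng = np.random.default_rng(seed)
    merged = {}
    for name, w_pre in pretrained.items():
        w_ft = finetuned[name]
        # Interpolate toward the pretrained weights to retain their
        # out-of-distribution robustness.
        w = (1.0 - alpha) * w_pre + alpha * w_ft
        # Add a small Gaussian perturbation to the merged parameters.
        w = w + rng.normal(0.0, noise_std, size=w.shape)
        merged[name] = w
    return merged

# Toy usage on dictionaries standing in for model state.
pre = {"layer.weight": np.zeros((2, 2))}
ft = {"layer.weight": np.ones((2, 2))}
out = robust_merge(pre, ft, alpha=0.5, noise_std=1e-3)
```

With `alpha=0.5`, each merged parameter lies midway between the two checkpoints, jittered by noise on the order of `noise_std`; in practice such coefficients would be tuned per scenario.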
Journal introduction:
The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.