Federated learning for digital twin applications: a privacy-preserving and low-latency approach.
Jie Li, Dong Wang
PeerJ Computer Science 11:e2877 (published 2025-08-08). DOI: 10.7717/peerj-cs.2877. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12453653/pdf/

The digital twin (DT) concept has recently gained widespread application for mapping the state of physical entities, enabling real-time analysis, prediction, and optimization, and thereby enhancing the management and control of physical systems. However, sensitive information extracted from physical entities faces potential leakage risks, as DT service providers are typically honest-but-curious. Federated learning (FL) offers a distributed learning paradigm that protects privacy by distributing the global model from edge servers to local devices, which train on their local datasets and return only model updates. Nevertheless, the training parameters communicated between local mobile devices and edge servers may still leak information about the raw data that malicious adversaries could exploit. Furthermore, variations in mapping bias across local devices and the presence of malicious clients can degrade FL training accuracy. To address these security and privacy threats, this paper proposes FL-FedDT, a privacy-preserving and low-latency FL scheme that employs an enhanced Paillier homomorphic encryption algorithm to safeguard the privacy of local device parameters without transmitting raw data to the server. The approach introduces an improved Paillier encryption method with a new hyperparameter and pre-calculates multiple random intermediate values during the key-generation stage, significantly reducing encryption time and thereby expediting model training. Additionally, it implements a trusted FL global aggregation method that incorporates learning quality and interaction records to identify and mitigate malicious updates, dynamically adjusting weights to counteract the threat of malicious clients. Extensive experiments validate that the approach achieves training accuracy and security on par with baseline methods while substantially reducing FL iteration time, which in turn improves DT mapping and service quality for physical entities.

The code for this study is publicly available on GitHub at https://github.com/fujianU/federated-learning, and the MNIST dataset at https://gitcode.com/Resource-Bundle-Collection/d47b0/overview?utm_source=pan_gitcode&index=top&type=href&;.
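The encryption speed-up described in the abstract, moving the expensive random-value computation into key generation, can be illustrated with a toy Paillier sketch. This is an assumption-laden illustration, not the paper's exact algorithm: the abstract's "new hyperparameter" is unspecified and therefore omitted, the primes are demo-sized, and the precomputed pool size is arbitrary.

```python
import math
import random

def keygen(p=2357, q=2551, pool_size=8):
    """Toy Paillier keypair with a pool of precomputed obfuscators r^n mod n^2.

    Precomputing r^n at key-generation time means each later encryption
    avoids a full modular exponentiation. Demo-sized primes only; real
    deployments need 2048-bit-plus moduli, and each pool entry should be
    used at most once to preserve semantic security.
    """
    n = p * q
    n2 = n * n
    lam = math.lcm(p - 1, q - 1)
    # With g = n + 1, g^m mod n^2 = 1 + m*n (binomial expansion), so
    # mu = (L(g^lam mod n^2))^(-1) mod n, where L(x) = (x - 1) // n.
    mu = pow((pow(n + 1, lam, n2) - 1) // n, -1, n)
    pool = []
    while len(pool) < pool_size:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            pool.append(pow(r, n, n2))  # the expensive r^n mod n^2, done once
    return (n, pool), (lam, mu)

def encrypt(pk, m):
    n, pool = pk
    n2 = n * n
    rn = random.choice(pool)         # precomputed obfuscator from the pool
    return ((1 + m * n) * rn) % n2   # (1+n)^m * r^n mod n^2, no modexp needed

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n
```

Because Paillier is additively homomorphic, an aggregator can multiply ciphertexts to sum encrypted model updates without ever decrypting an individual client's contribution, which is what makes it a natural fit for FL parameter protection.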
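The trusted aggregation idea, weighting clients by interaction records and down-weighting suspicious updates, can be sketched as follows. The function name, the cosine-similarity test against a coordinate-wise median, and the decay factor are all illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def trusted_aggregate(updates, reputations, sim_threshold=0.0, decay=0.5):
    """Hypothetical reputation-weighted global aggregation sketch.

    Updates pointing away from the (robust) median reference direction are
    treated as suspicious, the offending client's reputation decays, and
    aggregation weights follow the surviving reputations.
    """
    updates = np.asarray(updates, dtype=float)
    reputations = np.asarray(reputations, dtype=float).copy()
    ref = np.median(updates, axis=0)  # robust reference, resists outliers
    for i, u in enumerate(updates):
        denom = np.linalg.norm(u) * np.linalg.norm(ref) + 1e-12
        if float(u @ ref) / denom < sim_threshold:  # cosine similarity check
            reputations[i] *= decay                 # interaction record decays
    weights = reputations / reputations.sum()
    return np.average(updates, axis=0, weights=weights), reputations
```

Returning the updated reputations lets the server carry interaction records across rounds, so a persistently malicious client's influence shrinks geometrically rather than being judged on a single update.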
About the journal:
PeerJ Computer Science is an open access journal covering all subject areas in computer science, backed by a prestigious advisory board and more than 300 academic editors.