Muawya Al Dalaien, Ruzat Ullah, Qasem Abu Al-Haija
{"title":"在物联网中加强联邦学习抵御中毒攻击的双聚合方法","authors":"Muawya Al Dalaien , Ruzat Ullah , Qasem Abu Al-Haija","doi":"10.1016/j.array.2025.100520","DOIUrl":null,"url":null,"abstract":"<div><div>Federated learning is gaining much popularity for edge devices. It offers a decentralized approach with strong privacy-preserving capabilities. It has been widely used to secure many edge devices. IoTs also utilize federated learning for an extensive range of security applications. Nevertheless, federated learning itself is also vulnerable to security threats. One such threat is poisoning attacks. Researchers have proposed many models for addressing the issue of poisoning attacks. Most of these approaches come with models based on some external technique (cryptographic or authentication technique), which adds overhead. This paper proposes a dual aggregation approach for securing federated learning. The proposed technique leverages existing machine learning techniques without introducing additional computational overhead. The approach utilizes ensemble learning, where individual client models first aggregate predictions from random forest and gradient boosting, and then the results of all the clients are further aggregated into a global model. Experimental results demonstrate that the proposed method achieves an accuracy of 91 %, highlighting its resilience against model poisoning attacks. The proposed solution provides a lightweight and efficient framework for securing IoT systems, enhancing their resilience against adversarial threats.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"28 ","pages":"Article 100520"},"PeriodicalIF":4.5000,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A dual-aggregation approach to fortify federated learning against poisoning attacks in IoTs\",\"authors\":\"Muawya Al Dalaien , Ruzat Ullah , Qasem Abu Al-Haija\",\"doi\":\"10.1016/j.array.2025.100520\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Federated learning is gaining much popularity for edge devices. It offers a decentralized approach with strong privacy-preserving capabilities. It has been widely used to secure many edge devices. IoTs also utilize federated learning for an extensive range of security applications. Nevertheless, federated learning itself is also vulnerable to security threats. One such threat is poisoning attacks. Researchers have proposed many models for addressing the issue of poisoning attacks. Most of these approaches come with models based on some external technique (cryptographic or authentication technique), which adds overhead. This paper proposes a dual aggregation approach for securing federated learning. The proposed technique leverages existing machine learning techniques without introducing additional computational overhead. The approach utilizes ensemble learning, where individual client models first aggregate predictions from random forest and gradient boosting, and then the results of all the clients are further aggregated into a global model. Experimental results demonstrate that the proposed method achieves an accuracy of 91 %, highlighting its resilience against model poisoning attacks. 
The proposed solution provides a lightweight and efficient framework for securing IoT systems, enhancing their resilience against adversarial threats.</div></div>\",\"PeriodicalId\":8417,\"journal\":{\"name\":\"Array\",\"volume\":\"28 \",\"pages\":\"Article 100520\"},\"PeriodicalIF\":4.5000,\"publicationDate\":\"2025-09-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Array\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S259000562500147X\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Array","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S259000562500147X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
A dual-aggregation approach to fortify federated learning against poisoning attacks in IoTs
Federated learning is gaining popularity for edge devices. It offers a decentralized approach with strong privacy-preserving capabilities and has been widely used to secure edge devices. IoT systems likewise employ federated learning for a wide range of security applications. Nevertheless, federated learning itself is vulnerable to security threats, one of which is the poisoning attack. Researchers have proposed many models to address poisoning attacks, but most rely on an external mechanism (such as a cryptographic or authentication technique), which adds overhead. This paper proposes a dual-aggregation approach for securing federated learning. The proposed technique leverages existing machine learning techniques without introducing additional computational overhead. The approach uses ensemble learning: each client first aggregates predictions from a random forest and a gradient boosting model, and the results of all clients are then further aggregated into a global model. Experimental results demonstrate that the proposed method achieves an accuracy of 91%, highlighting its resilience against model poisoning attacks. The proposed solution provides a lightweight and efficient framework for securing IoT systems, enhancing their resilience against adversarial threats.
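The two-level aggregation described in the abstract (a per-client ensemble of random forest and gradient boosting, followed by a global aggregation over all clients) can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' implementation: the function names (train_client_model, global_predict), the soft-voting local ensemble, the majority-vote global step, the client count, and the synthetic data are placeholders chosen only for the example.

```python
# Illustrative sketch of a dual-aggregation ensemble; all design choices below
# (soft voting locally, majority vote globally, 3 clients, synthetic data) are
# assumptions for demonstration, not details taken from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              VotingClassifier)

def train_client_model(X, y):
    """First-level aggregation: a client combines random forest and
    gradient boosting predictions in a local ensemble (soft voting assumed)."""
    local_ensemble = VotingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
            ("gb", GradientBoostingClassifier(random_state=0)),
        ],
        voting="soft",
    )
    local_ensemble.fit(X, y)
    return local_ensemble

def global_predict(client_models, X):
    """Second-level aggregation: client predictions are combined into a
    global decision (simple majority vote assumed)."""
    votes = np.stack([m.predict(X) for m in client_models]).astype(int)  # (clients, samples)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)

if __name__ == "__main__":
    # Synthetic stand-in for data partitioned across IoT clients.
    X, y = make_classification(n_samples=3000, n_features=20, random_state=42)
    n_clients = 3
    splits = np.array_split(np.arange(len(y)), n_clients)
    clients = [train_client_model(X[idx], y[idx]) for idx in splits]
    preds = global_predict(clients, X[:100])
    print("Global predictions for the first 10 samples:", preds[:10])
```

In this sketch, no external cryptographic or authentication layer is involved; robustness would come only from the fact that a poisoned client's predictions are outvoted at the global aggregation step, which mirrors the overhead-free motivation stated in the abstract.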