Computer Communications: Latest Articles

Evaluating Conditional handover for 5G networks with dynamic obstacles
IF 4.5 | CAS Tier 3 | Computer Science
Computer Communications Pub Date : 2025-01-21 DOI: 10.1016/j.comcom.2025.108067
Souvik Deb , Megh Rathod , Rishi Balamurugan , Shankar K. Ghosh , Rajeev Kumar Singh , Samriddha Sanyal
Abstract: To enhance seamless connectivity in millimetre-wave New Radio networks, Conditional handover (CHO) has emerged as a promising solution. Unlike A3 handover, where execution is certain once the handover command is received from the serving access network, in Conditional handover execution is "conditional" on Reference Signal Received Power (RSRP) measurements from the current and target access networks, as well as on handover parameters such as the preparation and execution offsets. Dynamic obstacles may block the signal from the serving and/or target access networks, violating the conditions for handover preparation or execution. Moreover, signal blockage by dynamic obstacles may cause radio link failure, which in turn may cause handover failure. Analytic evaluation of Conditional handover in the presence of dynamic obstacles is quite limited in the existing literature. In this work, Conditional handover performance is analysed in terms of handover latency, handover packet loss and handover failure probability. A Markov model accounting for the effect of dynamic obstacles, handover parameters (e.g., execution offset, preparation offset, time-to-preparation and time-to-execution), user velocity and channel fading characteristics is proposed to characterize handover failure. Results obtained from the proposed analytic model are validated against simulation results. The study reveals that the optimal configuration of handover parameters is itself conditional on the presence of dynamic obstacles, user velocity and fading characteristics. This study will help mobile operators configure handover parameters for New Radio systems in which dynamic obstacles are present.
Computer Communications, Vol. 233, Article 108067.
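The execution condition described above can be sketched in a few lines. This is an illustrative reading of the Conditional handover rule, not the paper's analytic model; the function name, per-sample RSRP inputs and parameter values are assumptions for illustration.

```python
# Illustrative sketch: the UE executes the prepared handover only if the
# target cell's RSRP exceeds the serving cell's RSRP by the execution
# offset continuously for the whole time-to-execute window.

def cho_execution_index(serving_rsrp, target_rsrp, exec_offset_db,
                        time_to_execute, sample_period):
    """Return the sample index at which handover executes, or None.

    serving_rsrp, target_rsrp: per-sample RSRP measurements (dBm).
    exec_offset_db: execution offset (dB).
    time_to_execute, sample_period: same time unit (e.g. seconds).
    """
    needed = int(time_to_execute / sample_period)  # consecutive samples required
    run = 0
    for i, (s, t) in enumerate(zip(serving_rsrp, target_rsrp)):
        run = run + 1 if t > s + exec_offset_db else 0
        if run >= needed:
            return i
    return None  # condition never held long enough (e.g. blocked by an obstacle)


# A dynamic obstacle can be modelled as a temporary dip in target RSRP that
# resets the timer and delays (or prevents) execution.
serving = [-90.0] * 10
target_clear = [-85.0] * 10                              # always 5 dB above serving
target_blocked = [-85.0, -100.0] + [-85.0] * 8           # obstacle at sample 1

print(cho_execution_index(serving, target_clear, 3.0, 2.0, 1.0))    # executes early
print(cho_execution_index(serving, target_blocked, 3.0, 2.0, 1.0))  # delayed by blockage
```

The second call shows how even a brief blockage pushes execution back by resetting the time-to-execute timer, which is the mechanism behind the latency and failure effects the paper analyses.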
Citations: 0
A multi-agent enhanced DDPG method for federated learning resource allocation in IoT
IF 4.5 | CAS Tier 3 | Computer Science
Computer Communications Pub Date : 2025-01-21 DOI: 10.1016/j.comcom.2025.108066
Yue Sun , Hui Xia , Chuxiao Su , Rui Zhang , Jieru Wang , Kunkun Jia
Abstract: In the Internet of Things (IoT), federated learning (FL) is a distributed machine learning method that significantly improves model performance by using local device data for collaborative training. However, applying FL in the IoT raises new challenges: the large differences in computing and communication capabilities among IoT devices, together with their limited resources, make efficient resource allocation crucial. This paper proposes a multi-agent enhanced deep deterministic policy gradient method (MAEDDPG), based on deep reinforcement learning, to obtain the optimal resource allocation strategy. First, MAEDDPG introduces long short-term memory networks to address the local-observation problem in multi-agent settings. Second, noise networks are employed during training to enhance exploration and prevent the model from getting stuck in local optima. Finally, an enhanced double critic network is designed to reduce the error in value function estimation. MAEDDPG effectively obtains the optimal resource allocation strategy, coordinating the computing and communication resources of the various IoT devices and thereby balancing FL training time against IoT device energy consumption. Experimental results show that MAEDDPG outperforms the state-of-the-art method in IoT, reducing the average system cost by 12.4%.
Computer Communications, Vol. 233, Article 108066.
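The "double critic" component can be illustrated with the standard clipped double-Q target. This is our simplification, not the paper's network: taking the minimum of two independent value estimates when forming the TD target damps the overestimation bias a single critic suffers from.

```python
# Minimal sketch of a clipped double-critic TD target (TD3-style idea).

def td_target(reward, gamma, q1_next, q2_next):
    """TD target r + gamma * min(Q1', Q2'): the min guards against an
    optimistic critic inflating the bootstrapped value."""
    return reward + gamma * min(q1_next, q2_next)

# With one optimistic critic (Q1' too high), the min keeps the target honest:
single = 1.0 + 0.9 * 12.0                  # target using the biased critic alone
double = td_target(1.0, 0.9, 12.0, 10.0)   # clipped double-critic target
print(single, double)
```

In MAEDDPG this reduced-bias target is what the critics are regressed towards during training; the exact architecture (LSTM layers, noise networks) is described in the paper.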
Citations: 0
Enhancing healthcare infrastructure resilience through agent-based simulation methods
IF 4.5 | CAS Tier 3 | Computer Science
Computer Communications Pub Date : 2025-01-20 DOI: 10.1016/j.comcom.2025.108070
David Carramiñana, Ana M. Bernardos, Juan A. Besada, José R. Casar
Abstract: Critical infrastructures face demanding challenges from natural and human-generated threats such as pandemics, workforce shortages and cyber-attacks, which can severely compromise service quality. To improve system resilience, decision-makers need intelligent tools for quick and efficient resource allocation. This article explores an agent-based simulation model that captures part of the complexity of critical infrastructure systems, particularly the interdependencies between healthcare systems and information and telecommunication systems. The model enables a simulation-based optimization approach in which the exposure of critical systems to risk is evaluated while comparing the mitigating effects of multiple tactical and strategic decision alternatives on resilience. The proposed model is parameterizable, so it can be adapted to risk scenarios of different severity, and it compiles relevant performance indicators for monitoring at both agent and system level. To validate the agent-based model, a literature-supported methodology was used to perform cross-validation and sensitivity analysis and to test the model's usefulness through a use case. The use case analyses the impact of a concurrent pandemic and cyber-attack on a hospital and compares different resilience-enhancing countermeasures using contingency tables. Overall, the use case illustrates the feasibility and versatility of the proposed approach.
Computer Communications, Vol. 234, Article 108070.
Citations: 0
DPS-IIoT: Non-interactive zero-knowledge proof-inspired access control towards information-centric Industrial Internet of Things
IF 4.5 | CAS Tier 3 | Computer Science
Computer Communications Pub Date : 2025-01-20 DOI: 10.1016/j.comcom.2025.108065
Dun Li , Noel Crespi , Roberto Minerva , Wei Liang , Kuan-Ching Li , Joanna Kołodziej
Abstract: Advances in 5G/6G communication technologies have enabled the rapid development and expanded application of the Industrial Internet of Things (IIoT). However, the limitations of traditional host-centric networks are becoming increasingly evident, especially in meeting the IIoT's growing demands for higher data rates, stronger privacy protection and better resilience to disruption. This work presents ZK-CP-ABE, a security framework designed to improve the security and efficiency of content distribution in the IIoT. By integrating a non-interactive zero-knowledge proof (ZKP) protocol for user authentication and data validation into Ciphertext-Policy Attribute-Based Encryption (CP-ABE), the ZK-CP-ABE algorithm substantially improves privacy protection while managing bandwidth efficiently. The paper also proposes the Distributed Publish-Subscribe Industrial Internet of Things (DPS-IIoT) system, which uses Hyperledger Fabric blockchain technology to deploy access policies and protect the integrity of ZKPs from tampering and cyber-attacks, enhancing the security and reliability of IIoT networks. Extensive experiments demonstrate that ZK-CP-ABE significantly reduces bandwidth consumption while maintaining robust security against unauthorized access, and that the combined ZK-CP-ABE/DPS-IIoT system improves bandwidth efficiency and overall throughput in IIoT environments.
Computer Communications, Vol. 233, Article 108065.
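The non-interactive ZKP building block can be illustrated with a toy Schnorr proof of knowledge made non-interactive by the Fiat-Shamir transform, the textbook construction behind NIZK authentication of this kind. This is our own illustration, not the paper's protocol, and the group parameters below are deliberately tiny; a real deployment uses a cryptographic library.

```python
import hashlib
import random

# Toy Schnorr NIZK: prove knowledge of x with public key y = G^x mod P
# without revealing x. The challenge is derived by hashing the transcript
# (Fiat-Shamir), which removes the interactive verifier.

P = 2039   # safe prime, P = 2*Q + 1 (toy size, for illustration only)
Q = 1019   # prime order of the subgroup generated by G
G = 4      # generator of the order-Q subgroup

def challenge(*values):
    """Fiat-Shamir challenge: hash of the public transcript."""
    h = hashlib.sha256("|".join(str(v) for v in values).encode())
    return int.from_bytes(h.digest(), "big") % Q

def prove(secret_x):
    y = pow(G, secret_x, P)
    k = random.randrange(1, Q)    # one-time nonce
    r = pow(G, k, P)              # commitment
    c = challenge(G, y, r)        # non-interactive challenge
    s = (k + c * secret_x) % Q    # response
    return y, r, s

def verify(y, r, s):
    c = challenge(G, y, r)
    return pow(G, s, P) == (r * pow(y, c, P)) % P

y, r, s = prove(secret_x=123)
print(verify(y, r, s))        # True: a valid proof checks out
print(verify(y, r, s + 1))    # False: a tampered response is rejected
```

The verifier learns that the prover knows x, but never sees x itself, which is the property ZK-CP-ABE exploits for privacy-preserving authentication.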
Citations: 0
Just a little human intelligence feedback! Unsupervised learning assisted supervised learning data poisoning based backdoor removal
IF 4.5 | CAS Tier 3 | Computer Science
Computer Communications Pub Date : 2025-01-18 DOI: 10.1016/j.comcom.2025.108052
Ting Luo , Huaibing Peng , Anmin Fu , Wei Yang , Lihui Pang , Said F. Al-Sarawi , Derek Abbott , Yansong Gao
Abstract: Backdoor attacks on deep learning (DL) models are recognized as one of the most alarming security threats, particularly in security-critical applications. A primary source of backdoor introduction is data outsourcing, for example when data is aggregated from third parties or from end Internet of Things (IoT) devices, which are susceptible to various attacks. Significant defensive effort has gone into countering backdoor attacks, but most defences are ineffective against either evolving trigger types or evolving backdoor types. This study proposes a poisoned-data detection method, LABOR (unsupervised Learning Assisted supervised learning data poisoning based BackdOor Removal), that incorporates a little human intelligence feedback. LABOR is specifically devised to counter backdoors introduced by dirty-label data poisoning in common classification tasks. The key insight is that regardless of the underlying trigger type (e.g., patch or imperceptible triggers) and intended backdoor type (e.g., universal or partial backdoor), poisoned samples still preserve the semantic features of their original classes. By clustering these samples by their original categories through unsupervised learning, with category identification assisted by human intelligence, LABOR can detect and remove poisoned samples by spotting discrepancies between cluster categories and classification model predictions. Extensive experiments on eight benchmark datasets, including an intrusion detection dataset relevant to IoT device protection, validate LABOR's effectiveness against dirty-label poisoning-based backdoor attacks. LABOR's robustness is further demonstrated across trigger and backdoor types and across data modalities, including image, audio and text.
Computer Communications, Vol. 233, Article 108052.
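The core insight can be sketched concretely. This is our simplification of the idea, not the authors' pipeline: cluster samples by their features, let a human name each cluster (the "little human intelligence feedback"), then flag samples whose training label disagrees with their cluster's name.

```python
# Dirty-label poisoning flips a sample's label but leaves its features in
# the original class; clustering by features therefore exposes the flip.

def kmeans_2d(points, centroids, iters=10):
    """Plain k-means on 2-D points; returns the final cluster assignment."""
    for _ in range(iters):
        assign = [min(range(len(centroids)),
                      key=lambda c: (p[0] - centroids[c][0]) ** 2
                                  + (p[1] - centroids[c][1]) ** 2)
                  for p in points]
        for c in range(len(centroids)):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return assign

def flag_poisoned(points, labels, centroids, human_names):
    """Indices whose training label contradicts the human-named cluster."""
    assign = kmeans_2d(points, list(centroids))
    return [i for i, (a, lab) in enumerate(zip(assign, labels))
            if human_names[a] != lab]

# Two visually distinct classes; the last sample carries a flipped label.
points = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (0.1, 0.2)]
labels = ["cat", "cat", "dog", "dog"]          # last label is poisoned
human_names = {0: "cat", 1: "dog"}             # human feedback per cluster
print(flag_poisoned(points, labels, [(0.0, 0.0), (5.0, 5.0)], human_names))
```

The flagged index is the dirty-labelled sample: its features sit in the "cat" cluster while its training label says "dog".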
Citations: 0
COIChain: Blockchain scheme for privacy data authentication in cross-organizational identification
IF 4.5 | CAS Tier 3 | Computer Science
Computer Communications Pub Date : 2025-01-16 DOI: 10.1016/j.comcom.2025.108054
Zhexuan Yang , Xiao Qu , Zeng Chen , Guozi Sun
Abstract: In cross-institutional user authentication, users' personal information is often exposed to the risk of disclosure and abuse. Users should have the right to decide how their data is used, and others should not be able to use it without permission. This study adopts a user-centred framework in which users obtain authorization from different resource owners through qualification proofs, avoiding the dissemination of personal data. It develops a blockchain-based cross-institutional authorization architecture in which users obtain identity authentication between different entities by constructing transactions. A selective disclosure algorithm hides the user's private information during authentication; the authenticity of that information is verified by disclosing only the user's non-private information and authentication credentials. The architecture supports generating identity credentials of constant size based on atomic attributes. A prototype was built and tested on Ethereum. Experiments show that the combined user-information processing and verification time is about 80 ms, with very little fluctuation in processing time. The results show that the proposed data-flow scheme can effectively avoid privacy leakage in cross-institutional authentication at small cost.
Computer Communications, Vol. 233, Article 108054.
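The selective-disclosure idea can be sketched with salted hash commitments. This is a hedged illustration of the general technique, not COIChain's exact algorithm: commit to every attribute, publish only the commitments, then reveal a chosen subset of attributes (with their salts) during authentication.

```python
import hashlib
import secrets

# Commit-then-selectively-reveal: the verifier can check the revealed
# attributes against the published commitments without ever seeing the rest.

def commit(attrs):
    """Return (commitments, salts); the commitments are what gets published."""
    salts = {k: secrets.token_hex(8) for k in attrs}
    comms = {k: hashlib.sha256(f"{k}={v}|{salts[k]}".encode()).hexdigest()
             for k, v in attrs.items()}
    return comms, salts

def disclose(attrs, salts, reveal):
    """The user reveals only the chosen attributes, plus their salts."""
    return {k: (attrs[k], salts[k]) for k in reveal}

def verify(comms, disclosed):
    """Check each revealed value against its published commitment."""
    return all(hashlib.sha256(f"{k}={v}|{salt}".encode()).hexdigest() == comms[k]
               for k, (v, salt) in disclosed.items())

attrs = {"age_over_18": "true", "name": "Alice", "id_number": "12345"}
comms, salts = commit(attrs)
shown = disclose(attrs, salts, reveal=["age_over_18"])   # name and id stay hidden
print(verify(comms, shown))                                          # True
print(verify(comms, {"age_over_18": ("false", salts["age_over_18"])}))  # False
```

The salt prevents the verifier from brute-forcing small attribute domains from the commitment alone, which is what keeps the undisclosed attributes private.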
Citations: 0
Reinforcement learning based offloading and resource allocation for multi-intelligent vehicles in green edge-cloud computing
IF 4.5 | CAS Tier 3 | Computer Science
Computer Communications Pub Date : 2025-01-11 DOI: 10.1016/j.comcom.2025.108051
Liying Li , Yifei Gao , Peiwen Xia , Sijie Lin , Peijin Cong , Junlong Zhou
Abstract: The green edge-cloud computing (GECC) collaborative service architecture has become one of the mainstream frameworks for real-time, compute-intensive multi-intelligent-vehicle applications in intelligent transportation systems (ITS). In GECC systems, effective task offloading and resource allocation are critical to performance and efficiency. Existing work on task offloading and resource allocation for multi-intelligent vehicles in GECC systems focuses on static methods, which offload tasks once or a fixed number of times. This offloading manner can lead to low resource utilization due to congestion on edge servers and is unsuited to ITS whose parameters, such as bandwidth, change dynamically. To solve these problems, a dynamic task offloading and resource allocation method is presented that allows tasks to be offloaded an arbitrary number of times under time and resource constraints. Specifically, task characteristics are considered and a remaining model is proposed to obtain the states of vehicles and tasks in real time. A task offloading and resource allocation method considering both time and energy is then built on a designed real-time multi-agent deep deterministic policy gradient (RT-MADDPG) model. The approach can offload tasks an arbitrary number of times under resource and time constraints and dynamically adjusts the offloading and allocation solutions as the system state changes, maximizing a system utility that accounts for both task processing time and energy. Extensive simulation results indicate that RT-MADDPG effectively improves ITS utility compared with two benchmark methods.
Computer Communications, Vol. 232, Article 108051.
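The time-and-energy utility being maximized can be rendered as a toy weighted cost. This is our assumption for illustration, not the paper's exact formulation; the weights and numbers below are made up.

```python
# A utility that trades task processing time against vehicle energy use:
# higher is better, and the weights encode how much each objective matters.

def system_utility(times, energies, alpha=0.6, beta=0.4):
    """Negative weighted sum of mean processing time and mean energy."""
    t = sum(times) / len(times)
    e = sum(energies) / len(energies)
    return -(alpha * t + beta * e)

# Offloading everything may save time but spend transmission energy;
# the utility makes that trade-off explicit for the learning agent.
local = system_utility(times=[4.0, 5.0], energies=[1.0, 1.0])
offload = system_utility(times=[2.0, 2.5], energies=[1.5, 1.5])
print(offload > local)   # here, offloading is the better policy
```

In RT-MADDPG this scalar is (conceptually) what the reward signal is built from, so the learned policy balances latency against energy rather than optimizing either alone.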
Citations: 0
GNNetSlice: A GNN-based performance model to support network slicing in B5G networks
IF 4.5 | CAS Tier 3 | Computer Science
Computer Communications Pub Date : 2025-01-11 DOI: 10.1016/j.comcom.2025.108044
Miquel Farreras , Jordi Paillissé , Lluís Fàbrega , Pere Vilà
Abstract: Network slicing is gaining traction in Fifth Generation (5G) deployments and Beyond 5G (B5G) designs. In a nutshell, network slicing virtualizes a single physical network into multiple virtual networks, or slices, so that each slice provides the desired network performance to the set of traffic flows (source-destination pairs) mapped to it. The network performance, defined by specific Quality of Service (QoS) parameters (latency, jitter and losses), is tailored to different use cases, such as manufacturing, automotive or smart cities. A network controller determines whether a new slice request can be safely granted without degrading the performance of existing slices, so fast and accurate models are needed to allocate network resources to slices efficiently. Although there is a large body of work on network slicing modelling and resource allocation in the Radio Access Network (RAN), few works deal with the implementation and modelling of network slicing in the core and transport network.

This paper presents GNNetSlice, a model that predicts the performance of a given configuration of network slices and traffic requirements in the core and transport network. The model is built with Graph Neural Networks (GNNs), a kind of neural network specifically designed for data structured as graphs. A data-driven approach was chosen over classical modelling techniques, such as queuing theory or packet-level simulation, for its balance between prediction speed and accuracy. The paper details the structure of GNNetSlice and the dataset used for training, and shows that the model accurately predicts delay, jitter and losses across a wide range of scenarios, achieving a Symmetric Mean Absolute Percentage Error (SMAPE) of 5.22%, 1.95% and 2.04%, respectively.
Computer Communications, Vol. 232, Article 108044.
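The reported SMAPE metric is easy to state precisely. A common definition (assumed here; variants differ in the normalizing factor) divides each absolute error by the average magnitude of the true and predicted values.

```python
# SMAPE: symmetric because over- and under-prediction are normalized by the
# same denominator, unlike plain MAPE which divides by the true value only.

def smape(y_true, y_pred):
    """Symmetric Mean Absolute Percentage Error, in percent."""
    terms = [abs(p - t) / ((abs(t) + abs(p)) / 2)
             for t, p in zip(y_true, y_pred)
             if (abs(t) + abs(p)) > 0]          # skip degenerate 0/0 pairs
    return 100.0 * sum(terms) / len(terms)

# Example: delay predictions within a few percent of the measured values.
measured = [10.0, 20.0, 30.0]
predicted = [10.5, 19.0, 31.0]
print(round(smape(measured, predicted), 2))
```

A SMAPE of about 5% on delay, as the paper reports, thus means the predictions deviate from the measurements by roughly 5% of their magnitude on average.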
Citations: 0
AI-based malware detection in IoT networks within smart cities: A survey
IF 4.5 | CAS Tier 3 | Computer Science
Computer Communications Pub Date : 2025-01-10 DOI: 10.1016/j.comcom.2025.108055
Mustafa J.M. Alhamdi , Jose Manuel Lopez-Guede , Jafar AlQaryouti , Javad Rahebi , Ekaitz Zulueta , Unai Fernandez-Gamiz
Abstract: The exponential expansion of Internet of Things (IoT) applications has significantly pushed smart city development forward. Intelligent applications can enhance system efficiency, service quality and overall performance. Smart cities, intelligent transportation networks and other critical infrastructure are prime targets of cyberattacks. Such attacks can undermine the security of important government, commercial and personal information, placing privacy and confidentiality at risk. Multiple studies indicate that smart-city cyberattacks can cause millions of euros in financial losses through data compromise and loss. The importance of anomaly detection lies in its ability to identify and analyse illegitimate activity in IoT data. Unprotected, infected or suspicious devices can serve as entry points for intrusion attacks, which may spread to many machines within a network, interfering with the network's ability to serve customers privately and safely. The objective of this study is to survey procedures for detecting malware in the IoT using artificial intelligence (AI). To identify and prevent threats and malicious programs, current methodologies use AI algorithms such as support vector machines, decision trees and deep neural networks; existing studies proposing such methods are reviewed. Finally, the survey highlights open issues, including detection accuracy and the cost of security in terms of detection performance and energy consumption.
Computer Communications, Vol. 233, Article 108055.
Citations: 0
A two-stage federated learning method for personalization via selective collaboration
IF 4.5 | CAS Tier 3 | Computer Science
Computer Communications Pub Date : 2025-01-10 DOI: 10.1016/j.comcom.2025.108053
Jiuyun Xu , Liang Zhou , Yingzhi Zhao , Xiaowen Li , Kongshang Zhu , Xiangrui Xu , Qiang Duan , RuRu Zhang
Abstract: As an emerging distributed learning method, federated learning has recently received much attention. Traditional federated learning trains a global model on a decentralized dataset, but when data is unevenly distributed a single global model may not fit every client well; for some clients, purely local training can even outperform the global model. Against this background, clustering similar clients into the same group is a common approach. However, heterogeneity remains among clients within a group, and general clustering methods assume each client belongs to exactly one class, whereas in real-world scenarios the complexity of data distributions makes such a clean assignment difficult. To solve these problems, this paper proposes FedSC, a two-stage federated learning method for personalization via selective collaboration. Unlike previous clustering methods, it focuses on independently excluding, for each client, the other clients with significantly different distributions, removing the restriction that a client can belong to only one category. Collaborators are selected for each client according to how much they help the local objective, and a collaborative group is built for each client independently; every client runs federated learning only with its group members, avoiding negative knowledge transfer. Furthermore, FedSC performs finer-grained processing within each group, using an adaptive hierarchical fusion strategy for group and local models instead of directly overwriting local models as in the traditional approach. Extensive experiments show that the method considerably increases model performance under different heterogeneity scenarios.
Computer Communications, Vol. 232, Article 108053.
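The per-client collaborator selection can be sketched concretely. This is our schematic reading, not the authors' algorithm: compare clients' label distributions with a distance measure (total variation here) and keep, for each client, only those within a threshold, so groups are built per client and may overlap.

```python
# Per-client selective collaboration: each client independently excludes
# peers whose data distribution is too different from its own.

def tv_distance(p, q):
    """Total-variation distance between two discrete distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def select_collaborators(dists, me, threshold=0.3):
    """Indices of clients whose distribution is within threshold of client me."""
    return [j for j, d in enumerate(dists)
            if j != me and tv_distance(dists[me], d) <= threshold]

# Three clients over three classes; client 2 is heavily skewed.
dists = [
    [0.4, 0.4, 0.2],   # client 0
    [0.5, 0.3, 0.2],   # client 1: close to client 0
    [0.0, 0.1, 0.9],   # client 2: very different
]
print(select_collaborators(dists, me=0))   # client 1 only
print(select_collaborators(dists, me=2))   # no suitable collaborators
```

Note that the groups are not a partition: client 2 ends up collaborating with no one (training locally), while clients 0 and 1 each select the other, which matches the paper's point that a client need not belong to exactly one cluster.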
Citations: 0