Computer Networks: Latest Publications

Workload distribution with rateless encoding: A low-latency computation offloading method within edge networks
IF 4.4, CAS Q2 (Computer Science)
Computer Networks, Pub Date: 2025-06-01, DOI: 10.1016/j.comnet.2025.111381
Zhongfu Guo, Xinsheng Ji, Wei You, Hai Guo, Yang Zhang, Yu Zhao, Mingyan Xu, Yi Bai
Abstract: In the era of ubiquitous intelligence, user elements offload data-intensive computations to edge network computing clusters, leveraging the efficiency and reliability advantages of distributed computing. However, the delays and failures caused by stragglers significantly hinder system performance. Coded distributed computing combines coding theory with distributed computing, introducing effective redundant computations to accommodate stragglers. Yet current research often assumes a fixed number of stragglers with minimal redundancy, lacking a systematic design that considers the inherent heterogeneity in computation, communication, and storage across computing nodes. This paper introduces Rateless Encoding Distributed Computing (REDC), a comprehensive strategy for offloading randomly arriving computing tasks to a distributed cluster. REDC devises a rateless coding method for matrix multiplication, generating a continuous stream of redundant tasks to accommodate random node failures. The proposed queuing-theory model requires minimal feedback to update node statuses, dynamically adapting to fluctuations in cluster performance. Simulation results demonstrate that REDC effectively leverages the computing power of clusters with heterogeneous and time-varying characteristics, achieving a resource utilization rate of 93.11%. Moreover, REDC reduces task execution delays by 6.32% compared to the latest baseline, significantly reducing the sequential execution delays of computing tasks.
Citations: 0
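As a concrete illustration of the rateless idea in this abstract, the sketch below encodes a matrix-vector product as a stream of random linear combinations of rows, so the master can decode as soon as enough worker results return, regardless of which workers straggle. The coefficient distribution, task granularity, and straggler probability are illustrative assumptions, not REDC's actual design.

```python
# A minimal sketch of rateless (fountain-style) coded matrix-vector
# multiplication. Assumptions: dense Gaussian coefficients, row-level
# task granularity, and a 30% straggler probability.
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 4                      # A is m x n; m results suffice to decode
A = rng.normal(size=(m, n))
x = rng.normal(size=n)

def encode_task():
    """Emit one redundant task: a random linear combination of A's rows."""
    c = rng.normal(size=m)       # dense random coefficients (assumption)
    return c, c @ A              # the worker computes (c @ A) @ x

received_coeffs, received_vals = [], []
while len(received_vals) < m:    # keep streaming tasks until decodable
    c, row = encode_task()
    if rng.random() < 0.3:       # straggler: result never arrives,
        continue                 # so the master just issues more tasks
    received_coeffs.append(c)
    received_vals.append(row @ x)

C = np.array(received_coeffs)    # Gaussian m x m, invertible w.p. 1
y = np.linalg.solve(C, np.array(received_vals))  # decode A @ x
assert np.allclose(y, A @ x)
print("decoded A @ x from", len(received_vals), "returned tasks")
```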
Optimization of 5G base station deployment based on quantum genetic algorithm in outdoor 3D map
IF 4.4, CAS Q2 (Computer Science)
Computer Networks, Pub Date: 2025-06-01, DOI: 10.1016/j.comnet.2025.111431
Jianpo Li, Jinjian Pang, Binfeng Jiang, Qi Xu, Enyuan Zhang
Abstract: To solve the problems of unreasonable deployment and high construction costs caused by the rapid increase in fifth-generation (5G) base stations, this article proposes a 5G base station deployment optimization method that weighs coverage against cost for selected areas in Kowloon, Hong Kong. Initially, we utilize three-dimensional (3D) maps and ray-tracing models to simulate signal propagation, incorporating population density data to randomly distribute users across the streets of the map. We select suitable candidate locations for building base stations on the ground and on rooftops, and set restrictions on the height of base station towers. Reusing existing base station locations is considered to reduce construction costs. Moreover, we propose a dynamically adjusted quantum genetic algorithm (DAQGA) to optimize the base station layout, with coverage and construction cost as objective functions. A signal reception strength metric is introduced to evaluate the effectiveness of the optimal layout. Simulation results demonstrate that this optimization method effectively identifies coverage blind spots within the planning area and reveals connectivity issues caused by building obstructions or areas beyond coverage. The method achieves an optimal balance in base station deployment when the coverage and cost weights are set at 0.7 and 0.3, respectively. Compared to four other algorithms, the proposed algorithm shows significant advantages in convergence speed and stability.
Citations: 0
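The abstract's weighted objective (0.7 coverage, 0.3 cost) lends itself to a compact illustration. The sketch below runs a plain quantum genetic algorithm: qubit angles encode per-site selection probabilities, measurement yields a candidate placement, and a rotation-gate update nudges the angles toward the best individual found so far. The flat grid, coverage radius, and fixed rotation step are assumptions; the paper's DAQGA adjusts such parameters dynamically over a 3D ray-traced map.

```python
# A toy quantum genetic algorithm for base-station site selection with
# fitness = 0.7 * coverage - 0.3 * normalized cost, as in the abstract.
import numpy as np

rng = np.random.default_rng(1)
n_sites, pop = 12, 20
sites = rng.uniform(0, 100, size=(n_sites, 2))   # candidate locations
users = rng.uniform(0, 100, size=(200, 2))       # user positions
cost = rng.uniform(0.5, 1.5, size=n_sites)       # per-site build cost

def fitness(mask):
    if not mask.any():
        return -1.0
    d = np.linalg.norm(users[:, None] - sites[None, mask], axis=2)
    coverage = (d.min(axis=1) < 15).mean()       # 15-unit radius (assumed)
    return 0.7 * coverage - 0.3 * cost[mask].sum() / cost.sum()

theta = np.full((pop, n_sites), np.pi / 4)       # qubit angles: P(1)=sin^2
best_mask, best_fit = None, -np.inf
for gen in range(60):
    measured = rng.random((pop, n_sites)) < np.sin(theta) ** 2
    fits = np.array([fitness(m) for m in measured])
    if fits.max() > best_fit:
        best_fit, best_mask = fits.max(), measured[fits.argmax()].copy()
    # rotation gate: nudge each qubit toward the best individual's bit
    delta = 0.05 * np.where(best_mask[None, :], 1.0, -1.0)
    theta = np.clip(theta + delta, 0.01, np.pi / 2 - 0.01)

print(f"best fitness {best_fit:.3f}, sites used {best_mask.sum()}")
```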
DRLO: Optimizing edge server placement in dynamic MEC scenarios using deep reinforcement learning
IF 4.4, CAS Q2 (Computer Science)
Computer Networks, Pub Date: 2025-05-31, DOI: 10.1016/j.comnet.2025.111377
Yingya Guo, Cen Chen
Abstract: As an emerging computing paradigm, Mobile Edge Computing (MEC) significantly enhances user experience and alleviates network congestion by strategically deploying edge servers in close proximity to mobile users. However, the effectiveness of MEC hinges on the precise placement of these edge servers, a critical factor in determining the Quality of Experience (QoE) for mobile users. While existing studies predominantly focus on optimizing edge server placement in static scenarios, they often fall short when faced with user mobility, resulting in degraded QoE. To address this challenge, we propose an adaptive edge server placement approach that leverages Deep Reinforcement Learning (DRL) to select the base stations that host edge servers in a dynamic MEC environment. Our objective is to minimize access delay by optimizing edge server placement so that it adapts to the dynamic environment. To tackle the vast action space associated with edge server placement, we introduce a novel activation function in the actor neural network for efficient exploration. Furthermore, to enhance the adaptability of the derived placement strategy, we design a new reward function that accounts for minimizing the total access delay in dynamic MEC scenarios. Finally, to validate the effectiveness of the proposed method, extensive experiments were conducted on the Shanghai Telecom dataset. The results demonstrate that our approach outperforms baseline methods in minimizing access delay for users in dynamic MEC scenarios.
Citations: 0
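To make the optimization target concrete, the sketch below models the environment interface suggested by the abstract: an action selects K base stations to host edge servers, and the reward is the negative total access delay, here proxied by user-to-server distance. A random-search baseline stands in for the DRL agent; the actor network and its novel activation function are the paper's contribution and are not reproduced here.

```python
# A minimal placement environment: reward = -(total access delay).
# Distances stand in for delay; coordinates and K are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_bs, k = 30, 5
bs = rng.uniform(0, 10, size=(n_bs, 2))       # base-station coordinates
users = rng.uniform(0, 10, size=(500, 2))     # one time step of user positions

def reward(placement):
    """Negative total access delay for a set of K base-station indices."""
    d = np.linalg.norm(users[:, None] - bs[placement][None], axis=2)
    return -d.min(axis=1).sum()               # each user hits nearest server

# Random search over the combinatorial action space, to show the
# interface an RL agent would optimize against.
best, best_r = None, -np.inf
for _ in range(2000):
    a = rng.choice(n_bs, size=k, replace=False)
    r = reward(a)
    if r > best_r:
        best, best_r = a, r

print("best placement", sorted(best), "total delay", -best_r)
```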
Trajectory design of cellular-connected UAV patrol and mobile edge computing system: A deep reinforcement learning approach
IF 4.4, CAS Q2 (Computer Science)
Computer Networks, Pub Date: 2025-05-31, DOI: 10.1016/j.comnet.2025.111384
Zhijie Wang, Wei Zhang, Dingcheng Yang, Fahui Wu, Yu Xu, Lin Xiao
Abstract: This paper investigates an Unmanned Aerial Vehicle (UAV)-based detection system deployed in an urban environment. Cellular-network-connected UAVs collect data from multiple inspection points scattered among urban buildings and upload the data to a ground base station (GBS). Our goal is to minimize the energy consumption of the UAVs while accomplishing the data-uploading task by jointly designing the UAV inspection sequence, path planning, and correlation rate. To solve this intractable non-convex problem, we propose the EEGA-TD3 algorithm. First, an adaptive genetic algorithm is proposed to obtain the optimal inspection sequence by considering the energy consumption and throughput of the UAV and the transmission task. Subsequently, we utilize the twin delayed deep deterministic policy gradient (TD3) algorithm to optimize the UAV flight trajectory. The scheme realizes continuous control and guides the UAV to dynamically adjust its flight strategy according to the amount of data. Simulation results show that, compared to traditional algorithms, the proposed algorithm flexibly selects trajectories that accomplish the data transmission task while saving energy.
Citations: 0
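The first stage of this pipeline, ordering the inspection points, can be sketched with a standard genetic algorithm over tours, using flight distance as an energy proxy. The fixed crossover and mutation rates below are simplifying assumptions; the paper's adaptive GA also weighs throughput and the transmission task, and the TD3 trajectory stage is omitted entirely.

```python
# A toy GA that orders inspection points to shorten UAV travel distance.
import numpy as np

rng = np.random.default_rng(3)
pts = rng.uniform(0, 1000, size=(10, 2))          # inspection points (m)

def tour_len(order):
    p = pts[order]
    return np.linalg.norm(np.diff(p, axis=0), axis=1).sum()

def ox_crossover(a, b):
    """Order crossover: keep a slice of parent a, fill the rest from b."""
    i, j = sorted(rng.choice(len(a), 2, replace=False))
    child = [-1] * len(a)
    child[i:j] = a[i:j]
    rest = [g for g in b if g not in child]
    for k in range(len(a)):
        if child[k] == -1:
            child[k] = rest.pop(0)
    return np.array(child)

pop = [rng.permutation(len(pts)) for _ in range(40)]
for gen in range(200):
    pop.sort(key=tour_len)
    elite = pop[:10]                              # keep the 10 best tours
    children = []
    while len(children) < 30:
        a, b = rng.choice(10, 2, replace=False)
        c = ox_crossover(elite[a], elite[b])
        if rng.random() < 0.2:                    # swap mutation (fixed rate)
            i, j = rng.choice(len(c), 2, replace=False)
            c[i], c[j] = c[j], c[i]
        children.append(c)
    pop = elite + children

print(f"best tour length: {tour_len(pop[0]):.0f} m")
```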
Accelerating point cloud analytics on resource-constrained edge devices
IF 4.4, CAS Q2 (Computer Science)
Computer Networks, Pub Date: 2025-05-30, DOI: 10.1016/j.comnet.2025.111382
Jingzong Li, Yik Hong Cai, Libin Liu, Yu Mao, Chun Jason Xue, Hong Xu
Abstract: 3D object detection is crucial in various applications, particularly autonomous driving and robotics. These applications typically run on edge devices to interact quickly with the environment and often require nearly instantaneous reactions. Executing 3D detection on the edge with complex neural networks is daunting due to constrained computational resources, and conventional remedies such as offloading to the cloud incur substantial delays because of the large volume of point cloud data transmitted. To resolve the conflict between constrained edge devices and demanding inference tasks, we investigate the potential of using rapid 2D detection to extrapolate 3D bounding boxes. To this end, we introduce Moby, a system that demonstrates the practicality and promise of this approach. We propose a lightweight transformation that efficiently and accurately produces 3D bounding boxes from 2D detection results, eliminating the need for heavy 3D detectors. In addition, we develop a frame-offloading scheduler that determines the optimal timing to activate the 3D detector in the cloud, preventing the accumulation of errors. Our evaluation on the NVIDIA Jetson TX2 with a real autonomous-driving dataset shows that Moby improves latency by up to 91.9% with only a minimal decrease in accuracy.
Citations: 0
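The scheduler's role is easy to picture: local 2D-to-3D lifting is cheap but drifts, so the system should only pay for a cloud 3D detection when accumulated drift threatens accuracy. The sketch below captures that control loop with an assumed linear drift model and threshold; Moby's actual policy is more sophisticated than this.

```python
# A minimal frame-offloading control loop: run cheap 2D->3D lifting per
# frame, track an estimated error that grows with scene motion, and
# invoke the cloud 3D detector only when the estimate crosses a budget.
# The drift model and threshold are illustrative assumptions.
import random

random.seed(4)
ERROR_BUDGET = 1.0      # max tolerated accumulated drift (assumed unit)
error = 0.0
offloads = 0

for frame in range(100):
    motion = random.uniform(0.0, 0.3)   # per-frame scene-change estimate
    error += motion                     # extrapolation error accumulates
    if error > ERROR_BUDGET:
        # ship this frame's point cloud to the cloud 3D detector and
        # re-anchor the local extrapolator on the fresh 3D boxes
        offloads += 1
        error = 0.0
    # else: 2D detector + lightweight 2D->3D lifting runs on the edge

print(f"offloaded {offloads}/100 frames to the cloud 3D detector")
```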
Schedulability analysis of time aware shaper with preemption supported in time-sensitive networks
IF 4.4, CAS Q2 (Computer Science)
Computer Networks, Pub Date: 2025-05-30, DOI: 10.1016/j.comnet.2025.111424
Feng Luo, Lei Zhu, Zitong Wang, Haotian Gan, Yunpeng Li, Zhenyu Yang, Dengcheng Liu
Abstract: To establish a unified networking technology for time- and safety-critical applications such as industrial control systems, the Time-Sensitive Networking (TSN) Working Group has proposed a series of protocols that introduce new features for TSN-enabled switches and end stations. Notably, the IEEE 802.1Qbv and 802.1Qbu standards introduce the Time-Aware Shaper (TAS) and frame preemption mechanisms, which together provide low-latency guarantees for time-sensitive traffic. Deterministic communication is a fundamental requirement for real-time critical systems, yet jitter at end stations remains unavoidable under certain circumstances, which necessitates establishing worst-case latency bounds for all network flows. Against this background, this paper presents a formal timing-analysis method for TAS networks with preemption under different configurations. The methodology is then refined to account for the impact of multi-hop architectures. Finally, a scheduler is built to determine the delay upper bound, and the approach is validated through comparative evaluations against OMNeT++ simulation results in three distinct scenarios.
Citations: 0
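A rough per-hop intuition for the kind of bound the paper formalizes: under frame preemption, an express frame waits at most for one non-preemptable residue of the frame already in transmission, plus any higher-priority express traffic queued ahead of it, plus its own transmission time. The sketch below computes this simplified bound; the residue size is a placeholder assumption rather than a value taken from the standard, and the gate windows (802.1Qbv) and multi-hop refinements that the paper analyzes are omitted.

```python
# A simplified per-hop worst-case latency bound for an express frame
# under frame preemption. All sizes and the residue are assumptions.

def tx_time_us(size_bytes, link_bps=1_000_000_000):
    """Transmission time in microseconds on a 1 Gbit/s link,
    including an assumed 20 B preamble/IPG overhead."""
    return (size_bytes + 20) * 8 / link_bps * 1e6

def wc_hop_delay_us(frame_b, higher_prio_frames_b, residue_b=124):
    """Worst case = one non-preemptable residue still on the wire
    + all higher-priority frames arriving just before ours
    + our own transmission time."""
    blocking = tx_time_us(residue_b)
    interference = sum(tx_time_us(b) for b in higher_prio_frames_b)
    return blocking + interference + tx_time_us(frame_b)

# Example: a 300 B control frame behind two 200 B express frames.
print(f"{wc_hop_delay_us(300, [200, 200]):.2f} us per hop (upper bound)")
```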
ROEN: Universal dynamic topology-adaptive evolution model for multi-modal mixed network traffic detection
IF 4.4, CAS Q2 (Computer Science)
Computer Networks, Pub Date: 2025-05-29, DOI: 10.1016/j.comnet.2025.111380
Linghao Ren, Sijia Wang, Shengwei Zhong, Yiyuan Li, Bo Tang
Abstract: Modern network traffic detection systems face significant challenges in accurately classifying sophisticated cyber attacks. Traditional approaches relying on static traffic features (e.g., port numbers and packet sizes) prove inadequate for capturing the dynamic topological evolution inherent in Advanced Persistent Threats (APTs) and complex intrusions. This limitation stems from overlooking temporal correlations and structural dynamics within network traffic flows. Our investigation identifies this oversight as the primary cause of suboptimal performance in multi-modal traffic recognition, hybrid attack detection, and analysis with incomplete or anomalous data. To address this critical gap, we propose a novel dynamic-topology-based method that quantifies evolving network structures through traffic pattern distribution transformations. Departing from traditional attention-based anomaly detection paradigms, our streamlined architecture introduces a dual-thread framework with multi-level feature fusion. This design integrates explicit statistical features with implicit dynamic topology information, improving intrusion detection accuracy while reducing computational complexity. By modeling the intrinsic interactions between statistical and topological characteristics, our method reveals latent intrusion patterns through three key innovations: (1) quantitative modeling of network topological dynamics, (2) a lightweight dual-thread architecture for efficient feature fusion, and (3) robust detection mechanisms under data scarcity. To our knowledge, this is the first universal network intrusion detection framework that efficiently combines dynamic topological analysis with conventional statistical features. Extensive benchmark evaluations demonstrate state-of-the-art performance, with improvements of 5.8% in AUC and 7.2% in macro-averaged AUC over existing methods at 23% lower computational overhead. Our solution establishes a foundation for next-generation intrusion detection systems, providing a generalizable and resource-efficient approach to countering evolving cyber threats. The code and dataset are available at https://github.com/vjkgll/ROEN.git.
Citations: 0
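The dual-branch fusion idea can be sketched independently of the model details: one branch summarizes per-window flow statistics, the other summarizes the topology of the source-destination graph built from the same window, and the two vectors are concatenated for a downstream classifier. The specific features below are illustrative guesses, not ROEN's actual feature set; see the repository above for the real implementation.

```python
# A minimal two-branch feature extractor over a window of flows.
import numpy as np

rng = np.random.default_rng(7)
# synthetic window of flows: (src_id, dst_id, bytes, duration_s)
flows = np.column_stack([
    rng.integers(0, 20, 50), rng.integers(0, 20, 50),
    rng.lognormal(8, 1, 50), rng.uniform(0.01, 2.0, 50),
])

def statistical_branch(f):
    """Explicit statistics of the window: volume and rate summaries."""
    byts, dur = f[:, 2], f[:, 3]
    return np.array([byts.mean(), byts.std(), dur.mean(), dur.std(),
                     (byts / dur).mean()])

def topology_branch(f, n_nodes=20):
    """Implicit structure: degree spread and density of the flow graph."""
    adj = np.zeros((n_nodes, n_nodes))
    for s, d, *_ in f:
        adj[int(s), int(d)] += 1
    out_deg, in_deg = adj.sum(1), adj.sum(0)
    return np.array([out_deg.max(), out_deg.std(),   # fan-out spread
                     in_deg.max(), in_deg.std(),     # fan-in concentration
                     (adj > 0).sum() / n_nodes**2])  # edge density

fused = np.concatenate([statistical_branch(flows), topology_branch(flows)])
print("fused feature vector:", np.round(fused, 2))
```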
CNN-DAG-Editor: A Convolutional Neural Network offloading analyzer with Multi-Objective Dynamic Adaptive Resource Competitive Swarm Optimization
IF 4.4, CAS Q2 (Computer Science)
Computer Networks, Pub Date: 2025-05-27, DOI: 10.1016/j.comnet.2025.111374
Bobo Ju, Yang Liu, Jing Liu, Peng Sun, Liang Song
Abstract: With the rapid development of artificial intelligence applications on mobile devices, there are increasing demands to optimize the runtime, energy consumption, and cost-effectiveness of Convolutional Neural Networks (CNNs), objectives that often cannot be optimized simultaneously in real-world applications. The most effective way to enhance CNN performance on mobile devices is CNN offloading, yet existing research often considers only a single network architecture with a single optimization objective, without treating runtime, energy consumption, and cost-effectiveness as a multi-objective optimization problem. In this paper, we propose a CNN offloading analysis tool called CNN-DAG-Editor and introduce a Multi-Objective Dynamic Adaptive Resource Competitive Swarm Optimization (MDARCSO) algorithm within it for optimizing CNN offloading across devices, edge servers, and cloud servers. Experiments show that our Edge-Cloud-Server Collaborative Offloading (ECESOPS) strategy, based on MDARCSO, outperforms strategies such as the No Offloading Policy (NOPS), Cloud-Server Full Offloading Policy (CFOPS), and Hybrid Offloading Policy (HOPSO) in fitness performance, task energy consumption, and leasing costs. Furthermore, to verify the performance of MDARCSO itself, we compared it with six state-of-the-art large-scale multi-objective evolutionary algorithms (LSMOEAs) on the public LSMOP benchmark; the results demonstrate that MDARCSO achieves the best overall performance.
Citations: 0
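The three competing objectives can be made concrete with a toy evaluator: given an assignment of CNN layers to device, edge, or cloud, score runtime, device energy, and leasing cost, and keep the Pareto-optimal plans. All per-layer workloads, tier speeds, prices, and link delays below are invented for illustration; MDARCSO searches this kind of space at a much larger scale than the exhaustive enumeration used here.

```python
# A toy three-objective evaluator for CNN layer offloading plans.
import itertools
import numpy as np

FLOPS = np.array([2e9, 4e9, 4e9, 1e9])          # per-layer work (assumed)
SPEED = {0: 5e9, 1: 5e10, 2: 5e11}              # device/edge/cloud FLOP/s
HOP_DELAY = {(0, 1): 0.02, (1, 2): 0.05}        # s per tier boundary crossed
POWER_W = 2.0                                   # device compute power draw
PRICE = {0: 0.0, 1: 1e-10, 2: 3e-11}            # leasing $ per FLOP

def objectives(assign):                          # assign: tier per layer
    runtime = sum(f / SPEED[t] for f, t in zip(FLOPS, assign))
    for a, b in zip(assign, assign[1:]):         # transfers between tiers
        for lo in range(min(a, b), max(a, b)):
            runtime += HOP_DELAY[(lo, lo + 1)]
    energy = sum(f / SPEED[0] * POWER_W
                 for f, t in zip(FLOPS, assign) if t == 0)
    cost = sum(f * PRICE[t] for f, t in zip(FLOPS, assign))
    return runtime, energy, cost

def dominates(u, v):
    return all(a <= b for a, b in zip(u, v)) and u != v

cand = {a: objectives(a) for a in itertools.product((0, 1, 2), repeat=4)}
front = [a for a, u in cand.items()
         if not any(dominates(v, u) for v in cand.values())]
print(len(front), "Pareto-optimal offloading plans, e.g.", front[0])
```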
Blockchain-enabled dispersed computing paradigm in Web 3.0 metaverse
IF 4.4, CAS Q2 (Computer Science)
Computer Networks, Pub Date: 2025-05-27, DOI: 10.1016/j.comnet.2025.111378
Zhonghui Wu, Changqiao Xu, Mu Wang, Yunxiao Ma, Zicong Huang, Jingtian Liu, Han Xiao, Lujie Zhong, Luigi Alfredo Grieco
Abstract: The metaverse is rapidly gaining momentum thanks to its inherent ability to create an immersive virtual environment that runs in parallel to the physical world. At the same time, it raises new technical challenges due to its unique characteristics. On the one hand, metaverse applications are composed of multiple computing sub-tasks, and performing these sub-tasks sequentially hinders computational efficiency. On the other hand, the current centralized task offloading for metaverse applications is contrary to the core concept of Web 3.0, and fair incentives for all participants are not fully considered. To address these issues, a new computing paradigm for the metaverse is required. Hence, we propose a Blockchain-enabled Intelligent Dispersed Computing Framework (BIDC). In this paper, we first design a two-layered architecture and model the sub-tasks as a directed acyclic graph (DAG) based on their dependency relations. Inspired by the interconnection of blocks in a blockchain, BIDC transforms the execution of sub-tasks into a mining process, integrating task computation with mining. On this basis, a mining mechanism and a main-chain confirmation mechanism are presented to ensure the efficiency of task offloading and the fairness of reward distribution. BIDC then transforms the overhead-time minimization problem into a multi-party mining problem. By leveraging actor-critic-based multi-agent reinforcement learning, every device can dynamically adjust its own mining strategy to achieve the lowest latency. Finally, experimental results demonstrate BIDC's reliability, scalability, and superior service quality compared to existing state-of-the-art solutions.
Citations: 0
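The DAG modeling step can be illustrated without the blockchain layer: sub-tasks with dependency edges are dispatched to whichever node frees up first, and the resulting makespan shows the gain over sequential execution. The task durations and the two-node cluster below are assumptions; the mining, main-chain confirmation, and multi-agent RL mechanisms are out of scope for this sketch.

```python
# A minimal dependency-aware dispatcher for a DAG of sub-tasks.
import heapq

# sub-task -> (duration_s, dependencies); values are assumptions
DAG = {"render": (2.0, []), "physics": (1.5, []),
       "fuse": (1.0, ["render", "physics"]), "encode": (0.5, ["fuse"])}

done, finish = set(), {}
nodes = [(0.0, i) for i in range(2)]   # (free_at, node_id), 2 workers
heapq.heapify(nodes)
pending = dict(DAG)
while pending:
    # dispatch every task whose dependencies have completed
    ready = [t for t, (_, deps) in pending.items()
             if all(d in done for d in deps)]
    for task in sorted(ready):         # deterministic dispatch order
        dur, deps = pending.pop(task)
        free_at, node = heapq.heappop(nodes)
        start = max(free_at, max((finish[d] for d in deps), default=0.0))
        finish[task] = start + dur
        done.add(task)
        heapq.heappush(nodes, (finish[task], node))

print("makespan:", max(finish.values()), "s")   # vs 5.0 s sequentially
```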
Task offloading and resource allocation in hybrid-powered WPT MEC system: An enhanced deep reinforcement learning method
IF 4.4, CAS Q2 (Computer Science)
Computer Networks, Pub Date: 2025-05-27, DOI: 10.1016/j.comnet.2025.111312
Ziqi Liu, Gaochao Xu, Bo Liu, Xu Xu, Long Li
Abstract: The integration of mobile edge computing (MEC) and wireless power transfer (WPT) technologies presents a transformative approach to overcoming the energy limitations of wireless devices (WDs), thereby enhancing both the sustainability and operational efficiency of mobile networks. This paper introduces a novel green-prioritized hybrid energy supply system that harnesses both renewable and grid energy, aiming to optimize energy use and computational power in mobile networks under dynamic conditions. Specifically, we formulate a long-term average grid energy minimization problem (LAGEMP) to reduce grid energy consumption while maintaining robust and efficient network operation. To solve the complex and dynamic LAGEMP, we propose an action-space reduction scheme and an enhanced deep deterministic policy gradient (EDDPG) algorithm that incorporates the cross-entropy method (CEM). These enhancements not only reduce the computational load but also speed up the convergence of network training, optimizing both energy usage and task offloading strategies. Simulation results reveal that EDDPG significantly outperforms existing strategies and algorithms, achieving near-optimal task offloading efficiency with reduced grid energy.
Citations: 0
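The CEM enhancement the abstract describes can be sketched in isolation: rather than acting on the actor's raw output, sample actions around it, keep the elites under the critic's Q-estimate, and refit the sampling distribution. The quadratic stand-in critic and all hyperparameters below are illustrative; in EDDPG this loop would sit on top of a trained actor-critic pair.

```python
# A minimal cross-entropy-method refinement of a continuous action,
# scored by a toy quadratic "critic" whose optimum is known.
import numpy as np

rng = np.random.default_rng(10)
target = np.array([0.3, -0.7])                 # unknown optimum of toy Q

def critic_q(actions):
    """Stand-in for Q(s, a): higher is better, peaked at `target`."""
    return -np.sum((actions - target) ** 2, axis=1)

mu = np.zeros(2)                               # actor's proposed action
sigma = np.full(2, 0.5)
for it in range(20):                           # CEM refinement loop
    samples = rng.normal(mu, sigma, size=(64, 2)).clip(-1, 1)
    elites = samples[np.argsort(critic_q(samples))[-8:]]   # top 8 by Q
    mu, sigma = elites.mean(0), elites.std(0) + 1e-3       # refit

print("refined action:", np.round(mu, 3), "true optimum:", target)
```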