{"title":"Multi-Agent Systems for Collaborative Inference Based on Deep Policy Q-Inference Network","authors":"Shangshang Wang, Yuqin Jing, Kezhu Wang, Xue Wang","doi":"10.1007/s10723-024-09750-w","DOIUrl":"https://doi.org/10.1007/s10723-024-09750-w","url":null,"abstract":"<p>This study tackles the problem of increasing efficiency and scalability in deep neural network (DNN) systems by employing collaborative inference, an approach that is gaining popularity because to its ability to maximize computational resources. It involves splitting a pre-trained DNN model into two parts and running them separately on user equipment (UE) and edge servers. This approach is advantageous because it results in faster and more energy-efficient inference, as computation can be offloaded to edge servers rather than relying solely on UEs. However, a significant challenge of collaborative belief is the dynamic coupling of DNN layers, which makes it difficult to separate and run the layers independently. To address this challenge, we proposed a novel approach to optimize collaborative inference in a multi-agent scenario where a single-edge server coordinates the assumption of multiple UEs. Our proposed method suggests using an autoencoder-based technique to reduce the size of intermediary features and constructing tasks using the deep policy inference Q-inference network’s overhead (DPIQN). To optimize the collaborative inference, employ the Deep Recurrent Policy Inference Q-Network (DRPIQN) technique, which allows for a hybrid action space. The results of the tests demonstrate that this approach can significantly reduce inference latency by up to 56% and energy usage by up to 72% on various networks. Overall, this proposed approach provides an efficient and effective method for implementing collaborative inference in multi-agent scenarios, which could have significant implications for developing DNN systems.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"77 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140003991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dueling Double Deep Q Network Strategy in MEC for Smart Internet of Vehicles Edge Computing Networks","authors":"Haotian Pang, Zhanwei Wang","doi":"10.1007/s10723-024-09752-8","DOIUrl":"https://doi.org/10.1007/s10723-024-09752-8","url":null,"abstract":"<p>Advancing in communication systems requires nearby devices to act as networks when devices are not in use. Such technology is mobile edge computing, which provides enormous communication services in the network. In this research, we explore a multiuser smart Internet of Vehicles (IoV) network with mobile edge computing (MEC) assistance, where the first edge server can assist in completing the intense computing jobs from the vehicular users. Many currently available works for MEC networks primarily concentrate on minimising system latency to ensure the quality of service (QoS) for users by designing some offloading strategies. Still, they need to account for the retail prices from the server and, as a result, the budgetary constraints of the users. To solve this problem, we present a Dueling Double Deep Q Network (D3QN) with an Optimal Stopping Theory (OST) strategy that helps to solve the multi-task joint edge problems and minimises the offloading problems in MEC-based IoV networks. The multi-task-offloading model aims to increase the likelihood of offloading to the ideal servers by utilising the OST characteristics. Lastly, simulators show how the proposed methods perform better than the traditional ones. The findings demonstrate that the suggested offloading techniques may be successfully applied in mobile nodes and significantly cut the anticipated time required to process the workloads.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"34 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140004046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Work Scheduling in Cloud Network Based on Deep Q-LSTM Models for Efficient Resource Utilization","authors":"Yanli Xing","doi":"10.1007/s10723-024-09746-6","DOIUrl":"https://doi.org/10.1007/s10723-024-09746-6","url":null,"abstract":"<p>Edge computing has emerged as an innovative paradigm, bringing cloud service resources closer to mobile consumers at the network's edge. This proximity enables efficient processing of computationally demanding and time-sensitive tasks. However, the dynamic nature of the edge network, characterized by a high density of devices, diverse mobile usage patterns, a wide range of applications, and sporadic traffic, often leads to uneven resource distribution. This imbalance hampers system efficiency and contributes to task failures. To overcome these challenges, we propose a novel approach known as the DRL-LSTM approach, which combines Deep Reinforcement Learning (DRL) with Long Short-Term Memory (LSTM) architecture. The primary objective of the DRL-LSTM approach is to optimize workload planning in edge computing environments. Leveraging the capabilities of DRL, this approach effectively handles complex and multidimensional workload planning problems. By incorporating LSTM as a recurrent neural network, it captures and models temporal dependencies in sequential data, enabling efficient workload management, reduced service time, and enhanced task completion rates. Additionally, the DRL-LSTM approach integrates Deep-Q-Network (DQN) algorithms to address the complexity and high dimensionality of workload scheduling problems. Through simulations, we demonstrate that the DRL-LSTM approach outperforms alternative approaches regarding service time, virtual machine (VM) utilization, and the rate of failed tasks. The integration of DRL and LSTM enables the process to effectively tackle the challenges associated with workload planning in edge computing, leading to improved system performance. The proposed DRL-LSTM approach offers a promising solution for optimizing workload planning in edge computing environments. Combining the power of Deep Reinforcement Learning, Long Short-Term Memory architecture, and Deep-Q-Network algorithms facilitates efficient resource allocation, reduces service time, and increases task completion rates. It holds significant potential for enhancing the overall performance and effectiveness of edge computing systems.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"29 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140004255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic Multi-Resource Fair Allocation with Elastic Demands","authors":"Hao Guo, Weidong Li","doi":"10.1007/s10723-024-09754-6","DOIUrl":"https://doi.org/10.1007/s10723-024-09754-6","url":null,"abstract":"<p>In this paper, we study dynamic multi-resource maximin share fair allocation based on the elastic demands of users in a cloud computing system. In this problem, users do not stay in the computing system all the time. Users are assigned resources only if they stay in the system. To further improve the utilization of resources, the model in this paper allows users to dynamically select the method of processing tasks based on the resources allocated to each time slot. For this problem, we propose a mechanism called maximin share fairness with elastic demands (MMS-ED) in a cloud computing system. We prove theoretically that the allocation returned by the mechanism is a Lorenz-dominating allocation, that the allocation satisfies the cumulative maximin share fairness, and that the mechanism is Pareto efficiency, proportionality, and strategy-proofness. Within a specific setting, MMS-ED performs better, and it also satisfies another desirable property weighted envy-freeness. In addition, we designed an algorithm to realize this mechanism, conducted simulation experiments with Alibaba cluster traces, and we analyzed the impact from three perspectives of elastic demand and cumulative fairness. The experimental results show that the MMS-ED mechanism performs better than do the other three similar mechanisms in terms of resource utilization and user utility; moreover, the introduction of elastic demand and cumulative fairness can effectively improve resource utilization.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"3 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140004158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Joint Task Offloading Based on Distributed Deep Reinforcement Learning-Based Genetic Optimization Algorithm for Internet of Vehicles","authors":"Hulin Jin, Yong-Guk Kim, Zhiran Jin, Chunyang Fan, Yonglong Xu","doi":"10.1007/s10723-024-09741-x","DOIUrl":"https://doi.org/10.1007/s10723-024-09741-x","url":null,"abstract":"<p>The growing number of individual vehicles and intelligent transportation systems have accelerated the development of Internet of Vehicles (IoV) technologies. The Internet of Vehicles (IoV) refers to a highly interactive network containing data regarding places, speeds, routes, and other aspects of vehicles. Task offloading was implemented to solve the issue that the current task scheduling models and tactics are primarily simplistic and do not consider the acceptable distribution of tasks, which results in a poor unloading completion rate. This work evaluates the Joint Task Offloading problem by Distributed Deep Reinforcement Learning (DDRL)-Based Genetic Optimization Algorithm (GOA). A system’s utility optimisation model is initially accomplished objectively using divisions between interaction and computation models. DDRL-GOA resolves the issue to produce the best task offloading method. The research increased job completion rates by modifying the complexity design and universal best-case scenario assurances using DDRL-GOA. Finally, empirical research is performed to validate the proposed technique in scenario development. We also construct joint task offloading, load distribution, and resource allocation to lower system costs as integer concerns. In addition to having a high convergence efficiency, the experimental results show that the proposed approach has a substantially lower system cost when compared to current methods.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"46 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139969509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Decentralized AI-Based Task Distribution on Blockchain for Cloud Industrial Internet of Things","authors":"Amir Javadpour, Arun Kumar Sangaiah, Weizhe Zhang, Ankit Vidyarthi, HamidReza Ahmadi","doi":"10.1007/s10723-024-09751-9","DOIUrl":"https://doi.org/10.1007/s10723-024-09751-9","url":null,"abstract":"<p>This study presents an environmentally friendly mechanism for task distribution designed explicitly for blockchain Proof of Authority (POA) consensus. This approach facilitates the selection of virtual machines for tasks such as data processing, transaction verification, and adding new blocks to the blockchain. Given the current lack of effective methods for integrating POA blockchain into the Cloud Industrial Internet of Things (CIIoT) due to their inefficiency and low throughput, we propose a novel algorithm that employs the Dynamic Voltage and Frequency Scaling (DVFS) technique, replacing the periodic transaction authentication process among validator candidates. Managing computer power consumption becomes a critical concern, especially within the Internet of Things ecosystem, where device power is constrained, and transaction scalability is crucial. Virtual machines must validate transactions (tasks) within specific time frames and deadlines. The DVFS technique efficiently reduces power consumption by intelligently scheduling and allocating tasks to virtual machines. Furthermore, we leverage artificial intelligence and neural networks to match tasks with suitable virtual machines. The simulation results demonstrate that our proposed approach harnesses migration and DVFS strategies to optimize virtual machine utilization, resulting in decreased energy and power consumption compared to non-DVFS methods. This achievement marks a significant stride towards seamlessly integrating blockchain and IoT, establishing an ecologically sustainable network. Our approach boasts additional benefits, including decentralization, enhanced data quality, and heightened security. We analyze simulation runtime and energy consumption in a comprehensive evaluation against existing techniques such as WPEG, IRMBBC, and BEMEC. The findings underscore the efficiency of our technique (LBDVFSb) across both criteria.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"14 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139949300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Probabilistic Deadline-aware Application Offloading in a Multi-Queueing Fog System: A Max Entropy Framework","authors":"","doi":"10.1007/s10723-024-09753-7","DOIUrl":"https://doi.org/10.1007/s10723-024-09753-7","url":null,"abstract":"<h3>Abstract</h3> <p>Cloud computing and its derivatives, such as fog and edge computing, have propelled the IoT era, integrating AI and deep learning for process automation. Despite transformative growth in healthcare, education, and automation domains, challenges persist, particularly in addressing the impact of multi-hopping public networks on data upload time, affecting response time, failure rates, and security. Existing scheduling algorithms, designed for multiple parameters like deadline, priority, rate of arrival, and arrival pattern, can minimize execution time for high-priority applications. However, the difficulty lies in simultaneously minimizing overall application execution time while mitigating resource depletion issues for low-priority applications. This paper introduces a cloud-fog-based computing architecture to tackle fog node resource starvation, incorporating joint probability, loss probability, and maximum entropy concepts. The proposed model utilizes a probabilistic application scheduling algorithm, considering priority and deadline and employing expected loss probability for task offloading. Additionally, a second algorithm focuses on resource starvation, optimizing task sequence for minimal response time and improved quality of service in a multi-Queueing fog system. The paper demonstrates that the proposed model outperforms state-of-the-art models, achieving a 3.43-5.71% quality of service improvement and a 99.75-267.68 msec reduction in response time through efficient resource allocation.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"40 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139918706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Employing RNN and Petri Nets to Secure Edge Computing Threats in Smart Cities","authors":"","doi":"10.1007/s10723-023-09733-3","DOIUrl":"https://doi.org/10.1007/s10723-023-09733-3","url":null,"abstract":"<h3>Abstract</h3> <p>The Industrial Internet of Things (IIoT) revolution has led to the development a potential system that enhances communication among a city's assets. This system relies on wireless connections to numerous limited gadgets deployed throughout the urban landscape. However, technology has exposed these networks to various harmful assaults, cyberattacks, and potential hacker threats, jeopardizing the security of wireless information transmission. Specifically, unprotected IIoT networks act as vulnerable backdoor entry points for potential attacks. To address these challenges, this project proposes a comprehensive security structure that combines Extreme Learning Machines based Replicator Neural Networks (ELM-RNN) with Deep Reinforcement Learning based Deep Q-Networks (DRL-DQN) to safeguard against edge computing risks in intelligent cities. The proposed system starts by introducing a distributed authorization mechanism that employs an established trust paradigm to effectively regulate data flows within the network. Furthermore, a novel framework called Secure Trust-Aware Philosopher Privacy and Authentication (STAPPA), modeled using Petri Net, mitigates network privacy breaches and enhances data protection. The system employs the Garson algorithm alongside the ELM-based RNN to optimize network performance and strengthen anomaly detection capabilities. This enables efficient determination of the shortest routes, accurate anomaly detection, and effective search optimization within the network environment. Through extensive simulation, the proposed security framework demonstrates remarkable detection and accuracy rates by leveraging the power of reinforcement learning.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"1 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139918777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Edge Computing Empowered Smart Healthcare: Monitoring and Diagnosis with Deep Learning Methods","authors":"","doi":"10.1007/s10723-023-09726-2","DOIUrl":"https://doi.org/10.1007/s10723-023-09726-2","url":null,"abstract":"<h3>Abstract</h3> <p>Nowadays, data syncing before switchover and migration are two of the most pressing issues confronting cloud-based architecture. The requirement for a centrally managed IoT-based infrastructure has limited scalability due to security problems with cloud computing. The fundamental factor is that health systems, such as health monitoring, etc., demand computational operations on large amounts of data, which leads to the sensitivity of device latency emerging during these systems. Fog computing is a novel approach to increasing the effectiveness of cloud computing by allowing the use of necessary resources and close to end users. Existing fog computing approaches still have several drawbacks, including the tendency to either overestimate reaction time or consider result correctness, but managing both at once compromises system compatibility. To focus on deep learning algorithms and automated monitoring, FETCH is a proposed framework that connects with edge computing devices. It provides a constructive framework for real-life healthcare systems, such as those treating heart disease and other conditions. The suggested fog-enabled cloud computing system uses FogBus, which exhibits benefits in terms of power consumption, communication bandwidth, oscillation, delay, execution duration, and correctness.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"2 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139918699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic Resource Management in MEC Powered by Edge Intelligence for Smart City Internet of Things","authors":"Xucheng Wan","doi":"10.1007/s10723-024-09749-3","DOIUrl":"https://doi.org/10.1007/s10723-024-09749-3","url":null,"abstract":"<p>The Internet of Things (IoT) has become an infrastructure that makes smart cities possible. is both accurate and efficient. The intelligent production industry 4.0 period has made mobile edge computing (MEC) essential. Computationally demanding tasks can be delegated from the MEC server to the central cloud servers for processing in a smart city. This paper develops the integrated optimization framework for offloading tasks and dynamic resource allocation to reduce the power usage of all Internet of Things (IoT) gadgets subjected to delay limits and resource limitations. A Federated Learning FL-DDPG algorithm based on the Deep Deterministic Policy Gradient (DDPG) architecture is suggested for dynamic resource management in MEC networks. This research addresses the optimization issues for the CPU frequencies, transmit power, and IoT device offloading decisions for a multi-mobile edge computing (MEC) server and multi-IoT cellular networks. A weighted average of the processing load on the central MEC server (PMS), the system’s overall energy use, and the task-dropping expense is calculated as an optimization issue. The Lyapunov optimization theory formulates a random optimization strategy to reduce the energy use of IoT devices in MEC networks and reduce bandwidth assignment and transmitting power distribution. Additionally, the modeling studies demonstrate that, compared to other benchmark approaches, the suggested algorithm efficiently enhances system performance while consuming less energy.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"93 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139760872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}