Journal of Grid Computing: Latest Articles

Dynamic Multi-Resource Fair Allocation with Elastic Demands
IF 5.5 | Zone 2 | Computer Science
Journal of Grid Computing Pub Date: 2024-02-27 DOI: 10.1007/s10723-024-09754-6
Hao Guo, Weidong Li
{"title":"Dynamic Multi-Resource Fair Allocation with Elastic Demands","authors":"Hao Guo, Weidong Li","doi":"10.1007/s10723-024-09754-6","DOIUrl":"https://doi.org/10.1007/s10723-024-09754-6","url":null,"abstract":"<p>In this paper, we study dynamic multi-resource maximin share fair allocation based on the elastic demands of users in a cloud computing system. In this problem, users do not stay in the computing system all the time. Users are assigned resources only if they stay in the system. To further improve the utilization of resources, the model in this paper allows users to dynamically select the method of processing tasks based on the resources allocated to each time slot. For this problem, we propose a mechanism called maximin share fairness with elastic demands (MMS-ED) in a cloud computing system. We prove theoretically that the allocation returned by the mechanism is a Lorenz-dominating allocation, that the allocation satisfies the cumulative maximin share fairness, and that the mechanism is Pareto efficiency, proportionality, and strategy-proofness. Within a specific setting, MMS-ED performs better, and it also satisfies another desirable property weighted envy-freeness. In addition, we designed an algorithm to realize this mechanism, conducted simulation experiments with Alibaba cluster traces, and we analyzed the impact from three perspectives of elastic demand and cumulative fairness. The experimental results show that the MMS-ED mechanism performs better than do the other three similar mechanisms in terms of resource utilization and user utility; moreover, the introduction of elastic demand and cumulative fairness can effectively improve resource utilization.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140004158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Joint Task Offloading Based on Distributed Deep Reinforcement Learning-Based Genetic Optimization Algorithm for Internet of Vehicles
IF 5.5 | Zone 2 | Computer Science
Journal of Grid Computing Pub Date: 2024-02-26 DOI: 10.1007/s10723-024-09741-x
Hulin Jin, Yong-Guk Kim, Zhiran Jin, Chunyang Fan, Yonglong Xu
{"title":"Joint Task Offloading Based on Distributed Deep Reinforcement Learning-Based Genetic Optimization Algorithm for Internet of Vehicles","authors":"Hulin Jin, Yong-Guk Kim, Zhiran Jin, Chunyang Fan, Yonglong Xu","doi":"10.1007/s10723-024-09741-x","DOIUrl":"https://doi.org/10.1007/s10723-024-09741-x","url":null,"abstract":"<p>The growing number of individual vehicles and intelligent transportation systems have accelerated the development of Internet of Vehicles (IoV) technologies. The Internet of Vehicles (IoV) refers to a highly interactive network containing data regarding places, speeds, routes, and other aspects of vehicles. Task offloading was implemented to solve the issue that the current task scheduling models and tactics are primarily simplistic and do not consider the acceptable distribution of tasks, which results in a poor unloading completion rate. This work evaluates the Joint Task Offloading problem by Distributed Deep Reinforcement Learning (DDRL)-Based Genetic Optimization Algorithm (GOA). A system’s utility optimisation model is initially accomplished objectively using divisions between interaction and computation models. DDRL-GOA resolves the issue to produce the best task offloading method. The research increased job completion rates by modifying the complexity design and universal best-case scenario assurances using DDRL-GOA. Finally, empirical research is performed to validate the proposed technique in scenario development. We also construct joint task offloading, load distribution, and resource allocation to lower system costs as integer concerns. In addition to having a high convergence efficiency, the experimental results show that the proposed approach has a substantially lower system cost when compared to current methods.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139969509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Decentralized AI-Based Task Distribution on Blockchain for Cloud Industrial Internet of Things
IF 5.5 | Zone 2 | Computer Science
Journal of Grid Computing Pub Date: 2024-02-24 DOI: 10.1007/s10723-024-09751-9
Amir Javadpour, Arun Kumar Sangaiah, Weizhe Zhang, Ankit Vidyarthi, HamidReza Ahmadi
{"title":"Decentralized AI-Based Task Distribution on Blockchain for Cloud Industrial Internet of Things","authors":"Amir Javadpour, Arun Kumar Sangaiah, Weizhe Zhang, Ankit Vidyarthi, HamidReza Ahmadi","doi":"10.1007/s10723-024-09751-9","DOIUrl":"https://doi.org/10.1007/s10723-024-09751-9","url":null,"abstract":"<p>This study presents an environmentally friendly mechanism for task distribution designed explicitly for blockchain Proof of Authority (POA) consensus. This approach facilitates the selection of virtual machines for tasks such as data processing, transaction verification, and adding new blocks to the blockchain. Given the current lack of effective methods for integrating POA blockchain into the Cloud Industrial Internet of Things (CIIoT) due to their inefficiency and low throughput, we propose a novel algorithm that employs the Dynamic Voltage and Frequency Scaling (DVFS) technique, replacing the periodic transaction authentication process among validator candidates. Managing computer power consumption becomes a critical concern, especially within the Internet of Things ecosystem, where device power is constrained, and transaction scalability is crucial. Virtual machines must validate transactions (tasks) within specific time frames and deadlines. The DVFS technique efficiently reduces power consumption by intelligently scheduling and allocating tasks to virtual machines. Furthermore, we leverage artificial intelligence and neural networks to match tasks with suitable virtual machines. The simulation results demonstrate that our proposed approach harnesses migration and DVFS strategies to optimize virtual machine utilization, resulting in decreased energy and power consumption compared to non-DVFS methods. This achievement marks a significant stride towards seamlessly integrating blockchain and IoT, establishing an ecologically sustainable network. Our approach boasts additional benefits, including decentralization, enhanced data quality, and heightened security. We analyze simulation runtime and energy consumption in a comprehensive evaluation against existing techniques such as WPEG, IRMBBC, and BEMEC. The findings underscore the efficiency of our technique (LBDVFSb) across both criteria.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139949300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Probabilistic Deadline-aware Application Offloading in a Multi-Queueing Fog System: A Max Entropy Framework
IF 5.5 | Zone 2 | Computer Science
Journal of Grid Computing Pub Date: 2024-02-22 DOI: 10.1007/s10723-024-09753-7
{"title":"A Probabilistic Deadline-aware Application Offloading in a Multi-Queueing Fog System: A Max Entropy Framework","authors":"","doi":"10.1007/s10723-024-09753-7","DOIUrl":"https://doi.org/10.1007/s10723-024-09753-7","url":null,"abstract":"<h3>Abstract</h3> <p>Cloud computing and its derivatives, such as fog and edge computing, have propelled the IoT era, integrating AI and deep learning for process automation. Despite transformative growth in healthcare, education, and automation domains, challenges persist, particularly in addressing the impact of multi-hopping public networks on data upload time, affecting response time, failure rates, and security. Existing scheduling algorithms, designed for multiple parameters like deadline, priority, rate of arrival, and arrival pattern, can minimize execution time for high-priority applications. However, the difficulty lies in simultaneously minimizing overall application execution time while mitigating resource depletion issues for low-priority applications. This paper introduces a cloud-fog-based computing architecture to tackle fog node resource starvation, incorporating joint probability, loss probability, and maximum entropy concepts. The proposed model utilizes a probabilistic application scheduling algorithm, considering priority and deadline and employing expected loss probability for task offloading. Additionally, a second algorithm focuses on resource starvation, optimizing task sequence for minimal response time and improved quality of service in a multi-Queueing fog system. The paper demonstrates that the proposed model outperforms state-of-the-art models, achieving a 3.43-5.71% quality of service improvement and a 99.75-267.68 msec reduction in response time through efficient resource allocation.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139918706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Employing RNN and Petri Nets to Secure Edge Computing Threats in Smart Cities
IF 5.5 | Zone 2 | Computer Science
Journal of Grid Computing Pub Date: 2024-02-22 DOI: 10.1007/s10723-023-09733-3
{"title":"Employing RNN and Petri Nets to Secure Edge Computing Threats in Smart Cities","authors":"","doi":"10.1007/s10723-023-09733-3","DOIUrl":"https://doi.org/10.1007/s10723-023-09733-3","url":null,"abstract":"<h3>Abstract</h3> <p>The Industrial Internet of Things (IIoT) revolution has led to the development a potential system that enhances communication among a city's assets. This system relies on wireless connections to numerous limited gadgets deployed throughout the urban landscape. However, technology has exposed these networks to various harmful assaults, cyberattacks, and potential hacker threats, jeopardizing the security of wireless information transmission. Specifically, unprotected IIoT networks act as vulnerable backdoor entry points for potential attacks. To address these challenges, this project proposes a comprehensive security structure that combines Extreme Learning Machines based Replicator Neural Networks (ELM-RNN) with Deep Reinforcement Learning based Deep Q-Networks (DRL-DQN) to safeguard against edge computing risks in intelligent cities. The proposed system starts by introducing a distributed authorization mechanism that employs an established trust paradigm to effectively regulate data flows within the network. Furthermore, a novel framework called Secure Trust-Aware Philosopher Privacy and Authentication (STAPPA), modeled using Petri Net, mitigates network privacy breaches and enhances data protection. The system employs the Garson algorithm alongside the ELM-based RNN to optimize network performance and strengthen anomaly detection capabilities. This enables efficient determination of the shortest routes, accurate anomaly detection, and effective search optimization within the network environment. Through extensive simulation, the proposed security framework demonstrates remarkable detection and accuracy rates by leveraging the power of reinforcement learning.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139918777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Edge Computing Empowered Smart Healthcare: Monitoring and Diagnosis with Deep Learning Methods
IF 5.5 | Zone 2 | Computer Science
Journal of Grid Computing Pub Date: 2024-02-21 DOI: 10.1007/s10723-023-09726-2
{"title":"Edge Computing Empowered Smart Healthcare: Monitoring and Diagnosis with Deep Learning Methods","authors":"","doi":"10.1007/s10723-023-09726-2","DOIUrl":"https://doi.org/10.1007/s10723-023-09726-2","url":null,"abstract":"<h3>Abstract</h3> <p>Nowadays, data syncing before switchover and migration are two of the most pressing issues confronting cloud-based architecture. The requirement for a centrally managed IoT-based infrastructure has limited scalability due to security problems with cloud computing. The fundamental factor is that health systems, such as health monitoring, etc., demand computational operations on large amounts of data, which leads to the sensitivity of device latency emerging during these systems. Fog computing is a novel approach to increasing the effectiveness of cloud computing by allowing the use of necessary resources and close to end users. Existing fog computing approaches still have several drawbacks, including the tendency to either overestimate reaction time or consider result correctness, but managing both at once compromises system compatibility. To focus on deep learning algorithms and automated monitoring, FETCH is a proposed framework that connects with edge computing devices. It provides a constructive framework for real-life healthcare systems, such as those treating heart disease and other conditions. The suggested fog-enabled cloud computing system uses FogBus, which exhibits benefits in terms of power consumption, communication bandwidth, oscillation, delay, execution duration, and correctness.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139918699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dynamic Resource Management in MEC Powered by Edge Intelligence for Smart City Internet of Things
IF 5.5 | Zone 2 | Computer Science
Journal of Grid Computing Pub Date: 2024-02-13 DOI: 10.1007/s10723-024-09749-3
Xucheng Wan
{"title":"Dynamic Resource Management in MEC Powered by Edge Intelligence for Smart City Internet of Things","authors":"Xucheng Wan","doi":"10.1007/s10723-024-09749-3","DOIUrl":"https://doi.org/10.1007/s10723-024-09749-3","url":null,"abstract":"<p>The Internet of Things (IoT) has become an infrastructure that makes smart cities possible. is both accurate and efficient. The intelligent production industry 4.0 period has made mobile edge computing (MEC) essential. Computationally demanding tasks can be delegated from the MEC server to the central cloud servers for processing in a smart city. This paper develops the integrated optimization framework for offloading tasks and dynamic resource allocation to reduce the power usage of all Internet of Things (IoT) gadgets subjected to delay limits and resource limitations. A Federated Learning FL-DDPG algorithm based on the Deep Deterministic Policy Gradient (DDPG) architecture is suggested for dynamic resource management in MEC networks. This research addresses the optimization issues for the CPU frequencies, transmit power, and IoT device offloading decisions for a multi-mobile edge computing (MEC) server and multi-IoT cellular networks. A weighted average of the processing load on the central MEC server (PMS), the system’s overall energy use, and the task-dropping expense is calculated as an optimization issue. The Lyapunov optimization theory formulates a random optimization strategy to reduce the energy use of IoT devices in MEC networks and reduce bandwidth assignment and transmitting power distribution. Additionally, the modeling studies demonstrate that, compared to other benchmark approaches, the suggested algorithm efficiently enhances system performance while consuming less energy.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139760872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dependent Task Scheduling Using Parallel Deep Neural Networks in Mobile Edge Computing
IF 5.5 | Zone 2 | Computer Science
Journal of Grid Computing Pub Date: 2024-02-12 DOI: 10.1007/s10723-024-09744-8
Sheng Chai, Jimmy Huang
{"title":"Dependent Task Scheduling Using Parallel Deep Neural Networks in Mobile Edge Computing","authors":"Sheng Chai, Jimmy Huang","doi":"10.1007/s10723-024-09744-8","DOIUrl":"https://doi.org/10.1007/s10723-024-09744-8","url":null,"abstract":"<p>Conventional detection techniques aimed at intelligent devices rely primarily on deep learning algorithms, which, despite their high precision, are hindered by significant computer power and energy requirements. This work proposes a novel solution to these constraints using mobile edge computing (MEC). We present the Dependent Task-Offloading technique (DTOS), a deep reinforcement learning-based technique for optimizing task offloading to numerous heterogeneous edge servers in intelligent prosthesis applications. By expressing the task offloading problem as a Markov decision process, DTOS addresses the dual challenge of lowering network service latency and power utilisation. DTOS employs a weighted sum optimisation method in this approach to find the best policy. The technique uses parallel deep neural networks (DNNs), which not only create offloading possibilities but also cache the most successful options for further iterations. Furthermore, the DTOS modifies DNN variables using a prioritized experience replay method, which improves learning by focusing on valuable experiences. The use of DTOS in a real-world MEC scenario, where a deep learning-based movement intent detection algorithm is deployed on intelligent prostheses, demonstrates its applicability and effectiveness. The experimental results show that DTOS consistently makes optimal decisions in work offloading and planning, demonstrating its potential to improve the operational efficiency of intelligent prostheses significantly. Thus, the study introduces a novel approach that combines the characteristics of deep reinforcement learning with MEC, demonstrating a substantial development in the field of intelligent prostheses through optimal task offloading and reduced resource usage.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139760771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Joint Task Offloading and Multi-Task Offloading Based on NOMA Enhanced Internet of Vehicles in Edge Computing
IF 5.5 | Zone 2 | Computer Science
Journal of Grid Computing Pub Date: 2024-02-12 DOI: 10.1007/s10723-024-09748-4
Jie Zhao, Ahmed M. El-Sherbeeny
{"title":"Joint Task Offloading and Multi-Task Offloading Based on NOMA Enhanced Internet of Vehicles in Edge Computing","authors":"Jie Zhao, Ahmed M. El-Sherbeeny","doi":"10.1007/s10723-024-09748-4","DOIUrl":"https://doi.org/10.1007/s10723-024-09748-4","url":null,"abstract":"<p>With the rapid development of technology, the Internet of vehicles (IoV) has become increasingly important. However, as the number of vehicles on highways increases, ensuring reliable communication between them has become a significant challenge. To address this issue, this paper proposes a novel approach that combines Non-Orthogonal Multiple Access (NOMA) with a time-optimized multitask offloading model based on Optimal Stopping Theory (OST) principles. NOMA-OST is a promising technology that can address the high volume of multiple access and the need for reliable communication in IoV. A NOMA-OST-based IoV system is proposed to meet the Vehicle-to-Vehicle (V2V) communication requirements. This approach optimizes joint task offloading and resource allocation for multiple users, tasks, and servers. NOMA enables efficient resource sharing by accommodating multiple devices, whereas OST ensures timely and intelligent task offloading decisions, resulting in improved reliability and efficiency in V2V communication within IoV, making it a highly innovative and technically robust solution. It suggests a low-complexity sub-optimal matching approach for sub-channel allocation to increase the effectiveness of offloading. Simulation results show that NOMA with OST significantly improves the system’s energy efficiency (EE) and reduces computation time. The approach also enhances the effectiveness of task offloading and resource allocation, leading to better overall system performance. The performance of NOMA with OST under V2V communication requirements in IoV is significantly improved compared to traditional orthogonal multiaccess methods. Overall, NOMA with OST is a promising technology that can address the high reliability of V2V communication requirements in IoV. It can improve system performance, and energy efficiency and reduce computation time, making it a valuable technology for IoV applications.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139760788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An IoT-based Covid-19 Healthcare Monitoring and Prediction Using Deep Learning Methods
IF 5.5 | Zone 2 | Computer Science
Journal of Grid Computing Pub Date: 2024-02-09 DOI: 10.1007/s10723-024-09742-w
Jianjia Liu, Xin Yang, Tiannan Liao, Yong Hang
{"title":"An IoT-based Covid-19 Healthcare Monitoring and Prediction Using Deep Learning Methods","authors":"Jianjia Liu, Xin Yang, Tiannan Liao, Yong Hang","doi":"10.1007/s10723-024-09742-w","DOIUrl":"https://doi.org/10.1007/s10723-024-09742-w","url":null,"abstract":"<p>The Internet of Things (IoT) is developing a more significant transformation in the healthcare industry by improving patient care with reduced cost of treatments. Main aim of this research is to monitor the Covid-19 patients and report the health issues immediately using IoT. Collected data is analyzed using deep learning model. The technological advancement of sensor and mobile technologies came up with IoT-based healthcare systems. These systems are more preventive than the traditional healthcare systems. This paper developed an efficient real-time IoT-based COVID-19 monitoring and prediction system using a deep learning model. By collecting symptomatic patient data and analyzing it, the COVID-19 suspects are predicted in the early stages in a better way. The effective parameters are selected using the Modified Chicken Swarm optimization (MCSO) approach by mining the health parameters gathered from the sensors. The COVID-19 presence is computed using the hybrid Deep learning model called Convolution and graph LSTM using the desired features. (ConvGLSTM). This process includes four stages such as data collection, data analysis (feature selection), diagnostic system (DL model), and the cloud system (Storage). The developed model is experimented with using the dataset from Srinagar based on parameters such as accuracy, precision, recall, F1 score, RMSE, and AUC. Based on the outcome, the proposed model is effective and superior to the traditional approaches to the early identification of COVID-19.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139760773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0