{"title":"多队列雾系统中的概率截止时间感知应用卸载:最大熵框架","authors":"","doi":"10.1007/s10723-024-09753-7","DOIUrl":null,"url":null,"abstract":"<h3>Abstract</h3> <p>Cloud computing and its derivatives, such as fog and edge computing, have propelled the IoT era, integrating AI and deep learning for process automation. Despite transformative growth in healthcare, education, and automation domains, challenges persist, particularly in addressing the impact of multi-hopping public networks on data upload time, affecting response time, failure rates, and security. Existing scheduling algorithms, designed for multiple parameters like deadline, priority, rate of arrival, and arrival pattern, can minimize execution time for high-priority applications. However, the difficulty lies in simultaneously minimizing overall application execution time while mitigating resource depletion issues for low-priority applications. This paper introduces a cloud-fog-based computing architecture to tackle fog node resource starvation, incorporating joint probability, loss probability, and maximum entropy concepts. The proposed model utilizes a probabilistic application scheduling algorithm, considering priority and deadline and employing expected loss probability for task offloading. Additionally, a second algorithm focuses on resource starvation, optimizing task sequence for minimal response time and improved quality of service in a multi-Queueing fog system. The paper demonstrates that the proposed model outperforms state-of-the-art models, achieving a 3.43-5.71% quality of service improvement and a 99.75-267.68 msec reduction in response time through efficient resource allocation.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"40 1","pages":""},"PeriodicalIF":3.6000,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Probabilistic Deadline-aware Application Offloading in a Multi-Queueing Fog System: A Max Entropy Framework\",\"authors\":\"\",\"doi\":\"10.1007/s10723-024-09753-7\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<h3>Abstract</h3> <p>Cloud computing and its derivatives, such as fog and edge computing, have propelled the IoT era, integrating AI and deep learning for process automation. Despite transformative growth in healthcare, education, and automation domains, challenges persist, particularly in addressing the impact of multi-hopping public networks on data upload time, affecting response time, failure rates, and security. Existing scheduling algorithms, designed for multiple parameters like deadline, priority, rate of arrival, and arrival pattern, can minimize execution time for high-priority applications. However, the difficulty lies in simultaneously minimizing overall application execution time while mitigating resource depletion issues for low-priority applications. This paper introduces a cloud-fog-based computing architecture to tackle fog node resource starvation, incorporating joint probability, loss probability, and maximum entropy concepts. The proposed model utilizes a probabilistic application scheduling algorithm, considering priority and deadline and employing expected loss probability for task offloading. Additionally, a second algorithm focuses on resource starvation, optimizing task sequence for minimal response time and improved quality of service in a multi-Queueing fog system. 
The paper demonstrates that the proposed model outperforms state-of-the-art models, achieving a 3.43-5.71% quality of service improvement and a 99.75-267.68 msec reduction in response time through efficient resource allocation.</p>\",\"PeriodicalId\":54817,\"journal\":{\"name\":\"Journal of Grid Computing\",\"volume\":\"40 1\",\"pages\":\"\"},\"PeriodicalIF\":3.6000,\"publicationDate\":\"2024-02-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Grid Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s10723-024-09753-7\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Grid Computing","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s10723-024-09753-7","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
A Probabilistic Deadline-aware Application Offloading in a Multi-Queueing Fog System: A Max Entropy Framework
Abstract
Cloud computing and its derivatives, such as fog and edge computing, have propelled the IoT era, integrating AI and deep learning for process automation. Despite transformative growth in healthcare, education, and automation, challenges persist, particularly the impact of multi-hop public networks on data upload time, which in turn affects response time, failure rates, and security. Existing scheduling algorithms, designed around parameters such as deadline, priority, arrival rate, and arrival pattern, can minimize execution time for high-priority applications. The difficulty, however, lies in minimizing overall application execution time while also mitigating resource depletion for low-priority applications. This paper introduces a cloud-fog computing architecture that tackles fog-node resource starvation by combining joint probability, loss probability, and maximum entropy concepts. The proposed model uses a probabilistic application scheduling algorithm that considers priority and deadline and employs expected loss probability to drive task offloading. A second algorithm addresses resource starvation by optimizing the task sequence for minimal response time and improved quality of service in a multi-queueing fog system. The paper demonstrates that the proposed model outperforms state-of-the-art models, achieving a 3.43-5.71% quality-of-service improvement and a 99.75-267.68 ms reduction in response time through efficient resource allocation.
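The abstract does not reproduce the paper's equations, so the following Python sketch is only an illustration of the kind of decision logic it describes: each fog queue's expected loss (deadline-miss) probability drives the offload-to-cloud decision, and a softmax-style maximum-entropy rule spreads the remaining tasks across fog queues to limit starvation. All names, formulas, and thresholds here (`FogQueue`, `loss_probability`, `temperature`, `cloud_threshold`) are assumptions for illustration, not the authors' actual algorithm.

```python
# Illustrative sketch only -- the abstract does not give the paper's equations,
# so every formula, name, and threshold below is an assumption.
import math
import random
from dataclasses import dataclass


@dataclass
class FogQueue:
    service_rate: float  # tasks a fog node can complete per second
    backlog: int         # tasks already waiting in this queue


@dataclass
class Task:
    deadline: float      # seconds until the task must finish
    priority: int        # larger value = more urgent


def loss_probability(q: FogQueue, task: Task) -> float:
    """Estimate the chance that the task misses its deadline on queue q.

    Uses a simple (backlog + 1) / service_rate waiting-time estimate with an
    exponential tail -- a stand-in for the paper's expected-loss-probability term.
    """
    expected_finish = (q.backlog + 1) / q.service_rate
    return 1.0 - math.exp(-max(expected_finish - task.deadline, 0.0))


def max_entropy_assignment(queues: list, task: Task, temperature: float = 1.0) -> int:
    """Pick a queue via a softmax over negative loss probabilities.

    A higher temperature spreads tasks more evenly across queues (less
    starvation); a lower one greedily favours the safest queue.
    """
    scores = [-loss_probability(q, task) / temperature for q in queues]
    z = sum(math.exp(s) for s in scores)
    weights = [math.exp(s) / z for s in scores]
    return random.choices(range(len(queues)), weights=weights, k=1)[0]


def offload_decision(queues: list, task: Task, cloud_threshold: float = 0.8) -> str:
    """Send the task to the cloud when every fog queue is likely to miss its deadline.

    The threshold is tightened for higher-priority tasks (an illustrative choice).
    """
    threshold = cloud_threshold / task.priority
    if min(loss_probability(q, task) for q in queues) > threshold:
        return "cloud"
    return f"fog-queue-{max_entropy_assignment(queues, task)}"


if __name__ == "__main__":
    fog = [FogQueue(service_rate=5.0, backlog=2), FogQueue(service_rate=3.0, backlog=0)]
    print(offload_decision(fog, Task(deadline=1.0, priority=2)))
```

In this sketch, raising the temperature distributes tasks more evenly across fog queues, which is one way to keep low-priority work from starving, while the priority-scaled threshold pushes urgent tasks toward the cloud sooner.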
Journal Introduction:
Grid Computing is an emerging technology that enables large-scale resource sharing and coordinated problem solving within distributed, often loosely coordinated groups, sometimes termed "virtual organizations." By providing scalable, secure, high-performance mechanisms for discovering and negotiating access to remote resources, Grid technologies promise to make it possible for scientific collaborations to share resources on an unprecedented scale, and for geographically distributed groups to work together in ways that were previously impossible. Similar technologies are being adopted within industry, where they serve as important building blocks for emerging service provider infrastructures.
Although the advantages of this technology for certain classes of applications are well acknowledged, broadening the applicability and scope of the current body of knowledge requires research across a variety of disciplines: not only multiple domains of computer science (networking, middleware, programming, algorithms) but also the application disciplines themselves, as well as areas such as sociology and economics.