Assessing the Complexity of Cloud Pricing Policies: A Comparative Market Analysis
Vasiliki Liagkou, George Fragiadakis, Evangelia Filiopoulou, Christos Michalakelis, Anargyros Tsadimas, Mara Nikolaidou
Journal of Grid Computing, DOI: 10.1007/s10723-024-09780-4, published 2024-09-12.

Abstract: Cloud computing has gained popularity at a breakneck pace over the last few years. It has revolutionized the way businesses operate by providing flexible and scalable infrastructure for their computing needs. Cloud providers offer a range of services with a variety of pricing schemes. Cloud pricing schemes are based on functional factors such as CPU, RAM, and storage, combined with different payment options, such as pay-per-use and subscription-based, as well as non-functional aspects such as scalability and availability. While cloud pricing can be complicated, it is critical for businesses to thoroughly assess and compare pricing policies alongside their technical requirements in order to design a sound investment strategy. This paper evaluates current pricing strategies for IaaS, CaaS, and PaaS cloud services, focusing on the three leading cloud providers: Amazon, Microsoft, and Google. To compare pricing policies across services and providers, a hedonic price index is constructed for each service type from data collected in 2022, making a comparative analysis between them feasible. The results reveal that providers follow the same pricing pattern for IaaS and CaaS, with CPU being the main driver of cloud pricing schemes, whereas PaaS pricing fluctuates across providers.
A Quasi-Oppositional Learning-based Fox Optimizer for QoS-aware Web Service Composition in Mobile Edge Computing
Ramin Habibzadeh Sharif, Mohammad Masdari, Ali Ghaffari, Farhad Soleimanian Gharehchopogh
Journal of Grid Computing, DOI: 10.1007/s10723-024-09779-x, published 2024-08-31.

Abstract: Web service-based edge computing networks are now widespread, and their user bases are growing dramatically. Network users request various services with specific Quality-of-Service (QoS) values. QoS-aware Web Service Composition (WSC) methods assign available services to users' tasks and significantly affect user satisfaction. Various methods have been proposed to solve the QoS-aware WSC problem; however, the field remains an active research area because the dimensions of these networks, the number of their users, and the variety of provided services continue to grow. Consequently, this study presents an enhanced Fox Optimizer (FOX)-based framework named EQOLFOX to solve QoS-aware web service composition problems in edge computing environments. Quasi-Oppositional Learning is utilized in EQOLFOX to diminish the zero-orientation nature of the FOX algorithm, and a reinitialization strategy is included to enhance EQOLFOX's exploration capability. In addition, a new phase with two new movement strategies is introduced to improve its searching abilities, a multi-best strategy is employed to escape local optima and guide the population more effectively, and a greedy selection approach is used to increase the convergence rate and exploitation capability. EQOLFOX is applied to ten real-life and artificial web-service-based edge computing environments, each with four different task counts, to evaluate its proficiency. The obtained results are compared numerically and visually with the DO, FOX, JS, MVO, RSA, SCA, SMA, and TSA algorithms. The experimental results indicate the effectiveness of the contributions and the competency of EQOLFOX.
WIDESim: A Toolkit for Simulating Resource Management Techniques Of Scientific Workflows in Distributed Environments with Graph Topology
Mohammad Amin Rayej, Hajar Siar, Ahmadreza Hamzei, Mohammad Sadegh Majidi Yazdi, Parsa Mohammadian, Mohammad Izadi
Journal of Grid Computing, DOI: 10.1007/s10723-024-09778-y, published 2024-08-13.

Abstract: Modeling IoT applications in distributed computing systems as workflows enables their execution to be automated, and different types of workflow-based applications exist in the literature. Executing IoT applications with device-to-device (D2D) communications in distributed computing systems, especially edge paradigms, requires direct communication between devices in a network with a graph topology. This paper introduces WIDESim, a toolkit for simulating resource management of scientific workflows with different structures in distributed environments with graph topology. The proposed simulator enables dynamic resource management and scheduling. We have validated the performance of WIDESim against standard simulators and also evaluated it in real-world scenarios of distributed computing. The results indicate that WIDESim's performance is close to that of existing standard simulators while providing additional capabilities, and that the extended features incorporated within WIDESim perform satisfactorily.
{"title":"CMK: Enhancing Resource Usage Monitoring across Diverse Bioinformatics Workflow Management Systems","authors":"Robert Nica, Stefan Götz, Germán Moltó","doi":"10.1007/s10723-024-09777-z","DOIUrl":"https://doi.org/10.1007/s10723-024-09777-z","url":null,"abstract":"<p>The increasing use of multiple Workflow Management Systems (WMS) employing various workflow languages and shared workflow repositories enhances the open-source bioinformatics ecosystem. Efficient resource utilization in these systems is crucial for keeping costs low and improving processing times, especially for large-scale bioinformatics workflows running in cloud environments. Recognizing this, our study introduces a novel reference architecture, Cloud Monitoring Kit (CMK), for a multi-platform monitoring system. Our solution is designed to generate uniform, aggregated metrics from containerized workflow tasks scheduled by different WMS. Central to the proposed solution is the use of task labeling methods, which enable convenient grouping and aggregating of metrics independent of the WMS employed. This approach builds upon existing technology, providing additional benefits of modularity and capacity to seamlessly integrate with other data processing or collection systems. We have developed and released an open-source implementation of our system, which we evaluated on Amazon Web Services (AWS) using a transcriptomics data analysis workflow executed on two scientific WMS. The findings of this study indicate that CMK provides valuable insights into resource utilization. In doing so, it paves the way for more efficient management of resources in containerized scientific workflows running in public cloud environments, and it provides a foundation for optimizing task configurations, reducing costs, and enhancing scheduling decisions. Overall, our solution addresses the immediate needs of bioinformatics workflows and offers a scalable and adaptable framework for future advancements in cloud-based scientific computing.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141880788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Resource Utilization Based on Hybrid WOA-LOA Optimization with Credit Based Resource Aware Load Balancing and Scheduling Algorithm for Cloud Computing","authors":"Abhikriti Narwal","doi":"10.1007/s10723-024-09776-0","DOIUrl":"https://doi.org/10.1007/s10723-024-09776-0","url":null,"abstract":"<p>In a cloud computing environment, tasks are divided among virtual machines (VMs) with different start times, duration and execution periods. Thus, distributing these loads among the virtual machines is crucial, in order to maximize resource utilization and enhance system performance, load balancing must be implemented that ensures balance across all virtual machines (VMs). In the proposed framework, a credit-based resource-aware load balancing scheduling algorithm (HO-CB-RALB-SA) was created using a hybrid Walrus Optimization Algorithm (WOA) and Lyrebird Optimization Algorithm (LOA) for cloud computing. The proposed model is developed by jointly performing both load balancing and task scheduling. This article improves the credit-based load-balancing ideas by integrating a resource-aware strategy and scheduling algorithm. It maintains a balanced system load by evaluating the load as well as processing capacity of every VM through the use of a resource-aware load balancing algorithm. This method functions primarily on two stages which include scheduling dependent on the VM’s processing power. By employing supply and demand criteria to determine which VM has the least amount of load to map jobs or redistribute jobs from overloaded to underloaded VM. For efficient resource management and equitable task distribution among VM, the load balancing method makes use of a resource-aware optimization algorithm. After that, the credit-based scheduling algorithm weights the tasks and applies intelligent resource mapping that considers the computational capacity and demand of each resource. The FILL and SPILL functions in Resource Aware and Load utilize the hybrid Optimization Algorithm to facilitate this mapping. The user tasks are scheduled in a queued based on the length of the task using the FILL and SPILL scheduler algorithm. This algorithm functions with the assistance of the PEFT approach. The optimal threshold values for each VM are selected by evaluating the task based on the fitness function of minimising makespan and cost function using the hybrid Walrus Optimization Algorithm (WOA) and Lyrebird Optimization Algorithm (LOA).The application has been simulated and the QOS parameter, which includes Turn Around Time (TAT), resource utilization, Average Response Time (ART), Makespan Time (MST), Total Execution Time (TET), Total Processing Cost (TPC), and Total Processing Time (TPT) for the 400, 800, 1200, 1600, and 2000 cloudlets, has been determined by utilizing the cloudsim tool. The performance parameters for the proposed HO-CB-RALB-SA and the existing models are evaluated and compared. For the proposed HO-CB-RALB-SA model with 2000 cloudlets, the following parameter values are found: 526.023 ms of MST, 12741.79 ms of TPT, 33422.87$ of TPC, 23770.45 ms of TET, 172.32 ms of ART, 9593 MB of network utilization, 28.1 of energy consumption, 79.9 Mbps of throughput, 5 ms of TAT, 18.6 ms for total waiting time and 17.5% of resource utilization. 
Based on s","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141785241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
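The FILL/SPILL flavour described above can be illustrated with a few lines of code: tasks sorted by length are filled onto the VM with the highest remaining-capacity credit, and work is then spilled from overloaded to underloaded VMs. The capacities, task lengths, credit formula, and spill threshold below are illustrative assumptions, not the exact HO-CB-RALB-SA procedure.

```python
# Minimal credit-style FILL/SPILL sketch; all numbers and thresholds are illustrative.
vms = {"vm1": 1000.0, "vm2": 600.0, "vm3": 400.0}   # processing capacity (e.g. MIPS)
tasks = {"t1": 900.0, "t2": 450.0, "t3": 300.0, "t4": 700.0, "t5": 150.0}  # task length (MI)

load = {vm: 0.0 for vm in vms}
placement = {}

# FILL: longest tasks first, each to the VM with the highest spare-capacity credit.
for task in sorted(tasks, key=tasks.get, reverse=True):
    credit = {vm: (cap - load[vm]) / cap for vm, cap in vms.items()}
    best = max(credit, key=credit.get)
    placement[task] = best
    load[best] += tasks[task]

# SPILL (single pass): move the smallest task off any VM loaded more than 10% above the
# mean relative utilisation, onto the currently least-loaded VM.
mean_util = sum(load[v] / vms[v] for v in vms) / len(vms)
for vm in vms:
    while load[vm] / vms[vm] > 1.1 * mean_util:
        movable = [t for t, p in placement.items() if p == vm]
        target = min(vms, key=lambda v: load[v] / vms[v])
        if not movable or target == vm:
            break
        task = min(movable, key=tasks.get)
        placement[task] = target
        load[vm] -= tasks[task]
        load[target] += tasks[task]

print(placement)
print({vm: round(load[vm] / vms[vm], 2) for vm in vms})
```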
{"title":"Energy-Constrained DAG Scheduling on Edge and Cloud Servers with Overlapped Communication and Computation","authors":"Keqin Li","doi":"10.1007/s10723-024-09775-1","DOIUrl":"https://doi.org/10.1007/s10723-024-09775-1","url":null,"abstract":"<p>Mobile edge computing (MEC) has been widely applied to numerous areas and aspects of human life and modern society. Many such applications can be represented as directed acyclic graphs (DAG). Device-edge-cloud fusion provides a new kind of heterogeneous, distributed, and collaborative computing environment to support various MEC applications. DAG scheduling is a procedure employed to effectively and efficiently manage and monitor the execution of tasks that have precedence constraints on each other. In this paper, we investigate the NP-hard problems of DAG scheduling and energy-constrained DAG scheduling on mobile devices, edge servers, and cloud servers by designing and evaluating new heuristic algorithms. Our contributions to DAG scheduling can be summarized as follows. First, our heuristic algorithms guarantee that all task dependencies are correctly followed by keeping track of the number of remaining predecessors that are still not completed. Second, our heuristic algorithms ensure that all wireless transmissions between a mobile device and edge/cloud servers are performed one after another. Third, our heuristic algorithms allow an edge/cloud server to start the execution of a task as soon as the transmission of the task is finished. Fourth, we derive a lower bound for the optimal makespan such that the solutions of our heuristic algorithms can be compared with optimal solutions. Our contributions to energy-constrained DAG scheduling can be summarized as follows. First, our heuristic algorithms ensure that the overall computation energy consumption and communication energy consumption does not exceed the given energy constraint. Second, our algorithms adopt an iterative and progressive procedure to determine appropriate computation speed and wireless communication speeds while generating a DAG schedule and satisfying the energy constraint. Third, we derive a lower bound for the optimal makespan and evaluate the performance of our heuristic algorithms in such a way that their heuristic solutions are compared with optimal solutions. To the author’s knowledge, this is the first paper that considers DAG scheduling and energy-constrained DAG scheduling on edge and cloud servers with sequential wireless communications and overlapped communication and computation to minimize makespan.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141513872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Resource Allocation Using Deep Deterministic Policy Gradient-Based Federated Learning for Multi-Access Edge Computing","authors":"Zheyu Zhou, Qi Wang, Jizhou Li, Ziyuan Li","doi":"10.1007/s10723-024-09774-2","DOIUrl":"https://doi.org/10.1007/s10723-024-09774-2","url":null,"abstract":"<p>The study focuses on utilizing the computational resources present in vehicles to enhance the performance of multi-access edge computing (MEC) systems. While vehicles are typically equipped with computational services for vehicle-centric Internet of Vehicles (IoV) applications, their resources can also be leveraged to reduce the workload on edge servers and improve task processing speed in MEC scenarios. Previous research efforts have overlooked the potential resource utilization of passing vehicles, which can be a valuable addition to MEC systems alongside parked cars. This study introduces an assisted MEC scenario where a base station (BS) with an edge server serves various devices, parked cars, and vehicular traffic. A cooperative approach using the Deep Deterministic Policy Gradient (DDPG) based Federated Learning method is proposed to optimize resource allocation and job offloading. This method enables the transfer of device operations from devices to the BS or from the BS to vehicles based on specific requirements. The proposed system also considers the duration for which a vehicle can provide job offloading services within the range of the BS before leaving. The objective of the DDPG-FL method is to minimize the overall priority-weighted task computation time. Through simulation results and a comparison with three other schemes, the study demonstrates the superiority of their proposed method in seven different scenarios. The findings highlight the potential of incorporating vehicular resources in MEC systems, showcasing improved task processing efficiency and overall system performance.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing Resource Consumption and Reducing Power Usage in Data Centers, A Novel Mathematical VM Replacement Model and Efficient Algorithm","authors":"Reza Rabieyan, Ramin Yahyapour, Patrick Jahnke","doi":"10.1007/s10723-024-09772-4","DOIUrl":"https://doi.org/10.1007/s10723-024-09772-4","url":null,"abstract":"<p>This study addresses the issue of power consumption in virtualized cloud data centers by proposing a virtual machine (VM) replacement model and a corresponding algorithm. The model incorporates multi-objective functions, aiming to optimize VM selection based on weights and minimize resource utilization disparities across hosts. Constraints are incorporated to ensure that CPU utilization remains close to the average CPU usage while mitigating overutilization in memory and network bandwidth usage. The proposed algorithm offers a fast and efficient solution with minimal VM replacements. The experimental simulation results demonstrate significant reductions in power consumption compared with a benchmark model. The proposed model and algorithm have been implemented and operated within a real-world cloud infrastructure, emphasizing their practicality.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EQGSA-DPW: A Quantum-GSA Algorithm-Based Data Placement for Scientific Workflow in Cloud Computing Environment","authors":"Zaki Brahmi, Rihab Derouiche","doi":"10.1007/s10723-024-09771-5","DOIUrl":"https://doi.org/10.1007/s10723-024-09771-5","url":null,"abstract":"<p>The processing of scientific workflow (SW) in geo-distributed cloud computing holds significant importance in the placement of massive data between various tasks. However, data movement across storage services is a main concern in the geo-distributed data centers, which entails issues related to the cost and energy consumption of both storage services and network infrastructure. Aiming to optimize data placement for SW, this paper proposes EQGSA-DPW a novel algorithm leveraging quantum computing and swarm intelligence optimization to intelligently reduce costs and energy consumption when a SW is processed in multi-cloud. EQGSA-DPW considers multiple objectives (e.g., transmission bandwidth, cost and energy consumption of both service and communication) and improves the GSA algorithm by using the log-sigmoid transfer function as a gravitational constant <i>G</i> and updating agent position by quantum rotation angle amplitude for more diversification. Moreover, to assist EQGSA-DPW in finding the optima, an initial guess is proposed. The performance of our EQGSA-DPW algorithm is evaluated via extensive experiments, which show that our data placement method achieves significantly better performance in terms of cost, energy, and data transfer than competing algorithms. For instance, in terms of energy consumption, EQGSA-DPW can on average achieve up to <span>(25%)</span>, <span>(14%)</span>, and <span>(40%)</span> reduction over that of GSA, PSO, and ACO-DPDGW algorithms, respectively. As for the storage services cost, EQGSA-DPW values are the lowest.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Enhanced Energy Aware Resource Optimization for Edge Devices Through Multi-cluster Communication Systems","authors":"Saihong Li, Yingying Ma, Yusha Zhang, Yinghui Xie","doi":"10.1007/s10723-024-09773-3","DOIUrl":"https://doi.org/10.1007/s10723-024-09773-3","url":null,"abstract":"<p>In the realm of the Internet of Things (IoT), the significance of edge devices within multi-cluster communication systems is on the rise. As the quantity of clusters and devices associated with each cluster grows, challenges related to resource optimization emerge. To address these concerns and enhance resource utilization, it is imperative to devise efficient strategies for resource allocation to specific clusters. These strategies encompass the implementation of load-balancing algorithms, dynamic scheduling, and virtualization techniques that generate logical instances of resources within the clusters. Moreover, the implementation of data management techniques is essential to facilitate effective data sharing among clusters. These strategies collectively minimize resource waste, enabling the streamlined management of networking and data resources in a multi-cluster communication system. This paper introduces an energy-efficient resource allocation technique tailored for edge devices in such systems. The proposed strategy leverages a higher-level meta-cluster heuristic to construct an optimization model, aiming to enhance the resource utilization of individual edge nodes. Emphasizing energy consumption and resource optimization while meeting latency requirements, the model employs a graph-based node selection method to assign high-load nodes to optimal clusters. To ensure fairness, resource allocation collaborates with resource descriptions and Quality of Service (QoS) metrics to tailor resource distribution. Additionally, the proposed strategy dynamically updates its parameter settings to adapt to various scenarios. The simulations confirm the superiority of the proposed strategy in different aspects.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141549678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}