Cluster Computing | Pub Date: 2024-06-23 | DOI: 10.1007/s10586-024-04621-1
Jun Wang, Ze Luo, Chenglong Wang
{"title":"A two-way trust routing scheme to improve security in fog computing environment","authors":"Jun Wang, Ze Luo, Chenglong Wang","doi":"10.1007/s10586-024-04621-1","DOIUrl":"https://doi.org/10.1007/s10586-024-04621-1","url":null,"abstract":"<p>Compliance with security requirements in the fog computing environment is known as an important phenomenon in maintaining the quality of service due to the dynamic topology. Security and privacy breaches can occur in fog computing because of its properties and the adaptability of its deployment method. These characteristics render current systems inappropriate for fog computing, including support for high mobility, a dynamic environment, geographic distribution, awareness of location, closeness to end users, and absence of redundancy. Although efficient secure routing protocols have been developed by researchers in recent years, it is challenging to ensure security, reliability, and quality of service at the same time to overcome the limitations of cloud-fog computing. In light of the fact that trust management is an effective means of protecting sensitive information, this study proposes a two-way trust management system (TMS) that would enable both the service requester and the service provider to verify each other's reliability and safety. The trustworthiness of the service seeker can also be verified in this way. So that fog clients can confirm that fog nodes can deliver suitable, dependable, and secure services, trust in a fog computing environment should ideally be two-way. The ability to verify the authenticity of fog clients is an important capability for fog nodes to have. A distributed, event-based, multi-trust trust system is presented by the suggested approach to trust computation, which makes use of social relationships (nodes and clients) and service quality criteria. Hence, the trust score is computed using a number of characteristics. Here, the weight of direct and indirect ratings is emphasized, and the final trust score is computed by dynamically merging the information gained from self-observation and the suggestions of nearby nodes. An extensive evaluation of the proposed method shows that it is resistant to a large number of badly behaved nodes and can successfully neutralize trust-based attacks.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"136 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cluster Computing | Pub Date: 2024-06-22 | DOI: 10.1007/s10586-024-04605-1
Abdelazim G. Hussien, Amit Chhabra, Fatma A. Hashim, Adrian Pop
{"title":"A novel hybrid Artificial Gorilla Troops Optimizer with Honey Badger Algorithm for solving cloud scheduling problem","authors":"Abdelazim G. Hussien, Amit Chhabra, Fatma A. Hashim, Adrian Pop","doi":"10.1007/s10586-024-04605-1","DOIUrl":"https://doi.org/10.1007/s10586-024-04605-1","url":null,"abstract":"<p>Cloud computing has revolutionized the way a variety of ubiquitous computing resources are provided to users with ease and on a pay-per-usage basis. Task scheduling problem is an important challenge, which involves assigning resources to users’ Bag-of-Tasks applications in a way that maximizes either system provider or user performance or both. With the increase in system size and the number of applications, the Bag-of-Tasks scheduling (<i>BoTS</i>) problem becomes more complex due to the expansion of search space. Such a problem falls in the category of NP-hard optimization challenges, which are often effectively tackled by metaheuristics. However, standalone metaheuristics generally suffer from certain deficiencies which affect their searching efficiency resulting in deteriorated final performance. This paper aims to introduce an optimal hybrid metaheuristic algorithm by leveraging the strengths of both the Artificial Gorilla Troops Optimizer (GTO) and the Honey Badger Algorithm (HBA) to find an approximate scheduling solution for the <i>BoTS</i> problem. While the original GTO has demonstrated effectiveness since its inception, it possesses limitations, particularly in addressing composite and high-dimensional problems. To address these limitations, this paper proposes a novel approach by introducing a new updating equation inspired by the HBA, specifically designed to enhance the exploitation phase of the algorithm. Through this integration, the goal is to overcome the drawbacks of the GTO and improve its performance in solving complex optimization problems. The initial performance of the GTOHBA algorithm tested on standard CEC2017 and CEC2022 benchmarks shows significant performance improvement over the baseline metaheuristics. Later on, we applied the proposed GTOHBA on the <i>BoTS</i> problem using standard parallel workloads (CEA-Curie and HPC2N) to optimize makespan and energy objectives. The obtained outcomes of the proposed GTOHBA are compared to the scheduling techniques based on well-known metaheuristics under the same experimental conditions using standard statistical measures and box plots. In the case of CEA-Curie workloads, the GTOHBA produced makespan and energy consumption reduction in the range of 8.12–22.76% and 6.2–18.00%, respectively over the compared metaheuristics. Whereas for the HPC2N workloads, GTOHBA achieved 8.46–30.97% makespan reduction and 8.51–33.41% energy consumption reduction against the tested metaheuristics. 
In conclusion, the proposed hybrid metaheuristic algorithm provides a promising solution to the <i>BoTS</i> problem, that can enhance the performance and efficiency of cloud computing systems.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"23 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
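The abstract names the hybridization idea (GTO search with an HBA-inspired exploitation update) but not the update equations. The sketch below only illustrates the general pattern on a toy objective; the exploration and exploitation rules are made-up stand-ins, not the GTOHBA equations.

```python
# Schematic sketch of the hybridization pattern described above: a population-based
# optimizer whose exploitation step perturbs candidates around the best solution with
# an HBA-style decaying intensity factor. Update rules are illustrative only.
import numpy as np

def sphere(x):                       # toy objective for illustration
    return float(np.sum(x ** 2))

def hybrid_optimize(obj, dim=10, pop=30, iters=200, lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (pop, dim))
    fitness = np.array([obj(x) for x in X])
    best = X[fitness.argmin()].copy()

    for t in range(iters):
        decay = np.exp(-t / iters)                       # HBA-like density/decay factor
        for i in range(pop):
            if rng.random() < 0.5:
                # Exploration: GTO-style move toward a random member of the troop.
                partner = X[rng.integers(pop)]
                cand = X[i] + rng.uniform(-1, 1, dim) * (partner - X[i])
            else:
                # Exploitation: HBA-inspired perturbation around the best solution.
                intensity = rng.random() * decay
                cand = best + intensity * rng.standard_normal(dim) * (best - X[i])
            cand = np.clip(cand, lb, ub)
            f = obj(cand)
            if f < fitness[i]:
                X[i], fitness[i] = cand, f
        best = X[fitness.argmin()].copy()
    return best, float(fitness.min())

print(hybrid_optimize(sphere)[1])     # converges toward 0 on the toy objective
```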
Cluster Computing | Pub Date: 2024-06-22 | DOI: 10.1007/s10586-024-04546-9
Y. Sreenivasa Rao, Vikas Srivastava, Tapaswini Mohanty, Sumit Kumar Debnath
{"title":"Designing quantum-secure attribute-based encryption","authors":"Y. Sreenivasa Rao, Vikas Srivastava, Tapaswini Mohanty, Sumit Kumar Debnath","doi":"10.1007/s10586-024-04546-9","DOIUrl":"https://doi.org/10.1007/s10586-024-04546-9","url":null,"abstract":"<p>In the last couple of decades, Attribute-Based Encryption (ABE) has been a promising encryption technique to realize fine-grained access control over encrypted data. ABE has appealing functionalities such as (i) access control through encryption and (ii) encrypting a message to a group of recipients without knowing their actual identities. However, the existing state-of-the-art ABEs are based on number-theoretic hardness assumptions. These designs are not secure against attacks by quantum algorithms such as Shor algorithm. Moreover, existing Post-Quantum Cryptography (PQC)-based ABEs fail to provide long-term security. Therefore, there is a need for quantum secure ABE that can withstand quantum attacks and provides long-term security. In this work, for the first time, we introduce the notion of a quantum-secure ABE (<span>qABE</span>) framework that preserves the classical ABE’s functionalities and resists quantum attacks. Next, we provide a generic construction of <span>qABE</span> which is able to transform any existing ABE into <span>qABE</span> scheme. Thereafter, we illustrate a concrete construction of a quantum ABE based on our generic transformation <span>qABE</span> and the Waters’ ciphertext-policy ABE scheme.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"46 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141531642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cluster Computing | Pub Date: 2024-06-22 | DOI: 10.1007/s10586-024-04623-z
Rashmi Keshri, Deo Prakash Vidyarthi
{"title":"Energy-efficient communication-aware VM placement in cloud datacenter using hybrid ACO–GWO","authors":"Rashmi Keshri, Deo Prakash Vidyarthi","doi":"10.1007/s10586-024-04623-z","DOIUrl":"https://doi.org/10.1007/s10586-024-04623-z","url":null,"abstract":"<p>Virtual machine placement (VMP) is the process of mapping virtual machines to physical machines, which is very important for resource utilization in cloud data centres. As such, VM placement is an NP-class problem, and therefore, researchers have frequently applied meta-heuristics for this. In this study, we applied a hybrid meta-heuristic that combines ant colony optimisation (ACO) and grey wolf optimisation (GWO) to minimise resource wastage, energy consumption, and bandwidth usage. The performance study of the proposed work is conducted on variable number of virtual machines with different resource correlation coefficients. According to the observations, there is 2.85%, 7.61%, 15.78% and 19.41% improvement in power consumption, 26.44%, 57.83%, 77.90% and 83.89% improvement in resource wastage and 2.94%, 8.20%, 9.99% and 10.72% improvement in bandwidth utilisation as compared to multi-objective GA, ACO, FFD and random based algorithm respectively. To study the convergence of the proposed method, it is compared with few recent hybrid meta-heuristic algorithms, namely ACO–PSO, GA–PSO, GA–ACO and GA–GWO which exhibits that the proposed hybrid method converges faster.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"239 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cluster Computing | Pub Date: 2024-06-22 | DOI: 10.1007/s10586-024-04596-z
Ankit Kumar Jain, Hariom Shukla, Diksha Goel
{"title":"A comprehensive survey on DDoS detection, mitigation, and defense strategies in software-defined networks","authors":"Ankit Kumar Jain, Hariom Shukla, Diksha Goel","doi":"10.1007/s10586-024-04596-z","DOIUrl":"https://doi.org/10.1007/s10586-024-04596-z","url":null,"abstract":"<p> Software Defined Networking (SDN) has become increasingly prevalent in cloud computing, Internet of Things (IoT), and various environments to optimize network efficiency. While it provides a flexible network infrastructure, it also faces security threats, particularly from Distributed Denial of Service (DDoS) attacks due to its centralized design. This survey comprehensively reviews the efforts of various researchers in safeguarding SDN against DDoS attacks and analyzes different detection and mitigation strategies employed in SDN environments. Furthermore, the survey explores various types of DDoS attacks that can occur across different planes and communication links in SDN. Additionally, emerging security measures for preventing DDoS attacks in SDN are examined. The survey also reviews the datasets, tools, and simulators used for detecting DDoS attacks in SDN. Moreover, the survey identifies various open challenges in detecting and mitigating DDoS attacks in SDN and outlines potential future research directions. Lastly, the survey provides a comprehensive comparative analysis of various DDoS detection techniques based on various essential parameters. </p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quantum competitive decision algorithm for the emergency siting problem under given deadline conditions","authors":"Wei Zhao, Weiming Gao, Shengnan Gao, Chenmei Teng, Xiaoya Zhu","doi":"10.1007/s10586-024-04548-7","DOIUrl":"https://doi.org/10.1007/s10586-024-04548-7","url":null,"abstract":"<p>Allocating emergency resources effectively is an essential aspect of disaster preparation and response. The Emergency Siting Problem (ESP) involves identifying the best places to locate emergency services in order that it can serve the most people in the least amount of time. Maintaining time limitations is of greatest significance in situations where each second matters, such as during disasters or public health emergencies. In this study, we concentrate on the difficulty of solving the ESP under extreme time limits. In this research, Genetic-adaptive reptile search optimization (GRSO) is proposed to provide a different way to solve the ESP problem within the constraints of limited time. The proposed GRSO method takes into account travel times, prospective facility places, and the geographic location of demand sites while keeping to the established time restrictions. In this study, the proposed method demonstrating superior performance accuracy in locating transportation facilities under extreme time limits for Emergency Service Planning (ESP), outperforming established optimization strategies and heuristics commonly applied to ESP problems. A fitness function is created to assess the standard of responses based on elements including response speed, coverage, and meeting deadlines. The GRSO algorithm has been modified and altered to handle the distinctive features of the ESP, such as precise facility placements and time constraints. Simulated and real-world datasets describing emergency circumstances are used in computational research to confirm the efficiency of the proposed method. The results are evaluated with established optimization strategies and heuristics generally applied to ESP problems. Results show that the GRSOapproach provides solutions that are more in pace with time limit constraints without sacrificing sufficient degrees of coverage or response time.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"204 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improved aquila optimizer with mRMR for feature selection of high-dimensional gene expression data","authors":"Xiwen Qin, Siqi Zhang, Xiaogang Dong, Hongyu Shi, Liping Yuan","doi":"10.1007/s10586-024-04614-0","DOIUrl":"https://doi.org/10.1007/s10586-024-04614-0","url":null,"abstract":"<p>Accurate classification of gene expression data is crucial for disease diagnosis and drug discovery. However, gene expression data usually has a large number of features, which poses a challenge for accurate classification. In this paper, a novel feature selection method based on minimal redundancy maximal relevance (mRMR) and aquila optimizer is proposed, which introduces the mRMR method in the initialization stage of the population to generate excellent initial populations, effectively improve the quality of the population, and then, the using random opposition-based learning strategy to improve the diversity of aquila population and accelerate the convergence speed of the algorithm, and finally, introducing inertia weight in the position update formula in the late iteration of the aquila optimizer to avoid the algorithm falling into the local optimum and improve the algorithm’s capability to find the optimum. In order to verify the effectiveness of the proposed method, ten real gene expression datasets are selected in this paper and compared with several meta-heuristic algorithms. Experimental results show that the proposed method is significantly superior to other meta-heuristic algorithms in terms of fitness value, classification accuracy and the number of selected features. Compared with the original aquila optimizer, the average classification accuracy of the proposed method on KNN and SVM classifiers is improved by 3.48–12.41% and 0.53–18.63% respectively. The proposed method significantly reduces the feature dimension of gene expression data, retains important features, and obtains higher classification accuracy, providing a new method and idea for feature selection of gene expression data.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"23 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cluster Computing | Pub Date: 2024-06-19 | DOI: 10.1007/s10586-024-04616-y
B. Dhanalaxmi, Yeligeti Raju, B. Saritha, N. Sabitha, Namita Parati, Kandula Damodhar Rao
{"title":"OptFBFN: IOT threat mitigation in software-defined networks based on fuzzy approach","authors":"B. Dhanalaxmi, Yeligeti Raju, B. Saritha, N. Sabitha, Namita Parati, Kandula Damodhar Rao","doi":"10.1007/s10586-024-04616-y","DOIUrl":"https://doi.org/10.1007/s10586-024-04616-y","url":null,"abstract":"<p>Software-Defined Networking (SDN) has emerged as a new architectural paradigm in computer networks, aiming to enhance network capabilities and address the limitations of conventional networks. Despite its many advantages, SDN has encountered numerous attack risks and vulnerabilities. Using an intrusion detection system (IDS) is one of the most important ways to address threats and concerns in the SDN. The great flexibility, adaptability, and programmability of SDN, together with other unique qualities, make the integration of IDS into the SDN network effective. The majority of these methods are less scalable and have poor accuracy. This research suggests an Optimized Fuzzy Based Function Network (OFBFN) to solve this problem. The Modified ResNet152 method is utilized to extract features from the input data. The Binary Waterwheel Plant Algorithm (BWWPA) selects the essential features. To characterize attacks within the InSDN, BOT-IOT, ToN-IoT, and CICIDS 2019 datasets, the system first selects the most efficient features. Then, it employs the FBFN with the Coatis Optimization Algorithm for classification. The suggested system classifies attacks and benign traffic, distinguishes between different types of attacks, and specifies high-performance sub-attacks. Four benchmark datasets were utilized for training and evaluating the proposed system, demonstrating its effectiveness. According to the findings from the experiments, the suggested approach performs better than others at identifying a wide range of threats.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"136 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hybrid metaheuristics for selective inference task offloading under time and energy constraints for real-time IoT sensing systems","authors":"Abdelkarim Ben Sada, Amar Khelloufi, Abdenacer Naouri, Huansheng Ning, Sahraoui Dhelim","doi":"10.1007/s10586-024-04578-1","DOIUrl":"https://doi.org/10.1007/s10586-024-04578-1","url":null,"abstract":"<p>The recent widespread of AI-powered real-time applications necessitates the use of edge computing for inference task offloading. Power constrained edge devices are required to balance between processing inference tasks locally or offload to edge servers. This decision is determined according to the time constraint demanded by the real-time nature of applications, and the energy constraint dictated by the device’s power budget. This problem is further exacerbated in the case of systems leveraging multiple local inference models varying in size and accuracy. In this work, we tackle the problem of assigning inference models to inference tasks either using local inference models or by offloading to edge servers under time and energy constraints while maximizing the overall accuracy of the system. This problem is shown to be strongly NP-hard and therefore, we propose a hybrid genetic algorithm (HGSTO) to solve this problem. We leverage the speed of simulated annealing (SA) with the accuracy of genetic algorithms (GA) to develop a hybrid, fast and accurate algorithm compared with classic GA, SA and Particle Swarm Optimization (PSO). Experiment results show that HGSTO achieved on-par or higher accuracy than GA while resulting in significantly lower scheduling times compared to other schemes.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cluster Computing | Pub Date: 2024-06-19 | DOI: 10.1007/s10586-024-04597-y
Nisha Pal, Dharmendra Kumar Yadav
{"title":"Modeling and verification of software evolution using bigraphical reactive system","authors":"Nisha Pal, Dharmendra Kumar Yadav","doi":"10.1007/s10586-024-04597-y","DOIUrl":"https://doi.org/10.1007/s10586-024-04597-y","url":null,"abstract":"<p>Changes are inevitable in software due to technology advancements, and changes in business requirements. Making changes in the software by insertion, deletion or modification of new code may lead to malfunctioning of the old code. Hence, there is a need for a priori analysis to ensure and capture these types of changes to run the software smoothly. Making changes in the software while it is in use is called dynamic evolution. Due to the lack of formal modeling and verification, this dynamic evolution process of software systems has not become prominent. Hence, we used the bigraphical reactive system (BRS) technique to ensure that changes do not break the software functionality (adversely affect the system). BRS provides a powerful framework for modeling, analyzing, and verifying the dynamic evolution of software systems, resulting in ensuring the reliability and correctness of evolving software system. In this paper, we proposed a formal method technique for modeling and verifying the dynamic evolution process (changing user requirements at run time) using the BRS. We used a bigraph to model software architectures and described the evolution rules for supporting the dynamic changes of the software system. Finally, we have used the BigMC model checker tool to validate this model with its properties and provide associated verification procedures.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"40 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}