ComputingPub Date : 2024-08-20DOI: 10.1007/s00607-024-01339-1
C. Venkata Subbaiah, K. Govinda
{"title":"Energy-aware and trust-based cluster head selection in healthcare WBANs with enhanced GWO optimization","authors":"C. Venkata Subbaiah, K. Govinda","doi":"10.1007/s00607-024-01339-1","DOIUrl":"https://doi.org/10.1007/s00607-024-01339-1","url":null,"abstract":"<p>This paper describes a comprehensive methodology for improving Wireless Body Area Networks (WBANs) in healthcare systems using Enhanced Gray Wolf Optimization (GWO). The methodology begins with WBAN initialization and the configuration of critical network parameters. To improve network performance and trustworthiness, direct trust calculations, historical trust , and energy trust, as well as energy consumption models based on distance and transmission type, are integrated. The use of an Enhanced GWO approach makes it easier to select optimal cluster heads, guided by a customized fitness function that balances trust and energy efficiency. This work has been carried on a PC with 16 GB RAM using MATLAB R2022b tool for simulation purpose. The methodology outperforms existing methods in terms of throughput, computation time, and residual energy. This promising methodology provides improved data routing, energy efficiency, and trustworthiness, making it a valuable asset in WBAN-based healthcare systems.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"27 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ComputingPub Date : 2024-08-17DOI: 10.1007/s00607-024-01335-5
Mohsen Ghorbian, Mostafa Ghobaei-Arani
{"title":"A survey on the cold start latency approaches in serverless computing: an optimization-based perspective","authors":"Mohsen Ghorbian, Mostafa Ghobaei-Arani","doi":"10.1007/s00607-024-01335-5","DOIUrl":"https://doi.org/10.1007/s00607-024-01335-5","url":null,"abstract":"<p>Serverless computing is one of the latest technologies that has received much attention from researchers and companies in recent years since it provides dynamic scalability and a clear economic model. Serverless computing enables users to pay only for the time they use resources. This approach has several benefits, including optimizing costs and resource utilization; however, cold starts are a concern and challenge. Various studies have been conducted in the academic and industrial sectors to deal with this problem, which poses a significant research challenge. This paper comprehensively reviews recent cold start research in serverless computing. Hence, this paper presents a detailed taxonomy of several serverless computing strategies for dealing with cold start latency. We have considered two main approaches in the proposed classification: Optimizing Loading Times (OLT) and Optimizing Resource Usage (ORU), each including several subsets. The subsets of the primary approach OLT are divided into container-based and checkpoint-based. Also, the primary approach ORU is divided into machine learning (ML)-based, optimization-based, and heuristic-based approaches. After analyzing current methods, we have categorized and investigated them according to their characteristics and commonalities. 
Additionally, we examine potential challenges and directions for future research.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"60 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ComputingPub Date : 2024-08-17DOI: 10.1007/s00607-024-01332-8
Brototi Mondal, Avishek Choudhury
{"title":"Multi-objective cuckoo optimizer for task scheduling to balance workload in cloud computing","authors":"Brototi Mondal, Avishek Choudhury","doi":"10.1007/s00607-024-01332-8","DOIUrl":"https://doi.org/10.1007/s00607-024-01332-8","url":null,"abstract":"<p>A cloud load balancer should be proficient to modify it’s approach to handle the various task kinds and the dynamic environment. In order to prevent situations where computing resources are excess or underutilized, an efficient task scheduling system is always necessary for optimum or efficient utilization of resources in cloud computing. Task Scheduling can be thought of as an optimization problem. As task scheduling in the cloud is an NP-Complete problem, the best solution cannot be found using gradient-based methods that look for optimal solutions to NP-Complete problems in a reasonable amount of time. Therefore, the task scheduling problem should be solved using evolutionary and meta-heuristic techniques. This study proposes a novel approach to task scheduling using the Cuckoo Optimization algorithm. With this approach, the load is effectively distributed among the virtual machines that are available, all the while keeping the total response time and average task processing time(PT) low. 
The comparative simulation results show that the proposed strategy performs better than state-of-the-art techniques such as Particle Swarm optimization, Ant Colony optimization, Genetic Algorithm and Stochastic Hill Climbing.\u0000</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"5 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
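Cuckoo-style search over task-to-VM assignments can be sketched against a toy makespan objective. The move operator below is a simplified random-reassignment stand-in for the Lévy flight used in Cuckoo Optimization, and the task lengths and VM speeds are made-up values, not the paper's workload.

```python
import random

def makespan(assign, task_len, vm_speed):
    """Completion time of the most loaded VM under a task->VM assignment."""
    load = [0.0] * len(vm_speed)
    for t, vm in enumerate(assign):
        load[vm] += task_len[t] / vm_speed[vm]
    return max(load)

def cuckoo_schedule(task_len, vm_speed, iters=200, seed=0):
    """Toy cuckoo-style search: keep the best nest (schedule) and try
    random perturbations, abandoning candidates that are worse."""
    rng = random.Random(seed)
    n_vm = len(vm_speed)
    best = [rng.randrange(n_vm) for _ in task_len]
    best_cost = makespan(best, task_len, vm_speed)
    for _ in range(iters):
        cand = list(best)
        # Simplified stand-in for a Levy-flight move: re-place a few tasks.
        for t in rng.sample(range(len(task_len)), max(1, len(task_len) // 4)):
            cand[t] = rng.randrange(n_vm)
        cost = makespan(cand, task_len, vm_speed)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

tasks = [4, 8, 2, 6, 3, 5]   # task lengths (made-up)
vms = [1.0, 2.0]             # VM speeds (made-up)
sched, cost = cuckoo_schedule(tasks, vms)
```

A multi-objective version would replace the single makespan score with a weighted combination of response time and average processing time, as the abstract suggests.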
{"title":"Data and resource aware incremental ML training in support of pervasive applications","authors":"Thanasis Moustakas, Athanasios Tziouvaras, Kostas Kolomvatsos","doi":"10.1007/s00607-024-01338-2","DOIUrl":"https://doi.org/10.1007/s00607-024-01338-2","url":null,"abstract":"<p>Nowadays, the use of Artificial Intelligence (AI) and Machine Learning (ML) algorithms is increasingly affecting the performance of innovative systems. At the same time, the advent of the Internet of Things (IoT) and the Edge Computing (EC) as means to place computational resources close to users create the need for new models in the training process of ML schemes due to the limited computational capabilities of the devices/nodes placed there. In any case, we should not forget that IoT devices or EC nodes exhibit less capabilities than the Cloud back end that could be adopted for a more complex training upon vast volumes of data. The ideal case is to have, at least, basic training capabilities at the IoT-EC ecosystem in order to reduce the latency and face the needs of near real time applications. In this paper, we are motivated by this need and propose a model that tries to save time in the training process by focusing on the training dataset and its statistical description. We do not dive into the architecture of any ML model as we target to provide a more generic scheme that can be applied upon any ML module. We monitor the statistics of the training dataset and the loss during the process and identify if there is a potential to stop it when not significant contribution is foreseen for the data not yet adopted in the model. We argue that our approach can be applied only when a negligibly decreased accuracy is acceptable by the application gaining time and resources from the training process. 
We provide two algorithms for applying this approach and an extensive experimental evaluation upon multiple supervised ML models to reveal the benefits of the proposed scheme and its constraints.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"21 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
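The core idea of halting incremental training once the remaining data is unlikely to contribute can be sketched as a loss-plateau test. The window size, threshold, and simulated loss curve below are illustrative assumptions, not the paper's two algorithms.

```python
def should_stop(loss_history, window=3, min_gain=0.01):
    """Stop feeding further data increments when the relative loss
    improvement over the last `window` increments drops below `min_gain`."""
    if len(loss_history) <= window:
        return False
    old, new = loss_history[-window - 1], loss_history[-1]
    if old <= 0:
        return True
    return (old - new) / old < min_gain

# Simulated loss curve: fast descent, then a plateau (made-up numbers).
curve = [1.0, 0.5, 0.3, 0.22, 0.20, 0.199, 0.1985, 0.1984]
losses, stopped_at = [], None
for i, loss in enumerate(curve):
    losses.append(loss)           # one loss value per data increment
    if should_stop(losses):
        stopped_at = i            # skip the remaining increments
        break
```

The trade-off the authors name is visible here: stopping at the plateau saves the remaining increments at the cost of a marginally higher final loss.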
ComputingPub Date : 2024-08-15DOI: 10.1007/s00607-024-01341-7
Abir El Haj
{"title":"Community detection in multiplex continous weighted nodes networks using an extension of the stochastic block model","authors":"Abir El Haj","doi":"10.1007/s00607-024-01341-7","DOIUrl":"https://doi.org/10.1007/s00607-024-01341-7","url":null,"abstract":"<p>The stochastic block model (SBM) is a probabilistic model aimed at clustering individuals within a simple network based on their social behavior. This network consists of individuals and edges representing the presence or absence of relationships between each pair of individuals. This paper aims to extend the traditional stochastic block model to accommodate multiplex weighted nodes networks. These networks are characterized by multiple relationship types occurring simultaneously among network individuals, with each individual associated with a weight representing its influence in the network. We introduce an inference method utilizing a variational expectation-maximization algorithm to estimate model parameters and classify individuals. Finally, we demonstrate the effectiveness of our approach through applications using simulated and real data, highlighting its main characteristics.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"8 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ComputingPub Date : 2024-08-13DOI: 10.1007/s00607-024-01334-6
N. Kavitha, K. Ruba Soundar, R. Karthick, J. Kohila
{"title":"Automatic video captioning using tree hierarchical deep convolutional neural network and ASRNN-bi-directional LSTM","authors":"N. Kavitha, K. Ruba Soundar, R. Karthick, J. Kohila","doi":"10.1007/s00607-024-01334-6","DOIUrl":"https://doi.org/10.1007/s00607-024-01334-6","url":null,"abstract":"<p>The development of automatic video understanding technology is highly needed due to the rise of mass video data, like surveillance videos, personal video data. Several methods have been presented previously for automatic video captioning. But, the existing methods have some problems, like more time consume during processing a huge number of frames, and also it contains over fitting problem. This is a difficult task to automate the process of video caption. So, it affects final result (Caption) accuracy. To overcome these issues, Automatic Video Captioning using Tree Hierarchical Deep Convolutional Neural Network and attention segmental recurrent neural network-bi-directional Long Short-Term Memory (ASRNN-bi-directional LSTM) is proposed in this paper. The captioning part contains two phases: Feature Encoder and Decoder. In feature encoder phase, the tree hierarchical Deep Convolutional Neural Network (Tree CNN) encodes the vector representation of video and extract three kinds of features. In decoder phase, the attention segmental recurrent neural network (ASRNN) decode vector into textual description. ASRNN-base methods struck with long-term dependency issue. To deal this issue, focuses on all generated words from the bi-directional LSTM and caption generator for extracting global context information presented by concealed state of caption generator is local and unfinished. Hence, Golden Eagle Optimization is exploited to enhance ASRNN weight parameters. The proposed method is executed in Python. 
The proposed technique achieves 34.89%, 29.06% and 20.78% higher accuracy, 23.65%, 22.10% and 29.68% lesser Mean Squared Error compared to the existing methods.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"61 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ComputingPub Date : 2024-08-07DOI: 10.1007/s00607-024-01323-9
Vijay Kumar Damera, G. Vanitha, B. Indira, G. Sirisha, Ramesh Vatambeti
{"title":"Improved snake optimization-based task scheduling in cloud computing","authors":"Vijay Kumar Damera, G. Vanitha, B. Indira, G. Sirisha, Ramesh Vatambeti","doi":"10.1007/s00607-024-01323-9","DOIUrl":"https://doi.org/10.1007/s00607-024-01323-9","url":null,"abstract":"<p>The recent focus on cloud computing is due to its evolving platform and features like multiplexing users on shared infrastructure and on-demand resource computation. Efficient use of computer resources is crucial in cloud computing. Effective task-scheduling methods are essential to optimize cloud system performance. Scheduling virtual machines in dynamic cloud environments, marked by uncertainty and constant change, is challenging. Despite many efforts to improve cloud task scheduling, it remains an unresolved issue. Various scheduling approaches have been proposed, but researchers continue to refine performance by incorporating diverse quality-of-service characteristics, enhancing overall cloud performance. This study introduces an innovative task-scheduling algorithm that improves upon existing methods, particularly in quality-of-service criteria like makespan and energy efficiency. The proposed technique enhances the Snake Optimization Algorithm (SO) by incorporating sine chaos mapping, a spiral search strategy, and dynamic adaptive weights. These enhancements increase the algorithm’s ability to escape local optima and improve global search. Compared to other models, the proposed method shows improvements in cloud scheduling performance by 6%, 4.6%, and 3.27%. 
Additionally, the approach quickly converges to the optimal scheduling solution.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"22 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141931601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
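The sine chaos mapping the authors add to the Snake Optimization Algorithm is typically used to spread the initial population over the search space, since the map's iterates do not settle to a fixed point. A minimal sketch of such a chaotic initializer follows; the seed value and map constant are illustrative assumptions.

```python
import math

def sine_chaos_sequence(n, x0=0.7, mu=1.0):
    """Sine chaotic map x_{k+1} = mu * sin(pi * x_k); iterates stay in
    [0, 1] for x0 in (0, 1) and mu <= 1, giving a well-spread sequence
    that can seed an initial population in normalized search space."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * math.sin(math.pi * x)
        xs.append(x)
    return xs

# Ten chaotic values, e.g. one per candidate solution dimension.
pop = sine_chaos_sequence(10)
```

Each value would then be scaled to the actual decision-variable bounds before the SO iterations begin.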
ComputingPub Date : 2024-08-06DOI: 10.1007/s00607-024-01330-w
Riyanka Jena, Priyanka Singh, Manoranjan Mohanty, Manik Lal Das
{"title":"PP-PRNU: PRNU-based source camera attribution with privacy-preserving applications","authors":"Riyanka Jena, Priyanka Singh, Manoranjan Mohanty, Manik Lal Das","doi":"10.1007/s00607-024-01330-w","DOIUrl":"https://doi.org/10.1007/s00607-024-01330-w","url":null,"abstract":"<p>Tracing the origin of digital images is a crucial concern in digital image forensics, where accurately identifying the source of an image is essential that leads important clues to investing and law enforcement agencies. Photo Response Non-Uniformity (PRNU) based camera attribution is an effective forensic tool for identifying the source camera of a crime scene image. The PRNU pattern approach helps investigators determine whether a specific camera captured a crime scene image using the Pearson correlation coefficient between the unique camera fingerprint and the PRNU noise. However, this approach raises privacy concerns as the camera fingerprint or the PRNU noise can be linked to non-crime images taken by the camera, potentially disclosing the photographer’s identity. To address this issue, we propose a novel PRNU-based source camera attribution scheme that enables forensic investigators to conduct criminal investigations while preserving privacy. In the proposed scheme, a camera fingerprint extracted from a set of known images and PRNU noise extracted from the anonymous image are divided into multiple shares using Shamir’s Secret Sharing (SSS). These shares are distributed to various cloud servers where correlation is computed on a share basis between the camera fingerprint and the PRNU noise. The partial correlation values are combined to obtain the final correlation value, determining whether the camera took the image. The security analysis and the experimental results demonstrate that the proposed scheme not only preserves privacy and ensures data confidentiality and integrity, but also is computationally efficient compared to existing methods. 
Specifically, the results showed that our scheme achieves similar accuracy in source camera attribution with a negligible decrease in performance compared to non-privacy-preserving methods and is computationally less expensive than state-of-the-art schemes. Our work advances research in image forensics by addressing the need for accurate source identification and privacy protection. The privacy-preserving approach is beneficial for scenarios where protecting the identity of the photographer is crucial, such as in whistleblower cases.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"127 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141931686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
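The underlying (non-privacy-preserving) attribution test is the Pearson correlation between the camera fingerprint and the PRNU noise residual of the questioned image; PP-PRNU distributes this computation across secret shares. A minimal sketch of the plain correlation test on toy 1-D data follows; real PRNU vectors are image-sized, and the data and decision threshold here are made up.

```python
import math

def pearson(x, y):
    """Pearson correlation between a camera fingerprint and the PRNU
    noise residual of a questioned image (both flattened to 1-D)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy fingerprint; a matching residual is the fingerprint plus small noise.
fingerprint = [0.1, -0.2, 0.4, 0.0, -0.3, 0.25, -0.05, 0.15]
matching    = [f + 0.02 * ((-1) ** i) for i, f in enumerate(fingerprint)]
unrelated   = [0.3, 0.1, -0.4, 0.2, 0.0, -0.1, 0.35, -0.25]

r_match = pearson(fingerprint, matching)   # high: same camera
r_other = pearson(fingerprint, unrelated)  # low: different source
```

In the proposed scheme, the sums inside this formula would be evaluated per share on separate cloud servers and only then combined into the final correlation value.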
ComputingPub Date : 2024-08-06DOI: 10.1007/s00607-024-01329-3
Jiahao Li, Tianhan Gao, Qingwei Mi
{"title":"Phasic parallel-network policy: a deep reinforcement learning framework based on action correlation","authors":"Jiahao Li, Tianhan Gao, Qingwei Mi","doi":"10.1007/s00607-024-01329-3","DOIUrl":"https://doi.org/10.1007/s00607-024-01329-3","url":null,"abstract":"<p>Reinforcement learning algorithms show significant variations in performance across different environments. Optimization for reinforcement learning thus becomes the major research task since the instability and unpredictability of the reinforcement learning algorithms have consistently hindered their generalization capabilities. In this study, we address this issue by optimizing the algorithm itself rather than environment-specific optimizations. We start by tackling the uncertainty caused by the mutual influence of original action interferences, aiming to enhance the overall performance. The <i>Phasic Parallel-Network Policy</i> (PPP), which is a deep reinforcement learning framework. It diverges from the traditional policy actor-critic method by grouping the action space based on action correlations. The PPP incorporates parallel network structures and combines network optimization strategies. With the assistance of the value network, the training process is divided into different specific stages, namely the Extra-group Policy Phase and the Inter-group Optimization Phase. PPP breaks through the traditional unit learning structure. 
The experimental results indicate that it not only optimizes training effectiveness but also reduces training steps, enhances sample efficiency, and significantly improves stability and generalization.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"34 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141931602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ComputingPub Date : 2024-07-31DOI: 10.1007/s00607-024-01322-w
Gyan Singh, Amit K. Chaturvedi
{"title":"A cost, time, energy-aware workflow scheduling using adaptive PSO algorithm in a cloud–fog environment","authors":"Gyan Singh, Amit K. Chaturvedi","doi":"10.1007/s00607-024-01322-w","DOIUrl":"https://doi.org/10.1007/s00607-024-01322-w","url":null,"abstract":"<p>Recent years have seen an exponential rise in data produced by Internet of Things (IoT) applications. Cloud servers were not designed for such extensive data, leading to challenges like increased makespan, cost, bandwidth, energy consumption, and network latency. To address these, the cloud–fog environment has emerged as an extension to cloud servers, offering services closer to IoT devices. Scheduling workflow applications to optimize multiple conflicting objectives in cloud fog is an NP-hard problem. Particle Swarm Optimization (PSO) is a good choice for multi-objective solutions due to its simplicity and rapid convergence. However, it has shortcomings like premature convergence and stagnation. To address these challenges, we formalize a theoretical background for scheduling workflow applications in the cloud–fog environment with multiple conflicting objectives. Subsequently, we propose an adaptive particle swarm optimization (APSO) algorithm with novel enhancements, including an S-shaped sigmoid function to dynamically decrease inertia weight and a linear updating mechanism for cognitive factors. Their integration in cloud–fog environments has not been previously explored. This novel application addresses unique challenges of workflow scheduling in cloud–fog systems, such as heterogeneous resource management, energy consumption, and increased cost. The effectiveness of APSO is evaluated using a real-world scientific workflow in a simulated cloud–fog environment and compared with four meta-heuristics. 
Our proposed workflow scheduling significantly reduces makespan and energy consumption without compromising overall cost compared to other meta-heuristics.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"295 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141869709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
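The APSO enhancements named above include an S-shaped sigmoid decay for the inertia weight and a linear update for the cognitive factor. A sketch of such parameter schedules follows; all constants (w_max, w_min, steepness, c_start, c_end) are illustrative assumptions, not the paper's calibrated values.

```python
import math

def inertia_weight(t, t_max, w_max=0.9, w_min=0.4, steepness=10.0):
    """S-shaped sigmoid decay: stays near w_max early (exploration) and
    drops smoothly to w_min late in the run (exploitation)."""
    x = steepness * (t / t_max - 0.5)
    return w_min + (w_max - w_min) / (1.0 + math.exp(x))

def cognitive_factor(t, t_max, c_start=2.5, c_end=0.5):
    """Linear decrease of the cognitive coefficient over the run."""
    return c_start + (c_end - c_start) * t / t_max

T = 100
w0, wT = inertia_weight(0, T), inertia_weight(T, T)  # ~0.90 down to ~0.40
```

At each PSO iteration `t`, these two values would plug into the standard velocity update in place of fixed inertia and cognitive coefficients.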