{"title":"Latency-aware scheduling for data-oriented service requests in collaborative IoT-edge-cloud networks","authors":"Mengyu Sun , Shuo Quan , Xuliang Wang , Zhilan Huang","doi":"10.1016/j.future.2024.107538","DOIUrl":"10.1016/j.future.2024.107538","url":null,"abstract":"<div><div>Edge computing facilitates the collaboration of physical devices at the network edge to support nearby computing requests, reducing long-distance sensory data transmission from <em>I</em>nternet <em>o</em>f <em>T</em>hings (<em>IoT</em>) devices to the remote cloud. An <em>IoT</em>-edge-cloud network is constructed, where sensory data collected by <em>IoT</em> devices is aggregated at physically adjacent edge nodes and transmitted between these edge nodes for task processing, while the cloud acts as a central controller with global scheduling, considering the latency sensitivity of service requests and the capacity limitations of physical devices. Service requests are decomposed into multiple data-oriented tasks with certain logical relations, and they are satisfied in this collaborative <em>IoT</em>-edge-cloud network. In this setting, a data-oriented task scheduling mechanism is presented that considers data aggregation, data transmission and task processing in a latency-efficient and energy-saving fashion, formulated as a constrained objective optimization problem. We develop an <em>i</em>mproved <em>G</em>enetic <em>A</em>lgorithm-based <em>T</em>ask <em>S</em>cheduling (<em>iGATS</em>) approach, where task scheduling decisions are encoded as chromosomes, and a fitness function and genetic operators are designed to solve the formulated problem. Simulation experiments are conducted, and numerical results show that our <em>iGATS</em> outperforms baseline techniques in reducing response latency, improving temporal satisfaction of service requests, and maintaining load balancing across the whole network.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"163 ","pages":"Article 107538"},"PeriodicalIF":6.2,"publicationDate":"2024-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142327208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
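The iGATS approach above encodes scheduling decisions as chromosomes and evolves them with a fitness function and genetic operators. A minimal sketch of that idea, with a toy queuing-latency fitness and hypothetical per-node latencies (the paper's actual encoding, operators, and objective are more elaborate):

```python
import random

# Hypothetical per-node processing latencies (ms); indices are node IDs.
NODE_LATENCY = [5.0, 8.0, 3.0, 12.0]
NUM_TASKS = 6

def fitness(chromosome):
    # A chromosome assigns each task to a node. Lower total latency gives
    # higher fitness; the load multiplier crudely penalizes overloaded nodes,
    # mimicking the load-balancing objective described in the abstract.
    load = [0] * len(NODE_LATENCY)
    total = 0.0
    for node in chromosome:
        load[node] += 1
        total += NODE_LATENCY[node] * load[node]  # queuing effect
    return -total

def evolve(pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(NODE_LATENCY)) for _ in range(NUM_TASKS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, NUM_TASKS)    # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:               # point mutation
                child[rng.randrange(NUM_TASKS)] = rng.randrange(len(NODE_LATENCY))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

The evolved assignment spreads tasks toward the low-latency, lightly loaded nodes, which is the qualitative behavior the abstract reports.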
{"title":"Overlapping community detection using graph attention networks","authors":"Konstantinos Sismanis, Petros Potikas, Dora Souliou, Aris Pagourtzis","doi":"10.1016/j.future.2024.107529","DOIUrl":"10.1016/j.future.2024.107529","url":null,"abstract":"<div><div>Community detection is a research area with increasing practical significance. Successful examples of its application are found in many scientific areas like social networks, recommender systems and biology. Deep learning has achieved many successes (Miotto et al., 2018; Voulodimos et al., 2018) on various graph-related tasks and has recently been used in the field of community detection, offering accuracy and scalability. In this paper, we propose a novel method called Attention Overlapping Community Detection (AOCD), which incorporates an attention mechanism into the well-known Neural Overlapping Community Detection (NOCD) method (Shchur and Günnemann, 2019) to discover overlapping communities in graphs. We perform several experiments to evaluate our proposed method’s ability to discover ground-truth communities. Compared to NOCD, our method achieves increased performance in many cases.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"163 ","pages":"Article 107529"},"PeriodicalIF":6.2,"publicationDate":"2024-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142327209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
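The attention mechanism AOCD adds follows the standard graph-attention recipe: score each neighbor with a learned vector over concatenated features, then normalize with a softmax. A dependency-free sketch with a toy graph and hypothetical learned parameters (real GAT layers also apply a linear projection and multiple heads):

```python
import math

# Toy graph: adjacency lists and 2-d node features (hypothetical values).
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
features = {0: [1.0, 0.0], 1: [0.9, 0.1], 2: [0.5, 0.5], 3: [0.0, 1.0]}
attn_vec = [0.7, 0.3, 0.2, 0.8]  # hypothetical learned attention parameters

def leaky_relu(x, slope=0.2):
    return x if x > 0 else slope * x

def attention_weights(i):
    # e_ij = LeakyReLU(a . [h_i || h_j]), normalized via softmax over
    # node i's neighborhood, as in graph attention networks.
    scores = []
    for j in neighbors[i]:
        concat = features[i] + features[j]
        scores.append(leaky_relu(sum(a * x for a, x in zip(attn_vec, concat))))
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return {j: e / z for j, e in zip(neighbors[i], exps)}

w = attention_weights(2)
```

The resulting weights sum to one over each neighborhood, so neighbor features can be aggregated as a convex combination before the community-affiliation readout.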
{"title":"To tune or not to tune? An approach for recommending important hyperparameters for classification and clustering algorithms","authors":"Radwa El Shawi, Mohamadjavad Bahman, Sherif Sakr","doi":"10.1016/j.future.2024.107524","DOIUrl":"10.1016/j.future.2024.107524","url":null,"abstract":"<div><div>Machine learning algorithms are widely employed across various applications and fields. Novel technologies in automated machine learning ease the complexity of the algorithm selection and hyperparameter optimization process. Tuning hyperparameters plays a crucial role in determining the performance of machine learning models. While many optimization techniques have achieved remarkable success in hyperparameter tuning, even surpassing human experts’ performance, relying solely on these black-box techniques can deprive practitioners of insights into the relative importance of different hyperparameters. In this paper, we investigate the importance of hyperparameter tuning by establishing a relationship between machine learning model performance and the corresponding hyperparameters. Our focus is primarily on classification and clustering tasks. We conduct experiments on benchmark datasets using six traditional classification and clustering algorithms, along with one deep learning model. Our findings empower users to make informed decisions regarding the necessity of engaging in time-consuming tuning processes. We highlight the most important hyperparameters and provide guidance on selecting an appropriate configuration space. The results of our experiments confirm that the hyperparameters identified as important are indeed crucial for performance. Overall, our study offers a quantitative basis for guiding automated hyperparameter optimization efforts and contributes to the development of better automated machine learning frameworks.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"163 ","pages":"Article 107524"},"PeriodicalIF":6.2,"publicationDate":"2024-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142314672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
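One simple way to relate model performance to individual hyperparameters, in the spirit of the study above, is to vary one hyperparameter at a time and rank by the spread of the resulting scores. The sketch below does this against a hypothetical response surface (the `score` function, its hyperparameters, and the grid are all invented for illustration; the paper's actual analysis is more sophisticated):

```python
from statistics import pvariance

# Hypothetical response surface: accuracy as a function of two hyperparameters.
# The saturation at depth 6 mimics a typical diminishing-returns curve.
def score(max_depth, min_samples):
    return 0.7 + 0.05 * min(max_depth, 6) - 0.002 * min_samples

grid = {"max_depth": [2, 4, 6, 8], "min_samples": [1, 5, 10, 20]}

def importance(param):
    # Variance of scores when sweeping one hyperparameter with the others
    # held at defaults: a crude one-at-a-time stand-in for functional-ANOVA
    # style importance analyses.
    defaults = {"max_depth": 4, "min_samples": 5}
    scores = []
    for v in grid[param]:
        args = dict(defaults, **{param: v})
        scores.append(score(**args))
    return pvariance(scores)

ranking = sorted(grid, key=importance, reverse=True)
```

On this toy surface `max_depth` dominates, so a practitioner would spend their tuning budget there first, which is exactly the kind of recommendation the paper aims to automate.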
{"title":"Improving WSN-based dataset using data augmentation for TSCH protocol performance modeling","authors":"Melchizedek Alipio","doi":"10.1016/j.future.2024.107540","DOIUrl":"10.1016/j.future.2024.107540","url":null,"abstract":"<div><div>This study addresses the problem of inadequate datasets in Time-Slotted Channel Hopping (TSCH) protocol in Wireless Sensor Networks (WSN) by introducing a viable machine learning (ML) approach that explicitly tackles the limitations associated with the scarcity of data samples. The dataset employed in this research is derived from actual sensor node implementations, ensuring authenticity and relevance. To counteract overfitting, Variational Auto-Encoder (VAE) and Generative Adversarial Network (GAN) algorithms are utilized for data augmentation during the modeling phase, alongside the incorporation of Random Forest (RF) and Artificial Neural Network (ANN) algorithms. Results reveal a notable improvement in the performance of the ML models through the implementation of data augmentation techniques. A comparative analysis of various ML models underscores the superiority of the RF model, augmented by the GAN technique. This model exhibits enhanced predictive capabilities for TSCH latency, underscoring its efficacy in modeling network protocol performance.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"163 ","pages":"Article 107540"},"PeriodicalIF":6.2,"publicationDate":"2024-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142314671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
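The study above augments a scarce WSN dataset with VAE- and GAN-generated samples. A full generative model is beyond a short sketch, so the following uses Gaussian jitter as a deliberately simple stand-in for those generators, just to illustrate enlarging a small latency dataset before training a regressor; the sample values are hypothetical:

```python
import random

# Toy TSCH latency samples in milliseconds (hypothetical values); the paper's
# dataset comes from actual sensor-node implementations.
real_samples = [12.1, 11.8, 12.4, 13.0, 11.9]

def augment(samples, factor=10, noise_std=0.2, seed=42):
    # Gaussian jitter as a minimal stand-in for the VAE/GAN generators used
    # in the paper: each synthetic point perturbs a real one, preserving the
    # overall distribution while multiplying the sample count.
    rng = random.Random(seed)
    synthetic = []
    for _ in range(factor):
        for s in samples:
            synthetic.append(s + rng.gauss(0.0, noise_std))
    return synthetic

augmented = real_samples + augment(real_samples)
```

The augmented set is roughly eleven times larger with the same central tendency, which is the property that lets the downstream RF/ANN models avoid overfitting on tiny datasets.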
{"title":"GenesisRM: A state-driven approach to resource management for distributed JVM web applications","authors":"Dishi Xu , Fagui Liu , Bin Wang , Xuhao Tang , Dinghao Zeng , Huaiji Gao , Runbin Chen , Qingbo Wu","doi":"10.1016/j.future.2024.107539","DOIUrl":"10.1016/j.future.2024.107539","url":null,"abstract":"<div><div>Reducing resource waste while maintaining the end-to-end latency service-level objective (SLO) by simultaneously managing CPU bandwidth, memory allocation, and the number of pods of web applications running on the Java virtual machine (JVM) is challenging. The challenges stem from the complexity of the multi-type resource allocation optimization problem, the high sensitivity of JVM performance to resource scaling actions, and the lack of low-level resource scaling mechanisms. We present <em>GenesisRM</em>, a resource management framework with a novel state-driven architecture. Specifically, we design a state control model for JVM web applications that encompasses seven pod states. This model serves as an abstraction layer, decoupling the centralized resource management system into a global state manager and distributed pod managers. The state manager controls the state transitions of the pods based on the overall workload, while the pod managers dynamically allocate resources for each pod according to its state and local workload. Then, we propose a multi-frequency control model with two predictive state controllers and a reactive state controller to manage the state of pods based on the state control model. In addition, GenesisRM brings new mechanisms to scale JVM pods’ low-level resources without damaging their performance. We evaluate our work using a real-world JVM web application benchmark in three server clusters of different scales in Pengcheng Laboratory Developer Cloud, and the 21-day experimental results show that GenesisRM saves 31.70% CPU and 17.60% memory compared to the best-performing state-of-the-art solutions while guaranteeing the SLO imposed on end-to-end latency.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"163 ","pages":"Article 107539"},"PeriodicalIF":6.2,"publicationDate":"2024-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142327245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
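The core abstraction here is a pod state machine whose legal transitions are enforced centrally. The abstract does not name the seven states, so the states and transition table below are purely hypothetical, sketched only to show how a state control model decouples a state manager from per-pod resource decisions:

```python
# Seven hypothetical pod states and their legal transitions; GenesisRM's
# actual state model is not specified in the abstract.
TRANSITIONS = {
    "cold": {"warming"},
    "warming": {"warm"},
    "warm": {"running", "cooling"},
    "running": {"overloaded", "cooling"},
    "overloaded": {"running"},
    "cooling": {"cold", "warm"},
    "terminated": set(),
}

class PodManager:
    """Guards a single pod's lifecycle against illegal state jumps."""

    def __init__(self):
        self.state = "cold"

    def transition(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

pod = PodManager()
for s in ("warming", "warm", "running"):
    pod.transition(s)
```

A global state manager would drive many such `PodManager` instances from workload forecasts, while each manager sizes CPU and memory locally for its current state.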
{"title":"Analyzing inference workloads for spatiotemporal modeling","authors":"Milan Jain, Nicolas Bohm Agostini, Sayan Ghosh, Antonino Tumeo","doi":"10.1016/j.future.2024.107513","DOIUrl":"10.1016/j.future.2024.107513","url":null,"abstract":"<div><p>Ensuring power grid resiliency, forecasting climate conditions, and optimization of transportation infrastructure are some of the many application areas where data is collected in both space and time. Spatiotemporal modeling captures those patterns to forecast future trends and support critical decision-making by leveraging machine learning/deep learning. Once models are trained offline, field deployment for near real-time inference can be challenging because performance varies significantly depending on the environment, available compute resources, and tolerance to ambiguity in results. Users deploying spatiotemporal models for solving complex problems can benefit from analytical studies considering a plethora of system adaptations to understand the associated performance-quality trade-offs.</p><p>To facilitate the co-design of next-generation hardware architectures for field deployment of trained models, it is critical to characterize the workloads of these deep learning (DL) applications during inference and assess their computational patterns at different levels of the execution stack. In this paper, we develop several variants of deep learning applications that use spatiotemporal data from dynamical systems. We study the associated computational patterns for inference workloads at different levels, considering relevant models (Long Short-Term Memory, Convolutional Neural Network and Spatio-Temporal Graph Convolution Network), DL frameworks (TensorFlow and PyTorch), precision (FP16, FP32, AMP, INT16 and INT8), inference runtimes (ONNX and AI Template), post-training quantization (TensorRT) and platforms (Nvidia DGX A100 and Sambanova SN10 RDU).</p><p>Overall, our findings indicate that although there is potential in mixed-precision models and post-training quantization for spatiotemporal modeling, extracting efficiency from contemporary GPU systems might be challenging. Instead, co-designing custom accelerators by leveraging optimized High Level Synthesis frameworks (such as SODA High-Level Synthesizer for customized FPGA/ASIC targets) can make workload-specific adjustments to enhance efficiency.</p></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"163 ","pages":"Article 107513"},"PeriodicalIF":6.2,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142274708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
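Among the precision variants compared above, post-training INT8 quantization is easy to illustrate without any framework. The sketch below shows symmetric per-tensor quantization on a few hypothetical FP32 weights; production toolchains like TensorRT add calibration, per-channel scales, and fused kernels on top of this basic scheme:

```python
# Minimal symmetric post-training quantization (FP32 -> INT8); the weight
# values are illustrative, not taken from any model in the study.
weights = [0.82, -0.44, 0.10, -0.95, 0.33]

def quantize_int8(vals):
    # Map the largest magnitude to 127; round the rest onto the INT8 grid.
    scale = max(abs(v) for v in vals) / 127.0
    q = [max(-128, min(127, round(v / scale))) for v in vals]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

The maximum round-trip error is bounded by half a quantization step, which is why INT8 often preserves accuracy while shrinking memory traffic fourfold relative to FP32.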
{"title":"Blockchain-based conditional privacy-preserving authentication scheme using PUF for vehicular ad hoc networks","authors":"Ling Xiong , Qiang Li , LeLe Tang , Fagen Li , Xingchun Yang","doi":"10.1016/j.future.2024.107530","DOIUrl":"10.1016/j.future.2024.107530","url":null,"abstract":"<div><p>Vehicular ad hoc networks (VANET) are a key, indispensable module of the future intelligent transportation system. Security and privacy are two essential attributes that protect the safe driving of vehicles. Over the last two decades, numerous conditional privacy-preserving authentication schemes have been presented for the VANET environment. However, existing schemes have various limitations, including security issues, high storage overhead, and frequent interactions. To address these difficulties, this work combines physical unclonable functions and blockchain technology to construct a conditional privacy-preserving authentication scheme for the VANET environment. Specifically, we combine physical unclonable functions and dynamic pseudonym techniques to dynamically generate unique pseudonym IDs and private keys, enhancing privacy protection and resisting physical attacks. To reduce the number of communication rounds during the verification process, we deploy lightweight blockchain nodes to avoid direct communication between the receiver and the blockchain network. The proposed scheme demonstrates resilience against various potential attacks through comprehensive security analysis and proof. Furthermore, performance metrics indicate that our scheme outperforms similar schemes, making it suitable for resource-constrained VANETs.</p></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"163 ","pages":"Article 107530"},"PeriodicalIF":6.2,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142274710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
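The pseudonym mechanism described above can be sketched as hashing a device-bound PUF response together with a fresh nonce, so each broadcast uses an unlinkable identifier that only the device (and an authority holding revocation data) can tie back to itself. Everything below is a software simulation under stated assumptions: a real PUF is a hardware primitive, and the keyed hash merely stands in for it; the construction is not the paper's exact protocol:

```python
import hashlib
import secrets

def puf_response(challenge: bytes) -> bytes:
    # Stand-in for a hardware PUF: deterministic per device, unclonable in
    # practice; simulated here with a keyed hash over a hypothetical secret.
    device_secret = b"simulated-device-entropy"
    return hashlib.sha256(device_secret + challenge).digest()

def dynamic_pseudonym(challenge: bytes) -> str:
    # Pseudonym ID derived from the PUF response plus a fresh random nonce,
    # so successive pseudonyms are unlinkable to outside observers.
    nonce = secrets.token_bytes(16)
    return hashlib.sha256(puf_response(challenge) + nonce).hexdigest()

pid1 = dynamic_pseudonym(b"challenge-1")
pid2 = dynamic_pseudonym(b"challenge-1")
```

Because the nonce changes per invocation, the same device answering the same challenge emits different pseudonyms each time, which is the privacy property the scheme targets.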
{"title":"Generative adversarial networks to detect intrusion and anomaly in IP flow-based networks","authors":"Vitor G. da Silva Ruffo , Daniel M. Brandão Lent , Luiz F. Carvalho , Jaime Lloret , Mario Lemes Proença Jr.","doi":"10.1016/j.future.2024.107531","DOIUrl":"10.1016/j.future.2024.107531","url":null,"abstract":"<div><p>Computer networks facilitate regular human tasks, providing services like data streaming, online shopping, and digital communications. These applications require increasing network capacity and dynamicity to accomplish their goals. The networks may be targeted by attacks and intrusions that compromise the applications relying on them and lead to potential losses. We propose a semi-supervised systematic methodology for developing a detection system for traffic volume anomalies in IP flow-based networks. The system is implemented with a vanilla Generative Adversarial Network (GAN). The mitigation module is triggered whenever an anomaly is detected, automatically blocking the suspect IPs and restoring correct network functioning. We implemented three versions of the proposed solution by incorporating Long Short-Term Memory (LSTM), 1D-Convolutional Neural Network (1D-CNN), and Temporal Convolutional Network (TCN) into the GAN internal structure. The experiments are conducted on three public benchmark datasets: Orion, CIC-DDoS2019, and CIC-IDS2017. The results show that the three considered deep learning models have distinct impacts on the GAN model and, consequently, on overall system performance. The 1D-CNN-based GAN implementation is the best since it reasonably solves the mode collapse problem, has the most efficient computational complexity, and achieves competitive Matthews Correlation Coefficient scores for the anomaly detection task. Also, the mitigation module can drop most anomalous flows, blocking only a small portion of legitimate traffic. For comparison with state-of-the-art models, we implemented 1D-CNN, LSTM, and TCN separately from the GAN. The generative networks show improved overall results in the considered performance metrics compared to the other models.</p></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"163 ","pages":"Article 107531"},"PeriodicalIF":6.2,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142274709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
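The mitigation step described above (block suspect IPs once the detector flags their flows) reduces to a simple rule over per-flow anomaly scores. The flow records, score values, and threshold below are all hypothetical; in the paper the scores would come from the GAN discriminator rather than being hand-written:

```python
# Toy flow records with anomaly scores a detector might emit.
flows = [
    {"src": "10.0.0.5", "score": 0.97},
    {"src": "10.0.0.5", "score": 0.91},
    {"src": "10.0.0.9", "score": 0.12},
    {"src": "10.0.0.7", "score": 0.88},
]

def mitigate(flows, threshold=0.85):
    # Block every source IP with at least one flow above the threshold,
    # then let the remaining (presumed legitimate) flows through.
    blocked = {f["src"] for f in flows if f["score"] >= threshold}
    passed = [f for f in flows if f["src"] not in blocked]
    return blocked, passed

blocked, passed = mitigate(flows)
```

The threshold directly trades anomalous-flow recall against collateral blocking of legitimate traffic, the same trade-off the paper quantifies with Matthews Correlation Coefficient scores.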
{"title":"Feed4Cloud: Towards trustworthy QoE-aware cloud service monitoring using blockchain","authors":"Ioanna Angeliki Kapetanidou , Christos-Alexandros Sarros , Giannis Ledakis , Vassilis Tsaoussidis","doi":"10.1016/j.future.2024.107532","DOIUrl":"10.1016/j.future.2024.107532","url":null,"abstract":"<div><div>The recent prevalence of microservice-based applications that leverage the capabilities offered by cloud and edge computing has given rise to highly complex services that create new challenges for efficient monitoring and orchestration. In today’s cloud environments, service monitoring is typically premised on technical Quality of Service (QoS) performance metrics, rather than on Quality of Experience (QoE) as perceived by users. In this paper, we posit that user feedback should also play a significant role in cloud service monitoring. However, we explicitly set a prerequisite: the trustworthiness of user feedback should not be considered guaranteed. Therefore, we have developed Feed4Cloud, the first system to complement QoS monitoring with exclusively trustworthy user feedback for QoE-aware cloud service management. The novelty of our solution lies in two key aspects. The first is the establishment of an intermediate verification layer that validates user feedback before it is injected into the orchestration engine. The second is the use of Blockchain in this layer as a means to record user feedback in a decentralized and secure way, aiming to achieve non-repudiation and ensure its integrity. In this paper, we present the architectural details of the Feed4Cloud prototype, placing particular focus on trustworthy evaluation of service performance. Furthermore, we provide evaluation results that validate the effectiveness of the introduced verification layer and demonstrate that QoE-based service evaluation can consistently be conducted in a trustworthy manner across a wide range of system conditions and user behaviors.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"163 ","pages":"Article 107532"},"PeriodicalIF":6.2,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142319087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
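The integrity property that blockchain gives the feedback layer (tampering with a stored rating is detectable) can be illustrated with a plain hash chain, where each record commits to its predecessor. This is a sketch of the idea only, not the Feed4Cloud protocol; the record fields are hypothetical:

```python
import hashlib
import json

def append_feedback(chain, user, rating):
    # Each record commits to the previous record's hash, so altering any
    # earlier entry invalidates every hash that follows it.
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"user": user, "rating": rating, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)

def verify(chain):
    prev = "0" * 64
    for rec in chain:
        body = {k: rec[k] for k in ("user", "rating", "prev")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain = []
append_feedback(chain, "alice", 4)
append_feedback(chain, "bob", 2)
```

A real blockchain adds decentralized replication and consensus on top, which is what upgrades tamper evidence into the non-repudiation the paper aims for.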
{"title":"An efficient federated learning solution for the artificial intelligence of things","authors":"Mohamed Amine Kouda, Badis Djamaa, Ali Yachir","doi":"10.1016/j.future.2024.107533","DOIUrl":"10.1016/j.future.2024.107533","url":null,"abstract":"<div><p>Federated Learning (FL) has gained popularity due to its advantages over centralized learning. However, existing FL research has primarily focused on unconstrained wired networks, neglecting the challenges posed by wireless Internet of Things (IoT) environments. The successful integration of FL into IoT networks requires tailored adaptations to address unique constraints, especially in computation and communication. This paper introduces Communication-Aware Federated Averaging (CAFA), a novel algorithm designed to enhance FL operations in wireless IoT networks with shared communication channels. CAFA primarily leverages the latent computational capacities during the communication phase for local training and aggregation. Through extensive and realistic evaluations in a dedicated FL-IoT framework, our method demonstrates significant advantages over state-of-the-art approaches. Indeed, CAFA achieves up to a 4x reduction in communication costs and accelerates FL training by as much as 70%, while preserving model accuracy. These achievements position CAFA as a promising solution for the efficient implementation of FL in constrained wireless networks.</p></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"163 ","pages":"Article 107533"},"PeriodicalIF":6.2,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142241707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
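CAFA builds on federated averaging, where the server combines client updates weighted by local dataset size. The sketch below shows only that baseline aggregation step over toy weight vectors; CAFA's contribution, overlapping local training with the communication phase, involves channel scheduling that a few lines cannot capture:

```python
# Toy client updates: a 2-parameter model and per-client sample counts
# (hypothetical values).
clients = [
    {"weights": [0.2, 0.4], "n_samples": 100},
    {"weights": [0.6, 0.0], "n_samples": 300},
]

def fed_avg(clients):
    # Standard FedAvg: a convex combination of client weights, with each
    # client's contribution proportional to its local dataset size.
    total = sum(c["n_samples"] for c in clients)
    dim = len(clients[0]["weights"])
    agg = [0.0] * dim
    for c in clients:
        frac = c["n_samples"] / total
        for i, w in enumerate(c["weights"]):
            agg[i] += frac * w
    return agg

global_weights = fed_avg(clients)
```

Here the second client holds 75% of the data, so the global model lands three-quarters of the way toward its parameters; communication-aware variants change when and over which links these updates travel, not this arithmetic.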