Exploring the performance of real-time data imputation to enhance fault tolerance on the edge: A study on environmental data
Dimitris Gkoulis, Anargyros Tsadimas, George Kousiouris, Cleopatra Bardaki, Mara Nikolaidou
Simulation Modelling Practice and Theory 144 (2025), Article 103178. DOI: 10.1016/j.simpat.2025.103178. Published 2025-07-12.

Abstract: Real-time data streams from edge-based IoT sensors are frequently affected by transmission errors, sensor faults, and network disruptions, leading to missing or incomplete data. This paper investigates the application of lightweight, real-time imputation methods to enhance fault tolerance in edge computing systems. To this end, we propose integrating a modular imputation engine into edge systems, supporting lightweight forecasting models selected for their computational efficiency and suitability for real-time data streams. To assess the performance of popular lightweight forecasting models for real-time applications, a simulation framework is introduced that simulates the operation of the imputation engine, replicates sensor failure scenarios, and allows controlled testing on real-world systems. Imputation accuracy is evaluated using Mean Absolute Error (MAE), 95th percentile error, and maximum error, with results benchmarked against sensor tolerance thresholds. The simulation framework is used to explore imputation on environmental data based on observations collected from a weather station. The findings show that Holt–Winters Exponential Smoothing delivers the highest accuracy for real-time imputation across environmental variables, outperforming simpler models suited only for short-term gaps. Errors grow with longer forecasts, confirming imputation as a temporary solution. Evaluations against sensor-specific thresholds offer practical insights, and execution profiling shows these models are lightweight enough for deployment on low-power edge devices, enabling real-time, fault-tolerant monitoring without cloud dependence.
{"title":"CoFANN: A collaborative framework for accelerating DNN inference in drone-based agricultural monitoring systems","authors":"Nhu-Y Tran-Van, Kim-Hung Le","doi":"10.1016/j.simpat.2025.103176","DOIUrl":"10.1016/j.simpat.2025.103176","url":null,"abstract":"<div><div>Plant leaf diseases pose a major threat to global agricultural productivity, causing substantial crop losses annually. While drone-based monitoring systems equipped with deep neural networks (DNNs) offer a promising solution for large-scale disease detection, their deployment is hindered by the computational limitations of IoT devices and the latency issues associated with cloud and edge computing. Existing collaborative inference approaches aim to mitigate end-to-end latency by offloading computation across devices. However, these methods often compromise model accuracy and add computing latency in generating inference strategies. To address these challenges, we present CoFANN, a novel collaborative framework to accelerate DNN inference in dynamic IoT environments. Our framework includes two key advances: a differentiable strategy search space with a gradient-based optimization algorithm for efficiently identify optimal partitioning strategies, and an adaptive model partitioning algorithm that effectively divides and allocates DNN components across computing devices based on their capabilities and network conditions. Experimental results in the plant disease dataset demonstrate that CoFANN reduces the total inference latency by up to 70% compared to device-only and 50% compared to edge-only approaches under varying network conditions, while maintaining comparable accuracy from 93.7% to 95.8%.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"144 ","pages":"Article 103176"},"PeriodicalIF":3.5,"publicationDate":"2025-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144611623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GPU-accelerated cloud computing services and performance evaluation","authors":"Zakery Collins, Gennaro De Luca, Yinong Chen","doi":"10.1016/j.simpat.2025.103181","DOIUrl":"10.1016/j.simpat.2025.103181","url":null,"abstract":"<div><div>This paper explores the feasibility of replacing traditional CPU-based cloud computing with Graphic Processing Unit GPU-accelerated services. Using NVIDIA’s CUDA GPU-accelerated C/<em>C</em>++ and Python libraries, we benchmark the performance of GPU computing against multithreaded CPU computing across several domains, including machine learning and large-scale image processing. A novel contribution of this work is an intelligent autoscaling system that maximizes single-GPU resource utilization before scaling to additional GPUs, improving efficiency in cloud-based deployments. Our simulation experiments demonstrate significant performance gains for GPU-accelerated computing and highlight the impact of optimized resource allocation in cloud environments. For example, in a machine learning experiment, using a dataset with 8.790 entries, the execution of a GeForce 3060 ti GPU is 3.42 times faster than a 16-thread CPU computer. Compared with the same 16-thread CPU, Tesla K80 GPU is 4.17 times faster. Furthermore, we provide an analysis of GPU performance optimization strategies, including memory management, concurrency techniques, and workload distribution methodologies, offering insights into the long-term scalability and cost-effectiveness of GPU-accelerated cloud infrastructure.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"144 ","pages":"Article 103181"},"PeriodicalIF":3.5,"publicationDate":"2025-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144654099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mastering the complexity: An enhanced cellular automata-based framework for simulating resilience of hospital Power-Water-Firefighting-Space nexus system
Renlong Wang, Lingzhi Li, Wenjie Lin, Endong Wang, Jingfeng Yuan
Simulation Modelling Practice and Theory 144 (2025), Article 103177. DOI: 10.1016/j.simpat.2025.103177. Published 2025-07-08.

Abstract: Modeling the resilience of hospital Power-Water-Firefighting-Space (PWFS) nexus systems is a complex, dynamic, and nonlinear challenge characterized by high uncertainty. Existing methods, mainly agent-based and network-based models, face difficulties in balancing detailed component-level behaviors with broader system-level interdependencies, and they neglect the impact of external disruptions, such as surges in service demand during the COVID-19 pandemic, on hospital PWFS system resilience. To address this, the study proposes an enhanced cellular automata (CA)-based framework for simulating hospital PWFS system resilience. The PWFS system is modeled as a seven-tuple CA, incorporating cell structure, state, space, neighborhood, transition rules, and time, facilitating the integration of micro-level component behavior with macro-level interdependencies. A set of resilience metrics, including robustness, rapidity, performance loss, and an integrated resilience index, is introduced, based on the system performance curve, which covers normality, connectivity, resource transfer efficiency, and space functionality. The model enables scalable, polynomial-time simulations of cascading failures, resource redistribution, and spatial–temporal recovery across interconnected PWFS subsystems. A real-world outpatient building case study demonstrates the applicability and validity of the enhanced CA model. The findings emphasize the importance of modeling intra-system interdependencies and provide actionable insights for infrastructure design and emergency preparedness. Overall, the enhanced CA framework offers a systematic, scalable, and computationally efficient approach to resilience assessment, bridging theoretical modeling with practical infrastructure planning.
{"title":"Simulation model and performance evaluation of automated valet parking technologies in parking lots","authors":"Ning Ma , Angjun Tang , Jingxin Hai , Fang Yuan","doi":"10.1016/j.simpat.2025.103175","DOIUrl":"10.1016/j.simpat.2025.103175","url":null,"abstract":"<div><div>Autonomous valet parking (AVP) is widely employed among parking lots and city logistics worldwide, expanding the applications of autonomous driving technologies. Auto companies are promoting three technology roadmaps to improve the implementation of AVP: AVP with autonomous driving system (AVP-ADS), AVP with Intelligent Infrastructure Systems (AVP-IS), and AVP with cooperative vehicle infrastructure systems (AVP-CVIS). Specifically, AVP-ADS can further be divided into LIDAR solution (AVP-ADS-LIDAR) and visual solution (AVP-ADS-VISUAL). This paper presents a simulation model to evaluate and compare the performances of AVP-CVIS, AVP-IS, AVP-ADS-LIDAR, AVP-ADS-VISUAL and Manual Parking (MP) in the real parking lot. The vehicle parking system is modeled as a discrete event simulation model, in which the controller module simulates driving behavior and defines the control logics of these three parking solutions. Extensive experiments are conducted and metrics are presented to evaluate the performance of these AVP technical solutions. The results indicate that AVP-CVIS exhibits more efficient parking performance. Management insights are provided to facilitate a more effective implementation of AVP.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"144 ","pages":"Article 103175"},"PeriodicalIF":3.5,"publicationDate":"2025-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144571579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Machine learning methods in microscopic pedestrian and evacuation dynamics simulation: a comparative study
Nan Jiang, Hanchen Yu, Eric Wai Ming Lee, Hongyun Yang, Lizhong Yang, Richard Kwok Kit Yuen
Simulation Modelling Practice and Theory 144 (2025), Article 103180. DOI: 10.1016/j.simpat.2025.103180. Published 2025-07-05.

Abstract: The modeling and simulation of pedestrian and evacuation dynamics provides essential insights for crowd safety against the background of population growth and regional development. Although machine learning methods have demonstrated superior performance in pedestrian modeling, the various data encoding schemes and algorithms investigated to date lack comparative analysis. Hence, this study analyzes machine learning methods for simulating microscopic pedestrian and evacuation dynamics. A motion interaction field is proposed, along with a data extraction rule that standardizes input lengths for learning-based models. Two typical algorithms, Classification and Regression Trees (CART) and Artificial Neural Networks (ANN), are employed for model training and comparison. Fitting performance is evaluated using the mean absolute error of velocity, revealing that the CART-based model outperforms the ANN-based model in stability and error rates, particularly across varying local density ranges. Dynamics tests are further performed to examine the two models' robustness against inherent errors. The results indicate that the CART-based model struggles under high-density conditions due to its split-based structure. In contrast, the ANN-based model demonstrates superior non-linear fitting ability, allowing better reproduction of pedestrian dynamics at relatively higher densities. Moreover, the Wasserstein distance with Sinkhorn iteration is used to quantify model performance in terms of flow-density fundamental diagrams, highlighting the advantages of learning-based approaches over the traditional social force model. This research has significant implications for building and civil engineering: the comparative analysis of two typical machine learning algorithms and the establishment of the motion interaction field can inform the progress of learning-based pedestrian and evacuation dynamics simulation. The study underscores the transformative potential of machine learning methods in simulating pedestrian dynamics and suggests future research directions to enhance the robustness and applicability of learning-based methods across diverse scenarios.
{"title":"CADCO: An Adaptive Dynamic Cloud-fog Computing Offloading Method for complex dependency tasks of IoT","authors":"Zhuangzhi Tian , Xiaolong Xu","doi":"10.1016/j.simpat.2025.103168","DOIUrl":"10.1016/j.simpat.2025.103168","url":null,"abstract":"<div><div>With the rapid development of the Internet of Things (IoT) and cloud-fog computing, efficient offloading of complex dependency tasks has become a key challenge for improving system performance, especially for real-time IoT applications. Traditional methods are inefficient in handling dynamic environments and long-range dependencies, while existing deep reinforcement learning approaches face issues such as rigid resource allocation and Q-value overestimation. To address these problems, we propose an Adaptive Dynamic Cloud-fog Computing Offloading Method for complex dependency tasks (CADCO). The method accurately models task dependencies using the multi-head attention mechanism of Transformer, optimizes computational and memory resource allocation through Hybrid Model Parallelism (HMP) technology, and designs a dynamic offloading strategy based on an improved Double Deep Q-Network (DDQN). A freshness factor is introduced to optimize the experience replay mechanism, enhancing the stability of the strategy. Experimental results show that CADCO demonstrates significant advantages in multi-user, multi-task offloading scenarios, optimizing task scheduling, improving resource utilization, and significantly enhancing QoS while reducing task latency and energy consumption. These results validate the practical application value of CADCO in complex task dependency environments, providing solid theoretical and experimental support for intelligent computing offloading optimization.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"144 ","pages":"Article 103168"},"PeriodicalIF":3.5,"publicationDate":"2025-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144570443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparing Control Theory and Deep Reinforcement Learning techniques for decentralized task offloading in the edge–cloud continuum
Gorka Nieto, Neco Villegas, Luis Diez, Idoia de la Iglesia, Unai Lopez-Novoa, Cristina Perfecto, Ramón Agüero
Simulation Modelling Practice and Theory 144 (2025), Article 103170. DOI: 10.1016/j.simpat.2025.103170. Published 2025-07-03.

Abstract: With the increasingly demanding requirements of Internet-of-Things (IoT) applications in terms of latency, energy efficiency, and computational resources, among others, task offloading has become crucial to optimize performance across edge and cloud infrastructures. Optimizing offloading to reduce latency and energy consumption and, ultimately, to guarantee appropriate service levels has thus become an important area of research. Many approaches exist to guide the offloading of tasks in a distributed environment; in this work, we present a comprehensive comparison of three of them: a Control Theory (CT) Lyapunov optimization method, three Deep Reinforcement Learning (DRL)-based strategies, and traditional solutions such as Round-Robin and static schedulers. This comparison is conducted using ITSASO, an in-house simulation platform for evaluating decentralized task offloading strategies in a three-layer computing hierarchy comprising IoT, fog, and cloud nodes. The platform models service generation in the IoT layer using a configurable distribution, enabling each IoT node to decide whether to execute tasks locally, offload them to the fog layer, or send them to the cloud server. Our approach aims to minimize device energy consumption while meeting tasks' latency requirements. Our simulation results reveal that Lyapunov optimization excels in static environments, while DRL approaches prove more effective in dynamic settings by better adapting to changing requirements and workloads. This study analyzes the trade-offs between these solutions, highlighting the scenarios in which each scheduling approach is most suitable, thereby contributing valuable theoretical insights into the effectiveness of various offloading strategies in different environments. The source code of ITSASO is publicly available.
Bearing compound fault diagnosis considering the fusion fragment data and multi-head attention mechanism considering the actual variable working conditions
Wujiu Pan, Yuanbin Chen, Xi Li, Junyi Wang, Jianwen Bao
Simulation Modelling Practice and Theory 144 (2025), Article 103174. DOI: 10.1016/j.simpat.2025.103174. Published 2025-06-30.

Abstract: This paper proposes a bearing compound fault diagnosis model for actual variable working conditions that combines fragmented data with a multi-head attention mechanism, improving the accurate recognition of compound fault signals. The overall model architecture combines the advantages of convolution layers and multi-head attention layers, enabling the model to better handle the fragmented compound fault signals that arise under multiple conditions in engineering practice. Application strategies under different working conditions are also discussed to ensure that the model remains robust in real environments. A series of experiments demonstrates the model's strong diagnostic performance under different working conditions and noise environments. Compared with other existing models, the proposed model not only improves the accuracy of fault diagnosis but also demonstrates excellent adaptability and stability in industrial field settings. This research provides a new perspective and methodology for fault diagnosis, as well as a technical basis for industrial intelligence and digital transformation, with broad application prospects and value.
{"title":"GrC-VMM: An intelligent framework for virtual machine migration optimization using granular computing","authors":"Seyyed Meysam Rozehkhani, Farnaz Mahan","doi":"10.1016/j.simpat.2025.103169","DOIUrl":"10.1016/j.simpat.2025.103169","url":null,"abstract":"<div><div>Virtual Machine Migration (VMM) is a critical component in cloud computing environments, enabling dynamic resource management and system optimization. However, existing approaches often face challenges such as increased downtime, excessive resource consumption, and complex decision-making processes in heterogeneous environments. This paper presents a novel framework based on Granular Computing (GrC) principles to address these challenges through systematic VM categorization and prioritization. The proposed framework employs a three-stage approach: (1) feature extraction and granule formation, converting VM attributes such as workload, downtime sensitivity, and resource utilization into meaningful information granules; (2) granule-based decision rule generation using formal GrC methodologies; and (3) priority-based classification using weighted membership functions. Experimental evaluations conducted using CloudSim 5.0 demonstrate the framework’s effectiveness across multiple performance dimensions. The results show 92. 1% classification accuracy, 83. 7% resource utilization and reduced migration downtime of 1.9 s. The framework exhibits linear computational complexity O(N), confirming its scalability for large-scale deployments. Additionally, performance analysis under various workload patterns (resource-intensive, service-oriented, and mixed) validates the framework’s robustness and adaptability. These results suggest that the proposed GrC-based approach offers a promising solution to optimize VM migration in cloud environments while maintaining operational efficiency and service quality.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"144 ","pages":"Article 103169"},"PeriodicalIF":3.5,"publicationDate":"2025-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144517220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}