{"title":"Online detection of process activity executions from IoT sensors using generated event processing services","authors":"Ronny Seiger, Aaron F. Kurz, Marco Franceschetti","doi":"10.1016/j.future.2025.107987","DOIUrl":"10.1016/j.future.2025.107987","url":null,"abstract":"<div><div>Data streams from Internet of Things (IoT) devices containing sensors and actuators provide new insights into their interactions, context, and process executions in the physical world. These new data sources may open up novel opportunities to apply Business Process Management (BPM) technologies to analyze process and activity executions using established process mining techniques. However, the rather low abstraction level of data emitted from IoT devices is often not suitable for directly applying process mining, which requires additional steps of event abstraction. Related approaches train expensive supervised machine learning models on historical sensor data to realize this event abstraction, enabling only a post-mortem classification of activity executions. In this work we propose a framework to automatically generate activity detection services from IoT data with minimal human involvement to implement the event abstraction. Along with the framework, we present a software architecture centered on a flexible and extensible complex event processing (CEP) platform that achieves high-performance activity detection from IoT data streams at runtime, enabling online process analytics. Evaluations of our proof-of-concept implementation for monitoring processes executed in smart manufacturing and smart healthcare show acceptable results when detecting activities that are affected by no or only small variations in the underlying IoT data. We identify several ways to improve the robustness of the activity detections with respect to variations in IoT data as starting points for future work.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"174 ","pages":"Article 107987"},"PeriodicalIF":6.2,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144522212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Fault-Tolerant Sharding Mechanism for Resilience and Scalability in Blockchain Using Backup Pool","authors":"Ziqiang Xu, Ahmad Salehi Shahraki, Naveen Chilamkurti","doi":"10.1016/j.future.2025.107982","DOIUrl":"10.1016/j.future.2025.107982","url":null,"abstract":"<div><div>With the development of blockchain technology, an increasing number of devices are joining the network and generating numerous transactions, which poses significant performance and scalability challenges to blockchain networks. Sharding is one solution that improves throughput by parallelising transaction validation and block generation, thereby alleviating this challenge. However, when dividing shard groups in large-scale networks, an uneven distribution of malicious nodes or dynamic changes in participating nodes may compromise the availability and liveness of shard groups. This paper proposes an enhanced sharding blockchain system that improves fault tolerance to ensure high shard group performance, scalability, and reliability. We also propose the concept of a backup pool and achieve the detection and recovery of faulty shards through pre-deployed backup pool nodes and redesigned consensus algorithms. A thorough security analysis shows that the proposed sharding blockchain system can increase the resilience of shard groups from 33.3% to 66.6%. Additionally, we evaluate the proposed system, and the results show that it ensures the security, reliability, and high performance of sharding while scaling the system’s network from 10<span><math><msup><mrow></mrow><mrow><mn>4</mn></mrow></msup></math></span> to 10<span><math><msup><mrow></mrow><mrow><mn>6</mn></mrow></msup></math></span> nodes compared to existing sharding blockchains.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"174 ","pages":"Article 107982"},"PeriodicalIF":6.2,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144536153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Leveraging community detection for clustered federated learning on Non-IID data: From an information-theoretic perspective","authors":"Bingli Sun, Yuchun Tu, Xiao Song","doi":"10.1016/j.future.2025.108005","DOIUrl":"10.1016/j.future.2025.108005","url":null,"abstract":"<div><div>Clustered Federated Learning (CFL) alleviates the negative effects of data heterogeneity to some extent by clustering clients with similar data distributions together and performing personalized model training and aggregation within each cluster. However, many CFL methods still rely on a static clustering mechanism, i.e., the number of clusters and the similarity identification threshold remain constant throughout the training process, which limits their adaptability in dynamic environments. To tackle these challenges, this paper proposes a clustered federated learning framework based on the Infomap community detection algorithm, called <strong>FedInfo</strong> (<strong>Fed</strong>erated Learning based on <strong>Info</strong>map and <strong>Info</strong>rmation theory). The framework introduces inference similarity and graph theory into clustered federated learning and uses Infomap to cluster clients dynamically from an information-theoretic perspective: it constructs a similarity graph from the adjacency matrix of clients’ inference similarities and minimizes the encoding length of random walk paths on the graph, without requiring the number of clusters or similarity thresholds as a priori knowledge. Extensive experiments demonstrate that FedInfo significantly outperforms state-of-the-art (SOTA) methods on several federated learning datasets under non-independent and identically distributed (Non-IID) settings, and achieves better communication efficiency and robustness under limited communication budgets. In conclusion, FedInfo provides an efficient and flexible CFL solution and offers a novel approach to personalized federated learning with highly heterogeneous data.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"174 ","pages":"Article 108005"},"PeriodicalIF":6.2,"publicationDate":"2025-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144562785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bridging paradigms: Designing for HPC-Quantum convergence","authors":"Amir Shehata , Peter Groszkowski , Thomas Naughton , Muralikrishnan Gopalakrishnan Meena , Elaine Wong , Daniel Claudino , Rafael Ferreira da Silva , Thomas Beck","doi":"10.1016/j.future.2025.107980","DOIUrl":"10.1016/j.future.2025.107980","url":null,"abstract":"<div><div>This paper presents a comprehensive software stack architecture for integrating quantum computing (QC) capabilities with High-Performance Computing (HPC) environments. While quantum computers show promise as specialized accelerators for scientific computing, their effective integration with classical HPC systems presents significant technical challenges. We propose a hardware-agnostic software framework that supports both current noisy intermediate-scale quantum devices and future fault-tolerant quantum computers, while maintaining compatibility with existing HPC workflows. The architecture includes a quantum gateway interface, standardized APIs for resource management, and robust scheduling mechanisms to handle both simultaneous and interleaved quantum–classical workloads. Key innovations include: (1) a unified resource management system that efficiently coordinates quantum and classical resources, (2) a flexible quantum programming interface that abstracts hardware-specific details, (3) a Quantum Platform Manager API that simplifies the integration of various quantum hardware systems, and (4) a comprehensive toolchain for quantum circuit optimization and execution. We demonstrate our architecture through implementation of quantum–classical algorithms, including the variational quantum linear solver, showcasing the framework’s ability to handle complex hybrid workflows while maximizing resource utilization. This work provides a foundational blueprint for integrating QC capabilities into existing HPC infrastructures, addressing critical challenges in resource management, job scheduling, and efficient data movement between classical and quantum resources.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"174 ","pages":"Article 107980"},"PeriodicalIF":6.2,"publicationDate":"2025-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144515886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parameter-efficient Quantum Denoising Diffusion Probabilistic Models with temporal encoding","authors":"Xuefen Zhang , Chuangtao Chen","doi":"10.1016/j.future.2025.107981","DOIUrl":"10.1016/j.future.2025.107981","url":null,"abstract":"<div><div>Quantum generative models have attracted growing interest for their potential to transform generative learning through the principles of quantum computing. The recently proposed Quantum Denoising Diffusion Probabilistic Models (QuDDPM) represent a significant advancement by integrating classical diffusion mechanisms with quantum computation. However, QuDDPM suffers from a key scalability bottleneck: its parameter count grows linearly with the number of denoising steps, as each step requires independent optimization. To overcome this limitation, we propose a Temporal-aware Quantum Denoising Diffusion Probabilistic Model (TQuDDPM), a parameter-sharing framework that incorporates temporal encoding into the denoising process. Our numerical simulations show that TQuDDPM reduces parameter requirements by up to 94% and training time by up to 90%, all while preserving or even improving generative performance. This work introduces a novel approach to timestep representation in quantum generative learning and demonstrates that TQuDDPM achieves substantial computational efficiency alongside high-fidelity generation.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"174 ","pages":"Article 107981"},"PeriodicalIF":6.2,"publicationDate":"2025-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144515887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A multiple-circuit approach to quantum resource reduction with application to the quantum lattice Boltzmann method","authors":"Melody Lee , Zhixin Song , Sriharsha Kocherla , Austin Adams , Alexander Alexeev , Spencer H. Bryngelson","doi":"10.1016/j.future.2025.107975","DOIUrl":"10.1016/j.future.2025.107975","url":null,"abstract":"<div><div>This work proposes a multi-circuit quantum lattice Boltzmann method (QLBM) algorithm that leverages parallel quantum computing to reduce quantum resource requirements. Computational fluid dynamics (CFD) simulations often entail a large computational burden on classical computers. At present, these simulations can require up to trillions of grid points and millions of time steps. To reduce costs, novel architectures like quantum computers may be intrinsically more efficient for these computations. Current quantum algorithms for solving CFD problems are based on a single quantum circuit and, in many cases, use lattice-based methods. Current quantum devices are, however, subject to significant noise, which makes large and deep circuits untenable. We introduce a multiple-circuit algorithm for a QLBM solution of the incompressible Navier–Stokes equations. The method, called QLBM-frugal, aims to create more practical quantum circuits and strategies for differential equation-based problems. The presented method is validated and demonstrated for 2D lid-driven cavity flow. The two-circuit algorithm exhibits a notable reduction in CX gates, which account for the majority of the runtime on quantum devices. Compared to the baseline QLBM technique, the two-circuit strategy shows increasingly large improvements in gate counts as the qubit count, and hence the problem size, increases. For 64 lattice sites, the CX count was reduced by 35%, and the gate depth decreased by 16%. This strategy also enables concurrent circuit execution, further halving the effective gate depth.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"174 ","pages":"Article 107975"},"PeriodicalIF":6.2,"publicationDate":"2025-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144515888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tightly-integrated quantum–classical computing using the QHDL hardware description language","authors":"Gilbert Netzer, Pratibha Raghupati Hegde, Ivy Peng, Stefano Markidis","doi":"10.1016/j.future.2025.107977","DOIUrl":"10.1016/j.future.2025.107977","url":null,"abstract":"<div><div>We present the design, development, and application of QHDL, a quantum hardware description language specifically designed for tightly-coupled quantum–classical computing systems. Together with the language design principles, we describe the QHDL compiler, debugger, and co-simulation infrastructure. We showcase the benefits of a quantum–classical integrated approach in four use cases requiring close quantum–classical device interaction: Bell’s pair circuit, dynamic delay, Quantum Fourier Transform (QFT), and teleportation. To interface with QHDL, we propose using synchronous techniques that are commonplace in digital hardware design. We illustrate examples of modeling both loosely-coupled and tightly-coupled quantum circuits that use so-called <em>measurement-in-the-middle</em> by utilizing these techniques in QHDL. For clock-cycle-accurate implementations, we propose implementing such classical modules as programmable hardware blocks using Register-Transfer Level (RTL) or gate-level approaches. These approaches provide the highest coupling performance and can feasibly be implemented in state-of-the-art control systems.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"174 ","pages":"Article 107977"},"PeriodicalIF":6.2,"publicationDate":"2025-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144515903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SpAIware: Uncovering a novel artificial intelligence attack vector through persistent memory in LLM applications and agents","authors":"Manuel Herrador , Johann Rehberger","doi":"10.1016/j.future.2025.107994","DOIUrl":"10.1016/j.future.2025.107994","url":null,"abstract":"<div><div>As generative AI systems become more advanced, new security vulnerabilities emerge, particularly in Large Language Models (LLMs) like GPT (Generative Pre-trained Transformer) with persistent memory capabilities. This paper introduces \"SpAIware\", a novel cybersecurity threat exploiting persistent memory vulnerabilities in LLM applications. We demonstrate how malicious actors can leverage generative AI to inject and persistently store harmful instructions across multiple chat sessions, enabling continuous data exfiltration. Our proof-of-concept on ChatGPT reveals critical security flaws in AI systems with long-term memory capabilities, showcasing an advanced form of automated hacking. We analyze the potential impacts on vulnerability assessment, cyber defense automation, and incident response. The study also examines the ethical implications of using generative AI in both attack and defense scenarios. We propose a range of technical, regulatory, and educational countermeasures, underscoring the urgent need for AI-specific security protocols. Our findings highlight a significant gap in current cybersecurity solutions, potentially spawning a new industry of AI-focused security tools. This research emphasizes the critical importance of proactive security measures and ethical considerations in the rapidly evolving landscape of generative AI technologies in cybersecurity.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"174 ","pages":"Article 107994"},"PeriodicalIF":6.2,"publicationDate":"2025-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144515889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RAW-HF: Resource Availability & Workload-aware Hybrid Framework for raw data query processing","authors":"Mayank Patel , Minal Bhise","doi":"10.1016/j.future.2025.107973","DOIUrl":"10.1016/j.future.2025.107973","url":null,"abstract":"<div><div>Scientific experiments and modern applications generate large amounts of data every day. Traditional database management systems spend significant time and resources loading an entire dataset, or its required partitions, into the DBMS before query execution starts. In-situ engines may reparse required data multiple times, increasing resource utilization and data processing costs. Additionally, over- or under-allocation of resources also increases application running costs. This paper proposes a lightweight Resource Availability & Workload-aware Hybrid Framework (RAW-HF) to optimize raw data querying by utilizing existing finite resources efficiently. RAW-HF includes modules that help optimize the resources required to execute a given workload while maximizing the utilization of existing resources. RAW-HF is demonstrated on the real-world scientific dataset workload Sloan Digital Sky Survey. It reduces workload execution time by 90% compared to the widely used traditional DBMS PostgreSQL. The overall CPU and IO resource utilization and workload execution time have been reduced by 26% compared to the state-of-the-art workload-aware technique. Moreover, the presented method is tested on a variety of datasets. RAW-HF can help generate better resource utilization plans for in-house as well as cloud servers, thereby reducing resource utilization costs for organizations handling modern applications.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"174 ","pages":"Article 107973"},"PeriodicalIF":6.2,"publicationDate":"2025-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144515905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic generation of spiking neural networks on neuromorphic computing hardware for IoT edge computing","authors":"Seoyeon Kim , Jinsung Cho , Jiwoo Shin , Bongjae Kim , Jinman Jung","doi":"10.1016/j.future.2025.107953","DOIUrl":"10.1016/j.future.2025.107953","url":null,"abstract":"<div><div>Recently, there has been a remarkable increase in the demand for edge computing on the Internet of Things (IoT), which encompasses a network of interconnected devices capable of operating independently and making intelligent decisions. To enable low-power and real-time processing, spiking neural networks (SNNs) on neuromorphic computing hardware are expected to emerge as a prominent solution for supporting IoT edge computing. SNN models need to be optimized to meet the specific quality-of-service requirements of a variety of IoT edge services. However, the complex dynamics and non-differentiable nature of SNNs make implementing such models challenging. The heterogeneity of neuromorphic computing hardware makes it even more difficult to generate SNN models that meet specified requirements, such as accuracy or execution time. Existing IoT platforms support various neural functionalities, yet they are typically unsuitable for generating SNN models customized to user performance requirements for IoT edge computing, especially when targeting neuromorphic computing hardware. In this paper, we propose an automatic SNN generation technique on neuromorphic computing hardware for IoT edge computing. The goal is to automatically generate SNN models that efficiently balance cost while satisfying user-specified performance requirements and hardware constraints. Our approach enables rapid prototyping by using a novel predictive model based on profiling results obtained from FPGA-based hardware as well as neuromorphic hardware such as Loihi. We built an implementation compatible with the open IoT platform Node-RED and measured its performance. The experimental results show that our approach enables the automatic generation of SNN models with an average requirement-meeting ratio of over 96%, demonstrating its efficacy for IoT edge computing. Furthermore, a voice recognition service implemented in Node-RED demonstrated the feasibility of our approach, confirming that SNN models generated through our framework can effectively operate on FPGA-based neuromorphic hardware within IoT platforms.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"174 ","pages":"Article 107953"},"PeriodicalIF":6.2,"publicationDate":"2025-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144515907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}