{"title":"ESDI: An efficient and secure data integrity verification scheme for indoor navigation","authors":"Mingxi Liu , Tailong Yang , Wenbo Shi , Athanasios V. Vasilakos , Ning Lu","doi":"10.1016/j.future.2025.107759","DOIUrl":"10.1016/j.future.2025.107759","url":null,"abstract":"<div><div>Currently, indoor navigation software is typically embedded in smartphones as mobile applications. These apps enable users to access cloud-based data while retrieving indoor navigation information. However, cloud data faces risks of tampering and deletion, necessitating verification of its integrity by users. While smartphones possess certain computational capabilities, prolonged execution of computationally intensive tasks can lead to rapid battery depletion. Additionally, excessive storage demands may prompt users to frequently close or even uninstall the apps to free up memory. This paper presents a blockchain-based data integrity verification scheme tailored for indoor navigation. To address storage overhead, we introduce a user-frequency-based selection technique that designates certain blockchain nodes as light nodes. We further propose a Merkle Hash Tree-based proof extraction method to facilitate efficient proof transfer between different types of nodes. Our approach incorporates an efficient Zhang-Safavi-Susilo (ZSS) signature-based data auditing protocol. By leveraging a data label placement mechanism during signature generation, our scheme supports tamper-proof batch verification, significantly reducing computational overhead. To enable dynamic data updates, we design a novel dynamic data structure, the Red-Black Hash Table, which enhances efficiency in handling updates. Through rigorous security analysis, we demonstrate that our scheme effectively defends against forgery, replay, and replacement attacks. We implemented and simulated our solution on smartphones and indoor navigation apps, conducting experimental evaluations using indoor positioning data. 
We adopt audit initialization overhead, audit verification computation overhead, evidence storage overhead, and consensus computation overhead as the key experimental metrics. Performance results indicate that our scheme, Efficient and Secure Data Integrity (ESDI), improves auditing efficiency by approximately 54% on average compared to existing approaches.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"168 ","pages":"Article 107759"},"PeriodicalIF":6.2,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143519546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
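The ESDI record above relies on Merkle Hash Tree proofs to transfer integrity evidence between full and light nodes. As an illustration only — the paper's actual proof-extraction method, ZSS signatures, and label placement are not reproduced here — a minimal sketch of standard Merkle inclusion proofs:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _pad(level):
    # Duplicate the last node when a level has odd length.
    return level + level[-1:] if len(level) % 2 else level

def merkle_root(blocks):
    """Fold the block hashes pairwise up to a single root hash."""
    level = [_h(b) for b in blocks]
    while len(level) > 1:
        level = _pad(level)
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(blocks, index):
    """Collect (sibling_hash, sibling_is_left) pairs proving blocks[index]."""
    level, proof = [_h(b) for b in blocks], []
    while len(level) > 1:
        level = _pad(level)
        sib = index ^ 1                      # sibling differs in the low bit
        proof.append((level[sib], sib < index))
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(block, proof, root):
    """Recompute the root from a block and its proof; compare to `root`."""
    node = _h(block)
    for sib, sib_is_left in proof:
        node = _h(sib + node) if sib_is_left else _h(node + sib)
    return node == root
```

A light node that stores only the root can then check any block against a logarithmic-size proof supplied by a full node, which is the general shape of proof transfer between node types that the abstract describes.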
{"title":"Energy-aware scheduling for reliability-oriented real-time parallel applications allocation on heterogeneous computing systems","authors":"Rui She , Yuting Wu , Enfang Cui","doi":"10.1016/j.future.2025.107738","DOIUrl":"10.1016/j.future.2025.107738","url":null,"abstract":"<div><div>Heterogeneous computing systems (HCSs) have rapidly developed and been widely applied due to their high performance and low cost. However, HCSs face trade-offs and conflicts among three core indicators: energy consumption, reliability, and scheduling length. Balancing these indicators to achieve optimal performance is the central challenge for HCSs. In this paper, we propose an energy-aware scheduling model for reliability-oriented real-time parallel applications on heterogeneous computing systems. The system-centric minimum-energy scheduling problem is studied. To solve it, a minimum schedule time length (MSTL) algorithm is proposed, which provides a baseline for assessing feasibility and ensuring compliance with both response time and reliability criteria. To further enhance reliability, this paper considers both transient and permanent faults and proposes the primary–secondary backup (PSB) algorithm to improve fault tolerance, combined with dynamic power management (DPM) and dynamic voltage and frequency scaling (DVFS) to reduce energy consumption. Furthermore, a DVFS-based rescheduling algorithm is proposed that, within the deadline, redistributes tasks left unexecuted on failed processors, reducing the energy consumption caused by excessively long redundant backups.
Extensive experimental results on real-world and randomly generated applications demonstrate the effectiveness of the proposed algorithms under various conditions.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"168 ","pages":"Article 107738"},"PeriodicalIF":6.2,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
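The DPM/DVFS techniques in the record above exploit the convexity of dynamic power in frequency. As a rough, hedged illustration — the cubic power model and the constants `c_eff` and `p_static` are textbook assumptions, not the paper's model — the energy/time trade-off can be sketched as:

```python
def dvfs_energy(cycles: float, f: float, c_eff: float = 1.0, p_static: float = 0.1):
    """Energy and time to run `cycles` at frequency f under the common
    convex model P = c_eff * f**3 + p_static (voltage scaled with f).
    Returns (energy, time), where time = cycles / f."""
    t = cycles / f
    return (c_eff * f ** 3 + p_static) * t, t

# Lowering f shrinks dynamic energy roughly quadratically per unit of work
# but stretches execution time, so DVFS only pays off within the deadline
# and while static power stays small relative to dynamic power.
```

When `p_static` dominates, the opposite policy (race-to-idle via DPM) wins, which is why schedulers such as the one described combine both mechanisms.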
{"title":"GAS-MARL: Green-Aware job Scheduling algorithm for HPC clusters based on Multi-Action Deep Reinforcement Learning","authors":"Rui Chen , Weiwei Lin , Huikang Huang , Xiaoying Ye , Zhiping Peng","doi":"10.1016/j.future.2025.107760","DOIUrl":"10.1016/j.future.2025.107760","url":null,"abstract":"<div><div>In recent years, the computational power of High-Performance Computing (HPC) clusters has surged. However, amidst global calls for energy conservation and emission reduction, their rapidly growing power consumption has become a development bottleneck. Adopting renewable energy sources for power supply is a crucial measure to reduce carbon emissions from HPC clusters. However, due to the variability and intermittency of renewable energy, formulating effective job scheduling plans to fully utilize these sources has become urgent. To tackle this, we propose a Green-Aware job Scheduling algorithm for HPC clusters based on Multi-Action Deep Reinforcement Learning (GAS-MARL), which optimizes both renewable energy utilization and average bounded slowdown. In this algorithm, the agent outputs two actions during one decision-making period: a job selection action and a delay decision action. The introduction of delay decision actions enhances the flexibility of the scheduling algorithm, enabling each job to be executed during appropriate time slots. Furthermore, we have designed a new backfilling policy called Green-Backfilling to better cooperate with GAS-MARL for job scheduling.
Experimental evaluations demonstrate that, compared to other algorithms, the combination of GAS-MARL and Green-Backfilling exhibits significant advantages in enhancing renewable energy utilization and decreasing average bounded slowdown.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107760"},"PeriodicalIF":6.2,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143465147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-sovereign identity framework with user-friendly private key generation and rule table","authors":"Jungwon Seo , Sooyong Park","doi":"10.1016/j.future.2025.107757","DOIUrl":"10.1016/j.future.2025.107757","url":null,"abstract":"<div><div>The rise of self-sovereign identity (SSI) technology plays a critical role in addressing the limitations of conventional digital identity management systems. This paper focuses on the credential layer within the SSI technology stack, presenting a comprehensive solution to challenges related to usability, inefficient encryption and decryption processes, and verifiable credential management in existing SSI frameworks. To tackle these issues, the proposed approach introduces a user-friendly private key generation method, a rule table-based encryption and decryption technique, and a verifiable credential management system using smart contracts. In a usability evaluation involving 58 participants, 74.1% rated the proposed approach as user-friendly. Performance evaluations demonstrated that the rule table-based encryption method is between 10.37 and 171.51 times faster than existing encryption techniques. Similarly, the decryption process showed significant improvements, achieving performance that is 16.94 to 58.68 times faster than traditional methods. Security analyses were also conducted, highlighting the resilience against brute-force attacks and unauthorized access. The impact of this research extends beyond addressing current limitations, offering a robust and efficient framework that enhances the usability, security, and performance of SSI systems. 
By advancing the credential layer, this work paves the way for broader adoption of SSI technology across diverse applications, contributing to the evolution of decentralized identity management solutions.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107757"},"PeriodicalIF":6.2,"publicationDate":"2025-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143429245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MCMC generation of cost matrices for scheduling performance evaluation","authors":"Louis-Claude Canon, Anthony Dugois, Mohamad El Sayah, Pierre-Cyrille Héam","doi":"10.1016/j.future.2025.107758","DOIUrl":"10.1016/j.future.2025.107758","url":null,"abstract":"<div><div>In high performance computing, scheduling and allocating tasks to machines has long been a critical challenge, especially when dealing with heterogeneous execution costs. To design efficient algorithms and then assess their performance, many approaches have been proposed, among them simulation, which can be performed on a large variety of environments and application models. However, this technique is known to be sensitive to bias when it relies on random instances with an uncontrolled distribution. In this article, instead of designing a new optimization method, we focus on generating cost matrices to improve the empirical evaluation methodology. In particular, we use methods from the literature to provide formal guarantees on how cost matrices are distributed: we ensure a uniform distribution among the cost matrices with given task and machine heterogeneities. Although the use of randomly generated matrices has often been criticized, this new generation procedure is the first that is proven to prevent biased generation by ensuring uniform generation with given properties. This method is relevant for assessing the performance of scheduling heuristics, in particular when characterizing for which parameter values a given approach performs better than others.
When applied to a makespan minimization problem, the methodology reveals when each of three efficient heuristics performs better depending on the instance heterogeneity.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"168 ","pages":"Article 107758"},"PeriodicalIF":6.2,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143511793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
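The uniform-generation guarantee in the record above comes from MCMC techniques. As an illustrative sketch only, not the authors' actual procedure: the classic swap chain over nonnegative integer matrices with fixed row and column sums uses symmetric moves with rejection, so its stationary distribution is uniform over the reachable matrices:

```python
import random

def mcmc_cost_matrix(M, steps, rng=None):
    """Random walk over nonnegative integer matrices sharing the row sums
    (task heterogeneity) and column sums (machine heterogeneity) of M.
    Each move shifts one cost unit around a random 2x2 submatrix, which
    preserves all marginals; invalid moves are rejected (the chain stays
    put), keeping the proposal symmetric."""
    rng = rng or random.Random()
    M = [row[:] for row in M]                # work on a copy
    n, m = len(M), len(M[0])
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        k, l = rng.sample(range(m), 2)
        if M[i][k] > 0 and M[j][l] > 0:      # keep entries nonnegative
            M[i][k] -= 1; M[j][l] -= 1
            M[i][l] += 1; M[j][k] += 1
    return M
```

Running the chain long enough from any feasible seed matrix approximates a uniform draw with the prescribed heterogeneities; the paper's contribution lies in making such guarantees formal for cost-matrix generation.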
{"title":"Research on transaction allocation strategy in blockchain state sharding","authors":"Guangxia Xu , Zhean Zhou , Xiaoling Song , Yongfei Huang","doi":"10.1016/j.future.2025.107756","DOIUrl":"10.1016/j.future.2025.107756","url":null,"abstract":"<div><div>With the continuous enrichment of blockchain application scenarios, users place ever higher demands on blockchain throughput and storage costs. State sharding is one of the most promising technologies for blockchain: it decentralizes the storage of the blockchain ledger, effectively reducing storage costs while increasing throughput. However, it still suffers from the hot-shard problem, in which most transactions concentrate in a few committees. This paper proposes a sharding transaction allocation strategy (STAS) that scores committees and transactions by different criteria and assigns high-scoring transactions to high-scoring committees. This allocation strategy, which allocates transactions on demand based on node capacity, mitigates hot-shard issues and makes it safer to hand more valuable transactions to more honest committees. Comparative experiments show that the proposed STAS strategy achieves lower latency and higher throughput than the previous sharding model.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"168 ","pages":"Article 107756"},"PeriodicalIF":6.2,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143548537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
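A score-based allocation of the kind STAS describes can be sketched greedily; the inputs here (a `value` score per transaction, a `trust` score and `capacity` per committee) are hypothetical placeholders, since the abstract does not detail the actual scoring methods:

```python
def allocate(txs, committees):
    """Greedy sketch of score-based allocation: sort transactions by a
    value score and committees by a trust score, then assign each
    transaction to the most trusted committee with spare capacity.
    txs: [(tx_id, value)]; committees: [(cid, trust, capacity)]."""
    comms = sorted(committees, key=lambda c: -c[1])
    load = {cid: 0 for cid, _, _ in comms}
    plan = {}
    for tx_id, _ in sorted(txs, key=lambda t: -t[1]):
        for cid, _, cap in comms:
            if load[cid] < cap:              # first committee with room
                plan[tx_id] = cid
                load[cid] += 1
                break
    return plan
```

The capacity bound is what mitigates hot shards in this sketch: once the most trusted committee fills up, remaining transactions spill over to the next one instead of piling onto a single hot committee.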
{"title":"Accelerating complex graph queries by summary-based hybrid partitioning for discovering vulnerabilities of distribution equipment","authors":"Qiong Wang , Wei He , Shang Yang , Ruoyu Zhao , Yinglong Ma","doi":"10.1016/j.future.2025.107747","DOIUrl":"10.1016/j.future.2025.107747","url":null,"abstract":"<div><div>With the high proportion of electrical and electronic devices in China’s power grids, massive graph data on power distribution equipment has been accumulated to share knowledge across heterogeneous information sources, while the vulnerabilities of power devices introduce new security risks to the grid. It is crucial to swiftly and accurately discover the intrinsic vulnerabilities of power devices from this massive power distribution graph data to ensure safe operation of the power grid. However, diverse complex queries make it difficult to achieve consistent graph querying performance over the massive power graph data for swift and accurate vulnerability discovery in a highly available and user-friendly manner. To handle this problem, we present a power graph query-oriented pipeline framework that consistently accelerates complex graph queries over the massive graph data of power distribution equipment for efficient vulnerability discovery. First, we propose a lossless graph summarization method, through which a summary graph is produced from the raw graph data. Second, unlike existing methods, we propose a two-stage hybrid partitioning, comprising a binary partitioning followed by a ternary partitioning, conducted on the summary graph instead of the raw graph to reduce the search scope and minimize the input of the queried data, thereby accelerating the query.
Third, a complex graph query with multiple triplet patterns is automatically translated into a Spark SQL statement for execution without user intervention, and accurate results are obtained by recovering the summary-based intermediate results. Finally, extensive experiments were conducted on four datasets against state-of-the-art methods; the results show that our approach is highly competitive and achieves consistent performance in accelerating complex graph queries while returning accurate results.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107747"},"PeriodicalIF":6.2,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143419072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Elevating e-health excellence with IOTA distributed ledger technology: Sustaining data integrity in next-gen fog-driven systems","authors":"Waqas Ahmed , Waseem Iqbal , Ammar Hassan , Awais Ahmad , Farhan Ullah , Gautam Srivastava","doi":"10.1016/j.future.2025.107755","DOIUrl":"10.1016/j.future.2025.107755","url":null,"abstract":"<div><div>Ensuring data integrity is crucial for IoT-based healthcare and emotion care services, which utilize Fog computing to bring resources and services closer to the network edge. This proximity, however, increases the risks of data tampering, loss, and unauthorized access. To mitigate these risks, Distributed Ledger Technology (DLT) platforms such as Hashgraph, BigchainDB, IOTA (Internet of Things Application), and Blockchain are being investigated for their potential to enhance data integrity within Fog computing environments. This study presents a framework designed to ensure data integrity in IoT-based healthcare and emotion care services by leveraging IOTA technology. IOTA, which employs a directed acyclic graph (DAG) structure known as the Tangle, provides a secure, decentralized, and tamper-resistant method for data storage and sharing. Unlike traditional blockchains, IOTA’s consensus mechanism operates without miners, offering improved scalability and efficiency suitable for IoT environments. Our proposed framework exploits IOTA’s capabilities to deliver a robust solution for maintaining data integrity in Fog computing contexts. The evaluation results demonstrate the framework’s feasibility and effectiveness in enhancing data integrity for IoT-based healthcare and emotion care services.
Although IOTA significantly improves data integrity by complicating unauthorized data alterations, it is essential to acknowledge that complete immutability is influenced by various factors, such as consensus mechanisms and the number of network participants, similar to the limitations observed in other DLTs.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"168 ","pages":"Article 107755"},"PeriodicalIF":6.2,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143471464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DNA: Dual-radio Dual-constraint Node Activation scheduling for energy-efficient data dissemination in IoT","authors":"Hua Qin , Hao Chen , Ni Li , Yaqi Deng , Gelan Yang , Yang Peng","doi":"10.1016/j.future.2025.107746","DOIUrl":"10.1016/j.future.2025.107746","url":null,"abstract":"<div><div>With the fast expansion of the Internet of Things (IoT), a vast number of IoT gateways are being deployed and continuously disseminating data to proximate devices. As one of the most prevalent wireless technologies in our daily work and life, WiFi has been extensively used for data dissemination because of the widespread availability of WiFi infrastructure. However, data transmission over high-power WiFi can quickly deplete the batteries of IoT devices. Despite the introduction of numerous power-saving protocols in WiFi-based IoT systems, achieving both high energy efficiency and low delay remains a challenge due to the unpredictability of data traffic. To address this, we propose a dual-radio Dual-constraint Node Activation (DNA) scheduling scheme, which leverages an auxiliary low-power ZigBee radio to reactively activate the high-power WiFi radio for energy-efficient data dissemination. Besides the delay constraint required by WiFi upper-layer applications, the dual-radio energy optimization problem studied in this work is also limited by the constrained ZigBee bandwidth available for performing radio activation. By jointly scheduling dual-radio duty cycles, DNA dynamically allocates ZigBee bandwidth to balance energy and delay for optimized system performance. Extensive real-world testing was conducted on a prototype dual-radio system equipped with off-the-shelf ZigBee and WiFi radios. Under medium bandwidth and delay constraints, DNA achieves an energy consumption of 7.95 mJ per data packet, 95.4% and 36.2% lower than that of WiFi’s standard power-saving protocol and a contemporary dual-radio scheduling scheme, respectively.
Additionally, DNA has demonstrated superior reliability and adaptability in various scenarios.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107746"},"PeriodicalIF":6.2,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143394643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Blending lossy and lossless data compression methods to support health data streaming in smart cities","authors":"Alexandre Andrade , Cristiano André da Costa , Alex Roehrs , Debora Muchaluat-Saade , Rodrigo da Rosa Righi","doi":"10.1016/j.future.2025.107748","DOIUrl":"10.1016/j.future.2025.107748","url":null,"abstract":"<div><div>The digital transformation process has significantly boosted the widespread adoption of telemedicine and the use of wearable devices for remote monitoring of vital signs. However, implementing a system for continuous monitoring of the population’s vital signs, with data streamed from various locations within a smart city, faces significant challenges related to bandwidth consumption, communication latency, and storage capacity due to the large volume of data. To overcome these challenges, a common practice is to adopt an edge-fog-cloud layered architecture. The literature lacks software solutions capable of managing the simultaneous transmission of various vital signs from geographically distributed individuals while retaining the ability to generate health notifications promptly. In this context, we propose the VSAC (Vital Sign Adaptive Compressor) model, which combines lossy and lossless data compression algorithms in a layered architecture to support healthcare demands in a smart city. The main contribution is how we blend both strategies: we first use lossy compression to collect only valuable vital sign data for each individual, then apply lossless algorithms to reduce the number of bytes before sending the data to higher layers. We also provide a real-time processing protocol that facilitates the collection of heterogeneous data distributed across different city regions. Experiments with a VSAC prototype indicate that orchestrating the two data compression strategies is more efficient than conventional data reduction methods.
In particular, we obtained gains of up to 42% when measuring the compression rate metric.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107748"},"PeriodicalIF":6.2,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143403358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
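VSAC's lossy-then-lossless pipeline can be illustrated with stand-in algorithms — a deadband filter for the lossy stage and DEFLATE (`zlib`) for the lossless stage; the paper's actual algorithm choices are not specified in the abstract, so both stages here are assumptions:

```python
import struct
import zlib

def deadband(samples, tol):
    """Lossy stage: keep a sample only when it drifts more than `tol`
    from the last kept value (assumes a non-empty input list)."""
    kept = [samples[0]]
    for s in samples[1:]:
        if abs(s - kept[-1]) > tol:
            kept.append(s)
    return kept

def compress(samples, tol=1.0):
    """Deadband-filter the stream, then DEFLATE the packed floats."""
    kept = deadband(samples, tol)
    return zlib.compress(struct.pack(f"<{len(kept)}f", *kept))

def decompress(blob):
    """Invert the lossless stage only; the deadband loss is irreversible."""
    raw = zlib.decompress(blob)
    return list(struct.unpack(f"<{len(raw) // 4}f", raw))
```

For a heart-rate stream such as `[72, 72, 72, 73, 72, 90, 91, 90, 72]` with `tol=2`, only the level shifts survive the lossy stage — exactly the "valuable" samples a notification system needs — and the lossless stage then removes residual redundancy before transmission to higher layers.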