{"title":"Self-sovereign identity framework with user-friendly private key generation and rule table","authors":"Jungwon Seo , Sooyong Park","doi":"10.1016/j.future.2025.107757","DOIUrl":"10.1016/j.future.2025.107757","url":null,"abstract":"<div><div>The rise of self-sovereign identity (SSI) technology plays a critical role in addressing the limitations of conventional digital identity management systems. This paper focuses on the credential layer within the SSI technology stack, presenting a comprehensive solution to challenges related to usability, inefficient encryption and decryption processes, and verifiable credential management in existing SSI frameworks. To tackle these issues, the proposed approach introduces a user-friendly private key generation method, a rule table-based encryption and decryption technique, and a verifiable credential management system using smart contracts. In a usability evaluation involving 58 participants, 74.1% rated the proposed approach as user-friendly. Performance evaluations demonstrated that the rule table-based encryption method is between 10.37 and 171.51 times faster than existing encryption techniques. Similarly, the decryption process showed significant improvements, achieving performance that is 16.94 to 58.68 times faster than traditional methods. Security analyses were also conducted, highlighting the resilience against brute-force attacks and unauthorized access. The impact of this research extends beyond addressing current limitations, offering a robust and efficient framework that enhances the usability, security, and performance of SSI systems. By advancing the credential layer, this work paves the way for broader adoption of SSI technology across diverse applications, contributing to the evolution of decentralized identity management solutions.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107757"},"PeriodicalIF":6.2,"publicationDate":"2025-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143429245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accelerating complex graph queries by summary-based hybrid partitioning for discovering vulnerabilities of distribution equipment","authors":"Qiong Wang , Wei He , Shang Yang , Ruoyu Zhao , Yinglong Ma","doi":"10.1016/j.future.2025.107747","DOIUrl":"10.1016/j.future.2025.107747","url":null,"abstract":"<div><div>With the high proportion of electrical and electronic devices in China’s power grids, massive graph data of power distribution equipment has been accumulated to share the knowledge across heterogeneous information, while the vulnerabilities of power devices consequently trigger new security risks to the power grid. It is crucial to swiftly and accurately discover the intrinsic vulnerabilities of power devices from the massive power distribution graph data for ensuring safe operation of the power grid. However, diverse complex queries make it inefficient to achieve consistent graph querying performance over the massive power graph data for swift and accurate vulnerability discovery in a highly available and user-friendly manner. To handle the aforementioned problem, in this paper, we present a power graph query-oriented pipeline framework to consistently accelerate complex graph queries over the massive graph data of power distribution equipment for efficient vulnerability discovery. First, we propose a lossless graph summarization method, through which a summary graph is produced from the raw graph data. Second, very different from existing methods, we propose a two-stage hybrid partitioning including the binary partitioning and the consequent ternary partitioning, which is conducted based on the summary graph instead of the raw graph for reducing the search scope and minimizing the input of the queried data, thereby accelerating the query. Third, the complex graph query with multiple triplet patterns will be automatically translated into the Spark SQL statement for query execution without users’ interference, through which the accurate results will be obtained by recovering the summary-based intermediate results. At last, extensive experiments were made over four datasets against some state-of-the-art methods, and the results show that our approach is very competitive with these approaches and achieves consistent graph querying performance in accelerating complex graph queries while obtaining accurate results.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107747"},"PeriodicalIF":6.2,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143419072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DNA: Dual-radio Dual-constraint Node Activation scheduling for energy-efficient data dissemination in IoT","authors":"Hua Qin , Hao Chen , Ni Li , Yaqi Deng , Gelan Yang , Yang Peng","doi":"10.1016/j.future.2025.107746","DOIUrl":"10.1016/j.future.2025.107746","url":null,"abstract":"<div><div>With the fast expansion of the Internet of Things (IoT), a vast number of IoT gateways are being deployed and continuously disseminating data to proximate devices. As one of the most prevalent wireless technologies in our daily work and life, WiFi has been extensively used for data dissemination because of the widespread availability of WiFi infrastructures. However, data transmission over high-power WiFi can quickly deplete the batteries of IoT devices. Despite the introduction of numerous power saving protocols in WiFi-based IoT computer network systems, achieving both high energy efficiency and low delay remains a challenge due to the unpredictability of data traffic. To address this, we propose a dual-radio Dual-constraint Node Activation (DNA) scheduling scheme, which leverages an auxiliary low-power ZigBee radio to reactively activate the high-power WiFi radio for energy-efficient data dissemination. Besides the delay constraint required by WiFi upper-layer applications, the dual-radio energy optimization problem studied in this work is also limited by the constrained ZigBee bandwidth for performing radio activation. By jointly scheduling dual-radio duty cycles, DNA dynamically allocates ZigBee bandwidth to balance energy and delay for optimized system performance. Extensive real-world testing was conducted on a prototype dual-radio system equipped with off-the-shelf ZigBee and WiFi radios. Under medium bandwidth and delay constraints, DNA achieves an energy consumption of 7.95 mJ per data packet, which is 95.4% and 36.2% lower than the WiFi’s standard power saving protocol and a contemporary dual-radio scheduling scheme, respectively. Additionally, DNA has demonstrated superior reliability and adaptability in various scenarios.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107746"},"PeriodicalIF":6.2,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143394643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Blending lossy and lossless data compression methods to support health data streaming in smart cities","authors":"Alexandre Andrade , Cristiano André da Costa , Alex Roehrs , Debora Muchaluat-Saade , Rodrigo da Rosa Righi","doi":"10.1016/j.future.2025.107748","DOIUrl":"10.1016/j.future.2025.107748","url":null,"abstract":"<div><div>The digital transformation process has significantly boosted the widespread adoption of telemedicine and the utilization of wearable devices for vital signs remote monitoring. However, implementing a system for continuous monitoring of the population’s vital signs, with data being streamed from various locations within a smart city context, faces significant challenges. These challenges are related to bandwidth consumption, communication latency, and storage capacity due to the large volume of data. To overcome these challenges, a common practice consists in modeling an edge-fog-cloud layered architecture. The literature lacks software solutions capable of managing the simultaneous transmission of various vital signs data from geographically distributed individuals while maintaining the ability to generate health notifications promptly. In this context, we propose the VSAC (Vital Sign Adaptive Compressor) model, which combines lossy and lossless data compression algorithms in a layered architecture to support healthcare demands in a smart city. The main contribution is how we blend both strategies: we first use lossy compression to collect only valuable vital sign data for everyone, applying lossless algorithms afterwards to reduce the number of bytes before sending it to higher layers. We provide a real-time processing protocol that facilitates the collection of heterogeneous data distributed across different city regions. After executing a VSAC prototype, the results indicate that orchestrating the aforementioned two data compression algorithms is more efficient than conventional data reduction methods. In particular, we obtained gains of up to 42% when measuring the compression rate metric.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107748"},"PeriodicalIF":6.2,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143403358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Leading Smart Environments towards the Future Internet through Name Data Networking: A survey","authors":"Rafiq Ul Islam, Claudio Savaglio, Giancarlo Fortino","doi":"10.1016/j.future.2025.107754","DOIUrl":"10.1016/j.future.2025.107754","url":null,"abstract":"<div><div>The increasing diffusion of Smart Environments enabled by the Internet of Things (IoT) technologies has evidenced the limitations of traditional Internet Protocol (IP), thus pushing for a paradigm shift from host-centric to Information-Centric Networking (ICN). The Named Data Networking (NDN) is a particular ICN implementation that prospects more efficient and effective communication and service provision, reason why it is widely considered as an enabler towards Future Internet. Driven by the PRISMA methodology, in this work we systematically survey the current literature and analyze opportunities and limitations of NDN adoption within Smart Environments, targeted application areas, adopted technologies and research gaps. In particular, by means of a research framework, we highlight how, by shifting from the traditional IP-based to NDN, Smart Environments can benefit from unseen degrees of mobility, scalability, security and performance, paving the way to innovative and cutting-edge cyberphysical services.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107754"},"PeriodicalIF":6.2,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143394640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Energy–time modelling of distributed multi-population genetic algorithms with dynamic workload in HPC clusters","authors":"Juan José Escobar , Pablo Sánchez-Cuevas , Beatriz Prieto , Rukiye Savran Kızıltepe , Fernando Díaz-del-Río , Dragi Kimovski","doi":"10.1016/j.future.2025.107753","DOIUrl":"10.1016/j.future.2025.107753","url":null,"abstract":"<div><div>Time and energy efficiency is a highly relevant objective in high-performance computing systems, with high costs for executing the tasks. Among these tasks, evolutionary algorithms are of consideration due to their inherent parallel scalability and usually costly fitness evaluation functions. In this respect, several scheduling strategies for workload balancing in heterogeneous systems have been proposed in the literature, with runtime and energy consumption reduction as their goals. Our hypothesis is that a dynamic workload distribution can be fitted with greater precision using metaheuristics, such as genetic algorithms, instead of linear regression. Therefore, this paper proposes a new mathematical model to predict the energy–time behaviour of applications based on multi-population genetic algorithms, which dynamically distributes the evaluation of individuals among the CPU–GPU devices of heterogeneous clusters. An accurate predictor would save time and energy by selecting the best resource set before running such applications. The estimation of the workload distributed to each device has been carried out by simulation, while the model parameters have been fitted in a two-phase run using another genetic algorithm and the experimental energy–time values of the target application as input. When the new model is analysed and compared with another based on linear regression, the one proposed in this work significantly improves the baseline approach, showing normalised prediction errors of 0.081 for runtime and 0.091 for energy consumption, compared to 0.213 and 0.256 shown in the baseline approach.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107753"},"PeriodicalIF":6.2,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143386372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RGAnomaly: Data reconstruction-based generative adversarial networks for multivariate time series anomaly detection in the Internet of Things","authors":"Cheng Qian , Wenzhong Tang , Yanyang Wang","doi":"10.1016/j.future.2025.107751","DOIUrl":"10.1016/j.future.2025.107751","url":null,"abstract":"<div><div>The Internet of Things encompasses a variety of components, including sensors and controllers, which generate vast amounts of multivariate time series data. Anomaly detection within this data can reveal patterns of behavior that deviate from normal operating states, providing timely alerts to mitigate potential serious issues or losses. The prevailing methodologies for multivariate time series anomaly detection are based on data reconstruction. However, these methodologies face challenges related to insufficient feature extraction and fusion, as well as instability in the reconstruction effectiveness of a single model. In this article, we propose RGAnomaly, a novel data reconstruction-based generative adversarial network model. This model leverages transformers and cross-attention mechanisms to extract and fuse the temporal and metric features of multivariate time series. RGAnomaly constructs a joint generator comprising an autoencoder and a variational autoencoder, which forms the adversarial structure with a discriminator. The anomaly score is derived from the combined data reconstruction loss and discrimination loss, providing a more comprehensive evaluation for anomaly detection. Comparative experiments and ablation studies on four public multivariate time series datasets demonstrate that RGAnomaly delivers superior performance in anomaly detection, effectively identifying anomalies in time series data within IoT environments.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107751"},"PeriodicalIF":6.2,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143394641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"K-bisimulation: A novel approach for simplifying heterogeneous information networks","authors":"Yongjie Liang , Wujie Hu , Jinzhao Wu","doi":"10.1016/j.future.2025.107749","DOIUrl":"10.1016/j.future.2025.107749","url":null,"abstract":"<div><div>Heterogeneous information networks (HINs) are becoming increasingly important and widely used; however, fewer studies are focusing on the branch structures within HINs. Based on the commonalities of concurrent systems and heterogeneous information networks, as well as the significant application of bisimulation equivalence in concurrent systems, this article proposes k-bisimulation among nodes belonging to same node type, aiming to simplify the branching structure of that to obtain a cost-effective model, wherein the k is a positive integrate being closely related to the similarity degree of nodes. In this paper, we initially define the notion of k-bisimulation for nodes. Subsequently, we propose a computational method to identify k-bisimulation among nodes of same type in HINs. With the assistance of this method, we can derive a network that is approximately bisimular to the original one. Theoretical and practical analysis reveals that errors in connected paths between the original and bisimular networks are controllable. Experimental results indicate that, in comparison to the original network, the obtained network exhibits a reduction in the number of nodes and edges, while still preserve same or similar information.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107749"},"PeriodicalIF":6.2,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143386373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing federated averaging of self-supervised monocular depth estimators for autonomous vehicles with Bayesian optimization","authors":"Elton F. de S. Soares , Emilio Vital Brazil , Carlos Alberto V. Campos","doi":"10.1016/j.future.2025.107752","DOIUrl":"10.1016/j.future.2025.107752","url":null,"abstract":"<div><div>Recent research in computer vision for intelligent transportation systems has prominently focused on image-based depth estimation due to its cost-effectiveness and versatile applications. Monocular depth estimation methods, in particular, have gained attention for their reliance on a single camera, offering high versatility compared to binocular techniques requiring two fixed cameras. While advanced approaches leverage self-supervised deep neural network learning with proxy tasks like pose estimation and semantic segmentation, some overlook crucial requirements for real autonomous vehicle deployment. These include data privacy, reduced network consumption, distributed computational cost, and resilience to connectivity issues. Recent studies highlight the effectiveness of federated learning combined with Bayesian optimization in addressing these requirements without compromising model efficacy. Thus, we introduce BOFedSCDepth, a novel method integrating Bayesian optimization, federated learning, and deep self-supervision to train monocular depth estimators with better efficacy and efficiency than the state-of-the-art method on self-supervised federated learning. Evaluation experiments on KITTI and DDAD datasets demonstrate the superiority of our approach, achieving up to 40.1% test loss improvement over the baseline at the initial rounds of training with up to 33.3% communication cost reduction, linear computational cost overhead at the central server and no overhead at the autonomous vehicles.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107752"},"PeriodicalIF":6.2,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143394642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Web of Things approach for learning on the Edge–Cloud Continuum","authors":"Luca Bedogni , Federico Chiariotti","doi":"10.1016/j.future.2025.107736","DOIUrl":"10.1016/j.future.2025.107736","url":null,"abstract":"<div><div>Internet of Things (IoT) devices provide constant, contextual data that can be leveraged to automatically reconfigure and optimize smart environments. Artificial Intelligence (AI) and deep learning techniques are tools of increasing importance for this, as Deep Reinforcement Learning (DRL) can provide a general solution to this problem. However, the heterogeneity of scenarios in which DRL models may be deployed is vast, making the design of universal plug-and-play models extremely difficult. Moreover, the real deployment of DRL models on the Edge, and in the IoT in particular, is limited by two factors: firstly, the computational complexity of the training procedure, and secondly, the need for a relatively long exploration phase, during which the agent proceeds by trial and error. A natural solution to both these issues is to use simulated environments by creating a Digital Twin (DT) of the environment, which can replicate physical entities in the digital domain, providing a standardized interface to the application layer. DTs allow for simulation and testing of models and services in a simulated environment, which may be hosted on more powerful Cloud servers without the need to exchange all the data generated by the real devices. In this paper, we present a novel architecture based on the emerging Web of Things (WoT) standard, which provides a DT of a smart environment and applies DRL techniques on real time data. We discuss the theoretical properties of DRL training using DTs, showcasing our system in an existing real deployment, comparing its performance with a legacy system. Our findings show that the implementation of a DT, specifically for DRL models, allows for faster convergence and finer tuning, as well as reducing the computational and communication demands on the Edge network. The use of multiple DTs with different complexities and data requirements can also help accelerate the training, progressing by steps.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107736"},"PeriodicalIF":6.2,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143394644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}