Title: Multi-Task Learning Empowered Anomaly Detection for Internet of Power Systems
Authors: Xin Li, Zhaoyang Qu, Tong Yu, Ming Xie, Fu Yu, Wei Ding
DOI: https://doi.org/10.1002/cpe.8352
Concurrency and Computation: Practice and Experience 37(6-8), published 27 March 2025.

Abstract: The integration of the Internet of Things (IoT) with power systems, referred to as the Internet of Power Systems (IoPS), has significantly enhanced the efficiency and reliability of energy distribution and management. However, this integration also introduces complexity and vulnerability to anomalies that can disrupt system functionality and security. Traditional anomaly detection methods, while effective to a degree, often struggle with the scale and diversity of data generated by IoPS. Motivated by this, we propose a novel anomaly detection framework based on multi-task learning (MTL) to address these challenges. MTL leverages shared representations across multiple related tasks, improving detection performance and robustness compared with single-task systems. We present a comprehensive methodology for implementing this framework, covering model architecture, data handling, and evaluation metrics. Our experimental results demonstrate that the MTL approach significantly outperforms traditional methods in accuracy and efficiency. This research advances IoPS security and lays a foundation for future work on smart grid analytics. The paper concludes by discussing the implications of our findings for building more resilient IoPS and suggesting directions for further research.
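The hard-parameter-sharing idea behind MTL can be sketched in a few lines: one shared encoder feeds several task-specific heads, so gradients from every task shape the common representation. The layer sizes and the two example tasks (anomaly classification and a severity score) below are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, n_hidden = 16, 8
W_shared = rng.normal(size=(n_features, n_hidden))   # shared representation, used by all tasks
W_anomaly = rng.normal(size=(n_hidden, 2))           # head 1: anomaly class logits
W_severity = rng.normal(size=(n_hidden, 1))          # head 2: severity regression

def forward(x):
    h = np.tanh(x @ W_shared)        # shared layer: one encoding per sample
    logits = h @ W_anomaly           # each head reads the same encoding
    severity = h @ W_severity
    return logits, severity

x = rng.normal(size=(4, n_features)) # a batch of 4 hypothetical IoPS telemetry vectors
logits, severity = forward(x)
print(logits.shape, severity.shape)
```

In a trained system, losses from both heads would be summed so the shared weights learn features useful to every task.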
Title: Recon: Efficient Intrusion Recovery for Web Applications
Authors: Mohamed Hammad, Nabil Hewahi, Wael Elmedany
DOI: https://doi.org/10.1002/cpe.70066
Concurrency and Computation: Practice and Experience 37(6-8), published 26 March 2025.

Abstract: In present-day systems, recovery from intrusions is usually time-consuming. Hosted web applications are vulnerable to cyberattacks and may be at risk from HTTP requests that exploit these vulnerabilities. In this article, we present an approach to recovering web systems from cyberattacks using machine learning, called Reconstruct (Recon). Recon helps users and administrators of web applications recover from intrusions while preserving authorized user changes. Its recovery mechanism carries out compensation operations to remove the effects of the attack and then re-applies the subsequent authorized actions. A system administrator can perform the recovery without any changes to the software. A convolutional neural network combined with long short-term memory maps the requests the application receives to the statements executed in the database. Two widely used web applications, WordPress and LimeSurvey, were used to evaluate Recon. The findings show that the impact of malicious requests can be removed while legitimate application data is preserved with minimal user input, at a cost of 1%–2% in throughput and 2.24–3.1 GB/day in storage, achieving an F1-score of up to 98.56%. The reported results improve on the performance overhead of prior studies by up to 20×.
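The compensate-and-replay recovery described above can be illustrated with a toy model: given an operation log and the set of requests judged malicious, rebuild state by replaying only the legitimate operations. The key/value "database" and the tainted-request IDs here are illustrative assumptions, not Recon's actual log format.

```python
# Toy operation log: each entry records which request wrote which value.
log = [
    {"req": "r1", "key": "title", "value": "Hello"},
    {"req": "r2", "key": "title", "value": "HACKED"},   # malicious write
    {"req": "r3", "key": "body",  "value": "Welcome"},
]
tainted = {"r2"}  # requests the detector attributed to the attack

def recover(log, tainted):
    """Rebuild state, skipping operations from tainted requests."""
    db = {}
    for op in log:                      # replay in original order
        if op["req"] not in tainted:    # compensation: drop tainted effects
            db[op["key"]] = op["value"]
    return db

db = recover(log, tainted)
print(db)  # the malicious overwrite of "title" is gone, "r3" is preserved
```

The real system must also track read/write dependencies between requests (the paper's CNN+LSTM maps HTTP requests to database statements for exactly this purpose); this sketch assumes operations are independent.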
Title: RETRACTION: Analysis and Design of a New Fault-Tolerant Digital Comparator Based on Nano-Scale Quantum Technology
DOI: https://doi.org/10.1002/cpe.70053
Concurrency and Computation: Practice and Experience 37(6-8), published 25 March 2025. Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/cpe.70053

RETRACTION: Y. Jiao, "Analysis and Design of a New Fault-Tolerant Digital Comparator Based on Nano-Scale Quantum Technology," Concurrency and Computation: Practice and Experience 34, no. 20 (2022): e7158, https://doi.org/10.1002/cpe.7158.

The above article, published online on 31 July 2022 in Wiley Online Library (wileyonlinelibrary.com), has been retracted by agreement between the journal Editors, Jinjun Chen, Nitin Auluck, and Martin Berzins, and John Wiley & Sons Ltd. The retraction was agreed because the manuscript was found to have been published solely on the basis of a compromised peer review process. Furthermore, the authors included incoherent, meaningless, and irrelevant information in the article. The results presented are unsupported by any evidence, and the editors consider the conclusions of the article to be invalid.

The author has been informed of the decision to retract.
Title: Real-Time Monitoring of On-Board Unit Status in Highway Electronic Toll Collection Systems Using GraphSAGE-Based Heterogeneous Graph Learning
Authors: Qiang Ren, Chengmingchan Yan, Fumin Zou, Yue Xing, Haolin Wang, Ying Zhang
DOI: https://doi.org/10.1002/cpe.70056
Concurrency and Computation: Practice and Experience 37(6-8), published 19 March 2025.

Abstract: The reliable operation of on-board units (OBUs) in electronic toll collection (ETC) systems is critical for maintaining transaction accuracy and preventing revenue loss. However, real-time monitoring of OBU status faces challenges such as technological obsolescence, environmental vulnerabilities, and data inconsistencies. This study proposes a novel GraphSAGE-based approach for real-time OBU status monitoring. First, we establish a classification standard for OBU operating status based on missing-data patterns, enabling precise identification of abnormal states. Second, we design a real-time data warehouse architecture tailored to the characteristics of ETC transaction data, ensuring efficient data processing and storage. Third, we use the GraphSAGE model to monitor OBU status in real time, leveraging heterogeneous graph learning to capture both temporal and structural dependencies in the data. The experimental results demonstrate the effectiveness of the proposed approach, which achieves a true positive rate of 99.8% and a false positive rate of 0.2% and performs strongly on accuracy, precision, recall, and F1-score. The proposed method outperforms existing models such as graph convolutional networks (GCN), graph attention networks (GAT), and XGBoost in real-time monitoring tasks, demonstrating stability and generalization across different data volumes. This study provides a comprehensive framework for improving OBU condition monitoring, contributing to enhanced maintenance strategies and more effective detection of fee evasion by regulatory authorities.
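The core GraphSAGE operation the abstract relies on is neighborhood aggregation: each node's new embedding combines its own features with an aggregate (here, the mean) of its neighbors' features. The tiny four-node graph, feature sizes, and single layer below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 3))                  # 4 nodes (e.g., OBUs/gantries), 3 features each
neighbors = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
W = rng.normal(size=(6, 5))                  # maps [self | mean(neighbors)] -> 5-dim embedding

def sage_layer(X, neighbors, W):
    """One GraphSAGE layer with a mean aggregator."""
    out = []
    for v in range(X.shape[0]):
        agg = X[neighbors[v]].mean(axis=0)   # aggregate neighbor features
        h = np.concatenate([X[v], agg]) @ W  # concatenate with self, then transform
        out.append(np.maximum(h, 0.0))       # ReLU nonlinearity
    return np.stack(out)

H = sage_layer(X, neighbors, W)
print(H.shape)
```

Because the layer samples and aggregates only local neighborhoods, it scales to large graphs and can embed nodes unseen during training, which is what makes it attractive for streaming ETC transaction data.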
Title: Deep Learning Based Attack Detection and QoS Aware Secure Routing Protocol for SDN-IoT Network
Authors: Manvitha Gali, Aditya Mahamkali
DOI: https://doi.org/10.1002/cpe.70045
Concurrency and Computation: Practice and Experience 37(6-8), published 19 March 2025.

Abstract: The IoT network and a software-based controller together comprise the Software-Defined Network-based IoT (SDN-IoT). SDN-IoT is widely used in traffic control and management, smart buildings and homes, safety applications, health care monitoring and automation, and several other application domains. The most challenging aspects of the SDN-IoT scenario are efficient routing and security. Hence, a novel SDN-IoT framework with deep learning-based attack detection and a QoS-aware secure routing protocol is proposed. Attack detection is first applied to each user request using a Deep Belief Network (DBN). Detected malicious requests are dropped, and routing proceeds for normal data packets. African Aquila Optimization is introduced for efficient routing based on secure QoS factors. Evaluated on end-to-end delay, energy consumption, network lifetime, packet delivery ratio, and throughput, the proposed method achieved values of 1.86, 5.22, 1632.84, 0.9959, and 0.96, respectively.
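The detect-then-route pipeline sketched in the abstract gates every request through a detector before routing. In the paper the detector is a Deep Belief Network; in this illustrative sketch an arbitrary scoring function and threshold stand in for it, and the request fields are assumptions.

```python
def handle(requests, score, threshold=0.5):
    """Drop requests the detector flags; pass the rest on to routing."""
    routed, dropped = [], []
    for req in requests:
        if score(req) >= threshold:   # detector says malicious
            dropped.append(req)
        else:                         # normal packet, forward to routing
            routed.append(req)
    return routed, dropped

score = lambda req: req["anomaly"]    # stand-in for the DBN's output score
reqs = [{"id": 1, "anomaly": 0.1}, {"id": 2, "anomaly": 0.9}]
routed, dropped = handle(reqs, score)
print([r["id"] for r in routed], [d["id"] for d in dropped])
```

Only the `routed` list would then be handed to the QoS-aware path-selection step.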
Title: Privacy-Preserving Real-Time Smart Grid Topology Analysis: A Graph Neural Networks Approach
Authors: Wangyong Guo, Siyang Shao, Kexin Zhang, Rongqiang Feng, Xueqiong Wu, Zhihao Zhang
DOI: https://doi.org/10.1002/cpe.8343
Concurrency and Computation: Practice and Experience 37(6-8), published 19 March 2025.

Abstract: The transition to smart grids represents a significant evolution in power system technology, incorporating advanced communication and control mechanisms to enhance efficiency, reliability, and sustainability. Real-time topology analysis is a critical functionality in smart grids, enabling the detection of faults, optimization of operations, and maintenance of grid stability. However, this analysis presents challenges, including the need for efficient data processing, handling dynamic topologies, and preserving data privacy. This article proposes a novel approach for real-time smart grid topology analysis using graph neural networks (GNNs) with integrated privacy-preserving techniques. The GNN model is designed to capture complex relationships within the grid, facilitating accurate node classification, edge prediction, and anomaly detection. To address privacy concerns, we incorporate differential privacy and secure multi-party computation, ensuring that sensitive data remains protected during analysis. Extensive experiments conducted on a synthetic smart grid dataset demonstrate the effectiveness of the proposed method, achieving high accuracy, precision, recall, and AUC-ROC scores across various tasks. A case study further illustrates the practical applicability of our approach, showing efficient real-time performance and robust privacy guarantees.
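One of the privacy ingredients named above, differential privacy, is commonly realized with the Laplace mechanism: noise scaled to sensitivity/epsilon is added to a value before it leaves the data owner. The sensitivity and epsilon values below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Add Laplace noise calibrated for (epsilon)-differential privacy."""
    scale = sensitivity / epsilon      # larger sensitivity or smaller epsilon => more noise
    return value + rng.laplace(0.0, scale)

rng = np.random.default_rng(42)
true_load = 120.5                      # a node's true load measurement
noisy = laplace_mechanism(true_load, sensitivity=1.0, epsilon=0.5, rng=rng)
print(round(noisy, 2))                 # released value; close to, but not equal to, the truth
```

In a GNN pipeline the same mechanism can be applied to node features or gradients, trading a small accuracy loss for a provable bound on what any single measurement reveals.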
Title: 1,024-FPGA DES Supercomputer on the AWS Cloud
Authors: Kemal Ebcioğlu, Batuhan Bulut, Atakan Doğan, Gürhan Küçük, İsmail San
DOI: https://doi.org/10.1002/cpe.70051
Concurrency and Computation: Practice and Experience 37(6-8), published 19 March 2025.

Abstract: We present a 1,024-FPGA DES supercomputer accelerator that is automatically compiled from a single-threaded sequential DES key-search application by our High-Level Synthesis compiler. The 1,024-FPGA supercomputer is deployed on several Amazon Web Services (AWS) EC2 F1 instance platforms in different AWS regions. Consequently, it can be considered the first multi-chip application-specific supercomputer spread across multiple geographically distributed data centers around the world. Furthermore, invoking the 1,024-FPGA DES supercomputer is functionally identical to invoking the single-threaded sequential DES application from which the accelerator is compiled. The supercomputer achieves 3.016 × 10^12 keys/sec, performing 5,286,000 times better than an AWS EC2 m5.8xlarge Xeon x86 machine executing the original sequential application at 5.706 × 10^5 keys/sec.
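The quoted speedup follows directly from the two throughput figures, which is easy to check:

```python
# Speedup implied by the two throughputs reported in the abstract.
fpga_rate = 3.016e12     # keys/sec on the 1,024-FPGA system
cpu_rate = 5.706e5       # keys/sec on one m5.8xlarge Xeon machine
speedup = fpga_rate / cpu_rate
per_fpga = fpga_rate / 1024            # average throughput per FPGA
print(f"{speedup:,.0f}x overall, {per_fpga:.3e} keys/sec per FPGA")
```

The ratio works out to roughly 5.29 million, consistent with the abstract's rounded figure of 5,286,000×.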
Title: Quantum Computer Architecture for Quantum Error Correction With Distributing Process to Multiple Temperature Layers
Authors: Ryuji Ukai, Chihiro Yoshimura, Hiroyuki Mizuno
DOI: https://doi.org/10.1002/cpe.8351
Concurrency and Computation: Practice and Experience 37(6-8), published 19 March 2025.

Abstract: Quantum computers can perform large-scale calculations in a shorter time than conventional classical computers. Because quantum computers are realized in microscopic physical systems, unintended changes in the quantum state are unavoidable due to interaction with the environment, leading to computational errors. Quantum error correction is therefore needed to detect and correct errors as they occur. In this paper, we propose a quantum computer architecture for quantum error correction in which the components of a quantum computer based on quantum dots in silicon are divided among multiple temperature layers inside and outside the dilution refrigerator. Based on the required performance and available processing capacity, the components are distributed across temperature layers as follows: the chip with qubits and the chip generating the precise analog signals that control the qubits are placed on the 100 mK and 4 K stages inside the dilution refrigerator, respectively, while real-time digital processing is performed outside it. We then experimentally demonstrate the digital control sequence for quantum error correction combined with a simulator that models quantum states based on control commands from the digital processing system. The simulator enables a proof-of-principle experiment on the system architecture independent of chip development. The real-time processing, including determination of the feed-forward operation and transmission of feed-forward operation commands, is carried out by a field-programmable gate array (FPGA) outside the dilution refrigerator within 0.01 ms for bit-flip or phase-flip error corrections. This is sufficiently short compared with the assumed relaxation time, the approximate time for which the quantum state can be preserved, meaning that the proposed architecture is applicable to quantum error correction.
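The classical feed-forward step the FPGA performs for bit-flip correction can be illustrated with the simplest code of this kind, the three-qubit repetition code: two parity syndromes identify which qubit flipped, and the feed-forward operation flips it back. This classical simulation is an illustrative sketch, not the paper's control sequence.

```python
def correct_bit_flip(bits):
    """Three-qubit repetition code: measure syndromes, apply feed-forward fix."""
    s1 = bits[0] ^ bits[1]   # syndrome 1: parity of qubits 0 and 1
    s2 = bits[1] ^ bits[2]   # syndrome 2: parity of qubits 1 and 2
    if s1 and not s2:
        bits[0] ^= 1         # (1,0) implicates qubit 0
    elif s1 and s2:
        bits[1] ^= 1         # (1,1) implicates qubit 1
    elif s2 and not s1:
        bits[2] ^= 1         # (0,1) implicates qubit 2
    return bits              # (0,0): no error detected

print(correct_bit_flip([0, 1, 0]))  # single flip on qubit 1 is corrected to [0, 0, 0]
```

The point of the paper's architecture is that this decode-and-feed-forward loop must complete well within the qubits' relaxation time, which is why its placement in the warm digital layer and its 0.01 ms latency matter.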
Title: An Efficient Hybrid Recommender System for e-Learning Based on Cloud Content in Educational Web Services
Authors: Baoqing Tai, Xianxian Yang, Ju Chong, Lei Chen
DOI: https://doi.org/10.1002/cpe.70059
Concurrency and Computation: Practice and Experience 37(6-8), published 19 March 2025.

Abstract: In this article, we present a novel method for multimodal learning using Siamese networks to recommend appropriate educational content on e-learning platforms. One of the main challenges in current recommendation systems is their inability to effectively personalize content based on the unique needs and preferences of individual learners. Existing methods often struggle to capture long-term dependencies and intricate patterns in user behavior, leading to irrelevant or inadequate content suggestions. To address this, our approach utilizes two residual Siamese networks based on Long Short-Term Memory (LSTM) and Recurrent Convolutional Neural Networks (RCNN). This hybrid model effectively captures both sequential and contextual information, leveraging LSTM's strength in handling long-term dependencies and RCNN's capability to extract local features through convolutional operations. By analyzing complex patterns within the data, our method significantly enhances recommendation accuracy, considering both temporal sequences and contextual relationships. The Siamese network encodes user and item data into a high-dimensional feature space, positioning similar users and items closer together. The residual connections allow the model to capture both low-level and high-level features, leading to richer representations. Extensive experiments on real-world e-learning datasets demonstrate the superiority of our method over traditional recommendation techniques, evaluated through metrics such as precision, recall, and accuracy. The results show that our approach not only improves recommendation accuracy but also enhances the diversity and relevance of suggested content, offering more personalized learning experiences that cater to the individual needs and preferences of learners.
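The Siamese scoring principle described above is that both inputs pass through the *same* encoder, and distance in the resulting feature space expresses similarity. In this sketch a random linear-plus-tanh map stands in for the trained LSTM/RCNN towers, and the user/item vectors are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
W = rng.normal(size=(10, 4))            # shared encoder weights (used by both towers)

def encode(x):
    return np.tanh(x @ W)               # stand-in for the trained LSTM/RCNN encoder

def similarity(a, b):
    """Higher score = closer in the learned feature space."""
    return -np.linalg.norm(encode(a) - encode(b))

user = rng.normal(size=10)                       # a learner's profile vector
item_close = user + 0.01 * rng.normal(size=10)   # content almost matching the profile
item_far = -user                                 # content opposite to the profile
print(similarity(user, item_close), similarity(user, item_far))
```

Training would pull matched user/item pairs together and push mismatched pairs apart (e.g., with a contrastive or triplet loss); ranking candidate items by this score yields the recommendations.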
Title: GPU Acceleration of the GWO Optimization Algorithm: Application to the Solution of Large Nonlinear Equation Systems
Authors: Bruno Silva, Luiz Guerreiro Lopes
DOI: https://doi.org/10.1002/cpe.70043
Concurrency and Computation: Practice and Experience 37(6-8), published 19 March 2025.

Abstract: Large-scale optimization problems present formidable challenges in various scientific and engineering domains. To address these challenges, population-based computational intelligence algorithms have emerged as potent tools capable of being parallelized. Among these algorithms, the gray wolf optimizer (GWO) stands out for its ability to simulate the hierarchical structure and hunting behaviors of gray wolves in the wild and has been used successfully to solve several hard optimization problems. However, the study of its applicability for solving nonlinear equation systems (NESs), which is arguably one of the most difficult classes of numerical problems, poses significant challenges in terms of computational efficiency and scalability. To address this gap, this article introduces a novel GPU-based parallel implementation of the GWO algorithm aimed at addressing the particular challenges of optimizing large-scale NESs by employing the substantial parallel processing capabilities of GPUs. The GPU-based version of GWO was developed using the Julia programming language, and its performance was evaluated with two GPUs of professional grade: the NVIDIA Tesla V100 SXM2 with 32 GB VRAM and the NVIDIA A100 PCIe with 80 GB VRAM. The testing involved a series of complex, scalable NESs with dimensions ranging from 500 to 4000. The results obtained demonstrate average speedups ranging from 154.9× to 250.2× for the V100 GPU and from 204.0× to 923.9× for the A100. These results highlight the effectiveness of the proposed GPU-based acceleration technique in reducing computation times.
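The core GWO position update, which is what a GPU version parallelizes across the whole population, moves each wolf toward positions steered by the three current best wolves (alpha, beta, delta). The coefficient formulas below follow the standard GWO scheme; the population size, dimension, and sphere objective are illustrative assumptions (the paper targets nonlinear equation systems written in Julia).

```python
import numpy as np

rng = np.random.default_rng(3)
dim, n_wolves = 4, 6
X = rng.uniform(-5, 5, size=(n_wolves, dim))   # positions of the wolf pack

def sphere(x):                                 # stand-in objective function
    return np.sum(x * x, axis=-1)

def gwo_step(X, a):
    """One GWO iteration; a decreases from 2 to 0 over the run."""
    idx = np.argsort(sphere(X))[:3]            # alpha, beta, delta = three best wolves
    candidates = []
    for leader in X[idx]:
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        A = 2 * a * r1 - a                     # |A| > 1 explores, |A| < 1 exploits
        C = 2 * r2
        D = np.abs(C * leader - X)             # per-wolf distance to this leader
        candidates.append(leader - A * D)      # pull toward the leader
    return np.mean(candidates, axis=0)         # average the three pulls

X_new = gwo_step(X, a=1.0)
print(X_new.shape)
```

Every wolf's update depends only on the three shared leaders, so all rows of `X_new` can be computed independently — exactly the structure a GPU exploits.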