Latest Articles in Concurrency and Computation-Practice & Experience

Research on Online Log Anomaly Detection Model Based on Informer
IF 1.5 | Tier 4 | Computer Science
Concurrency and Computation-Practice & Experience Pub Date : 2025-09-28 DOI: 10.1002/cpe.70300
Yimin Guo, Yiling Sun, Ping Xiong
{"title":"Research on Online Log Anomaly Detection Model Based on Informer","authors":"Yimin Guo,&nbsp;Yiling Sun,&nbsp;Ping Xiong","doi":"10.1002/cpe.70300","DOIUrl":"https://doi.org/10.1002/cpe.70300","url":null,"abstract":"<div>\u0000 \u0000 <p>To address the limitations of conventional reactive log anomaly detection in high-availability systems, this paper presents OADS—an online anomaly detection system that synergizes time-series prediction with real-time detection. The system features LSP-Informer, a multivariate log sequence predictor built upon Informer architecture and enhanced by a novel weighted combination loss (WCL) that simultaneously optimizes both prediction accuracy and semantic consistency. Furthermore, OADS implements a unique prediction-detection cascade by integrating LSP-Informer with a Temporal Convolutional Network + Attention (TCNA)-based Log Anomaly Detection Model (LADM), enabling proactive anomaly forecasting 5–10 steps ahead. Experimental results on HDFS logs demonstrate exceptional performance: The TCNA-based LADM achieves an F1-score of 0.9860, while LSP-Informer maintains a 0.9801 F1-score for 5-step-ahead prediction. The complete OADS system successfully predicts potential anomalies in advance, maintaining a robust 0.73+ Jaccard index under heavy masking conditions while preserving interpretability in real-world deployments.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 25-26","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145181628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Deep Reinforcement Learning Approach With Attention Mechanism for DAG Task Scheduling in Data Centers
IF 1.5 | Tier 4 | Computer Science
Concurrency and Computation-Practice & Experience Pub Date : 2025-09-28 DOI: 10.1002/cpe.70279
Jun Cai, Li-juan Lu
{"title":"A Deep Reinforcement Learning Approach With Attention Mechanism for DAG Task Scheduling in Data Centers","authors":"Jun Cai,&nbsp;Li-juan Lu","doi":"10.1002/cpe.70279","DOIUrl":"https://doi.org/10.1002/cpe.70279","url":null,"abstract":"<div>\u0000 \u0000 <p>Task scheduling algorithms for data centers must be capable of making instantaneous decisions based on the current state of the system. However, due to information limitations, these scheduling algorithms often fail to achieve optimal scheduling plans. To address the information bottleneck faced in DAG (Directed Acyclic Graph) task scheduling within data centers, this paper proposes a deep reinforcement learning scheduling model based on a DAG attention mechanism. This model utilizes the attention mechanism to capture the potential relationships between dependent tasks, thereby improving scheduling efficiency and system performance under limited information conditions. The experimental results indicate that our DAG attention mechanism can significantly reduce makespan.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 25-26","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145181631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Cross-Scale Spatiotemporal Memory-Augmented Network for Unsupervised Video Anomaly Detection
IF 1.5 | Tier 4 | Computer Science
Concurrency and Computation-Practice & Experience Pub Date : 2025-09-28 DOI: 10.1002/cpe.70315
Lihu Pan, Bingyi Li, Shouxin Peng, Rui Zhang, Linliang Zhang
{"title":"Cross-Scale Spatiotemporal Memory-Augmented Network for Unsupervised Video Anomaly Detection","authors":"Lihu Pan,&nbsp;Bingyi Li,&nbsp;Shouxin Peng,&nbsp;Rui Zhang,&nbsp;Linliang Zhang","doi":"10.1002/cpe.70315","DOIUrl":"https://doi.org/10.1002/cpe.70315","url":null,"abstract":"<div>\u0000 \u0000 <p>Video anomaly detection (VAD), a critical task in intelligent surveillance systems, faces two key challenges: Dynamic behavioral characterization under complex scenarios and robust spatiotemporal context modeling. Existing methods face limitations, such as inadequate cross-scale feature fusion, weak channel-wise dependency modeling, and sensitivity to background noise. To address these issues, we propose a novel multi-scale spatiotemporal feature augmentation framework. Our approach introduces three core innovations: Hierarchical feature pyramid architecture for multi-granularity representation learning, capturing both local motion patterns and global scene semantics; A channel-adaptive attention mechanism that dynamically models long-range spatiotemporal dependencies; A spatiotemporal Gaussian difference module to enhance anomaly response through frequency-domain feature reconstruction, effectively suppressing noise interference. Extensive experiments on UCSD Ped1/2, CUHK Avenue, and ShanghaiTech benchmarks demonstrate that our method achieves state-of-the-art performance, outperforming existing approaches in both accuracy and robustness.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 25-26","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145181608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Curvature-Guided Fast and Robust Normal Estimation for Point Clouds
IF 1.5 | Tier 4 | Computer Science
Concurrency and Computation-Practice & Experience Pub Date : 2025-09-28 DOI: 10.1002/cpe.70307
Mingxiu Tuo, Puyu Qian, Siyu Jin, Haonan Zhang, Shunli Zhang
{"title":"A Curvature-Guided Fast and Robust Normal Estimation for Point Clouds","authors":"Mingxiu Tuo,&nbsp;Puyu Qian,&nbsp;Siyu Jin,&nbsp;Haonan Zhang,&nbsp;Shunli Zhang","doi":"10.1002/cpe.70307","DOIUrl":"https://doi.org/10.1002/cpe.70307","url":null,"abstract":"<div>\u0000 \u0000 <p>Accurate normal estimation is a fundamental task in 3D geometry processing, with wide-ranging applications in computer vision, robotics, and computer graphics. However, existing globally consistent normal estimation (GCNO) methods are often limited by reduced accuracy and high computational cost when applied to complex models. To address these challenges, we propose a fast and robust point cloud normal estimation method guided by curvature information. The proposed method integrates curvature as a geometric prior into a global winding-number-based optimization formulation, effectively enhancing normal orientation consistency while preserving sharp geometric features. Furthermore, to improve computational efficiency, we introduce a PCA-based visibility-aware initialization strategy. This strategy adaptively adjusts the initial normal directions by leveraging the local geometric distribution of points, thereby enhancing the consistency of initial normal orientations. Experimental results demonstrate that, compared to the state-of-the-art GCNO method, the proposed approach significantly improves both the accuracy and efficiency of normal estimation. This work provides an effective and precise solution for achieving globally consistent normal estimation in point clouds.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 25-26","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145181607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Two-Tier Model-Free Defense Approach Against False Data Injection Attacks in Two-Area Load Frequency Control Systems
IF 1.5 | Tier 4 | Computer Science
Concurrency and Computation-Practice & Experience Pub Date : 2025-09-28 DOI: 10.1002/cpe.70312
Weixun Li, Libo Yang, Huifeng Li, Feng Zheng, Yajun Wang
{"title":"A Two-Tier Model-Free Defense Approach Against False Data Injection Attacks in Two-Area Load Frequency Control Systems","authors":"Weixun Li,&nbsp;Libo Yang,&nbsp;Huifeng Li,&nbsp;Feng Zheng,&nbsp;Yajun Wang","doi":"10.1002/cpe.70312","DOIUrl":"https://doi.org/10.1002/cpe.70312","url":null,"abstract":"<div>\u0000 \u0000 <p>In open-network environments, two-area load frequency control (LFC) systems are exposed to potential threats from false data injection attacks (FDIAs). Conventional model-based detection and control methods exhibit poor adaptability to unstructured disturbances, making it difficult to ensure system stability and robustness. To address this issue, a two-tier defense mechanism is proposed, in which both the front-end detection and the back-end control components are designed in a model-free manner. The detection module adopts recursive estimation and model-free disturbance observation, while the control module employs a feedback optimal bounded error learning (FOBEL) strategy built on reinforcement learning. The detection module identifies attacks through state residual analysis and signal disturbance estimation, while the control module implements dynamic compensation using a controller that integrates fractional-order structures with reinforcement learning. Compared with traditional methods, this approach demonstrates significant improvements in disturbance rejection and control accuracy. Simulation studies under two representative attack scenarios validate the superiority and effectiveness of the proposed method in terms of frequency deviation suppression, power fluctuation mitigation, and estimation accuracy.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 25-26","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145181645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Smart Energy Management Based Task Allocation With Security Analysis Using Machine Learning Algorithms
IF 1.5 | Tier 4 | Computer Science
Concurrency and Computation-Practice & Experience Pub Date : 2025-09-28 DOI: 10.1002/cpe.70283
S. Suhasini, Hemalatha Thanganadar, Surendra Kumar Shukla, Achyut Shankar, Fabio Arena, Mohammed Amoon
{"title":"Smart Energy Management Based Task Allocation With Security Analysis Using Machine Learning Algorithms","authors":"S. Suhasini,&nbsp;Hemalatha Thanganadar,&nbsp;Surendra Kumar Shukla,&nbsp;Achyut Shankar,&nbsp;Fabio Arena,&nbsp;Mohammed Amoon","doi":"10.1002/cpe.70283","DOIUrl":"https://doi.org/10.1002/cpe.70283","url":null,"abstract":"<div>\u0000 \u0000 <p>An emerging component of smart cities is vehicle-to-grid (V2G) technology, which provides a novel approach to scheduling and energy storage. Security threats currently impede V2G's normal operations. V2G security faces two challenges. Current V2G security schemes only consider the static security approach, which is insufficient to handle the problem of advanced persistent attacks and high dynamics in V2G. However, the lack of a unified information modeling technique in present V2G causes problems with security and communication. The aim is to propose a novel technique in task allocation and security analysis based on smart energy management using a machine learning model in V2G architecture. Here, the smart energy management and task allocation are carried out using a hybrid fuel cell model with a deep vector Q-gradient model. Then, the security analysis of the V2G network is carried out using a multilayer blockchain smart contract-based federated LSTM model. Experimental analysis is carried out in terms of QoS, energy efficiency, network efficiency, data integrity, and training accuracy. Simulation results are conducted to prove the effectiveness of this proposed method.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 25-26","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145181644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Balanced Partitioning Method for Big Graphs via Coarsen-Partition-Refining Steps With Preserving Atomic Subgraphs
IF 1.5 | Tier 4 | Computer Science
Concurrency and Computation-Practice & Experience Pub Date : 2025-09-28 DOI: 10.1002/cpe.70304
Tengteng Cheng, Guosun Zeng, Shun Wang
{"title":"A Balanced Partitioning Method for Big Graphs via Coarsen-Partition-Refining Steps With Preserving Atomic Subgraphs","authors":"Tengteng Cheng,&nbsp;Guosun Zeng,&nbsp;Shun Wang","doi":"10.1002/cpe.70304","DOIUrl":"https://doi.org/10.1002/cpe.70304","url":null,"abstract":"<div>\u0000 \u0000 <p>Atomic subgraphs are inherent and functionally meaningful structures in real-world graphs, capturing cohesive units such as social communities, molecular functional groups, or neural circuits. Preserving these atomic subgraphs during graph partitioning is crucial for maintaining semantic integrity, improving algorithmic interpretability, and reducing communication overhead in parallel processing. However, traditional partitioning methods often overlook this structural prior, leading to fragmentation of such subgraphs and degradation in downstream analytical quality. In this work, we propose a novel balanced graph partitioning approach that explicitly preserves atomic subgraphs through a coarsen-partition-refine framework. In the coarsening phase, smaller subgraphs are merged into a larger one based on the maximum edge-to-vertex weight ratio between subgraphs. In the partitioning phase, a spectral <i>k</i>-way method divides the coarsened graph into <i>k</i> balanced blocks. In the refinement phase, boundary subgraphs are exchanged between target blocks via designed rules, reducing cut-edge weights and ultimately yielding higher-quality balanced partitions. We evaluate our method on real-world and synthetic datasets by generating graphs with diverse subgraph distributions. The experimental results demonstrate the feasibility and effectiveness of our method.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 25-26","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145181629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
C-LSTM Traffic Anomaly Detection Model Based on Attention Mechanism
IF 1.5 | Tier 4 | Computer Science
Concurrency and Computation-Practice & Experience Pub Date : 2025-09-28 DOI: 10.1002/cpe.70314
Qinlu He, Fan Zhang, Genqing Bian, Weiqi Zhang, Zhen Li
{"title":"C-LSTM Traffic Anomaly Detection Model Based on Attention Mechanism","authors":"Qinlu He,&nbsp;Fan Zhang,&nbsp;Genqing Bian,&nbsp;Weiqi Zhang,&nbsp;Zhen Li","doi":"10.1002/cpe.70314","DOIUrl":"https://doi.org/10.1002/cpe.70314","url":null,"abstract":"<div>\u0000 \u0000 <p>Amid the rapid expansion of digital infrastructure and the escalating sophistication of cyberattack strategies, network traffic anomaly detection has emerged as a critical cybersecurity mechanism for securing modern digital ecosystems. To overcome the shortcomings of traditional machine learning methods—specifically their limited accuracy in traffic pattern recognition—this paper proposes a novel C-LSTM anomaly detection model enhanced by an attention mechanism. Building on advancements in deep learning architectures, the proposed model integrates CNNs and Bi-LSTM networks to comprehensively capture spatial and temporal traffic features. The attention mechanism mitigates Bi-LSTM's inherent vulnerability to vanishing gradients during long-sequence data processing by adaptively reweighting feature significance, thereby optimizing detection performance. The model was rigorously validated using the NSL-KDD and UNSW-NB15 standard benchmark datasets and evaluated against contemporary state-of-the-art detection methods. Experimental results demonstrate superior performance, with classification accuracies of 97.3% on NSL-KDD and 95.8% on UNSW-NB15, alongside a 12% reduction in false positives compared to baseline models. Notably, the attention mechanism achieved incremental accuracy improvements of 1.62% (NSL-KDD) and 1.48% (UNSW-NB15) compared to the baseline CNN-LSTM model. These findings demonstrate the model's effectiveness in enhancing anomaly detection robustness, providing a practical framework for real-world cybersecurity implementations.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 25-26","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145181630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deadline-Aware Task Scheduling in Fog-Cloud Computing Using Multi-Agent Reinforcement Learning and Software-Defined Network Security
IF 1.5 | Tier 4 | Computer Science
Concurrency and Computation-Practice & Experience Pub Date : 2025-09-28 DOI: 10.1002/cpe.70258
Javid Ali Liakath, Lathaselvi Gandhimaruthian, Manikandan Nanajappan, Ramya Jegatheeshan
{"title":"Deadline-Aware Task Scheduling in Fog-Cloud Computing Using Multi-Agent Reinforcement Learning and Software-Defined Network Security","authors":"Javid Ali Liakath,&nbsp;Lathaselvi Gandhimaruthian,&nbsp;Manikandan Nanajappan,&nbsp;Ramya Jegatheeshan","doi":"10.1002/cpe.70258","DOIUrl":"https://doi.org/10.1002/cpe.70258","url":null,"abstract":"<div>\u0000 \u0000 <p>Task offloading and resource scheduling in fog-cloud Internet of Things environments face significant challenges, including high latency, constrained throughput, and unpredictable network conditions. These limitations hinder real-time responsiveness and efficient resource utilization, particularly in mission-critical Internet of Things applications. Moreover, ensuring robust data security under such dynamic and latency-sensitive scenarios is vital, as unsecured task execution and data exchange can lead to severe vulnerabilities. Therefore, optimizing both performance and security in low-latency conditions remains a crucial requirement for reliable and scalable fog-cloud computing infrastructures. Hence, this paper proposes a novel task scheduling framework such as Type−2 Fuzzy Multi-Agent Reinforcement Learning with Cauchy Mutation War Optimization algorithm within a secure Software-Defined Network architecture. The proposed model improves decision-making under uncertainty by analyzing the task scheduling process and optimizes resource allocation to strengthen network security against malicious attacks. The Cauchy mutation incorporates with war competition to explore the effectiveness of improving security and validates the control of dynamic functionality by estimating the routing process. The experimental results are analyzed by varied metrics and two benchmark datasets such as NASA Ames Research Center iPSC/860 and High Performance Computing Center North that demonstrate the superiority of the proposed model over state-of-the-art techniques. The results revealed that the latency is minimized for the proposed model by 43% and maximized throughput by 82.3% with better quality of service at 69%, and enhanced network security by 78.2%. Also, the proposed method diminishes response time by 37 s and optimizes resource utilization to conform to the robustness and efficiency in real-time Internet of Things applications. Thus, the results validate the capability of the proposed framework by improving offloading strategies with secure and scalable task scheduling.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 25-26","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145181641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Lightweight Traceable Data Circulation Encryption Scheme for Edge Computing
IF 1.5 | Tier 4 | Computer Science
Concurrency and Computation-Practice & Experience Pub Date : 2025-09-25 DOI: 10.1002/cpe.70294
Miao Li, Changgen Peng, Hai Liu, Hanlin Tang, Jin Niu, Chuanda Cai, Tao Zhang
{"title":"Lightweight Traceable Data Circulation Encryption Scheme for Edge Computing","authors":"Miao Li,&nbsp;Changgen Peng,&nbsp;Hai Liu,&nbsp;Hanlin Tang,&nbsp;Jin Niu,&nbsp;Chuanda Cai,&nbsp;Tao Zhang","doi":"10.1002/cpe.70294","DOIUrl":"https://doi.org/10.1002/cpe.70294","url":null,"abstract":"<div>\u0000 \u0000 <p>With the increasing demand for secure and efficient data circulation in edge computing environments, ensuring data privacy, integrity, and traceability has become a critical challenge. In such decentralized and untrusted settings, traditional encryption schemes often suffer from key management complexity, single points of failure, and high computational costs. To address these issues, this paper proposes a lightweight and traceable data circulation encryption (LTP-CLE) scheme tailored for edge computing scenarios. The scheme leverages certificateless encryption to eliminate the dependency on a trusted key generation center (KGC), and integrates a proxy re-encryption mechanism and a digital signature scheme using a unified key structure. This unified design not only enhances security and traceability but also reduces key management overhead. Furthermore, the scheme minimizes the use of expensive cryptographic operations, such as bilinear pairings and scalar multiplications, thereby improving computational efficiency. Security analysis in the random oracle model demonstrates the scheme's resistance to collision attacks, ciphertext indistinguishability, and signature forgery. Experimental evaluations show that the LTP-CLE scheme outperforms existing methods in both computational and communication efficiency, making it well-suited for practical deployment in data-centric edge computing applications such as IoT-based healthcare monitoring, industrial control, and smart city infrastructure.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 23-24","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145146543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0