{"title":"Wheeled Mobile Robot Dead Reckoning Based on Trans-GCN Model","authors":"Yongle Lu, Yi Luo, Junjie Ma, Sheng Su, Fangyuan Chen","doi":"10.1002/cpe.70271","DOIUrl":"https://doi.org/10.1002/cpe.70271","url":null,"abstract":"<div>\u0000 \u0000 <p>To address the challenge of low positioning accuracy caused by sensor uncertainties in mobile robot dead reckoning systems, this study proposes Trans-GCN, a novel position prediction model that integrates Graph Convolutional Networks (GCN) with a Transformer architecture. The model leverages data-driven AI principles and sensor-specific characteristics to uncover hidden dependencies between wheel speed and inertial data, thereby enhancing navigation accuracy. Initially, the sensor data is segmented using a sliding window approach and represented as multiple graph structures. GCN is employed to capture spatial dependencies by learning the complex topological structures inherent in the data. Subsequently, positional encoding of graph feature signals is embedded into the Transformer, enabling more efficient extraction of global node features. An adaptive learning rate is introduced to enhance flexibility and efficiency in information propagation. The integrated model performs multi-sensor data modeling and feature fusion to predict the two-dimensional displacement increments of the mobile robot at each sampling interval, ultimately reconstructing the navigation trajectory. The model is trained under GNSS availability and used to predict robot positions during GNSS signal degradation or outages. Six sets of experiments were conducted on the publicly available NCLT dataset and a self-collected dataset. Results demonstrate that the proposed model achieves a trajectory fitting accuracy of 89.2%–97.7% in scenarios with partial or complete GNSS failures. The proposed model also improves training and inference speeds by 19.6% and 26.0%, respectively, compared to state-of-the-art methods, validating its superior performance in dead reckoning.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 23-24","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144935327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DMTL: An Adaptive Integrity Verification Scheme for Dynamic Cloud Datasets","authors":"Xinfeng He, Qing Zhou","doi":"10.1002/cpe.70264","DOIUrl":"https://doi.org/10.1002/cpe.70264","url":null,"abstract":"<div>\u0000 \u0000 <p>With the widespread application of cloud computing, large-scale datasets in fields such as deep learning are increasingly stored in the cloud. The frequent incremental updates of these datasets necessitate advanced techniques for data integrity verification. In practice, existing Merkle tree-based schemes face challenges, including high computational costs, low real-time performance, and inefficient handling of incremental updates. To address these issues, this paper proposes a novel data structure named dynamic Merkle tree ladder (DMTL), which enhances Merkle trees by establishing ladder rungs for each dataset and incorporating a flexible dataset partition strategy. Based on the DMTL, we design an integrity verification scheme that supports adaptive incremental updates of cloud datasets. Experimental results demonstrate that our scheme outperforms mainstream schemes in dynamic operation efficiency, especially under workloads with intensive data insertions. Security analysis further shows that our scheme can effectively defend against malicious behaviors.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 23-24","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144935230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FPGA-Based Self-Calibrating Attention Algorithm for Low-Light Enhancement and Implementation","authors":"Jin Han, Haoyu Jiang","doi":"10.1002/cpe.70255","DOIUrl":"https://doi.org/10.1002/cpe.70255","url":null,"abstract":"<div>\u0000 \u0000 <p>Image enhancement methods in extreme low-light scenarios face noise amplification, insufficient brightness restoration, and high model complexity. Moreover, existing methods often fail to balance enhancement performance and efficiency on resource-constrained edge devices. To address these issues, we propose an ASCLE (Attention-based Self-Calibrating Low-light Enhancement) algorithm with FPGA-based hardware-software co-optimization. Building upon a self-calibrating illumination framework, ASCLE employs an attention mechanism and dual-path denoising to suppress noise in dark regions while sharpening edge details. An illumination correction module, with a brightness-aware mask and color fidelity loss constraint, is introduced to address insufficient brightness and color distortion. To reduce computational and storage demands, we employ optimization strategies such as BN layer fusion and approximate activation functions to improve the hardware adaptability of the core modules. For FPGA deployment, tiled computation and ping-pong double buffering optimize data flow, while parallel pipelining boosts hardware resource utilization and computational efficiency. Experimental results demonstrate that the ASCLE algorithm achieves PSNR and SSIM scores of 19.88 dB and 0.784 on the LOL dataset, outperforming baseline methods. On the FPGA, the inference time for a single frame is 10.64 ms, far shorter than that of an Intel i7-12800HX CPU (1.414 s) and an ARM Cortex-A9 processor (9.088 s), while system power consumption is reduced to 2.07 W.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 23-24","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144935323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Comprehensive Soft Error Resiliency Analysis of Distributed Deep Neural Networks","authors":"Setareh Ahsaei, Mohsen Raji, Maryam Asadi Golmankhaneh","doi":"10.1002/cpe.70259","DOIUrl":"https://doi.org/10.1002/cpe.70259","url":null,"abstract":"<div>\u0000 \u0000 <p>Distributed deep neural networks (DDNNs) have emerged as a promising solution to enhance the efficiency of deep learning tasks compared to traditional centralized cloud-based Deep Neural Networks (DNNs) by distributing the computational workload across cloud, fog, and edge nodes. Although model parameter changes caused by the well-known soft error effects have shown considerable degradation in the performance and reliability of DNNs, the resiliency of DDNNs against these effects is still understudied. This paper conducts a comprehensive analysis of the error resiliency of DDNNs, focusing on the impact of soft errors at various network layers. Using Docker containers to emulate real-world scenarios, the study evaluates SqueezeNet and MobileNetV2 models trained on CIFAR-100 and CIFAR-10 datasets under varying bit error rates (BER). The obtained results demonstrate that up to a certain BER, errors introduce uncertainty in the edge node of DDNNs, while beyond this BER threshold the edge node becomes significantly compromised by faults, leading to a high likelihood of false decisions. Increasing uncertainty causes the decision-making process to shift to the fog and cloud nodes, leading to a considerable increase in response time. The insights from this study not only deepen our understanding of fault tolerance in DDNNs but also lay the groundwork for creating more resilient and efficient distributed learning architectures. By utilizing Docker-based emulation, our approach provides a flexible and reproducible experimental framework that can be adapted for further studies in this area. Additionally, the findings highlight the need for adaptive strategies that can intelligently manage errors and computational resources across cloud, fog, and edge layers. These results are particularly relevant for time-sensitive applications like autonomous vehicles, industrial IoT systems, and smart city infrastructures, where the reliability and speed of DDNNs are critical.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 23-24","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144935324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Cost-Aware and Latency-Benefit Evaluation-Based Task Scheduling Optimization Strategy in Apache Spark","authors":"Qingsong Xu, Congyang Wang, Junyang Yu, Haifeng Fei, Xiaojin Ren","doi":"10.1002/cpe.70244","DOIUrl":"https://doi.org/10.1002/cpe.70244","url":null,"abstract":"<div>\u0000 \u0000 <p>In the Spark distributed framework, data communication problems (network transfer overhead, network I/O bottlenecks) caused by transferring data across nodes or racks are a common source of performance degradation when task execution locations do not match data locations. Additionally, in heterogeneous environments, Spark's task scheduling strategy cannot fully utilize the advantages of high-performance nodes. To address the above issues, firstly, this paper proposes a cost-aware task selection strategy. The strategy models the cost of tasks by considering the impact of data locality and heterogeneous factors on the efficiency of job execution. For scenarios where the data locality level must be lowered to schedule tasks, the task scheduling problem is transformed into a minimum-weight bipartite graph matching problem, and a greedy matching algorithm is used to find the minimum-cost assignment. For scenarios that maintain the current data locality level, the strategy selects for execution the task whose processing cost would change most if its data locality changed. Secondly, Spark's delay scheduling algorithm can leave cluster resources in an unnecessary waiting state, reducing cluster resource utilization. To address this, we propose an adaptive adjustment strategy for the delay waiting time based on benefit evaluation. This strategy improves cluster resource utilization by evaluating the benefit of the scheduler's delay waiting and dynamically adjusting the delay time according to the result of the evaluation. Finally, we implement the proposed strategy in Spark 3.0.0 and evaluate its performance using representative benchmarks. The experimental results show that, compared with other task scheduling algorithms, the strategy proposed in this paper can effectively improve job execution efficiency, reducing job execution time by 15.8%–31.9% while also reducing network traffic and improving CPU utilization.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 23-24","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144935326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Online Streaming Feature Selection Based on Hierarchical Neighborhood Consistency","authors":"Kuangfeng Gong, Guohe Li, Lingyun Guo, Yaojin Lin","doi":"10.1002/cpe.70262","DOIUrl":"https://doi.org/10.1002/cpe.70262","url":null,"abstract":"<div>\u0000 \u0000 <p>In a data-driven world, datasets frequently exhibit multiple complexities, such as high dimensionality, dynamic features, and long-tail distributions. From the perspective of label space, samples may also possess hierarchical relationships. These characteristics not only increase the complexity of data processing and analysis but also pose challenges in developing efficient and accurate predictive models. To tackle these issues, an Online Streaming Feature Selection (OSFS) method utilizing hierarchical neighborhood consistency is proposed in this paper. This method can dynamically select significant features from the unknown streaming feature space of long-tailed distribution datasets. Specifically, the number of neighbors for each sample is determined based on the number of instances within its class. Positive and negative samples within the neighborhood are identified using a sibling strategy. Based on this novel hierarchical neighborhood relationship, we define hierarchical neighborhood consistency at three levels: individual samples, layers within the hierarchy, and the entire tree structure. Furthermore, we establish three criteria for evaluating dynamic features: online correlation selection, online importance analysis, and online redundancy update. A framework for selecting online streaming features is also designed. Extensive experiments demonstrate that the proposed algorithm enhances the prediction accuracy of tail classes across multiple long-tailed distribution datasets, outperforming comparison algorithms.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 23-24","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144935330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Promoting Unsupervised Data-To-Text Generation Using Retraining and Unified Linearization","authors":"Xiaobo Wang, Xuan Zhang, Jing Cheng, Kunpeng Du, Chen Gao, Zhuxian Ma, Bo Liu","doi":"10.1002/cpe.70254","DOIUrl":"https://doi.org/10.1002/cpe.70254","url":null,"abstract":"<div>\u0000 \u0000 <p>In recent years, many studies have focused on unsupervised data-to-text generation methods. However, existing unsupervised methods still require training on large amounts of unlabeled samples, leading to significant data collection overhead. We propose a low-resource unsupervised method called CycleRUR. This method first converts various forms of structured data (such as tables, knowledge graph (KG) triples, and meaning representations (MR)) into unified KG triples to improve the model's ability to adapt to different structured data. Additionally, CycleRUR incorporates a retraining module and a contrastive learning module within a cycle training framework, enabling the model to learn and converge from a small amount of unpaired KG triples and reference text corpus, thereby improving the model's accuracy and convergence speed. We evaluated the model's performance on the WebNLG and E2E datasets. Using only 10% of unpaired training data, our method matched the performance of fully supervised fine-tuning. On the WebNLG dataset, it achieved an 18.41% improvement in METEOR compared to supervised models. On the E2E dataset, it achieved improvements of 1.37% in METEOR and 4.97% in BLEU. Experiments also demonstrated that under unified linearization, CycleRUR exhibits good generalization capabilities.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 23-24","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144934943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Learning–Based Sentiment and Topic Analysis of Turkish Football Fans on X Platform","authors":"Mehmet Kayakuş, Dilşad Erdoğan, Fatma Yiğit Açikgöz","doi":"10.1002/cpe.70275","DOIUrl":"https://doi.org/10.1002/cpe.70275","url":null,"abstract":"<div>\u0000 \u0000 <p>This study examines social media fan comments using big data analytics to extract meaningful insights for football clubs. Comments from Turkey's top football clubs—Beşiktaş, Fenerbahçe, Galatasaray, and Trabzonspor—posted on the X platform between May 26, 2024, and November 11, 2024, were analyzed with sentiment analysis and deep learning techniques. The dataset of 20,000 comments was preprocessed and classified based on sentiment using a deep learning model implemented in Python via the X API. Additionally, text mining and topic modeling techniques identified frequently used words and key themes in the comments. The model's performance was evaluated using sensitivity, specificity, accuracy, and <i>F</i>1 score metrics. Sentiment analysis results demonstrated high performance, with Beşiktaş achieving precision, recall, and <i>F</i>1 scores of 0.957, 0.941, and 0.949, respectively, while Fenerbahçe scored 0.968, 0.941, and 0.954. Trabzonspor had the highest proportion of negative sentiment at 25.6%, reflecting the most critical fanbase. These findings highlight the effectiveness of deep learning and sentiment analysis in assessing fan engagement on social media, offering valuable insights for football clubs to better understand supporter sentiment and refine their strategies.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 23-24","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144935328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Q-Learning-Based Multi-Phase Grey Wolf Optimization Algorithm for Distributed No-Wait Job Shop Problem","authors":"Jie Yin, Li Liu, Shuning Zhang, Guanlong Deng","doi":"10.1002/cpe.70272","DOIUrl":"https://doi.org/10.1002/cpe.70272","url":null,"abstract":"<div>\u0000 \u0000 <p>As an extension of the no-wait job shop scheduling problem, the distributed no-wait job shop scheduling problem (DNWJSP), which combines distributed scheduling with the no-wait constraint, exists commonly in real-world manufacturing. In this study, we formulate a mixed-integer linear programming (MILP) model for the problem and propose a Q-learning-based multi-phase grey wolf optimization (QMGWO) algorithm. First, the algorithm consists of two phases: the hunting phase and the local search phase. In the hunting phase, the information from the three best solutions in the population is used to determine the search mode and reallocate some jobs for the current solution. In the local search phase, a local search is designed and performed on the solutions obtained from the hunting phase. Then, to prevent the algorithm from falling into a local optimum, we design six local search strategies based on the key factory. Furthermore, to enhance the flexibility and efficiency of the algorithm, we propose a Q-learning method to dynamically select an appropriate local search strategy. Finally, the experimental results and statistical analysis based on benchmark instances demonstrate that the QMGWO algorithm has a significant advantage over several other high-performing algorithms. In addition, we validate the optimal solutions for all small instances by applying the CPLEX solver to the MILP model.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 23-24","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144935332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DDoSNAS: Efficient Neural Architecture Search With Weight Sharing for DDoS Attack Detection","authors":"Emmanuel Osei Owusu, Danlard Iddrisu, Griffith Selorm Klogo, Kwame Osei Boateng, Emmanuel Kofi Akowuah","doi":"10.1002/cpe.70281","DOIUrl":"https://doi.org/10.1002/cpe.70281","url":null,"abstract":"<div>\u0000 \u0000 <p>DDoS attacks continue to be one of the most prominent cybersecurity threats of this era, as they overload network systems and paralyze vital services. Even though Deep Neural Networks have showcased strong detection capabilities, their detection and response efficiency is hindered by high computational requirements, time-consuming processes, and the limited resources prevalent in IoT, edge devices, and other time-sensitive environments. This study introduces DDoSNAS, a Transformer-controlled, multi-objective Neural Architecture Search (NAS) framework explicitly designed for DDoS attack detection. By integrating a hierarchical macro–micro Transformer controller with one-shot weight sharing and Pareto-based evolutionary search, DDoSNAS optimizes accuracy, latency, and FLOPs, producing high-performing and lightweight architectures. The search space is tailored for 1D network flow analysis and guided by an ensemble feature selection method, ensuring domain-specific relevance. On the CICIDS2017 dataset, DDoSNAS achieves 99.98% accuracy, 99.97% precision, and 100% recall with only 94k FLOPs and 0.8 ms latency, outperforming state-of-the-art intrusion detection models in both predictive performance and efficiency. This work represents the first application of a Transformer-based NAS controller to cybersecurity, demonstrating that cutting-edge neural architecture search can yield models capable of real-time, on-device DDoS defense without sacrificing accuracy. The results establish DDoSNAS as a new benchmark for efficient, high-accuracy cyber threat detection and a blueprint for applying advanced NAS techniques to other security-critical domains.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 23-24","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144935285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}