Concurrency and Computation-Practice & Experience: Latest Articles

Multi-Expert Dynamic Gating and Feature Decoupling Algorithm for Long-Tail Image Classification
IF 1.5 · CAS Tier 4 · Computer Science
Concurrency and Computation-Practice & Experience · Pub Date: 2025-09-10 · DOI: 10.1002/cpe.70287
Kaiyang Liao, Junwen Pang, Yuanlin Zheng, Keer Wang, Guangfeng Lin, Yunfei Tan
The long-tail distribution is characterized by a large number of samples in a few categories (head classes) and a scarcity of samples in most categories (tail classes). This inherent class imbalance significantly degrades the performance of conventional classification models, particularly on tail classes. To tackle this challenge, we propose a Multi-Expert Dynamic Gating and Feature Decoupling Classification Algorithm based on Uniform Enhanced Sampling. The proposed method integrates multi-expert learning with data augmentation and enhances tail-class performance by jointly optimizing the loss function and the expert assignment network. Specifically, a uniform enhanced sampling strategy is introduced to augment tail-class samples and increase their sampling frequency through resampling. During the feature learning stage, the shared layers of a convolutional network extract general features, while multiple expert models are trained independently. A feature decoupling technique is employed to separate generic and class-specific features. In addition, a binary gating mechanism is designed to dynamically assign experts while preventing over-reliance on specific categories. Extensive experiments on three benchmark long-tailed classification datasets (CIFAR10-LT, CIFAR100-LT, and ImageNet-LT) demonstrate that our method consistently outperforms existing state-of-the-art approaches. Ablation studies further confirm the effectiveness of the uniform enhanced sampling strategy and the joint optimization of multi-expert learning, showing that our algorithm successfully balances the model's attention across head and tail classes, thereby improving overall classification performance.
Cited: 0
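The abstract does not spell out the uniform enhanced sampling formula. As a minimal sketch of the resampling idea it builds on (the helper `resample_weights` and the interpolation parameter `alpha` are assumptions, not the paper's method), per-sample draw probabilities can interpolate between instance-balanced and class-balanced sampling:

```python
import numpy as np

def resample_weights(labels, alpha=1.0):
    """Per-sample draw probabilities interpolating between instance-balanced
    (alpha=0) and class-balanced (alpha=1) sampling. With alpha=1 every class
    receives equal total probability mass, boosting tail-class frequency."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    n, c = len(labels), len(classes)
    freq = counts / n                                  # instance frequency per class
    target = alpha * (1.0 / c) + (1 - alpha) * freq    # desired per-class mass
    per_class = {k: t / cnt for k, t, cnt in zip(classes, target, counts)}
    w = np.array([per_class[y] for y in labels])
    return w / w.sum()

# Toy long-tailed label set: head class 0 has 6 samples, tail class 1 has 2.
labels = [0] * 6 + [1] * 2
w = resample_weights(labels, alpha=1.0)
```

With `alpha=1.0`, each tail-class sample is drawn three times as often as a head-class sample, so both classes contribute equally to each epoch.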
A Novel Underwater Blurred Target Detection Algorithm Based on RT-DETR
IF 1.5 · CAS Tier 4 · Computer Science
Concurrency and Computation-Practice & Experience · Pub Date: 2025-09-04 · DOI: 10.1002/cpe.70267
Xiao Chen, Xiaoqi Ge, Qi Yang, Haiyan Wang
Underwater target detection is crucial for monitoring marine resources and assessing their ecological health. However, the underwater environment is complex and variable: light attenuation, scattering, and turbidity often blur optical images and obscure target details, seriously affecting detection accuracy. Although deep learning-based methods have shown promise in target detection, balancing real-time performance with high-precision detection of blurred targets remains challenging. To address this, a novel algorithm for underwater blurred target detection is presented, designed to tackle the low detection accuracy caused by indistinct optical image details in complex underwater environments. The proposed algorithm builds on the Real-Time Detection Transformer (RT-DETR) architecture. First, a lightweight feature extraction module, termed Faster-Rep (FARP), is developed to reduce the model's parameter count while enhancing the backbone network's ability to extract salient features from blurred targets. Second, an efficient additive attention module, called AIFI-Efficient Additive Attention (AIFI-EAA), is used in the encoding phase, enhancing the model's global modeling capability while significantly reducing computational redundancy. Finally, the Dynamic Cross-Scale Feature Fusion (DyCCFM) module enables dynamic fusion of feature information, preserving critical characteristics of blurred targets and preventing information loss. The proposed algorithm demonstrates excellent detection performance on the URPC2020 dataset, improving mean Average Precision (mAP) by 1.5% while reducing the number of parameters by 14.5%, and significantly improving the ability to detect ambiguous targets in intricate underwater environments.
Cited: 0
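AIFI-EAA's exact formulation is not given in the abstract. As a hedged illustration of the linear-complexity additive-attention idea such modules build on (the function `additive_attention` and its signature are assumptions for exposition), each token gets one scalar score, the scores form a distribution over tokens, and their weighted sum is a single global context vector:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def additive_attention(x, w_q):
    """Additive-attention sketch: one scalar score per token, a softmax over
    tokens, and a single pooled context vector broadcast back to every token.
    Cost is O(n*d) rather than the O(n^2*d) of pairwise self-attention."""
    n, d = x.shape
    scores = x @ w_q / np.sqrt(d)       # (n,) one scalar per token
    alpha = softmax(scores, axis=0)     # distribution over tokens
    context = alpha @ x                 # (d,) global context vector
    return x + context                  # broadcast context to all tokens

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))         # 5 tokens, 8 channels
out = additive_attention(x, rng.standard_normal(8))
```

The same context vector is added to every token, which is what makes the operation linear in sequence length.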
Enhancing Diagnostic Quality in Panoramic Radiography: A Comparative Evaluation of GAN Models for Image Restoration
IF 1.5 · CAS Tier 4 · Computer Science
Concurrency and Computation-Practice & Experience · Pub Date: 2025-09-04 · DOI: 10.1002/cpe.70289
Burak Kolukisa, Fatma Çelebi, Nihal Ersu, Kemal Selçuk Yücel, Emin Murat Canger
Panoramic imaging is a widely utilized technique for capturing a comprehensive view of the maxillary and mandibular dental arches and supporting facial structures. This study evaluates the potential of three Generative Adversarial Network (GAN) models (Pix2Pix, CycleGAN, and RegGAN) for enhancing diagnostic quality by addressing combinations of common image distortions. A panoramic radiograph dataset was processed to simulate four types of distortion: (i) blurriness, (ii) noise, (iii) combined blurriness and noise, and (iv) anterior-region-specific blurriness. The three GAN models were trained and analyzed using quantitative metrics such as the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). In addition, two oral and maxillofacial radiologists conducted qualitative reviews to assess the diagnostic reliability of the generated images. Pix2Pix consistently outperformed CycleGAN and RegGAN, achieving the highest PSNR and SSIM values across all types of distortion. Expert evaluations also favored Pix2Pix, highlighting its ability to restore image accuracy and enhance clinical utility. CycleGAN showed moderate improvements on noise-affected images but struggled with combined distortions, while RegGAN yielded negligible enhancements. These findings underscore Pix2Pix's potential for clinical application in refining radiographic imaging. Future research should focus on combining GAN techniques and utilizing larger datasets to develop universally robust image enhancement models.
Cited: 0
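The PSNR metric used for the quantitative comparison has a standard closed form. A minimal reference implementation (variable names are illustrative) is:

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB for images with values in
    [0, data_range]: 10 * log10(data_range^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")             # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

a = np.full((4, 4), 100.0)
b = a + 10.0                            # uniform error of 10 grey levels -> MSE = 100
val = psnr(a, b)                        # 10 * log10(255^2 / 100) ≈ 28.13 dB
```

Higher PSNR means the restored image is closer to the reference; SSIM complements it by comparing local structure rather than pixel-wise error.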
Correction to Parallel Symmetric Appearance-Motion Framework with Diffusion and Refinement Blocks for Video Anomaly Detection System
IF 1.5 · CAS Tier 4 · Computer Science
Concurrency and Computation-Practice & Experience · Pub Date: 2025-09-04 · DOI: 10.1002/cpe.70242
Prasad, K.N.S.S.V. and Haritha, D. (2025), Parallel Symmetric Appearance-Motion Framework with Diffusion and Refinement Blocks for Video Anomaly Detection System. Concurrency Computat Pract Exper, 37: e70183. https://doi.org/10.1002/cpe.70183

In the originally published version of this manuscript, the email addresses of the corresponding author and co-author were incorrect. The correct information is as follows:

Corresponding author: Kavitapu Naga Siva Shankara Vara Prasad (Email: [email protected])

Co-author: Dasari Haritha (Email: [email protected])
Cited: 0
Transmednet: Transformer Medical Triad Neurology Networks
IF 1.5 · CAS Tier 4 · Computer Science
Concurrency and Computation-Practice & Experience · Pub Date: 2025-09-04 · DOI: 10.1002/cpe.70285
Kuo Zhang, Zhongyi Hu, Shuzhi Wu, Lei Xiao, Hui Huang
In recent years, Transformers have gained widespread application in computer vision (CV). However, when applied to medical imaging, common slicing strategies often miss key information along the third dimension. To address this, we propose a novel diagnosis model, Transformer Medical Triad Neurology Networks (TransmedNet), designed to better capture 3D brain image features. The model's innovations are threefold. First, it employs a hierarchical mechanism: the bottom layer partitions the brain to enhance understanding within each region, the intermediate layer uses a moving-window mechanism to extract correlations between adjacent windows, and the top layer applies global multi-head self-attention to model overall correlations. Second, the model combines Transformer and convolutional neural network architectures to balance global and local features, enhancing overall performance. Third, it fully accounts for the three-dimensional structure of brain images by incorporating a three-dimensional multi-head self-attention mechanism, giving equal weight to each dimension. Our experiments yielded promising results: classification accuracy reached 99.21% for distinguishing Alzheimer's disease (AD) from cognitively normal (CN) subjects, and 97.46% for distinguishing autism spectrum disorder (ASD) from CN. The results demonstrate that TransmedNet enhances classification performance for brain imaging diseases.
Cited: 0
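The bottom layer's brain partitioning is described only at a high level. A generic sketch of the 3D window-partition step such hierarchical models rely on (the helper `window_partition_3d` is an assumption, not the paper's code) is:

```python
import numpy as np

def window_partition_3d(vol, ws):
    """Split a (D, H, W, C) volume into non-overlapping ws^3 windows,
    returning (num_windows, ws, ws, ws, C). Local self-attention is then
    run independently inside each window."""
    d, h, w, c = vol.shape
    assert d % ws == 0 and h % ws == 0 and w % ws == 0
    v = vol.reshape(d // ws, ws, h // ws, ws, w // ws, ws, c)
    v = v.transpose(0, 2, 4, 1, 3, 5, 6)   # group the three window axes together
    return v.reshape(-1, ws, ws, ws, c)

vol = np.arange(4 * 4 * 4 * 1).reshape(4, 4, 4, 1)
wins = window_partition_3d(vol, 2)          # 8 windows of shape (2, 2, 2, 1)
```

The rearrangement is lossless, so windowed attention sees every voxel exactly once per layer.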
A Dual-Stage Frequency-Driven Network for Texture and Structure-Aware Underwater Image Enhancement
IF 1.5 · CAS Tier 4 · Computer Science
Concurrency and Computation-Practice & Experience · Pub Date: 2025-09-04 · DOI: 10.1002/cpe.70257
Jinzhang Li, Jue Wang, Bo Li
Underwater images often suffer from color distortion, texture degradation, and structural blurring due to wavelength-dependent absorption and scattering. To address these issues, we propose FD-DMTNet, a novel two-stage enhancement framework that integrates frequency-domain priors with fine-grained structural refinement. In the first stage, a frequency-aware U-Net is built using Frequency-Domain Correction Blocks (FDCB) and Multi-Scale Feature Stream Blocks (MSFS), while a Frequency-Domain Transformer (FETB) with multi-head self-attention enables global context learning. In the second stage, a Fine-Grained Enhancement Module (FGEN) comprising three branches is introduced: a Texture Enhancement Branch (TEB) for multiscale texture recovery, a Color Correction Branch (CCB) for frequency-guided color adjustment, and a Structure Refinement Branch (SRB) that uses edge-aware attention and the FETB to restore structural details. Extensive experiments on multiple benchmark datasets demonstrate that FD-DMTNet significantly outperforms existing methods in color accuracy, texture clarity, and structural consistency. Compared with state-of-the-art approaches, it achieves average improvements of 3.66%, 2.04%, 2.48%, and 1.83% in PSNR, SSIM, UIQM, and NIQE, respectively.
Cited: 0
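As a hedged illustration of the frequency-domain priors the first stage relies on (a generic ideal-mask split, not FD-DMTNet's FDCB), an image can be separated into low- and high-frequency components in the FFT domain:

```python
import numpy as np

def frequency_split(img, radius):
    """Split a 2D image into low- and high-frequency components using an
    ideal circular mask in the shifted FFT domain. Because the two masks
    partition the spectrum, low + high reconstructs the input."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h / 2, xx - w / 2)
    mask = dist <= radius                   # low-frequency disc around DC
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(f * (~mask))).real
    return low, high

rng = np.random.default_rng(1)
img = rng.standard_normal((16, 16))
low, high = frequency_split(img, radius=4)
```

Color casts live mostly in the low band and texture detail in the high band, which is why frequency-aware branches can correct them separately.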
SC-YOLO: Robust Multi-Scale Small Object Detection for Intelligent Transportation
IF 1.5 · CAS Tier 4 · Computer Science
Concurrency and Computation-Practice & Experience · Pub Date: 2025-09-03 · DOI: 10.1002/cpe.70268
Keyou Guo, Jiangnan Wang, Haibing Jiang, Pei Zhang, Huangcheng Qin
Existing multi-scale vehicle detection methods often falter in crowded traffic scenarios, particularly when locating small vehicles, resolving occlusions, and adapting to scale variations, which leads to a marked drop in overall accuracy. To overcome these challenges, we introduce SC-YOLO, a lightweight detection framework built upon YOLOv10n and optimized for greater efficiency. First, we replace the standard downsampling between the backbone and neck with a Space-to-Depth Convolution (SPDConv) module, preserving fine-grained details in the lower levels of the feature pyramid so that cues for small targets remain intact. Next, we propose a Context-Guided Rectangular Feature Pyramid Network (CGRFPN) equipped with a self-calibration mechanism; by enabling cross-scale interactions and adaptive feature-map calibration, it significantly enhances multi-scale fusion. Finally, guided by extensive empirical evaluation, we adopt the Wise-IoUv3 dynamic loss function, whose adaptive gradient allocation refines bounding-box regression. On the Pascal VOC, KITTI, and Cars datasets, SC-YOLO attains mAP@50 scores of 79.0%, 87.3%, and 74.6%, respectively, improving upon the YOLOv10n baseline by 2.5%, 2.1%, and 2.3%. Crucially, it maintains high accuracy under challenging traffic conditions, especially for small-vehicle detection and occlusion resolution, while requiring fewer computations than other models with comparable parameter counts. These combined advantages underscore SC-YOLO's resource-efficient design and its practicality for intelligent transportation and autonomous-driving applications.
Cited: 0
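The space-to-depth rearrangement underlying SPDConv is a standard, lossless operation; a minimal sketch (function name `space_to_depth` is illustrative, not SC-YOLO's code) shows why no spatial information is discarded:

```python
import numpy as np

def space_to_depth(x, block=2):
    """Rearrange (H, W, C) -> (H/b, W/b, C*b*b): each b x b spatial patch is
    folded into the channel axis instead of being discarded by strided
    downsampling, preserving cues for small objects."""
    h, w, c = x.shape
    assert h % block == 0 and w % block == 0
    x = x.reshape(h // block, block, w // block, block, c)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(h // block, w // block, c * block * block)

x = np.arange(4 * 4 * 3).reshape(4, 4, 3)
y = space_to_depth(x, 2)                # (2, 2, 12): lossless rearrangement
```

A stride-2 convolution would keep only a quarter of the spatial samples; here all of them survive into the channel dimension, and a subsequent non-strided convolution can mix them.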
ERL-RTDETR: A Lightweight Transformer-Based Framework for High-Accuracy Apple Disease Detection in Precision Agriculture
IF 1.5 · CAS Tier 4 · Computer Science
Concurrency and Computation-Practice & Experience · Pub Date: 2025-09-03 · DOI: 10.1002/cpe.70276
Song Wang, Mingyu Liu, Shaocong Dong, Shiyu Chen
Apples are deeply favored by consumers for their crisp, sweet taste and play a significant role in agricultural production. However, apples often suffer infections by various pathogens during growth, severely impacting fruit quality and yield and causing economic losses. Timely detection and accurate intervention against diseases during apple growth are therefore crucial for improving harvest management efficiency and economic benefits. Nonetheless, current research focuses primarily on identifying single diseases and lacks multi-disease detection capabilities, which limits the timeliness and accuracy of disease management in practice. Additionally, apple disease detection models need to balance high accuracy, rapid response, and lightweight design to reduce hardware costs and deployment barriers. To address these challenges, this paper proposes a lightweight detection model named ERL-RTDETR, based on RT-DETR. First, a dataset of 3096 apple-leaf disease images was constructed, covering different camera angles, time spans, and lighting conditions in complex environments. Next, an Efficient Multi-scale Attention (EMA) mechanism was integrated with the backbone network to form a new feature extraction module (BasicBlock_EMA) that enhances the capture of fine-grained features. Meanwhile, in the neck network, the traditional convolutional module was replaced with a Lightweight Adaptive Extraction (LAE) module, and a Generalized Efficient Lightweight Attention Network (GELAN) was introduced to optimize the convolutional blocks, improving training efficiency and the detection of subtle targets. The resulting ERL-RTDETR model maintains detection accuracy while reducing model complexity. Experimental results demonstrate balanced performance in apple disease detection tasks: a detection precision of 94.5% on the test set (a 3.2% improvement over RT-DETR), gains of 2.7% and 2.2% in mAP50 and mAP50:95, respectively, and a reduction of 5.9 GFLOPs (10.3% lower than RT-DETR). In summary, the proposed ERL-RTDETR model provides an efficient, lightweight, and accurate method for apple disease detection and serves as a useful reference for research and practical applications in related fields.
Cited: 0
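The abstract does not detail EMA's internals. As a loose, generic sketch of the channel-attention idea that such modules refine (the helper `channel_attention` is an assumption, not BasicBlock_EMA), each channel is pooled to a descriptor, gated, and rescaled:

```python
import numpy as np

def channel_attention(x):
    """Generic channel-attention sketch: global-average-pool each channel to
    a scalar descriptor, map descriptors to gates in (0, 1) with a sigmoid,
    and rescale the feature map channel-wise. Multi-scale variants like EMA
    additionally group channels and mix spatial statistics."""
    pooled = x.mean(axis=(0, 1))             # (C,) one descriptor per channel
    weights = 1.0 / (1.0 + np.exp(-pooled))  # sigmoid gate per channel
    return x * weights, weights

rng = np.random.default_rng(2)
x = rng.standard_normal((8, 8, 4))           # H x W x C feature map
y, w = channel_attention(x)
```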
EMG-Based Dual-Branch Deep Learning Framework With Transfer Learning for Lower Limb Motion Classification and Joint Angle Estimation
IF 1.5 · CAS Tier 4 · Computer Science
Concurrency and Computation-Practice & Experience · Pub Date: 2025-09-03 · DOI: 10.1002/cpe.70263
Yang Yang, Qing Tao, Shiji Li, Shijie Fan
Wearable surface electromyography (sEMG) sensors capture neuromuscular signals for analyzing lower limb movements, controlling exoskeleton robots, and rehabilitation applications. However, simultaneous motion classification and continuous joint angle prediction remain challenging, particularly with limited patient data. This study introduces DBWCT-EMGNet, a novel deep learning framework with a dual-branch architecture augmented with transfer learning. The main structure integrates an improved WaveNet fusion layer for multi-scale feature extraction and a convolutional block attention module (CBAM) for enhanced feature focus. The classification branch integrates a Transformer encoder for robust motion recognition, while the regression branch employs a temporal convolutional attention network for precise joint angle prediction. Transfer learning adapts models trained on healthy subjects to patient data, mitigating data scarcity. Compared to models such as CNN-BiLSTM and CNN-TCN, DBWCT-EMGNet achieved superior intra-subject performance (classification accuracy: 99.86% ± 0.11%; joint angle R²: 0.98 ± 0.04; RMSE: 1.40° ± 1.64°). Transfer learning improved inter-subject results by 21.7% in accuracy, 24.7% in R², and 67.6% in RMSE. By enabling accurate motion analysis and generalization across subjects, DBWCT-EMGNet shows strong potential for developing advanced sensor-based assistive and rehabilitative technologies.
Cited: 0
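The reported regression metrics, R² and RMSE, have standard definitions; a minimal reference implementation (variable names are illustrative) is:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error, in the units of the target (degrees here)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot, where 1.0 is a
    perfect fit and 0.0 matches a constant mean predictor."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

angles = np.array([10.0, 20.0, 30.0, 40.0])   # toy joint-angle trace
pred = np.array([11.0, 19.0, 31.0, 39.0])     # 1 degree off everywhere
err = rmse(angles, pred)                      # 1.0 degree
fit = r2_score(angles, pred)                  # 1 - 4/500 = 0.992
```

An RMSE of 1.40° with R² of 0.98, as reported, therefore means the predicted trajectories track the true joint angles to within about a degree on average.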
Improved Harris Hawks Optimization Algorithm
IF 1.5 · CAS Tier 4 · Computer Science
Concurrency and Computation-Practice & Experience · Pub Date: 2025-09-03 · DOI: 10.1002/cpe.70270
Xiaopei Liu, Yong Zhang, Yanqin Li, Bai Yu, Qi Chen
The Harris Hawks Optimization (HHO) algorithm is a nature-inspired metaheuristic that mimics the cooperative hunting behavior of hawks. Despite its success in various optimization tasks, it suffers from several limitations, including low computational accuracy, a tendency to become trapped in local optima, and difficulty balancing exploration and exploitation. To address these challenges, this paper proposes an enhanced version of HHO, named FL-HHO, which integrates four key improvements: the Halton sequence for enhanced population diversity, a modified escaping energy factor E, an improved frog-leaping mechanism, and a convergence trend analysis module. FL-HHO is evaluated on seven classical benchmark functions and 30 functions from the CEC2014 benchmark suite. The experimental results demonstrate that FL-HHO exhibits a significant advantage on classical benchmarks, achieving top search precision on nearly all functions and reaching the theoretical optimum on three of them. In terms of computational efficiency, FL-HHO ranks third among all compared algorithms. On the CEC2014 benchmarks, it secures first place on over 50% of the functions, with slightly lower performance on certain multimodal functions. Ablation experiments further verify the effectiveness of each proposed component, particularly the contribution of the modified frog-leaping mechanism to global exploitation and of the Halton sequence to initialization robustness. In a practical scenario, FL-HHO is applied to industrial robot path planning, where it achieves the shortest travel distance among all evaluated methods, confirming its effectiveness in real-world tasks. The implementation code is publicly available at: https://github.com/zhu-cheng/FL-HHO/tree/main.
Cited: 0
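The Halton sequence used for population initialization is well defined independently of the paper. One dimension of it is the van der Corput sequence in a given base; pairing coprime bases across dimensions gives low-discrepancy points that cover the search space more evenly than uniform random draws:

```python
def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in `base`:
    reflect the base-`base` digits of i about the radix point."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

# First points in base 2 fill the unit interval evenly: 1/2, 1/4, 3/4, 1/8, ...
seq = [halton(i, 2) for i in range(1, 5)]
# A 2D Halton point set would pair base 2 with base 3: (halton(i, 2), halton(i, 3)).
```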