{"title":"PSFS-Net: Dynamic frequency-spatial synergistic perception network for polyp segmentation via hierarchical context refinement and frequency-domain decoupling","authors":"Chenxing Xia , Hailong Chen , Bin Ge , Xiaolong Peng , Chaofan Liu , Zihan Jia , Shishui Bao","doi":"10.1016/j.bspc.2025.108920","DOIUrl":"10.1016/j.bspc.2025.108920","url":null,"abstract":"<div><div>Accurate polyp segmentation from colonoscopy images is pivotal for the early detection of colorectal cancer (CRC), significantly enhancing diagnostic efficiency and reliability in clinical practice. While recent methods have achieved notable progress, they often suffer from two critical limitations: (1) inadequate frequency and spatial feature representation, as most approaches remain biased toward spatial-domain learning and, even when incorporating frequency information, tend to overlook the hierarchical variability of frequency distributions across feature levels, resulting in suboptimal utilization of frequency cues; and (2) insufficient cross-level feature integration, limiting the ability to effectively capture both global semantics and fine-grained boundary details. To address these issues, we propose PSFS-Net, a novel dynamic frequency-spatial synergistic polyp segmentation framework that jointly leverages spatial and frequency domain information for hierarchical context refinement and cross-level fusion, which mainly includes a Frequency-aware Cross-scale Fusion Module (FACFM), a Dual-stream Global–Local Interaction Module (DGIM), and a Dual Attention Cross-modulation Module (DCM). Specifically, FACFM is designed to extract frequency-domain cues and adaptively decouple high/low-frequency components from full-spectrum information by employing the Discrete Fourier Transform and adaptive Dynamic Gaussian Filters. DGIM is introduced to enable mutual refinement between high-level semantic representations and low-level spatial details through dedicated global and local processing branches. 
DCM is presented to further aggregate global contexts with local details via dual-attention mechanisms, alleviating semantic gaps. Extensive evaluations on five public polyp segmentation datasets demonstrate that PSFS-Net delivers excellent, competitive performance. Our code is available at <span><span>https://github.com/chljzh25/PSFS-Net</span></span>.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"112 ","pages":"Article 108920"},"PeriodicalIF":4.9,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145265441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
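The frequency-domain decoupling that FACFM performs can be illustrated with a minimal sketch: split an image into low- and high-frequency parts via the DFT and a Gaussian low-pass mask. The fixed `sigma` and the simple complementary split are illustrative assumptions; the paper's Dynamic Gaussian Filters are adaptive and their exact form is not given in the abstract.

```python
import numpy as np

def decouple_frequencies(image, sigma=0.15):
    """Split a 2D image into low/high-frequency components using the DFT
    and a Gaussian low-pass mask (generic sketch, not the paper's FACFM)."""
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    # Normalised radial distance of each frequency bin from the spectrum centre
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt(((yy - h / 2) / h) ** 2 + ((xx - w / 2) / w) ** 2)
    lowpass = np.exp(-(dist ** 2) / (2 * sigma ** 2))  # Gaussian mask
    low = np.fft.ifft2(np.fft.ifftshift(spectrum * lowpass)).real
    high = image - low  # complementary high-frequency residual
    return low, high
```

By construction the two components sum back to the input, so the split loses no information, and the low branch is smoother than the original.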
{"title":"Adaptive fractional-order Pulse-Coupled Neural Networks with multi-scale optimization for Skin Image Segmentation","authors":"Xuewen Zhou , Jiejie Chen , Ping Jiang , Xinrui Zhang , Zhigang Zeng","doi":"10.1016/j.bspc.2025.108911","DOIUrl":"10.1016/j.bspc.2025.108911","url":null,"abstract":"<div><div>This study presents a novel image segmentation method termed Fractional Coati Optimization Algorithm-Pulse-Coupled Neural Network (FCOA-PCNN), which synergistically integrates a Fractional Coati Optimization Algorithm (FCOA) with a Pulse-Coupled Neural Network (PCNN). FCOA introduces a fractional-order calculus mechanism into the original Coati Optimization Algorithm (COA), leveraging its inherent memory characteristics to enhance global search capability and convergence speed. Furthermore, an adaptive order control strategy is proposed, enabling dynamic adjustment of the fractional order during iterations to improve robustness and optimization efficiency. To optimize the key parameters of the PCNN, we construct a composite fitness function based on image information entropy and edge matching metrics, effectively capturing both global structure and local edge features. Experimental results from the CEC2005 benchmark suite demonstrate FCOA’s superior optimization performance over state-of-the-art algorithms in terms of convergence precision and stability. Moreover, extensive evaluations on the ISIC 2016 skin lesion dataset validate the superior segmentation performance of FCOA-PCNN, which achieved a Dice Coefficient of 92.01% and a Jaccard Index of 85.40%, outperforming both deep learning-based and traditional segmentation methods. Ablation studies further confirm the critical role of fractional-order components in enhancing the segmentation accuracy and model robustness. 
These findings highlight the potential of FCOA-PCNN as an effective and efficient tool for medical image segmentation applications.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"112 ","pages":"Article 108911"},"PeriodicalIF":4.9,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145265270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
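The memory mechanism behind a fractional-order update can be sketched via the Grünwald–Letnikov coefficients, which weight past iterates with decaying binomial terms. The recurrence below is the standard one; how FCOA applies these weights inside the coati update is not specified in the abstract, so this helper is illustrative only.

```python
def gl_coefficients(alpha, m):
    """First m Grünwald–Letnikov memory weights (-1)^(k+1) * C(alpha, k),
    used to weight past positions in a fractional-order update of order
    alpha (generic sketch; FCOA's exact update rule is an assumption)."""
    coeffs = []
    c = alpha  # k = 1 term: C(alpha, 1) = alpha
    for k in range(1, m + 1):
        coeffs.append(c)
        # signed binomial recurrence: c_{k+1} = c_k * (k - alpha) / (k + 1)
        c = c * (k - alpha) / (k + 1)
    return coeffs
```

For an integer order (alpha = 1) all weights beyond the first vanish, recovering a memoryless classical update; a fractional alpha in (0, 1) leaves a decaying tail of positive weights, which is the "inherent memory" the abstract refers to.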
{"title":"DilatedSkinNet: A feature fusion induced intelligent framework for skin lesion extraction","authors":"Ranjita Rout , Priyadarsan Parida , Manoj Kumar Panda , Akshya Kumar Sahoo , Thierry Bouwmans","doi":"10.1016/j.bspc.2025.108836","DOIUrl":"10.1016/j.bspc.2025.108836","url":null,"abstract":"<div><div>Melanoma is considered one of the most fatal skin cancers and is life-threatening if not detected early. Early detection and proper diagnosis are crucial to reducing the fatality rate of melanoma. Therefore, in this article, we have developed a unique encoder–decoder-based DilatedSkinNet framework with several novel components. The designed encoder network interleaves a series of Lesion Detail Extraction (LDE) blocks with max-pooling layers, capturing multi-scale features with reduced spatial dimensions. The proposed encoder framework can also extract diverse lesion features at various levels. The designed bridge block with a fine feature aggregator module connects the encoder to the decoder network for a smooth transition of significant details while maintaining spatial relationships among the pixels. The developed decoder network projects in-depth features into segmented masks, with reduced extraction of healthy skin regions. The developed DilatedSkinNet network is trained on the ISIC 2016 dataset and tested on ISIC 2016 and unseen dermoscopic images from benchmarked datasets including ISIC 2017, ISIC 2018, and PH<sup>2</sup>. The robustness of the designed DilatedSkinNet model is validated by comparing objective measures, including accuracy, sensitivity, specificity, Dice Coefficient, and Jaccard Index, against 70 existing approaches. Furthermore, the efficacy of the developed DilatedSkinNet framework is corroborated through visual demonstration. 
Extensive experiments show that the designed DilatedSkinNet model shows its superiority compared to state-of-the-art methods and attains better performance in an unseen setup.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"112 ","pages":"Article 108836"},"PeriodicalIF":4.9,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145265273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
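The Dice Coefficient and Jaccard Index reported by the segmentation papers above are overlap ratios between predicted and ground-truth binary masks; a direct implementation:

```python
import numpy as np

def dice_jaccard(pred, gt):
    """Dice coefficient and Jaccard index for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())      # 2|A∩B| / (|A|+|B|)
    jaccard = inter / np.logical_or(pred, gt).sum()  # |A∩B| / |A∪B|
    return dice, jaccard
```

The two metrics are monotonically related (Dice = 2J / (1 + J)), which is why papers typically report both moving in the same direction.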
{"title":"Self-screening of noisy labels and hard sample color normalization in histopathology image classification with spatial-texture attention networks","authors":"Hongbo Zhao , Miao Zhang , Ping Jiang , Yi Shen","doi":"10.1016/j.bspc.2025.108854","DOIUrl":"10.1016/j.bspc.2025.108854","url":null,"abstract":"<div><div>Deep learning-based pathological image classification has emerged as a promising tool to aid pathologists in diagnostics. However, two critical challenges hinder its accuracy and generalization: noisy labels in datasets due to annotation errors or subjective judgments, and unique image characteristics such as multiscale textures and staining variations. This study aims to address these challenges by proposing a novel framework that combines noise-robust learning and feature-specific attention mechanisms. A noise-resistant algorithm integrating noisy sample self-screening and hard sample color normalization (NSS-HSCN) was proposed, along with a dual-stream spatial and texture attention (DSSTA) framework. In the NSS-HSCN stage, a self-screening network filtered out noisy samples, and hard samples were subjected to color normalization. The DSSTA framework utilized a multi-scale spatial attention module and a texture-enhanced attention module to extract and fuse features. On the Chaoyang and HITAFH datasets, our method achieved 86.02% and 90.16% accuracy, respectively, outperforming state-of-the-art methods in all metrics. Grad-CAM visualization verified the model’s ability to focus on target areas and extract valuable features. The noise-screening network strengthened model robustness, and the dual-stream network effectively integrated features. 
Integration with the automated diagnostic system optimized the diagnostic process, thereby highlighting its potential for improving pathological image classification accuracy in real-world applications.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"112 ","pages":"Article 108854"},"PeriodicalIF":4.9,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145265036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
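A common way to realize noisy-sample self-screening is the small-loss criterion: samples the network fits easily are kept as presumably clean, while high-loss samples are flagged as likely mislabeled. This is a generic heuristic, not the paper's actual self-screening network; the function name and `keep_ratio` are assumptions.

```python
import numpy as np

def small_loss_screen(losses, keep_ratio=0.8):
    """Partition sample indices by per-sample loss: keep the smallest-loss
    fraction as presumed-clean, flag the rest as presumed-noisy
    (generic small-loss heuristic, not NSS-HSCN itself)."""
    losses = np.asarray(losses)
    k = max(1, int(len(losses) * keep_ratio))
    order = np.argsort(losses)
    keep = order[:k]    # indices of presumed-clean samples
    noisy = order[k:]   # indices of presumed-noisy samples
    return keep, noisy
```

In practice such screening is applied per epoch with a warm-up phase, since early in training the loss is not yet a reliable noise signal.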
{"title":"Melanoma Detection through Combining Reinforcement Learning, Generative Adversarial Network, and Bayesian Optimization","authors":"Jing Yang , Yajie Wan , Su Diao , Osama Alfarraj , Fahad Alblehai , Amr Tolba , Zaffar Ahmed Shaikh , Lip Yee Por , Roohallah Alizadehsani , Yudong Zhang","doi":"10.1016/j.bspc.2025.108668","DOIUrl":"10.1016/j.bspc.2025.108668","url":null,"abstract":"<div><div>Melanoma, a highly aggressive form of skin cancer, is primarily driven by DNA alterations often linked to environmental factors such as ultraviolet radiation. Addressing the need for improved early detection, this study tackles the key limitations of current methods, which frequently employ convolutional neural networks (CNNs) but struggle with feature selection, class imbalance, hyperparameter tuning, and generalizability. Our strategy leverages dilated convolution (DC) layers trained using reinforcement learning (RL). Unlike other RL-based approaches that handle these challenges in isolation, our method introduces a multi-stage architecture. It integrates RL for feature selection and class balancing. Shapley additive explanations (SHAP) guide feature identification, while augmented rewards for underrepresented classes help mitigate data imbalance. Bayesian optimization hyperband (BOHB) is used for hyperparameter tuning in a unified training process. BOHB combines the predictive strength of Bayesian optimization with the efficiency of hyperband, accelerating model tuning. It also includes an online GAN module for dynamic data augmentation that responds to the evolving output of the RL agent. A novel regularization technique stabilizes GAN training and prevents mode collapse. Importantly, existing RL methods face the challenge of balancing exploration and exploitation. In our RL model, the scope loss function (SLF), integrated with RL, balances exploration and exploitation, thereby ensuring accuracy and generalizability. 
Collectively, the model jointly tackles four persistent challenges in earlier RL-based approaches: poor exploration–exploitation balance, unstable reward dynamics, static data augmentation, and manual hyperparameter tuning. The model achieved F-measures of 94.3%, 93.7%, and 91.5% on ISIC-2020, HAM10000, and PH2, respectively. This advancement significantly improves early melanoma detection and supports more accurate treatment decisions, contributing valuably to the ongoing effort to combat this lethal cancer.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"112 ","pages":"Article 108668"},"PeriodicalIF":4.9,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145265680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhanced pediatric age estimation from head MRI via self-distillation hybrid-attention network","authors":"Lei Shi , Xinran Huang , Wenchi Ke , Hrvoje Brkić , Yuchi Zhou , Ting Lu , Xian’e Tang , Lirong Qiu , Shuai Luo , Xingtao Zhang , Ziqi Cheng , Yushan Lin , Peixi Liao , Hu Chen , Yi Zhang , Yijiu Chen , Zhenhua Deng , Fei Fan","doi":"10.1016/j.bspc.2025.108748","DOIUrl":"10.1016/j.bspc.2025.108748","url":null,"abstract":"<div><h3>Background and objective:</h3><div>Age estimation is crucial in pediatrics and developmental medicine, and it is often conducted with radiographic techniques that expose children to ionizing radiation. Magnetic Resonance Imaging (MRI) offers a safer, radiation-free alternative. Automatic age estimation is rapidly advancing, offering an efficient approach that reduces human bias and saves manpower. This study aims to exploit the potential of head MRI for automatic age estimation in the pediatric population via deep learning.</div></div><div><h3>Methods and materials:</h3><div>We propose a self-distillation and hybrid-attention network (SDHA) to estimate age from 3-T head MRI in children. We train the SDHA network with self-distillation, integrating Squeeze-and-Excitation (SE) and Spatial Transformer (ST) attention mechanisms. Four stacked attention modules (SE, ST) were embedded into the ResNet50 backbone (teacher) to generate deeper predictions; early-exit branches (students) were added to generate shallower predictions. Three types of losses are employed for knowledge distillation, enhancing both performance and computational efficiency. SDHA is evaluated against manual and traditional CNN methods by mean absolute error (MAE) and root mean squared error (RMSE).</div></div><div><h3>Results:</h3><div>SDHA (MAE = 0.34 years) yielded a lower MAE than the manual method (MAE = 0.44 years). MAE decreased by 63.4% with SDHA compared to non-distilled SENet (MAE = 0.93 years). The prediction-error density curve shows higher precision for SDHA. 
Grad-CAM visualization revealed that SDHA adaptively focuses on age-relevant dental, facial and brain structures. SDHA reduced prediction time from 120 s (manual assessment) to 0.11 s per subject.</div></div><div><h3>Conclusion:</h3><div>The proposed SDHA demonstrates superior performance over manual and existing CNN methods for dental age estimation from head MRI. Its adaptive attention to age-relevant anatomical structures and significant efficiency gains make it valuable for applications in pediatric age estimation.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"112 ","pages":"Article 108748"},"PeriodicalIF":4.9,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145265681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
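Since SDHA regresses age, the self-distillation objective for an early-exit branch can be sketched as a supervised L1 term plus a term pulling the shallow (student) head toward the deep (teacher) head. The abstract names three loss types without giving their forms, so this two-term weighting and the `alpha` parameter are assumptions.

```python
import numpy as np

def self_distillation_loss(student_pred, teacher_pred, ages, alpha=0.5):
    """Loss for one early-exit (student) head in a self-distilled age
    regressor: MAE against the true ages plus an L1 distillation term
    toward the teacher head (generic sketch, not SDHA's exact losses)."""
    hard = np.abs(student_pred - ages).mean()          # supervised MAE
    soft = np.abs(student_pred - teacher_pred).mean()  # distillation term
    return (1 - alpha) * hard + alpha * soft
```

At inference, the early-exit heads allow a prediction to be emitted from a shallower depth, which is consistent with the efficiency gains the abstract reports.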
{"title":"Facial expression-based hypomimia detection for Parkinson’s disease diagnosis: A static-dynamic mixed feature approach","authors":"Xiaochen Huang , Haiyun Li , Jun Ma , Xiaochan Bi , Fanzun Meng , Wenjing Jiang , Xin Ma","doi":"10.1016/j.bspc.2025.108762","DOIUrl":"10.1016/j.bspc.2025.108762","url":null,"abstract":"<div><h3>Objective:</h3><div>Parkinson’s disease (PD), a prevalent neurodegenerative disorder primarily affecting individuals over 65, poses diagnostic challenges due to its complex symptoms. This study aims to detect hypomimia, a characteristic PD symptom, by analyzing static and dynamic facial features from patients performing various facial expressions.</div></div><div><h3>Methods:</h3><div>Our method integrates static and dynamic facial features to facilitate PD auxiliary diagnosis. For static features, we compare the similarity of happy expressions between PD patients and healthy individuals using a generative network. Subsequently, facial expression completion is assessed through the analysis of static facial images. For dynamic features, we conduct dynamic analysis by examining the patients’ facial movements, particularly focusing on eyelid and perioral movements in the expression videos. These features are processed through a specialized static-dynamic feature fusion network, enabling precise discrimination of PD. The integration of static and dynamic features is a novel aspect of our study.</div></div><div><h3>Results:</h3><div>The proposed method achieves a prediction accuracy of 0.94 and a recall of 0.97, outperforming existing in-vitro diagnostic techniques due to its comprehensive analysis of facial expressions. 
To address data scarcity, we compiled the Parkinson’s Disease Facial Expression Videos (PD-FEV) dataset, a valuable resource for facial expression analysis in PD diagnosis.</div></div><div><h3>Conclusion:</h3><div>This study enhances PD diagnosis by introducing an innovative approach to hypomimia detection through the integration of static and dynamic features, providing improved diagnostic accuracy and greater convenience for patients. Additionally, the PD-FEV dataset offers valuable data resources, advancing PD diagnosis in clinical practice.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"112 ","pages":"Article 108762"},"PeriodicalIF":4.9,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145265727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A pilot control intention recognition method based on EEG in simulated flights","authors":"Yining Zeng , Youchao Sun , Yuwen Jie , Xun Liu","doi":"10.1016/j.bspc.2025.108739","DOIUrl":"10.1016/j.bspc.2025.108739","url":null,"abstract":"<div><div>Pilot control intentions reflect the subjective desire to manipulate aircraft attitude through specific maneuvers. Accurate recognition of pilot control intentions is crucial for the development of autopilot systems and active safety technologies in flight control. A significant challenge arises from the similarity in workload between takeoff and landing, which complicates the identification of climb and descent intentions. This paper proposes an approach using a spatial attention EEGNet (SA-EEGNet) to identify pilot control intentions based on electroencephalography (EEG) signals. To address issues related to convolutional kernel sharing and network complexity, receptive field attention and spatial convolution were incorporated to enhance feature extraction and reduce redundancy. Designed for three-class classification, SA-EEGNet achieves 95% accuracy in subject-dependent data (5-fold cross-validation) and 93% accuracy in subject-independent data (7-fold cross-validation).</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"112 ","pages":"Article 108739"},"PeriodicalIF":4.9,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145265445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unsupervised confocal superficial eyelid image stitching: Flexible, accurate and smooth","authors":"Lei Huo , Lei Qin , Shengyu Wu , Ming Li , Wei Yan , Long Chang , Shengli Mi","doi":"10.1016/j.bspc.2025.108760","DOIUrl":"10.1016/j.bspc.2025.108760","url":null,"abstract":"<div><div>Confocal laser scanning microscopy enables non-invasive ocular Demodex screening but faces field-of-view (FOV) limitations. Although image stitching can theoretically expand the FOV, traditional methods achieve only an approximately 60% success rate due to low illumination, weak textures and repetitive patterns. To address these challenges, we propose an unsupervised deep learning-based image stitching framework with dual-stage alignment and generative adversarial network (GAN)-based fusion. Our dual-stage alignment network combines homography matrix and Thin Plate Spline (TPS) transformations to accommodate tissue deformation during imaging, supported by a Non-Maximum Suppression Feature Displacement Layer that simultaneously considers both long-range and short-range dependencies, yielding more accurate results with reduced memory consumption. To achieve smooth and seamless image fusion, we employ a GAN framework in which the generator produces fusion probability maps that eliminate noticeable blending seams and fusion artifacts. This is an innovative attempt to apply deep learning for precise image stitching of confocal superficial eyelid images, demonstrating a 40% higher success rate than traditional methods. 
Quantitative evaluations show 13.37% and 3.25% improvements in mPSNR and mSSIM over the state-of-the-art model, with 11.29% and 3.76% reductions in NIQE and PIQE metrics.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"112 ","pages":"Article 108760"},"PeriodicalIF":4.9,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145266495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
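The mPSNR metric reported above builds on the standard peak signal-to-noise ratio between reference and fused images; a basic PSNR helper is shown below. The masking and averaging scheme behind the "m" prefix is not described in the abstract and is therefore not implemented here.

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images
    with intensities in [0, peak] (base of the mPSNR stitching metric)."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```

Higher is better: halving the RMS error raises PSNR by about 6 dB, so the reported 13.37% mPSNR gain reflects a substantial error reduction in the fused overlap.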
{"title":"MCM-UNet: Mamba convolutional mixing network for skin lesion image segmentation","authors":"Minchen Yang, Nur Intan Raihana Ruhaiyem","doi":"10.1016/j.bspc.2025.108791","DOIUrl":"10.1016/j.bspc.2025.108791","url":null,"abstract":"<div><div>Dermatological lesion image segmentation is a critical step in clinical diagnosis. However, due to the complex morphology, blurred boundaries, and variable sizes of lesions, accurate segmentation remains a significant challenge in medical image processing. Traditional methods often struggle to simultaneously capture both the overall contour and local details of lesion regions, severely constraining the accuracy of computer-assisted diagnosis. To address this issue, we propose MCM-UNet. We carefully design modules at the network’s shallow, deep, and skip connection stages to enhance spatial detail extraction, global dependency modeling, and cross-layer feature fusion. Through innovative feature extraction and fusion strategies, we effectively tackle the complexity of skin lesion segmentation. Based on this architecture, our network significantly improves the accuracy and robustness of dermatological lesion segmentation with a lightweight model of only 0.6M parameters. 
Experimental results on PH2, ISIC2017, and ISIC2018 public datasets demonstrate outstanding segmentation capabilities, achieving superior performance compared to existing methods and providing a novel solution for precise skin lesion segmentation.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"112 ","pages":"Article 108791"},"PeriodicalIF":4.9,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145266499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}