Biomedical Signal Processing and Control: Latest Articles

ANFFractalNet: Adaptive neuro-fuzzy FractalNet for iris recognition
IF 4.9 | CAS Q2 | Medicine
Biomedical Signal Processing and Control Pub Date : 2025-05-03 DOI: 10.1016/j.bspc.2025.107984
R. Prabhu , R. Nagarajan
Over the past few years, iris recognition has become a trending research topic owing to its broad security applications, from airports to homeland-security border control. Nevertheless, the high cost of acquisition equipment and several shortcomings of existing modules have kept iris recognition from large-scale real-world deployment. Segmentation of the iris region also faces issues such as invalid off-axis rotations and irregular reflections in the eye region. To address these issues, an iris-recognition framework named ANFFractalNet is designed. A Kuwahara filter and region-of-interest (RoI) extraction pre-process the image, the Daugman rubber-sheet model segments the pre-processed image, and feature extraction then reduces the dimensionality of the data. Recognition is performed by the adaptive neuro-fuzzy FractalNet (ANFFractalNet) module. On accuracy, FAR, FRR, and loss, ANFFractalNet obtained 91.594%, 0.537%, 2.482%, and 0.084%, respectively. (Volume 108, Article 107984)
Citations: 0
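The Daugman rubber-sheet step described above maps the annular iris region between the pupil and iris boundaries onto a fixed-size rectangle, making downstream features invariant to pupil dilation. A minimal numpy sketch of that unwrapping, assuming known circular boundaries (function name and sampling resolution are illustrative, not from the paper):

```python
import numpy as np

def rubber_sheet_normalize(img, cx, cy, r_pupil, r_iris, n_radial=32, n_angular=128):
    """Unwrap the annulus between the pupil circle (cx, cy, r_pupil) and the
    iris circle (cx, cy, r_iris) into an (n_radial x n_angular) strip:
    each output row is one radial step, each column one angle."""
    thetas = np.linspace(0, 2 * np.pi, n_angular, endpoint=False)
    radii = np.linspace(0, 1, n_radial)
    out = np.zeros((n_radial, n_angular), dtype=img.dtype)
    for i, rfrac in enumerate(radii):
        r = r_pupil + rfrac * (r_iris - r_pupil)
        # nearest-neighbour sampling along the circle of radius r
        xs = np.clip((cx + r * np.cos(thetas)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip((cy + r * np.sin(thetas)).astype(int), 0, img.shape[0] - 1)
        out[i] = img[ys, xs]
    return out
```

Real systems also interpolate between non-concentric boundary circles; this sketch keeps both circles concentric for brevity.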
Alzheimer’s disease prediction using CAdam optimized reinforcement learning-based deep convolutional neural network model
IF 4.9 | CAS Q2 | Medicine
Biomedical Signal Processing and Control Pub Date : 2025-05-03 DOI: 10.1016/j.bspc.2025.107968
Puja A. Chaudhari, Suhas S. Khot
Background: Alzheimer’s disease (AD), a neurological disorder, gradually degrades cognitive ability, but detecting it at an early stage can effectively mitigate symptoms. Given the shortage of expert medical staff, automatic diagnosis is highly valuable; accurate diagnosis from magnetic resonance imaging (MRI), however, requires detailed analysis of diseased brain tissue. Various methods detect AD from MRI, but extracting the optimal brain regions and informative features remains complicated and time-consuming, and the class imbalance of the OASIS and ADNI datasets must also be addressed.
Method: A Coyote Adam optimized Reinforcement Learning-Deep Convolutional Neural Network (CAdam-RL-DCNN) is proposed to address these issues in AD detection from MRI. The method detects features automatically, while SMOTE handles class imbalance by synthesizing minority-class samples. The computational complexity of the model is reduced by training with the proposed CAdam optimizer, which combines Adam's adaptive parameters with the social behaviors and invasive hunting of the coyote optimizer. In addition, hybrid features combining ResNet features, statistical features, and a modified textural pattern reduce data complexity and steer training toward improved AD prediction.
Result: The proposed model attains 96.31% accuracy, 97.50% sensitivity, 94.06% specificity, 93.87% precision, 97.50% recall, and 95.65% F1-score on the ADNI dataset, and 95.09% accuracy, 94.52% sensitivity, 95.57% specificity, 93.14% precision, 94.52% recall, and 93.83% F1-score on the OASIS dataset. (Volume 108, Article 107968)
Citations: 0
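SMOTE, used above to rebalance OASIS and ADNI, synthesizes minority-class samples by interpolating between a sample and one of its nearest minority-class neighbours. The paper most likely uses a standard implementation (e.g. imbalanced-learn); this toy numpy version only illustrates the core idea:

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples: pick a minority sample,
    pick one of its k nearest minority neighbours, and interpolate a
    random fraction of the way toward it."""
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude each point from its own neighbours
    nn = np.argsort(d, axis=1)[:, :k]      # k nearest-neighbour indices per sample
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = nn[i, rng.integers(min(k, len(X_min) - 1))]
        lam = rng.random()                 # interpolation factor in [0, 1)
        synth.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack(synth)
```

Because every synthetic point is a convex combination of two real minority points, the new samples stay inside the minority class's local geometry rather than being arbitrary noise.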
An innovative transfer learning-polyp detection from wireless capsule endoscopy videos with optimal key frame selection and depth estimation
IF 4.9 | CAS Q2 | Medicine
Biomedical Signal Processing and Control Pub Date : 2025-05-03 DOI: 10.1016/j.bspc.2025.107963
Madhura Prakash M, Krishnamurthy G.N
Wireless capsule endoscopy (WCE) is used to examine the small intestine non-invasively. WCE video, however, carries largely unvarying texture and color information, and its lack of shot boundaries makes conventional keyframe-mining and shot-detection techniques ineffective. To resolve this, a deep-learning-oriented keyframe-mining method for extracting keyframes from WCE video is proposed. Endoscopic videos are collected from benchmark databases, and depth is estimated from the video frames using transfer learning: the MobileNetV2 encoder-decoder unit is attached to the UNet part of a TransUNet+ system to improve depth-estimation accuracy, yielding a model named TransUNet+ with MobileNetV2 (TU-MNetv2). A new heuristic algorithm, the Improved Fitness-based American Zebra Optimization Algorithm (IF-AZOA), then selects the ideal keyframes using constraints such as entropy, image moments in the depth-estimated frames, key points, and edge density. The estimated depth results are compared with those of several conventional classifiers and heuristic algorithms to demonstrate the superior performance of the implemented depth-estimation technique. (Volume 108, Article 107963)
Citations: 0
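Entropy is one of the keyframe-selection criteria listed above, alongside image moments, key points, and edge density. A minimal sketch (my own, not the paper's code) of how a grayscale frame's intensity entropy can be scored so that a selector can rank frames by information content:

```python
import numpy as np

def frame_entropy(frame, bins=32):
    """Shannon entropy (bits) of a grayscale frame's intensity histogram.
    Flat, uninformative frames score near 0; textured frames score high."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0*log 0 := 0)
    return float(-(p * np.log2(p)).sum())
```

A keyframe selector could then keep the top-scoring frame per window, or feed this score into a combined fitness function with the other criteria.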
Atrous spatial pyramid pooling assisted automatic segmentation model and ellipse fitting approach based fetal head segmentation and head circumference measurement
IF 4.9 | CAS Q2 | Medicine
Biomedical Signal Processing and Control Pub Date : 2025-05-03 DOI: 10.1016/j.bspc.2025.107992
Somya Srivastava , Tapsi Nagpal , Kamaljit Kaur , Charu Jain , Nripendra Narayan Das , Aarti Chugh
Fetal head circumference (HC) is an important biometric measurement used in obstetric clinical practice to assess fetal development. Existing HC-measurement methods have limitations in accurately capturing the shape of the fetal skull, leading to potential errors in clinical assessment. This study introduces the Atrous spatial pyramid pooling assisted multi-scale feature aggregation automatic segmentation (ASPPA-MSFAAS) model, which addresses these limitations with multi-scale feature extraction and aggregation, enabling more precise segmentation and measurement of the fetal head. The multi-scale segmentation model improves fine-grained HC measurement and segmentation by learning multiple features under different receptive fields. Input images are first pre-processed to eliminate unwanted distortions. ASPPA-MSFAAS then applies three modules during training and testing to precisely segment the intricate fetal-head (FH) region: an Atrous spatial pyramid pooling multi-scale feature extraction module (ASPP-MSFEM), a multi-scale feature aggregation module (MSFAM), and an attention module. Post-processing smooths the segmented region and removes extraneous artifacts, and an ellipse-fitting step applied to the post-processed result yields the HC. The proposed approach attains 99.12%±0.6 DSC and 99%±1.99 MIoU on the HC18 grand-challenge dataset, and 98.99% DSC, 1.287 HD, and 0.334 ASD on the large-scale annotation dataset (National Library of Medicine). (Volume 109, Article 107992)
Citations: 0
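The final step above, fitting an ellipse to the post-processed mask and reading off the circumference, can be sketched with image moments plus Ramanujan's perimeter approximation. This is a simplification under my own assumptions (a moments-based fit rather than whatever fitting routine the authors use, and millimetre scaling via a `pixel_mm` factor):

```python
import numpy as np

def ellipse_hc_from_mask(mask, pixel_mm=1.0):
    """Estimate head circumference from a binary fetal-head mask: fit an
    ellipse via second-order image moments, then approximate its perimeter
    with Ramanujan's formula."""
    ys, xs = np.nonzero(mask)
    x0, y0 = xs.mean(), ys.mean()
    # covariance of the filled region; for a filled ellipse,
    # variance along a principal axis = (semi-axis)^2 / 4
    cov = np.cov(np.stack([xs - x0, ys - y0]))
    evals = np.linalg.eigvalsh(cov)              # ascending
    b, a = 2 * np.sqrt(evals)                    # semi-minor, semi-major (pixels)
    h = ((a - b) / (a + b)) ** 2
    perimeter = np.pi * (a + b) * (1 + 3 * h / (10 + np.sqrt(4 - 3 * h)))
    return perimeter * pixel_mm
```

For a circular mask this reduces to 2*pi*r, which makes the sketch easy to sanity-check.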
A one-stage multi-task network for molecular subtyping, grading, and segmentation of glioma
IF 4.9 | CAS Q2 | Medicine
Biomedical Signal Processing and Control Pub Date : 2025-05-02 DOI: 10.1016/j.bspc.2025.107923
Shen Wen , Shuang Yang , Ling Xu , Yan Yang , Yangzhi Qi , Ping Hu , Qianxue Chen , Dong Zhang
The World Health Organization tumor classification emphasizes the key role of molecular biomarkers in glioma diagnosis, particularly isocitrate dehydrogenase (IDH) mutation status and 1p/19q co-deletion status. Little research combines glioma segmentation with prediction of genetic or histological characteristics from multimodal magnetic resonance imaging (MRI). We propose a one-stage multi-task network that predicts IDH mutation status, 1p/19q co-deletion status, and glioma grade from MRI while simultaneously segmenting the tumor. The network has an encoder-decoder architecture with three main components: an encoder that extracts multi-scale features, a decoder that gradually aggregates these features for segmentation, and a masked multi-scale fusion module that merges the features with the segmentation output to perform classification; a multi-task learning loss balances all tasks. Evaluated on a public dataset and a local hospital's dataset, the method achieves superior performance while consuming fewer computational resources than existing networks. On the public test set it achieves areas under the curve (AUC) of 0.9851 (IDH), 0.7695 (1p/19q), and 0.8949 (grade), with a mean Dice score of 0.8485 and a mean Hausdorff distance of 19.60 mm; on the local hospital's dataset, the AUCs are 0.9313, 0.8254, and 0.8638, with a mean Dice score of 0.7490 and a mean Hausdorff distance of 24.50 mm. The method can potentially serve as a clinical diagnostic tool for glioma patients, helping to alleviate patient suffering. (Volume 108, Article 107923)
Citations: 0
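The abstract only states that a multi-task learning loss balances the segmentation and classification tasks, without naming the scheme. One common choice for such balancing is homoscedastic-uncertainty weighting (Kendall et al.), shown here purely as an illustrative assumption, not as the authors' actual loss:

```python
import math

def uncertainty_weighted_loss(task_losses, log_vars):
    """Uncertainty-based multi-task weighting: each task loss L_i is scaled
    by exp(-log_var_i) (i.e. 1/sigma_i^2) and regularized by +log_var_i,
    letting the optimizer learn how much weight each task deserves."""
    return sum(math.exp(-lv) * L + lv for L, lv in zip(task_losses, log_vars))
```

With all log-variances at zero this degenerates to a plain sum of the task losses; during training the log-variances are learned parameters.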
A Transformer utilizing bidirectional cross-attention for multi-modal classification of Age-Related Macular Degeneration
IF 4.9 | CAS Q2 | Medicine
Biomedical Signal Processing and Control Pub Date : 2025-05-02 DOI: 10.1016/j.bspc.2025.107887
Jianfeng Li , Zongda Wang , Yuanqiong Chen , Chengzhang Zhu , Mingqiang Xiong , Harrison Xiao Bai
Age-related macular degeneration (AMD) ranks among the leading causes of blindness globally, especially among people over 50. Color fundus photographs (CFP) and optical coherence tomography (OCT) B-scan images are both widely used in AMD diagnosis. However, most existing multimodal approaches are based on conventional convolutional neural networks (CNNs), whose limited local receptive fields constrain the processing of cross-modal information. We therefore propose a model that integrates CNN and Transformer architectures for AMD diagnosis. Features are first extracted by a CNN to learn local representations of the images; bidirectional cross-attention blocks with intra-modal and inter-modal attention then learn global representations from all input modalities, capturing long-range dependencies and enhancing multimodal feature fusion. For effective training, we augment the data by using class activation mapping (CAM) as a conditional input that guides a GAN-based network in synthesizing high-resolution CFP and OCT images. Extensive experiments on a publicly available AMD dataset show that our method achieves an F1-score of 0.897 and an accuracy of 84.3% on the test set, significantly outperforming multiple multimodal AMD-diagnosis baselines. (Volume 109, Article 107887)
Citations: 0
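One direction of the bidirectional cross-attention described above, tokens of one modality (say, CFP) querying the tokens of the other (say, OCT), can be sketched as single-head attention in numpy; shapes, names, and the single-head simplification are illustrative, not taken from the paper:

```python
import numpy as np

def cross_attention(q_tokens, kv_tokens, Wq, Wk, Wv):
    """Single-head cross-attention: project one modality's tokens to
    queries and the other's to keys/values, then mix values by the
    scaled-dot-product attention weights."""
    Q, K, V = q_tokens @ Wq, kv_tokens @ Wk, kv_tokens @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # numerically stable row-wise softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

Running this in both directions (CFP queries OCT, then OCT queries CFP) and adding intra-modal self-attention gives the bidirectional block the abstract describes.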
BRAIN-SCN-PRO: A machine learning model for the classification of brain tumors using a convolutional neural network architecture
IF 4.9 | CAS Q2 | Medicine
Biomedical Signal Processing and Control Pub Date : 2025-05-02 DOI: 10.1016/j.bspc.2025.107942
Subrata Sinha , Saurav Mali , Amit Kumar Pathak , Sanchaita Rajkhowa
Accurate detection and classification of brain tumors from MRI images are pivotal in IoT healthcare systems, enabling early diagnosis and tailored treatment strategies. Deep learning algorithms, specifically convolutional neural networks (CNNs), have demonstrated significant potential for enhancing the accuracy of computer-aided diagnostic systems (CADS) for brain-tumor identification. This study developed a CNN-based model trained on a comprehensive dataset of 17,000 T1-weighted contrast-enhanced MRI scans to classify various brain-tumor types, achieving a classification accuracy of 99.37%. This level of accuracy suggests the model could serve as a decision-support system for radiologists, aiding swift and accurate diagnoses and the formulation of tailored treatment regimens. The work offers a highly accurate and easily accessible solution for brain-tumor classification in IoT healthcare systems, with the potential to improve early detection and management of brain tumors and, ultimately, patient outcomes and quality of life. The model is available as an Android application, BRAIN-SCN-PRO, on the Google Play Store (https://play.google.com/store/apps/details?id=com.ap360.brscn). (Volume 108, Article 107942)
Citations: 0
An SVD-based method for DBS artifact removal: High-fidelity restoration of local field potential
IF 4.9 | CAS Q2 | Medicine
Biomedical Signal Processing and Control Pub Date : 2025-05-02 DOI: 10.1016/j.bspc.2025.107908
Long Chen , Zhebing Ren , Jing Wang
Background and Objective: Deep brain stimulation (DBS) is widely used to treat neurological disorders. Recent work integrates DBS with local field potential (LFP) recordings to elucidate pathophysiological mechanisms and enhance therapeutic efficacy, but DBS pulse-induced artifacts severely contaminate LFP recordings and hinder accurate retrieval and analysis of neural signals. We propose an artifact-removal method based on singular value decomposition (SVD) that effectively removes DBS-induced artifacts, enabling high-fidelity restoration of LFP signals during the DBS procedure.
Methods: The DBS-contaminated LFP signal is detrended and z-score normalized using the pre-DBS segment as baseline. Artifacts are detected via a z-threshold and extended to include the post-pulse direct-current (DC) bias. The aligned segments are processed with SVD to extract and remove the artifact components, followed by linear interpolation to correct residual artifacts. The cleaned segments are then reinserted into the original signal to produce an artifact-free output. Validation is conducted on a synthetic dataset and on real-world animal and human recordings.
Results: The method achieves over 98% signal restoration on synthetic datasets, outperforming three common artifact-removal techniques at a comparable computational speed of ~200 ms, and successfully restores LFP features and identifies key biomarkers in both animal and human DBS data.
Conclusion: The proposed SVD-based method removes DBS artifacts and restores physiological signals with high fidelity. It shows strong potential for identifying neural biomarkers essential for DBS and brain-computer interfaces (BCI), enhancing their precision and advancing the understanding of neural mechanisms in neurological disorders. (Volume 108, Article 107908)
Citations: 0
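The SVD stage of the pipeline above exploits the fact that the artifact is stereotyped across pulses: stacking pulse-aligned windows into a matrix makes the artifact dominate the leading singular components, which can then be subtracted. A simplified sketch of that stage (the detrending, z-thresholding, DC-bias extension, and interpolation steps from the Methods are omitted, and all names are mine):

```python
import numpy as np

def remove_dbs_artifact(sig, pulse_idx, win, n_comp=1):
    """Remove a stereotyped stimulus artifact: stack pulse-aligned windows,
    zero the top n_comp singular components (shared artifact shape), and
    write the cleaned windows back into the signal."""
    sig = np.asarray(sig, dtype=float).copy()
    segs = np.stack([sig[i:i + win] for i in pulse_idx])   # (n_pulses, win)
    U, s, Vt = np.linalg.svd(segs, full_matrices=False)
    s[:n_comp] = 0.0                  # drop the dominant artifact components
    cleaned = U @ np.diag(s) @ Vt
    for row, i in zip(cleaned, pulse_idx):
        sig[i:i + win] = row
    return sig
```

Because the underlying LFP varies from pulse to pulse while the artifact does not, the rank-truncation removes far more artifact than signal; residuals are what the paper's interpolation step then handles.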
An explainable AI for breast cancer classification using vision Transformer (ViT)
IF 4.9 | CAS Q2 | Medicine
Biomedical Signal Processing and Control Pub Date : 2025-05-02 DOI: 10.1016/j.bspc.2025.108011
Marwa Naas , Hiba Mzoughi , Ines Njeh , Mohamed BenSlima
Manual classification of breast cancer (BC) under an optical microscope is an essential task in clinical routine that requires highly skilled pathologists. Computer-aided diagnosis (CAD) techniques based on deep learning (DL) have been developed to assist pathologists in making diagnostic decisions, but the black-box nature of these models, with their lack of interpretability and transparency, makes them difficult to apply in sensitive and critical medical settings. Explainable artificial intelligence (XAI) strategies both explain model predictions and help gain clinicians' trust. Current convolutional neural network (CNN) architectures have limitations in capturing the global feature information present in BC histopathology images. Vision Transformer (ViT) architectures were recently created to overcome this long-range-dependency challenge: their self-attention mechanism lets the network capture deep long-range dependencies between pixels. The present work develops an effective CAD tool for BC classification by training a deep ViT for binary lesion classification (malignant versus benign) on histopathology images. Several XAI techniques are implemented to highlight the features most important to the model's predictions: gradient-weighted class activation mapping (Grad-CAM), vanilla gradients, integrated gradients, saliency maps, local interpretable model-agnostic explanations (LIME), and attention maps. On the publicly accessible BreakHis benchmark, the proposed ViT delivers competitive performance, surpassing state-of-the-art CNN models on histopathological images, while providing precise and accurate interpretations that reinforce its reliability. The proposed CAD system can therefore be integrated into clinical diagnostic routines, offering enhanced support for medical professionals. (Volume 108, Article 108011)
Citations: 0
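Class activation mapping, the simpler ancestor of the Grad-CAM technique listed above, weights each final convolutional feature map by the classifier weight of the target class and sums them to show which regions drove the prediction. A minimal sketch of that computation (illustrative, not the authors' implementation, which operates on a trained network's tensors):

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Minimal CAM: given final-conv feature maps (C, H, W) and the target
    class's classifier weights (C,), return a normalized (H, W) heatmap
    of positive class evidence."""
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0)                                 # ReLU: keep positive evidence
    return cam / cam.max() if cam.max() > 0 else cam
```

Grad-CAM generalizes this by replacing the classifier weights with gradients of the class score with respect to the feature maps, which is what makes it applicable to architectures without a global-average-pooling head.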
Pseudo Label-Guided Data Fusion and output consistency for semi-supervised medical image segmentation
IF 4.9 | CAS Q2 | Medicine
Biomedical Signal Processing and Control Pub Date : 2025-05-02 DOI: 10.1016/j.bspc.2025.107956
Tao Wang , Xinlin Zhang , Yuanbin Chen , Yuanbo Zhou , Longxuan Zhao , Bizhe Bai , Tao Tan , Tong Tong
Supervised learning algorithms are the benchmark for medical image segmentation, but their effectiveness relies heavily on large amounts of labeled data, which are laborious and time-consuming to produce; semi-supervised methods are therefore increasingly popular. We propose the Pseudo Label-Guided Data Fusion framework, which builds on the mean-teacher network to segment medical images with limited annotation. A pseudo-labeling scheme combines labeled and unlabeled data to augment the dataset effectively. We additionally enforce consistency between different scales in the decoder module of the segmentation network and propose a loss function suited to evaluating that consistency, and a sharpening operation on the predicted results further improves segmentation accuracy. Extensive experiments on the Pancreas-CT, LA, BraTS2019, and BraTS2023 datasets demonstrate superior performance, with Dice scores of 80.90%, 89.80%, 85.47%, and 89.39%, respectively, when 10% of the data is labeled; this improves on MC-Net by 10.9%, 0.84%, 5.84%, and 0.63% on these datasets. The code is available at https://github.com/ortonwang/PLGDF. (Volume 108, Article 107956)
Citations: 0
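The Dice scores reported throughout these abstracts are the standard overlap metric for segmentation masks. For reference, the computation is:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|).
    The eps term keeps the ratio defined when both masks are empty."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

It equals 1 for identical masks and 0 for disjoint ones, and (unlike plain pixel accuracy) is insensitive to the large background class that dominates medical images.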