Biomedical Signal Processing and Control: Latest Articles

CardioECGNet: A novel deep learning architecture for accurate and automated ECG signal classification across diverse cardiac conditions
IF 4.9, Q2 (Medicine)
Biomedical Signal Processing and Control. Pub Date: 2025-02-23. DOI: 10.1016/j.bspc.2025.107720
Mohamed Hammad, Mohammed ElAffendi, Ahmed A. Abd El-Latif
Abstract: Cardiovascular diseases are a leading cause of death worldwide, so early detection of cardiac arrhythmia is crucial for effective treatment. Electrocardiogram (ECG) signals provide essential information about the heart's electrical activity, but analyzing them manually is time-consuming and requires specialized expertise. There is therefore a need for automated systems that can accurately classify ECG signals and detect arrhythmia. In this study, we introduce CardioECGNet, a new deep neural network architecture designed specifically for ECG signal classification and arrhythmia detection. CardioECGNet consists of four layers that simulate the electrical conduction system of the heart: the Sinoatrial (SA) layer, the Atrioventricular (AV) layer, the Bundle layer, and the Purkinje fibers layer. These layers enable the network to learn the complex patterns of ECG signals and classify them accurately. We evaluated the model on multiple case studies, yielding notable outcomes in the accurate classification of various cardiovascular conditions.
Citations: 0
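The abstract names the four conduction-system-inspired layers but gives no implementation details. A minimal sketch of such a four-stage feature-extraction cascade follows; the kernels, strides, and the synthetic ECG are illustrative assumptions, not the paper's architecture.

```python
# Hypothetical sketch of a four-stage cascade (SA -> AV -> Bundle -> Purkinje).
# All kernel values and strides are invented for illustration only.

def conv1d(signal, kernel, stride=1):
    """Valid-mode 1-D convolution, the basic building block of each stage."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(signal) - k + 1, stride)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def stage(signal, kernel, stride):
    """One conduction-system-inspired stage: convolution + ReLU."""
    return relu(conv1d(signal, kernel, stride))

def cardio_features(ecg):
    # Four cascaded stages; deeper stages see a progressively wider context,
    # loosely mirroring SA -> AV -> Bundle -> Purkinje signal propagation.
    sa       = stage(ecg, [0.25, 0.5, 0.25], stride=1)   # smoothing
    av       = stage(sa,  [-1.0, 0.0, 1.0],  stride=2)   # slope / edge
    bundle   = stage(av,  [0.5, 0.5],        stride=2)   # pooling-like
    purkinje = stage(bundle, [1.0, -1.0],    stride=1)   # fine detail
    return purkinje

# Toy ECG: an impulse train standing in for R-peaks.
ecg = [1.0 if i % 10 == 5 else 0.0 for i in range(40)]
feats = cardio_features(ecg)
print(len(feats))
```

A real classifier would append a fully connected softmax head over `feats`; the point of the sketch is only the cascaded, progressively downsampling structure.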
Global Partition with Local Enhancement network for multitask learning of malignant melanoma
IF 4.9, Q2 (Medicine)
Biomedical Signal Processing and Control. Pub Date: 2025-02-23. DOI: 10.1016/j.bspc.2025.107702
Yang Zuo, Chen Pang, Chunyu Hu, Chunmeng Kang, Hongbin Lv, Lei Lyu
Abstract: Accurate recognition of melanoma and non-melanoma lesions in dermoscopy images is crucial for skin cancer diagnosis. With advances in deep learning, automated skin lesion segmentation and classification have become two core components of computer-aided diagnosis systems. Despite their success, existing approaches treat these two tasks separately and ignore their complementary connection, often resulting in suboptimal performance. In this paper, we propose a Transformer-based Global Partition with Local Enhancement (GPLE) multitask network that integrates segmentation and classification into a unified framework. We first train the segmentation branch independently and then use its trained weights to initialize the classification branch. The segmentation and classification branches share an encoder, promoting efficient feature extraction through shared learning between the two tasks. Within the shared encoder, we improve the shifted window partition method by adding a variant shifted window configuration to the Swin Transformer block. We also update the attention calculation by exploiting the scale and unit invariance of sine functions, which provides robustness to variations in lesion size and shape. To enhance lesion detection across stages of the Swin Transformer block, a Global Multi-resolution Fusion (GMF) module is devised. Extensive evaluations on the ISIC2016, ISIC2017, and ISIC2018 datasets demonstrate GPLE's effectiveness on both tasks. For segmentation, GPLE achieves a Dice coefficient of 0.903 and a Jaccard index of 0.872, surpassing current state-of-the-art methods by 1.3% and 1.6%, respectively. In classification, GPLE achieves a precision of 0.835. These results highlight GPLE's potential to enhance melanoma diagnosis through lesion analysis.
Citations: 0
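The shifted window partition that GPLE builds on can be sketched in a few lines: tokens are grouped into non-overlapping windows, and a cyclic shift before partitioning changes the grouping so information flows across window boundaries. The paper's *variant* shift configuration is not specified in the abstract; the shift of `window_size // 2` below is the standard Swin choice and is an assumption here.

```python
# Minimal sketch of (shifted) window partitioning in Swin-style encoders.

def cyclic_shift(grid, shift):
    """Roll a 2-D grid by `shift` rows and columns (a torch.roll equivalent)."""
    h, w = len(grid), len(grid[0])
    return [[grid[(r + shift) % h][(c + shift) % w] for c in range(w)]
            for r in range(h)]

def window_partition(grid, ws):
    """Split an h x w grid into non-overlapping ws x ws windows."""
    h, w = len(grid), len(grid[0])
    return [[grid[r0 + r][c0 + c] for r in range(ws) for c in range(ws)]
            for r0 in range(0, h, ws) for c0 in range(0, w, ws)]

# A 4x4 grid of token ids, window size 2 (so the shift is ws // 2 = 1).
grid = [[r * 4 + c for c in range(4)] for r in range(4)]
plain   = window_partition(grid, 2)
shifted = window_partition(cyclic_shift(grid, 1), 2)
print(plain[0])    # tokens grouped by the regular partition
print(shifted[0])  # a different grouping -> cross-window information flow
```

Self-attention is then computed within each window independently; alternating plain and shifted partitions across blocks is what gives the encoder a global receptive field.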
Optimized theory-guided convolutional neural network for lung cancer classification using CT images with advanced FPGA implementation
IF 4.9, Q2 (Medicine)
Biomedical Signal Processing and Control. Pub Date: 2025-02-22. DOI: 10.1016/j.bspc.2025.107719
S. Manikandan, P. Karthigaikumar
Abstract: Lung carcinoma, commonly referred to as lung cancer, is a severe disease with a high global mortality rate, caused by the uncontrolled growth of cells in lung tissue. Detecting and treating lung cancer early is important for curing it, and diagnosis commonly relies on Computed Tomography (CT) scans and blood tests. However, accurately detecting and classifying pulmonary nodules in CT images remains a challenge due to the complexity of the data, high computational demands, and the need for real-time processing. Existing systems often face limitations such as high power consumption, prolonged processing times, and scalability issues, reducing their effectiveness in clinical environments. To overcome these challenges, this manuscript proposes an Optimized Theory-Guided Convolutional Neural Network for Lung Cancer Classification using CT Images with Advanced FPGA Implementation (OTCNN-LCT-FPGA). CT images from the LIDC-IDRI dataset are pre-processed using the Variational Bayesian Robust Adaptive Filtering (VBRAF) technique, which removes noise and converts RGB images into binary format. The pre-processed images are classified as benign or malignant using a Theory-Guided Convolutional Neural Network (TCNN). The Polar Coordinate Bald Eagle Search Algorithm (PBESA) is introduced to optimize the weight parameters of the TCNN while reducing resource utilization and increasing processing speed. The TCNN classifier is implemented on a Field-Programmable Gate Array (FPGA) to further decrease computation time. The proposed OTCNN-LCT-FPGA method demonstrates significant improvements: compared with the existing FCFNN-LCC, ISNeT-DLC-CT, and DNN-LCC-EOS methods, it achieves 6.26%, 7.22%, and 5.27% higher specificity and 2.96%, 3.46%, and 5.80% higher F1-score, respectively.
Citations: 0
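The abstract names PBESA for tuning TCNN weights without giving its update rules, so the following shows only the generic skeleton of the metaheuristic family it belongs to: candidate weight vectors are proposed in polar coordinates around the current best and kept when they improve a fitness score. The objective, population size, and shrinking-radius schedule are all invented for illustration.

```python
# Toy polar-coordinate search, a stand-in for the paper's PBESA optimizer.

import math
import random

def fitness(w):
    """Stand-in objective: negative squared distance to a fake optimum.
    In the paper this role is played by TCNN classification performance."""
    target = [0.3, -0.7]
    return -sum((wi - ti) ** 2 for wi, ti in zip(w, target))

def polar_search(pop=20, iters=100, radius=1.0, seed=3):
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    for it in range(iters):
        r = radius * (1 - it / iters)            # shrinking search radius
        for _ in range(pop):
            theta = rng.uniform(0, 2 * math.pi)  # polar proposal around best
            cand = [best[0] + r * math.cos(theta),
                    best[1] + r * math.sin(theta)]
            if fitness(cand) > fitness(best):
                best = cand
    return best

best = polar_search()
print(best)  # converges near the fake optimum [0.3, -0.7]
```

Gradient-free searches like this are attractive for FPGA targets because evaluation needs only forward passes, but this sketch should not be read as the paper's actual algorithm.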
Sift-BCD: SIFT-CNN integrated machine learning-based breast cancer detection
IF 4.9, Q2 (Medicine)
Biomedical Signal Processing and Control. Pub Date: 2025-02-22. DOI: 10.1016/j.bspc.2025.107686
Vimala Mannarsamy, Ponnrajakumari Mahalingam, Thilagam Kalivarathan, K Amutha, Ranjith Kumar Paulraj, S. Ramasamy
Abstract: Globally, breast cancer (BC) has become a leading cause of death among women, emphasizing the need for early detection systems. Early detection is key to the best treatment outcomes and to saving lives. Medical imaging techniques are extensively used to diagnose and detect BC, but manually diagnosing each image pattern is time-consuming. To overcome this issue, a novel SIFT-CNN Integrated Fuzzy Decision Tree based Breast Cancer Detection (SIFT-BCD) method is proposed for identifying BC cases at an early stage with minimal time. Mammogram images are taken from the CBIS-DDSM dataset. The proposed SIFT-BCD method has four phases: pre-processing, segmentation, feature extraction, and classification. The mammogram images are passed through a trilateral filter to eliminate noisy distortions, an ROI-based U-Net segments the relevant areas of the noise-free images, and SIFT-CNN retrieves fine features from the mammogram images. A fuzzy decision tree then classifies the mammography images into three classes: malignant, benign, and normal. Metrics including accuracy, specificity, and sensitivity are used to evaluate the overall efficiency of the proposed method. SIFT-BCD achieves an accuracy of 99.20% for identifying BC at an early stage, improving overall accuracy by 2.7%, 0.87%, 3.5%, and 0.92% over Modified YOLOv5, IPBCS-DL, Modified AlexNet DCNN, and BCCNN, respectively.
Citations: 0
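To make the final fuzzy classification step concrete, here is a minimal sketch of fuzzy membership scoring over a single aggregated feature value, mapped to the three classes named above. The real fuzzy decision tree in SIFT-BCD operates on many SIFT-CNN features; the score range and triangular breakpoints below are invented for illustration.

```python
# Triangular fuzzy membership over a hypothetical aggregated lesion score.

def tri(x, a, b, c):
    """Triangular membership: 0 at a and c, peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify(score):
    # Overlapping membership functions; the class with the highest
    # membership wins (a crisp defuzzification by maximum).
    memberships = {
        "normal":    tri(score, -0.5, 0.0, 0.5),
        "benign":    tri(score,  0.2, 0.5, 0.8),
        "malignant": tri(score,  0.5, 1.0, 1.5),
    }
    return max(memberships, key=memberships.get)

print(classify(0.1), classify(0.5), classify(0.95))
```

The overlap between adjacent membership functions is what lets a fuzzy classifier express borderline cases that a hard threshold would force into one class.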
MS-HyFS: A novel multiscale hybrid framework with Scalable electrodes for motor imagery classification
IF 4.9, Q2 (Medicine)
Biomedical Signal Processing and Control. Pub Date: 2025-02-21. DOI: 10.1016/j.bspc.2025.107706
Ziheng Guo, Yuan Feng, Ming Ma, Yudi Sun, Likun Xia
Abstract: Hybrid deep neural networks have been developed to enrich spatial-temporal features extracted from electroencephalogram (EEG) signals for motor imagery (MI) classification. However, these networks primarily build subject-independent models that disregard individual variations, such as differing reaction times within a fixed time window, and provide insufficient spatial information owing to differences in brain functional connectivity. Analyzing such complex networks can also incur significant computational cost. This study proposes a novel Multiscale Hybrid Framework with Scalable electrodes (MS-HyFS). It combines a multiscale filter bank CSP (MS-FBCSP) algorithm, which addresses the fixed-time-window issue by extracting multiscale CSP features, with a multiscale hybrid network built from a 1D-CNN and an LSTM (MS-HyCaL) that enriches spatial-temporal features from local and global perspectives. Computational cost is reduced by selecting critical electrodes based on the brain's asymmetric properties and neural activity areas. MS-HyFS was evaluated on two publicly available EEG datasets (BCIIV-2a and BCIIV-2b). Each dataset is divided into training and test sets using an 8:2 ratio, and the training data are further split by fivefold cross-validation (CV); the model with the highest validation accuracy among the five folds is selected, retrained on the full training set, and then evaluated on the test data. This approach achieved average classification accuracies of 84.3% and 64.0% on the BCIIV-2a and BCIIV-2b datasets, respectively. Experimental results showed MS-HyFS to be superior to state-of-the-art algorithms.
Citations: 0
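The evaluation protocol described in the abstract (an 8:2 train/test split followed by fivefold CV on the training portion) can be sketched with index bookkeeping alone. Dataset loading and the model itself are omitted; integer indices stand in for EEG trials.

```python
# Sketch of the 8:2 split + fivefold cross-validation protocol.

import random

def split_indices(n, test_ratio=0.2, seed=0):
    """Shuffle trial indices and carve off the held-out test set."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_test = int(n * test_ratio)
    return idx[n_test:], idx[:n_test]          # train, test

def five_folds(train_idx, k=5):
    """Yield (fit, validate) index pairs for k-fold CV."""
    folds = [train_idx[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        fit = [j for f, fold in enumerate(folds) if f != i for j in fold]
        yield fit, val

train_idx, test_idx = split_indices(100)
for fit, val in five_folds(train_idx):
    # Train a candidate model on `fit`, score it on `val`, keep the best.
    assert len(fit) + len(val) == len(train_idx)
    assert not set(fit) & set(val)
# The best fold's model is then retrained on all of train_idx and
# reported once on the untouched test_idx.
print(len(train_idx), len(test_idx))
```

Retraining the selected model on the full training set before touching the test set is what keeps the reported accuracy an unbiased estimate.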
SAM-OCTA: Prompting segment-anything for OCTA image segmentation
IF 4.9, Q2 (Medicine)
Biomedical Signal Processing and Control. Pub Date: 2025-02-21. DOI: 10.1016/j.bspc.2025.107698
Xinrun Chen, Chengliang Wang, Haojian Ning, Shiying Li, Mei Shen
Abstract: Detailed analysis of local, specific biomarkers in optical coherence tomography angiography (OCTA) images is essential for medical diagnosis, yet current methods primarily focus on global segmentation, such as of the retinal vessel (RV) network. We propose SAM-OCTA, which fine-tunes the Segment Anything Model (SAM) with low-rank adaptation (LoRA) for segmentation tasks in OCTA. Our method enhances SAM's semantic comprehension and prompt mechanism for OCTA en-face images and achieves a more flexible segmentation approach. The experiments explore the impact of prompt points in both global and local segmentation modes on the OCTA-500 and ROSE-O datasets, using random-selection and special-annotation prompt-generation strategies. With practical usage in mind, we evaluate model feasibility at smaller scales and demonstrate the necessity of fine-tuning. Comprehensive experiments demonstrate that SAM-OCTA achieves state-of-the-art performance in RV and FAZ segmentation and excels in artery-vein and localized single-vessel segmentation. The code is available at https://github.com/ShellRedia/SAM-OCTA-extend.
Citations: 0
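A minimal numeric sketch of low-rank adaptation (LoRA), the fine-tuning technique SAM-OCTA applies to SAM: a frozen weight matrix W is augmented with a trainable low-rank product B·A, so only r·(d_in + d_out) parameters are trained per layer. The tiny dimensions below are for illustration; the scaling factor and zero-initialization of B follow the original LoRA formulation, not anything specific to this paper.

```python
# LoRA forward pass: y = W x + (alpha / r) * B (A x).

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=1.0, r=1):
    """W is frozen; only the low-rank factors A and B are trained."""
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))
    return [b + (alpha / r) * d for b, d in zip(base, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen 2x2 weight
A = [[0.5, 0.5]]               # rank-1 down-projection (1x2)
B_zero    = [[0.0], [0.0]]     # up-projection, initialised to zero (2x1)
B_trained = [[0.1], [0.2]]     # after some hypothetical training steps
x = [2.0, 4.0]

y0 = lora_forward(W, A, B_zero, x)     # B = 0: exact no-op vs. frozen model
y1 = lora_forward(W, A, B_trained, x)  # adapted path adds a correction
print(y0, y1)
```

Zero-initializing B guarantees fine-tuning starts from the pretrained model's behavior, which is why LoRA can adapt a large foundation model like SAM with a small fraction of its parameters.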
Repeated interval random frog (RIRF) algorithm based on FTIR and application to quantitative analysis of serum proteins
IF 4.9, Q2 (Medicine)
Biomedical Signal Processing and Control. Pub Date: 2025-02-21. DOI: 10.1016/j.bspc.2025.107730
Yansong Yue, Ruojing Zhang, Yuxiang Yang, Zhushanying Zhang, Yuan Gao, Huimin Cao
Abstract: Infrared spectroscopy is widely used for biomarker detection because it is non-invasive, rapid, and sensitive, but current methods exhibit limited stability in feature band extraction. This study introduces the repeated interval random frog (RIRF) algorithm to optimize the extraction of protein feature bands from serum. FTIR spectra of 66 collected blood samples were used for the study, and the interval random frog (IRF) algorithm was run several times to obtain different subsets of feature variables. The feature bands in these subsets were analyzed, and those with a high contribution to protein content prediction were selected for fusion, yielding bands with both high contribution rates and high stability and thereby improving the prediction accuracy of the quantitative model. In comparative analysis, RIRF outperformed IRF, CARS, SPA, and UVE for apolipoprotein B extraction, increasing the training correlation coefficient (Rc) and prediction correlation coefficient (Rp) to 0.9144 and 0.8504, respectively, while reducing the number of features from 137 to 77. Feature fusion for albumin, apolipoprotein, C-reactive protein, total protein, and immunoglobulin A also significantly improved the models' predictive power: the Rp of albumin improved from 0.9515 (best single extraction) to 0.9601, that of apolipoprotein A1 from 0.6526 to 0.7312, that of C-reactive protein from 0.7532 to 0.8214, and that of total protein from 0.9595 to 0.9691. The RIRF algorithm significantly improves prediction accuracy and provides an important reference for biomarker detection and infrared spectral analysis.
Citations: 0
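The "run the selector repeatedly, then fuse" idea behind RIRF can be sketched generically: a stochastic band selector is executed many times, bands are ranked by how often they are chosen across runs, and only bands above a frequency threshold survive. The selector below is a toy stand-in for one IRF run (the real IRF uses a reversible-jump search scored by PLS model performance); all probabilities and the threshold are illustrative.

```python
# Repeated stochastic selection + frequency-based fusion of feature bands.

import random
from collections import Counter

INFORMATIVE = {3, 7, 11}   # toy "true" protein bands

def toy_selector_run(n_bands, rng):
    """One stochastic run: informative bands are picked with p = 0.9,
    noise bands with p = 0.2 (probabilities invented for the demo)."""
    return {b for b in range(n_bands)
            if rng.random() < (0.9 if b in INFORMATIVE else 0.2)}

def repeated_fusion(n_bands=16, n_runs=200, keep_frac=0.6, seed=1):
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_runs):
        counts.update(toy_selector_run(n_bands, rng))
    # Fuse: keep bands selected in at least keep_frac of the runs.
    return sorted(b for b in range(n_bands)
                  if counts[b] / n_runs >= keep_frac)

stable = repeated_fusion()
print(stable)
```

Averaging over runs is what buys the stability the abstract emphasizes: a band that a single run picks by chance is unlikely to be picked in most of 200 runs.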
Remote heart rate measurement based on video color magnification and spatiotemporal self-attention
IF 4.9, Q2 (Medicine)
Biomedical Signal Processing and Control. Pub Date: 2025-02-21. DOI: 10.1016/j.bspc.2025.107677
Ning Sun, Peixian He, Jixin Liu, Lei Chai, Cong Wu, Xiujuan Liu
Abstract: Remote photoplethysmography (rPPG) for heart rate measurement has garnered significant attention due to its non-contact advantages. The challenge in video-based remote heart rate measurement lies in accurately capturing subtle changes in facial color. We propose an end-to-end deep learning model named the Video Color Magnification and Spatiotemporal Feature Extraction Network (VS-Net). VS-Net comprises three main modules: video color magnification, spatiotemporal self-attention feature extraction, and contrastive learning. The video color magnification module, implemented as a deep neural network, first magnifies subtle facial color changes in the input video. The magnified color features are then fed into the spatiotemporal self-attention feature extraction module, which combines a multi-head self-attention mechanism with convolutional neural networks to model information exchange across magnified video frames both locally and globally, capturing long-term dependencies and extracting spatiotemporal features. Additionally, a contrastive learning module is designed to improve weak-signal detection in facial videos: by generating positive and negative samples through video frequency resampling, the model captures similarities and differences among input samples and learns more robust semantic feature representations. Comprehensive experiments were conducted on three public datasets: UBFC-RPPG, PURE, and MAHNOB-HCI. The results demonstrate that VS-Net effectively extracts rPPG signals from facial videos and outperforms state-of-the-art methods in heart rate measurement.
Citations: 0
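The contrastive-sample generation by frequency resampling named in the abstract can be illustrated on a 1-D pulse signal: stretching a clip in time lowers its apparent heart rate, yielding a negative sample, while the unstretched clip retains the original rate. The paper's exact resampling factors are not stated; the factor 1.5 and the 72 bpm toy pulse below are assumptions.

```python
# Frequency resampling of a toy pulse signal for contrastive learning.

import math

def resample(signal, factor):
    """Linear-interpolation resampling; factor > 1 slows the signal down,
    lowering its apparent frequency at a fixed playback rate."""
    n_out = int(len(signal) * factor)
    out = []
    for i in range(n_out):
        t = i / factor
        lo = min(int(t), len(signal) - 1)
        hi = min(lo + 1, len(signal) - 1)
        frac = t - lo
        out.append(signal[lo] * (1 - frac) + signal[hi] * frac)
    return out

def zero_crossings(signal):
    return sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)

def est_bpm(signal, fs):
    """Crude rate estimate: two zero crossings per cardiac cycle."""
    return zero_crossings(signal) / 2 / (len(signal) / fs) * 60

fs = 30.0                                                        # 30 fps video
pulse = [math.sin(2 * math.pi * 1.2 * i / fs) for i in range(300)]  # 1.2 Hz = 72 bpm
negative = resample(pulse, 1.5)   # stretched: apparent rate drops toward ~48 bpm
print(round(est_bpm(pulse, fs)), round(est_bpm(negative, fs)))
```

Because the negative sample shares everything with the original except its periodicity, the contrastive loss forces the encoder to attend specifically to the pulse frequency, which is the weak signal of interest.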
PKMT-Net: A pathological knowledge-inspired multi-scale transformer network for subtype prediction of lung cancer using histopathological images
IF 4.9, Q2 (Medicine)
Biomedical Signal Processing and Control. Pub Date: 2025-02-21. DOI: 10.1016/j.bspc.2025.107742
Zhilei Zhao, Shuli Guo, Lina Han, Gang Zhou, Jiaoyu Jia
Abstract: Precise subtyping of lung cancer remains a significant and challenging task in clinical practice, and existing computer-aided diagnostic systems often overlook complex, specialized medical knowledge. In response, a Pathological Knowledge-inspired Multi-scale Transformer Network (PKMT-Net) is proposed for predicting lung cancer subtypes from histopathological images. PKMT-Net consists of three key modules: a multi-scale soft segmentation module, a cross-attention module, and a weighted multi-scale fusion module. The multi-scale soft segmentation module simulates a pathologist reading histopathological images at various scales, capturing both macroscopic and microscopic characteristics, and implements a novel soft patch generation strategy to mitigate semantic information loss. The cross-attention module, equipped with skip connections, emulates how a pathologist correlates macroscopic and microscopic tumor characteristics. Finally, the weighted multi-scale fusion module models the pathologist's decision-making process by integrating macroscopic and microscopic characteristics. After iterative training, PKMT-Net delivered outstanding performance, attaining Area Under the Curve (AUC) values of 0.9992 on the training set, 0.9959 on the validation set, and 0.9970 on an unseen test set; compared to single-scale models, its AUC improved by at least 0.0210. The model's interpretability, clinical utility, and ablation studies were evaluated comprehensively, and its generalizability was demonstrated on additional datasets. These results underscore the feasibility and high performance of PKMT-Net for processing histopathology images. The supporting code can be found at: https://github.com/zzl2022/PKMT-Net.
Citations: 0
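Multi-scale "soft" patch generation can be sketched as overlapping tiling: unlike hard tiling, patches are extracted with a stride smaller than the patch size at several scales, so structures straddling a tile boundary are not cut off. The paper's exact scales and overlap ratio are not given in the abstract; patch sizes 128/256 and 50% overlap below are assumptions for illustration.

```python
# Overlapping ("soft") patch coordinates at multiple scales.

def soft_patches(width, height, patch, overlap=0.5):
    """Return top-left (x, y) coordinates of overlapping patches."""
    stride = max(1, int(patch * (1 - overlap)))
    xs = range(0, max(width - patch, 0) + 1, stride)
    ys = range(0, max(height - patch, 0) + 1, stride)
    return [(x, y) for y in ys for x in xs]

def multi_scale_patches(width, height, scales=(128, 256)):
    # Small patches capture microscopic texture; large patches capture
    # macroscopic tissue architecture, mirroring a pathologist's workflow.
    return {s: soft_patches(width, height, s) for s in scales}

grid = multi_scale_patches(512, 512)
print(len(grid[128]), len(grid[256]))
```

With 50% overlap, the 512x512 image yields far more patches than hard tiling would (e.g. 49 rather than 16 at the 128 scale), which is the redundancy that mitigates semantic information loss at tile borders.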
DFFKI-Net: A dual-domain feature fusion deep convolutional neural network for under-sampled MR image reconstruction
IF 4.9, Q2 (Medicine)
Biomedical Signal Processing and Control. Pub Date: 2025-02-21. DOI: 10.1016/j.bspc.2025.107732
Fuqiang Lu, Xia Xiao, Zengxiang Wang, Yu Liu, Jiannan Zhou
Abstract: Reducing the number of k-space sampling points by specifying sampling ratios and paths is essential for accelerating MRI, but overcoming the image-quality degradation caused by under-sampling remains a significant challenge. Convolutional neural networks (CNNs) that operate in either the k-space or the image domain alone fail to exploit the complementary information available in both domains, limiting reconstruction accuracy. Moreover, cross-domain reconstruction networks with dual-domain features frequently rely on repetitive stacking of identical interaction modules and lack the multi-scale, multi-resolution feature compensation needed throughout the interaction process. To address these issues, we propose a dual-domain feature fusion deep convolutional neural network (DFFKI-Net), designed as a three-part cascade consisting of a K-space Domain Reconstruction Module (KDRM), an Image Domain Reconstruction Module (IDRM), and a Deep Parallel Cross-domain Feature Fusion Module (DPCFFM). The under-sampled k-space data is first processed in the KDRM to improve the completeness of global frequency information; the IDRM then extracts image details and edge features. A key innovation is the cross-module compensation of multi-scale wavelet features from the KDRM to the IDRM, which reduces frequency-information loss during propagation. Finally, the DPCFFM enables efficient cross-domain feature interaction and fusion for more comprehensive feature extraction. Experimental results on the fastMRI and CC359 datasets show that DFFKI-Net adapts to different under-sampling rates and types of MRI data, achieving high reconstruction accuracy with limited parameters: an NMSE of 0.0272, a PSNR of 33.5 dB, and an SSIM of 0.7803 on the fastMRI 4x single-coil knee dataset, with a total parameter count less than one-fifteenth that of the best competing method.
Citations: 0
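The acquisition setting DFFKI-Net reconstructs from can be sketched as a Cartesian under-sampling mask: a fraction of phase-encoding lines is kept, with the low-frequency center fully sampled. The 4x acceleration and 8% center fraction below follow fastMRI's common settings and are stated as assumptions rather than the paper's exact configuration.

```python
# Cartesian k-space under-sampling mask over phase-encoding lines.

import random

def cartesian_mask(n_lines, acceleration=4, center_fraction=0.08, seed=0):
    """Return a boolean keep/drop mask over phase-encoding lines."""
    rng = random.Random(seed)
    n_center = round(n_lines * center_fraction)
    lo = (n_lines - n_center) // 2
    # Fully sample the low-frequency centre, which carries image contrast.
    mask = [lo <= i < lo + n_center for i in range(n_lines)]
    # Randomly sample outer lines so the overall keep ratio is 1/acceleration.
    n_keep = n_lines // acceleration - n_center
    outside = [i for i in range(n_lines) if not mask[i]]
    for i in rng.sample(outside, max(n_keep, 0)):
        mask[i] = True
    return mask

mask = cartesian_mask(320)
print(sum(mask), len(mask))   # kept lines vs. total (a quarter at 4x)
```

Dropping lines this way aliases the image, and a dual-domain network like DFFKI-Net attacks the problem from both ends: the k-space branch infers the missing frequency lines while the image branch removes the residual aliasing artifacts.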