Latest Articles in Biomedical Signal Processing and Control

A review of deep learning-based segmentation and registration of infant brain MRI
IF 4.9 · CAS Zone 2 (Medicine)
Biomedical Signal Processing and Control Pub Date: 2025-10-03 DOI: 10.1016/j.bspc.2025.108676
Yijing Fang, Shirui Wang, Yihan Zhang, Chengxiu Yuan, Li Song, Peng Zhao, Fei Su, Jun Liu, Liang Wu
Abstract: Magnetic resonance imaging (MRI) of the infant brain plays an important role in studying neonatal brain development and in diagnosing and treating early brain diseases. Deep learning (DL)-based processing of adult brain MR images has developed relatively rapidly, with segmentation and registration being the most common techniques. The distinctive structure of the infant brain and its rapid changes during development make segmentation and registration of infant brain MR images challenging. This review critically assesses recent advances in infant brain MR image segmentation and registration. It discusses the performance of different DL models for processing infant brain MR images and the metrics used to evaluate segmentation and registration algorithms. More than 100 papers in related fields are selected and discussed, covering data processing, neural network architectures, and attention mechanisms. We also provide an outlook on the future of infant brain MR image processing.
Citations: 0
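The review discusses metrics for evaluating segmentation and registration algorithms. The most widely used of these, the Dice similarity coefficient, can be sketched as follows (an illustrative implementation, not code from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Two 4x4 masks covering 8 pixels each, overlapping on 4
a = np.zeros((4, 4)); a[:2, :] = 1
b = np.zeros((4, 4)); b[1:3, :] = 1
print(round(dice_coefficient(a, b), 3))  # → 0.5
```

Dice ranges from 0 (no overlap) to 1 (identical masks) and is the standard headline metric in the segmentation papers this review covers.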
MT-DENet: Prediction of post-therapy OCT images in diabetic macular edema by multi-temporal disease evolution network
IF 4.9 · CAS Zone 2 (Medicine)
Biomedical Signal Processing and Control Pub Date: 2025-10-03 DOI: 10.1016/j.bspc.2025.108721
Xiaohui Li, Kun Huang, Yuhan Zhang, Songtao Yuan, Sijie Niu, Qiang Chen
Abstract: Predicting in advance how patients with diabetic macular edema (DME) will respond to anti-vascular endothelial growth factor (anti-VEGF) therapy is of great clinical importance, as it can support more informed treatment decisions. However, most existing works rely on a single follow-up scan, which fails to capture patient-specific factors such as individual variability and lifestyle habits, introducing considerable uncertainty into predictions. Furthermore, current models rarely incorporate prior medical knowledge of disease progression, often producing anatomically implausible retinal structures. To address these issues, we propose MT-DENet, a multi-temporal disease evolution network that forecasts post-therapy optical coherence tomography (OCT) images from pre-therapy multi-temporal OCT data. Specifically, we develop a multi-temporal cascaded graph evolution module that separately extracts features from each follow-up and sequentially evolves disease progression using a graph network and a weighted fusion of features from the current and previous time points. This design allows the model to capture patient-specific lesion evolution trends and guide subsequent prediction. In addition, we incorporate prior knowledge of anti-VEGF treatment effects into the framework and introduce a feature-similarity prior constraint to reduce structural aberrations such as abnormal retinal structures and distorted local details. Extensive experiments on a prospective clinical DME trial dataset demonstrate that our method generates accurate and anatomically reliable OCT predictions, outperforming state-of-the-art baselines in both image quality and lesion volume estimation. The implementation is publicly available at: https://github.com/bemyself96/MT-DENet.
Citations: 0
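The cascaded evolution step described in the abstract fuses features from the current and previous time points with a weighted sum. A minimal NumPy sketch of that fusion (the fixed weight `alpha` is an assumption here; the paper's module learns its weighting):

```python
import numpy as np

def fuse_temporal(prev_feat, curr_feat, alpha=0.6):
    """Weighted fusion of feature maps from the previous and current follow-ups."""
    return alpha * curr_feat + (1 - alpha) * prev_feat

# Three follow-up feature maps (channels x H x W), constant-valued for clarity
feats = [np.full((64, 32, 32), v) for v in (1.0, 2.0, 3.0)]
state = feats[0]
for f in feats[1:]:          # sequentially evolve across follow-ups
    state = fuse_temporal(state, f)
print(state[0, 0, 0])  # → 2.44
```

Each step keeps a fraction of the accumulated history, so the final state reflects the whole visit sequence rather than only the last scan.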
MACA-Net: Multi-aperture curvature aware network for instance-nuclei segmentation
IF 4.9 · CAS Zone 2 (Medicine)
Biomedical Signal Processing and Control Pub Date: 2025-10-02 DOI: 10.1016/j.bspc.2025.108711
Siyavash Shabani, Sahar A Mohammed, Muhammad Sohaib, Bahram Parvin
Abstract: Nuclei instance segmentation is one of the most challenging tasks in automated pathology and is considered its first step. The challenges stem from technical and biological variations and from high cellular density, which leads adjacent nuclei to form perceptual boundaries. This paper demonstrates that a multi-aperture representation encoded by the fusion of Swin Transformers and convolutional blocks improves nuclei segmentation. The loss function is augmented with curvature and centroid consistency terms between the ground truth and the prediction to preserve morphometric fidelity and localization. These terms penalize loss of shape localization (a mid-level attribute) and mismatches in low- and high-frequency boundary events (a low-level attribute). The proposed model is evaluated on three publicly available datasets (PanNuke, MoNuSeg, and CPM17), reporting improved Dice and binary Panoptic Quality (PQ) scores. For example, the PQ scores for PanNuke, MoNuSeg, and CPM17 are 0.6888 ± 0.032, 0.634 ± 0.003, and 0.716 ± 0.002, respectively. The code is available at https://github.com/Siyavashshabani/MACA-Net.
Citations: 0
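Binary Panoptic Quality, the metric reported above, scores detection and segmentation jointly: the sum of IoUs over true-positive matches divided by TP + FP/2 + FN/2. A hedged sketch (the IoU > 0.5 matching convention is the standard one, not necessarily the paper's exact protocol):

```python
def binary_pq(matched_ious, unmatched_pred, unmatched_gt, thresh=0.5):
    """Binary Panoptic Quality from per-pair IoUs of matched instances plus
    counts of unmatched predicted and ground-truth instances."""
    tp_ious = [iou for iou in matched_ious if iou > thresh]
    tp = len(tp_ious)
    # a match below the threshold counts as both a false positive and a miss
    fp = unmatched_pred + (len(matched_ious) - tp)
    fn = unmatched_gt + (len(matched_ious) - tp)
    denom = tp + 0.5 * fp + 0.5 * fn
    return sum(tp_ious) / denom if denom else 0.0

# two well-matched nuclei, one spurious prediction, one missed nucleus
print(round(binary_pq([0.9, 0.8], unmatched_pred=1, unmatched_gt=1), 3))  # → 0.567
```

PQ therefore drops both when boundaries are sloppy (lower IoUs in the numerator) and when instances are hallucinated or missed (larger denominator).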
Automated multimodal severity assessment of diabetic retinopathy using ultra-widefield color fundus photography and clinical tabular data
IF 4.9 · CAS Zone 2 (Medicine)
Biomedical Signal Processing and Control Pub Date: 2025-10-01 DOI: 10.1016/j.bspc.2025.108673
Alireza Rezaei, Sarah Matta, Rachid Zeghlache, Pierre-Henri Conze, Capucine Lepicard, Pierre Deman, Laurent Borderie, Deborah Cosette, Sophie Bonnin, Aude Couturier, Béatrice Cochener, Mathieu Lamard, Mostafa El Habib Daho, Gwenolé Quellec
Abstract: This study introduces an automatic deep-learning-based approach to diabetic retinopathy (DR) severity assessment that integrates two modalities: Ultra-Widefield Color Fundus Photography (UWF-CFP) from the CLARUS 500 device (Carl Zeiss Meditec Inc., Dublin, CA, USA) and a comprehensive set of clinical data from the EVIRED project. We propose a framework that combines information from 2D UWF-CFP images with 76 tabular features, including demographic, biochemical, and clinical parameters, to enhance the classification accuracy of DR stages. Our model uses advanced machine learning techniques to address the complexities of synthesizing heterogeneous data types, providing a holistic view of patient health status. Results indicate that this fusion outperforms traditional methods that rely solely on imaging or clinical data, yielding a robust model that can provide practitioners with a supportive second opinion on DR severity, particularly useful in screening workflows. The fusion model achieves a multiclass accuracy of 63.4% and a kappa of 0.807, which is 2.1% higher in accuracy and 0.022 higher in kappa than the image-only unimodal classifier. Several interpretation methods give practitioners an inside view of the workings of the classifiers and allow them to discover the most important clinical features.
Citations: 0
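The kappa figure quoted above measures chance-corrected agreement between predicted and true DR grades. An unweighted Cohen's kappa can be computed as below (the study may use a weighted variant for ordinal grades; this sketch shows the basic statistic):

```python
import numpy as np

def cohens_kappa(y_true, y_pred, n_classes):
    """Unweighted Cohen's kappa: agreement corrected for chance."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                      # confusion matrix: rows=true, cols=pred
    n = cm.sum()
    po = np.trace(cm) / n                  # observed agreement
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 1, 2, 1]
print(round(cohens_kappa(y_true, y_pred, 3), 3))  # → 0.75
```

Unlike raw accuracy, kappa stays near zero for a classifier that only exploits class frequencies, which is why it is reported alongside accuracy here.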
FVCM-Net: Interpretable privacy-preserved attention driven lung cancer detection from CT scan images with explainable HiRes-CAM attribution map and ensemble learning
IF 4.9 · CAS Zone 2 (Medicine)
Biomedical Signal Processing and Control Pub Date: 2025-10-01 DOI: 10.1016/j.bspc.2025.108719
Abu Sayem Md Siam, Md. Mehedi Hasan, Yeasir Arafat, Md Muzadded Chowdhury, Sayed Hossain Jobayer, Fahim Hafiz, Riasat Azim
Abstract: Lung cancer is a predominant cause of cancer-related deaths globally, and early detection is essential for improving patient prognosis. Deep learning models with attention mechanisms have shown promising accuracy in detecting lung cancer from medical imaging data. However, privacy concerns and data scarcity present significant challenges to developing robust and generalizable models. This paper proposes FVCM-Net, a novel approach for lung cancer detection that integrates federated learning with attention mechanisms and ensemble learning to address these challenges. Federated learning trains the model across multiple decentralized institutions, allowing collaborative model development without sharing sensitive patient data and minimizing the risk of such data being misused; it also enables more accurate and generalized models by leveraging diverse datasets from multiple sources. Ensemble learning is employed to produce more accurate predictions than a single model. For interpretability, we employ Explainable Artificial Intelligence (XAI) techniques such as SHAP (SHapley Additive exPlanations) and HiResCAM (High-Resolution Class Activation Mapping), which help explain how the model makes its decisions and predictions. This study utilizes a diverse collection of lung CT scan images from four datasets, including LIDC-IDRI, IQ-OTH/NCCD, a public Kaggle dataset, and additional online sources. Experimental results show that the proposed method achieves 98.26% average accuracy and a 97.37% average F1 score in lung cancer detection. The high performance of FVCM-Net and ensemble learning has the potential to significantly impact medical imaging, helping radiologists make better clinical decisions.
Citations: 0
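The federated training described above keeps patient data on-site and shares only model parameters. The canonical aggregation step, FedAvg-style weighted averaging, can be sketched as follows (an illustration of the general technique, not the paper's exact aggregation rule):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Aggregate per-client parameter lists, weighting each client by
    the number of local training samples (FedAvg-style)."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
            for i in range(n_params)]

# two clients, one parameter tensor each; client 1 holds twice the data
w1 = [np.ones((2, 2))]
w2 = [np.zeros((2, 2))]
avg = fed_avg([w1, w2], client_sizes=[200, 100])
print(avg[0][0, 0])  # every entry of avg[0] is 2/3
```

The server then broadcasts the averaged weights back to the clients for the next round, so raw CT scans never leave their institutions.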
PLIMSeg: Pathology language–image matching for weakly supervised semantic segmentation of histopathology images
IF 4.9 · CAS Zone 2 (Medicine)
Biomedical Signal Processing and Control Pub Date: 2025-10-01 DOI: 10.1016/j.bspc.2025.108669
Meidan Ding, Xuechen Li, Wenting Chen, Songhe Deng, Linlin Shen, Zhihui Lai
Abstract: Semantic segmentation of tissues aids clinical diagnosis by quantitatively and objectively linking morphological characteristics to clinical outcomes. Because manual pixel-level annotation is time-consuming and labor-intensive, more and more tissue segmentation methods rely on weak supervision. Although current weakly supervised semantic segmentation (WSSS) methods achieve significant performance via Class Activation Maps (CAM), they perform poorly on histopathological images due to the homogeneous features of different tissue types. Moreover, pathology Contrastive Language–Image Pretraining (CLIP) models have great representation capability for histopathology but have not been fully exploited to capture these homogeneous features. To address these challenges, we propose PLIMSeg (Pathology Language–Image Matching for Weakly Supervised Semantic Segmentation), a novel framework that brings contrastive language–image pretraining into WSSS. Specifically, PLIMSeg uses a pathology CLIP model as the feature extractor, leveraging the strong representation capability of pre-trained language–image models. We then design three losses based on pathology language–image matching (PLIM) to constrain the CAMs generated by the original image encoder. With these constraints, PLIMSeg generates more complete and precise pseudo masks for segmentation. PLIMSeg outperforms other weakly supervised pathology segmentation methods on the LUAD-HistoSeg and BCSS-WSSS datasets, setting a new state of the art for WSSS of histopathology images.
Citations: 0
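WSSS pipelines like the one above hinge on Class Activation Maps: the classifier's per-class weights re-weight the final convolutional feature maps into a coarse localization map. A minimal sketch of the classic CAM computation (illustrative shapes and names, not the paper's code):

```python
import numpy as np

def class_activation_map(features, fc_weights, cls):
    """CAM for class `cls`: weight the C feature maps by that class's
    classifier weights, sum over channels, rectify, and normalize."""
    # features: (C, H, W); fc_weights: (n_classes, C)
    cam = np.tensordot(fc_weights[cls], features, axes=1)  # → (H, W)
    cam = np.maximum(cam, 0)                               # keep positive evidence
    return cam / (cam.max() + 1e-7)                        # scale into [0, 1]

feats = np.random.rand(8, 14, 14)   # toy final-layer feature maps
w = np.random.rand(3, 8)            # toy 3-class linear classifier weights
cam = class_activation_map(feats, w, cls=1)
print(cam.shape)  # → (14, 14)
```

Thresholding such maps yields the pseudo masks that PLIMSeg's language–image matching losses then refine.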
UTMT: Semi-supervised segmentation of surgical endoscopic images based on Uncertainty guided Twin Mean Teacher
IF 4.9 · CAS Zone 2 (Medicine)
Biomedical Signal Processing and Control Pub Date: 2025-10-01 DOI: 10.1016/j.bspc.2025.108735
Baosheng Zou, Ying Han, Zongguang Zhou, Kang Li, Guotai Wang
Abstract: Semantic segmentation of surgical endoscopic images plays an important role in surgical skill analysis and guidance. Fully supervised deep learning has achieved remarkable performance on this task, but it relies on a large number of training images with pixel-level annotations that are time-consuming and difficult to collect. To reduce the annotation cost, we propose a novel semi-supervised segmentation framework based on an Uncertainty-guided Twin Mean Teacher (UTMT) for surgical endoscopic image segmentation. UTMT has two parallel teacher–student structures for unannotated training images, where each student is supervised by pseudo-labels obtained not only from its mean teacher but also from its fellow student. Combining the mean teacher and the fellow student reduces the inherent bias of a single model and improves pseudo-label quality. In addition, because pseudo-labels may be noisy, we propose an uncertainty-based correction method that emphasizes high-confidence pseudo-labels obtained from the different networks and suppresses the unreliable parts for more robust learning. Experimental results on two public surgical endoscopic image datasets demonstrated that UTMT significantly improved segmentation performance when only 1% or 5% of the training images were labeled, outperforming six state-of-the-art semi-supervised segmentation methods. Furthermore, compared with fully supervised learning, UTMT achieved similar performance while reducing the annotation cost by 90% on the two datasets.
Citations: 0
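In a mean-teacher setup such as UTMT's, each teacher is an exponential moving average (EMA) of its student's weights, which smooths the pseudo-label source over training. A minimal sketch (`alpha` is a typical EMA decay, assumed here rather than taken from the paper):

```python
def ema_update(teacher_params, student_params, alpha=0.99):
    """Mean-teacher step: teacher weights track an exponential
    moving average of the student weights."""
    return [alpha * t + (1 - alpha) * s
            for t, s in zip(teacher_params, student_params)]

teacher, student = [0.0], [1.0]
for _ in range(100):                 # teacher slowly drifts toward the student
    teacher = ema_update(teacher, student)
print(round(teacher[0], 3))  # → 0.634
```

Because the teacher averages many past student states, its pseudo-labels are less noisy than any single student snapshot, which is the bias-reduction effect the abstract describes.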
Polyp segmentation for colonoscopy images via Hierarchical Interworking Decoding
IF 4.9 · CAS Zone 2 (Medicine)
Biomedical Signal Processing and Control Pub Date: 2025-10-01 DOI: 10.1016/j.bspc.2025.108737
Chengang Dong, Guodong Du
Abstract: Efficient and accurate identification, localization, and segmentation of polyp tissue are critical steps in colonoscopy and essential for the prevention and early intervention of colorectal cancer. Current CNN-based methods are limited in modeling long-range dependencies, while transformer-based methods cannot capture sufficient contextual dependencies; hybrid networks are prone to overfitting the convolutional features, which disperses the Transformer's attention. To address these issues, we propose a polyp segmentation approach with a Hierarchical Interworking Decoder (HID) that fully utilizes hierarchical features to establish multi-scale discriminative criteria. HID leverages an Interworking Attention Module (IAM) to refine single-level features: the globally shared attention mechanism in the IAM concurrently integrates affinity information from all hierarchical features, facilitating global information exchange. An Adjacent Aggregation Module (AAM) then refines and integrates adjacent-level features. Through the refinement of single-level features and the integration of different-level features, HID captures global information and local context simultaneously. Extensive experiments demonstrate that HID exhibits outstanding generalization and achieves state-of-the-art accuracy on multiple polyp segmentation benchmarks.
Citations: 0
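The adjacent-level integration performed by a module like the AAM can be illustrated with the simplest possible variant: upsample the coarser feature map and fuse it element-wise with the finer one (the actual module is more elaborate; this is a hypothetical minimal form):

```python
import numpy as np

def aggregate_adjacent(coarse, fine):
    """Fuse two adjacent decoder levels: nearest-neighbor upsample the
    coarser map by 2x, then add it to the finer map element-wise."""
    up = coarse.repeat(2, axis=-2).repeat(2, axis=-1)  # 2x spatial upsample
    return up + fine

coarse = np.ones((16, 8, 8))    # deeper, lower-resolution features
fine = np.ones((16, 16, 16))    # shallower, higher-resolution features
fused = aggregate_adjacent(coarse, fine)
print(fused.shape)  # → (16, 16, 16)
```

Chaining such fusions up the decoder is what lets hierarchical designs combine coarse semantic context with fine boundary detail.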
Extraction and interpretation of EEG features for diagnosis and severity prediction of Alzheimer's Disease and Frontotemporal dementia using deep learning
IF 4.9 · CAS Zone 2 (Medicine)
Biomedical Signal Processing and Control Pub Date: 2025-09-30 DOI: 10.1016/j.bspc.2025.108667
Tuan Vo, Ali K. Ibrahim, Hanqi Zhuang, Chiron Bang
Abstract: Alzheimer's Disease (AD) is the most common form of dementia, characterized by progressive cognitive decline and memory loss. Frontotemporal dementia (FTD), the second most common form, affects the frontal and temporal lobes, causing changes in personality, behavior, and language. Because of overlapping symptoms, FTD is often misdiagnosed as AD. Although electroencephalography (EEG) is portable, non-invasive, and cost-effective, its diagnostic potential for AD and FTD is limited by the similarities between the two diseases. To address this, we introduce an EEG-based feature extraction method to identify and predict the severity of AD and FTD using deep learning. Key findings include increased delta-band activity in the frontal and central regions as a biomarker. By extracting temporal and spectral features from EEG signals, our model combines a Convolutional Neural Network with an attention-based Long Short-Term Memory (aLSTM) network, achieving over 90% accuracy in distinguishing AD and FTD from cognitively normal (CN) individuals. It also predicts severity with relative errors below 35% for AD and approximately 15.5% for FTD. Differentiating FTD from AD remains challenging due to shared characteristics; however, a feature selection procedure improves the specificity of separating AD from FTD from 26% to 65%. Building on this, we developed a two-stage approach that classifies AD, CN, and FTD simultaneously: CN is identified first, followed by differentiation of FTD from AD. This method achieves an overall accuracy of 84% in classifying AD, CN, and FTD.
Citations: 0
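The delta-band biomarker mentioned above is a spectral band-power feature. A small sketch of computing average power in a frequency band from a raw signal, demonstrated on synthetic data (this is a generic FFT-based estimate, not the paper's feature extractor):

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Average spectral power of `signal` in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

fs = 256
t = np.arange(fs * 4) / fs                      # 4 s of synthetic "EEG"
sig = np.sin(2 * np.pi * 2 * t) + 0.2 * np.sin(2 * np.pi * 10 * t)
delta = band_power(sig, fs, 0.5, 4)             # delta band (0.5-4 Hz)
alpha = band_power(sig, fs, 8, 13)              # alpha band (8-13 Hz)
print(delta > alpha)  # → True: the 2 Hz component dominates
```

Computing such band powers per channel and region is how "increased delta activity in the frontal and central regions" becomes a numeric feature a classifier can use.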
Non-contact, real-time monitoring of patient thoracic cross-sectional area during CPR based on depth camera
IF 4.9 · CAS Zone 2 (Medicine)
Biomedical Signal Processing and Control Pub Date: 2025-09-30 DOI: 10.1016/j.bspc.2025.108517
Sunxiaohe Li, Peng Wang, Jihang Xue, Zirui Wang, Fanglin Geng, Hao Zhang, Zhongrui Bai, Lidong Du, Xianxiang Chen, Huadong Zhu, Yecheng Liu, JunXian Song, Gang Cheng, Zhenfeng Li, Zhen Fang
Abstract: Real-time monitoring of cardiopulmonary resuscitation (CPR) quality is crucial for resuscitating patients with cardiac arrest (CA). However, most parameters measured by existing monitoring devices are fixed absolute metrics, overlooking feedback metrics with significant individual variation, such as thoracic cross-sectional area. To address this, we propose a non-contact CPR quality monitoring method based on a depth camera, which measures compression depth, compression rate, and changes in the patient's thoracic cross-sectional area in real time. The method first uses the depth camera to build a spatial point cloud and track the compression position, then calculates the monitoring parameters from depth changes in the point cloud within the region of interest. Experiments in different scenarios demonstrate the accuracy and effectiveness of the proposed method. Furthermore, experiments on CPR manikins of different body sizes reveal that the same compression level produces different effects depending on body size, suggesting that personalized compression strategies may improve resuscitation success rates in real-world scenarios. This study is the first to achieve real-time monitoring of thoracic cross-sectional area during CPR, adding a new indicator to CPR quality monitoring.
Citations: 0
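From a chest-ROI mean-depth time series, compression depth and rate can be estimated as below (a simplified sketch on synthetic data; the paper's point-cloud pipeline is more involved):

```python
import numpy as np

def compression_metrics(depth_ts, fs):
    """Compression depth (peak-to-trough, same units as input) and rate
    (per minute) from a chest-ROI mean-depth time series."""
    depth = np.asarray(depth_ts)
    amplitude = depth.max() - depth.min()
    # rate from the dominant frequency of the detrended signal
    spec = np.abs(np.fft.rfft(depth - depth.mean()))
    freqs = np.fft.rfftfreq(len(depth), d=1 / fs)
    rate_hz = freqs[spec.argmax()]
    return amplitude, rate_hz * 60

fs = 30                                        # 30 fps depth camera
t = np.arange(fs * 10) / fs                    # 10 s recording
sim = 25 * (1 - np.cos(2 * np.pi * 1.8 * t))   # ~50 mm strokes at 108/min
depth_mm, rate_cpm = compression_metrics(sim, fs)
print(round(depth_mm), round(rate_cpm))  # → 50 108
```

Guideline targets (roughly 50-60 mm depth at 100-120 compressions per minute) make exactly these two quantities the core real-time feedback signals.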