Biomedical Signal Processing and Control: Latest Articles

Enhancing collaboration between teacher and student for effective cross-domain nuclei detection and classification
IF 4.9, CAS Q2 (Medicine)
Biomedical Signal Processing and Control Pub Date : 2025-03-07 DOI: 10.1016/j.bspc.2025.107763
Aiqiu Wu , Kai Fan , Binbin Zheng , Anli Zhang , Ao Li , Minghui Wang
Automated detection and classification of cell nuclei in histopathology images is critical for accurate cancer diagnosis. Deep learning-based methods have shown promise, yet their effectiveness is often undermined by domain shift arising from variations in patient data, staining protocols, and imaging devices between training and testing datasets. The teacher-student framework has emerged as a viable strategy for domain adaptation, wherein the teacher transfers source-domain knowledge to the student. However, the framework is vulnerable to unreliable pseudo labels, which can lead to a vicious cycle of incorrect information propagating between teacher and student. In this study, we present the Collaborative Teacher-Student (CTS) framework for cross-domain nuclei detection and classification, intended to assist in diagnosing various types of cancer. CTS introduces an Identity Swap Mechanism (ISM) that dynamically exchanges the identities of the teacher and student models based on their respective performance. This mechanism fosters a mutual learning paradigm, effectively mitigating the propagation of misinformation and preventing performance degradation. Additionally, we propose a Joint Uncertainty-guided Student Training (JUST) strategy that incorporates uncertainty estimates from both teacher and student models to filter out unreliable pseudo labels and facilitate more accurate knowledge transfer. Experimental results demonstrate that the CTS framework consistently outperforms existing methods across multiple domain adaptation scenarios. Notably, it achieves significant improvements of 3.1% in detection F-score and 2.4% in classification F-score on the breast cancer dataset BCNuP. The code will be made available at: https://github.com/waq2001/collaborative_teacher.
Volume 106, Article 107763. Citations: 0
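The Identity Swap Mechanism described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' released code; the class name, the validation-score inputs, and the swap criterion are all assumptions:

```python
class CollaborativeTeacherStudent:
    """Minimal sketch of an identity-swap teacher-student pair (hypothetical API)."""

    def __init__(self, model_a, model_b):
        self.teacher, self.student = model_a, model_b

    def maybe_swap(self, teacher_score, student_score):
        # Exchange roles when the student outperforms the teacher on a
        # held-out metric, so pseudo labels always come from the
        # currently stronger model and errors do not self-reinforce.
        if student_score > teacher_score:
            self.teacher, self.student = self.student, self.teacher
            return True
        return False
```

In the paper's JUST strategy, pseudo labels produced by the current teacher would additionally be filtered by joint teacher-student uncertainty before training the student.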
STD-YOLOv7: A small target detector for micronucleus based on YOLOv7
Biomedical Signal Processing and Control Pub Date : 2025-03-07 DOI: 10.1016/j.bspc.2025.107810
Weiyi Wei, Yaowei Leng, Linfeng Cao, Yibin Wang
The micronucleus of a cell represents a form of abnormal structure in eukaryotic organisms. Detection of cellular micronuclei is applied in diverse areas, including the assessment of radiation-induced damage, new drug experiments, and food safety. Currently, however, research on micronucleus recognition remains limited, and detection accuracy is often insufficient. In response to these challenges, we propose the STD-YOLOv7 micronucleus recognition algorithm, which integrates the YOLOv7 object detection framework with the Coordinate Attention (CA) mechanism and the Res-ACmix module, specifically tailored for recognizing cellular micronuclei. The CA mechanism enhances feature-map expression, while the Res-ACmix module optimizes feature extraction. Both are applied within the feature extraction network, enabling refined feature transfer throughout the network. Furthermore, incorporating Dropout within the Backbone improves overall model performance by mitigating overfitting. Predictions are made at each layer's prediction head to generate final results. Experimental results on the constructed SRCHD dataset show that the proposed STD-YOLOv7 algorithm surpasses other comparable methods on this dataset and also performs well on publicly available datasets. On the SRCHD dataset, STD-YOLOv7 achieved significant improvements, including a 6.37% increase in mean Average Precision (mAP@50), a 5.51% boost in Recall, and a 5.01% rise in Precision.
Volume 106, Article 107810. Citations: 0
Advancing diabetic retinopathy classification using ensemble deep learning approaches
Biomedical Signal Processing and Control Pub Date : 2025-03-06 DOI: 10.1016/j.bspc.2025.107804
Ankur Biswas , Rita Banik
Diabetic retinopathy (DR) is a debilitating condition in diabetic individuals, characterized by impairment of the blood vessels in the retina. Successful treatment requires early diagnosis and categorization using retinal image segmentation and classification. This study proposes a hybrid pre-trained convolutional neural network (CNN) and recurrent neural network (RNN) architecture to accurately categorize the severity levels of diabetic retinopathy. The proposed model capitalizes on the feature extraction capabilities of CNNs and the spatial dependencies captured by RNNs to achieve higher classification accuracy. The CNN is pre-trained on a large dataset and fine-tuned on the retinal dataset to extract salient, task-specific features. The RNN then uses these features to produce a final classification by discovering their spatial relationships. The proposed hybrid pre-trained CNN-RNN model outperforms existing leading-edge approaches on an openly accessible DR dataset, reaching a precision of 0.96. The promising results reveal the potential of the proposed model to accurately and efficiently categorize the severity levels of diabetic retinopathy, which could ultimately improve diagnosis and intervention. By facilitating early detection and treatment, the model can potentially decrease the threat of severe vision loss and blindness, enhancing patient outcomes and quality of life.
Volume 106, Article 107804. Citations: 0
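The CNN-to-RNN hand-off described above can be pictured in miniature: CNN features for successive image regions are fed as a sequence to a recurrent layer whose final hidden state is classified into severity grades. A toy NumPy sketch with random weights; the dimensions and the plain-RNN cell are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

def rnn_severity_logits(region_feats, W_h, W_x, W_o):
    """Run a plain RNN over a sequence of CNN region features and
    map the final hidden state to DR severity-class logits."""
    h = np.zeros(W_h.shape[0])
    for x in region_feats:          # one CNN feature vector per region
        h = np.tanh(W_h @ h + W_x @ x)
    return W_o @ h                  # one logit per severity grade

rng = np.random.default_rng(0)
feats = [rng.normal(size=16) for _ in range(4)]   # 4 regions, 16-d features
logits = rnn_severity_logits(feats,
                             W_h=rng.normal(size=(8, 8)) * 0.1,
                             W_x=rng.normal(size=(8, 16)) * 0.1,
                             W_o=rng.normal(size=(5, 8)))
```

In practice the feature extractor would be a pre-trained CNN and the recurrent layer would be trained end to end; the sketch only shows the shape of the hand-off.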
Enhanced retinal arteries and veins segmentation through deep learning with conditional random fields
Biomedical Signal Processing and Control Pub Date : 2025-03-06 DOI: 10.1016/j.bspc.2025.107747
Mennatullah Mahmoud , Mohammad Mansour , Hisham M. Elrefai , Amira J. Hamed , Essam A. Rashed
The intricate network of retinal blood vessels serves as a sensitive window into systemic health, offering valuable insights into diseases such as diabetic retinopathy. However, unraveling these insights is challenging due to the limitations of traditional visible-light fundus photography. Infrared (IR) imaging emerges as a transformative tool, enabling deeper tissue penetration and enhanced visualization of the retinal vasculature. Yet unlocking its full potential hinges on accurate and reliable segmentation of retinal arteries and veins within IR images. This study explores ways to improve the accurate mapping of retinal blood vessels using deep learning architectures. A dedicated IR dataset was used to train and test three models: U-Net, Residual U-Net, and Attention U-Net. Among these, the Attention Residual U-Net demonstrated superior segmentation performance, achieving an accuracy of 96.03%, a Dice coefficient of 0.882, and a recall of 0.895 after post-processing. This research opens up possibilities for further improvements in eye-related healthcare.
Volume 106, Article 107747. Citations: 0
Combining impedance cardiography with Windkessel model for blood pressure estimation
Biomedical Signal Processing and Control Pub Date : 2025-03-06 DOI: 10.1016/j.bspc.2025.107820
Naiwen Zhang , Jiale Chen , Jinting Ma , Xiaolong Guo , Jing Guo , Guo Dan
Given that blood pressure is a vital indicator of cardiovascular health, non-invasive continuous blood pressure monitoring has emerged as a hot area of current research. However, existing studies in this field are often constrained by their limited capacity for clinical physiological interpretation and for reflecting cardiovascular and hemodynamic information. This gap hinders their effectiveness in elucidating how cardiovascular system changes influence blood pressure. This study addresses these issues using the impedance cardiogram (ICG) signal and the Windkessel (WK) model. First, we extracted features representing hemodynamic parameters from the ICG signal. Then, these features were used with the XGBoost algorithm to estimate the parameters of the WK model. Finally, this model was used to model the subject's cardiovascular system, thereby simulating and estimating blood pressure changes. The methodology was validated on a public dataset; in the resting scenario, the mean absolute errors for systolic and diastolic blood pressure were 4.72 mmHg and 3.72 mmHg, respectively. Furthermore, our findings identified a positive correlation between the WK model's resistance parameter and blood pressure, and a negative correlation between its compliance parameter and blood pressure. These insights are instrumental in pioneering new avenues for continuous blood pressure estimation and in deepening our understanding of the physiological mechanisms of blood pressure changes.
Volume 106, Article 107820. Citations: 0
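The simplest (two-element) Windkessel model relates aortic pressure P(t) to inflow Q(t) through peripheral resistance R and arterial compliance C via dP/dt = Q(t)/C - P/(R*C). A minimal forward-Euler simulation; the parameter values are illustrative, not those fitted in the study:

```python
def windkessel_2e(inflow, R=1.0, C=1.5, p0=80.0, dt=0.01):
    """Two-element Windkessel: dP/dt = Q(t)/C - P/(R*C).

    inflow: flow samples Q (mL/s); returns the pressure trace (mmHg).
    """
    p, trace = p0, []
    for q in inflow:
        p += dt * (q / C - p / (R * C))
        trace.append(p)
    return trace

# With constant inflow, pressure settles at the steady state P = Q * R.
trace = windkessel_2e([90.0] * 5000, R=1.0, C=1.5)
```

The correlations reported above are consistent with this model's behavior: the steady-state pressure scales with R, while a larger compliance C damps pressure swings.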
Improving cross-session motor imagery decoding performance with data augmentation and domain adaptation
Biomedical Signal Processing and Control Pub Date : 2025-03-05 DOI: 10.1016/j.bspc.2025.107756
Shuai Guo , Yi Wang , Yuang Liu , Xin Zhang , Baoping Tang
Recent research has increasingly used deep learning (DL) to decode electroencephalogram (EEG) signals, enhancing the accuracy of motor imagery (MI) classification. While DL has improved MI decoding performance, challenges persist due to distribution shifts in MI-EEG data across sessions. Additionally, collecting EEG signals for MI tasks is costly in both time and money: data collection requires professional equipment and controlled environments, as well as the cooperation of many participants to obtain sufficient sample size and diversity. To address these issues, this study proposes two methods to improve the decoding performance of MI-EEG signals based on an improved lightweight network. First, a recombination-based data augmentation method leveraging channel knowledge is proposed to expand the training dataset and enhance the model's generalization, without additional experiments to collect new data. Second, an improved domain adaptation network is introduced to align feature distributions between domains, minimizing domain gaps; it aligns the target EEG domain with the corresponding class centers using pseudo-labeling. Extensive experiments were conducted using a cross-session training strategy on the BCIC IV 2a and BCIC IV 2b datasets. The results demonstrate that the proposed data augmentation and improved domain adaptation methods effectively enhance classification accuracy, providing a novel perspective for the practical application of MI-EEG.
Volume 106, Article 107756. Citations: 0
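Recombination-based augmentation can be pictured as stitching a synthetic trial together from channel groups of two same-class trials. The sketch below is a guess at the general idea; the actual channel partition and the "channel knowledge" used in the paper are not reproduced here:

```python
import random

def recombine_trials(trial_a, trial_b, n_groups=4, seed=0):
    """Build a synthetic MI-EEG trial by swapping channel groups between
    two trials of the same class (trial = list of per-channel signals)."""
    assert len(trial_a) == len(trial_b)
    rng = random.Random(seed)
    group_of = [i % n_groups for i in range(len(trial_a))]
    take_a = [rng.random() < 0.5 for _ in range(n_groups)]  # donor per group
    return [trial_a[i] if take_a[group_of[i]] else trial_b[i]
            for i in range(len(trial_a))]

synthetic = recombine_trials([f"a{i}" for i in range(8)],
                             [f"b{i}" for i in range(8)])
```

Because each channel keeps its own position and only the donor trial changes, the synthetic trial preserves per-channel spatial identity while varying trial-level content, which is what makes the augmentation label-safe within a class.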
TIG-UDA: Generative unsupervised domain adaptation with transformer-embedded invariance for cross-modality medical image segmentation
Biomedical Signal Processing and Control Pub Date : 2025-03-05 DOI: 10.1016/j.bspc.2025.107722
Jiapeng Li , Yijia Chen , Shijie Li , Lisheng Xu , Wei Qian , Shuai Tian , Lin Qi
Unsupervised domain adaptation (UDA) in medical image segmentation aims to transfer knowledge from a labeled source domain to an unlabeled target domain, especially when there are significant differences in data distribution across multi-modal medical images. Traditional UDA methods typically involve image translation and segmentation modules. However, during image translation the anatomical structure of the generated images may vary, resulting in a mismatch with source-domain labels and impacting subsequent segmentation. In addition, although the Transformer architecture is used in UDA tasks for its superior ability to capture global context, it may not effectively facilitate knowledge transfer because its self-attention mechanism lacks adaptability. To address these issues, we propose a generative UDA network with invariance mining, named TIG-UDA, for cross-modality multi-organ medical image segmentation, comprising an image style translation network (ISTN) and an invariance adaptation segmentation network (IASN). In ISTN, we introduce a structure preservation mechanism that guides image generation toward anatomical-structure consistency, and we align the latent semantic features of source- and target-domain images to enhance the quality of the generated images. In IASN, we propose an invariance adaptation module that extracts the invariance weights of learned features in the Transformer attention mechanism to compensate for the differences between source and target domains. Experimental results on two public cross-modality datasets (MS-CMR and Abdomen) show the promising segmentation performance of TIG-UDA compared with other state-of-the-art UDA methods.
Volume 106, Article 107722. Citations: 0
Weakly supervised object detection for automatic tooth-marked tongue recognition
Biomedical Signal Processing and Control Pub Date : 2025-03-05 DOI: 10.1016/j.bspc.2025.107766
Yongcun Zhang , Jiajun Xu , Yina He , Shaozi Li , Zhiming Luo , Huangwei Lei
Tongue diagnosis in Traditional Chinese Medicine (TCM) is a crucial diagnostic method that reflects an individual's health status. Traditional methods for identifying tooth-marked tongues are subjective and inconsistent because they rely on practitioner experience. We propose a novel fully automated Weakly Supervised method using a Vision transformer and Multiple instance learning (WSVM) for tongue extraction and tooth-marked tongue recognition. Our approach first accurately detects and extracts the tongue region from clinical images, removing irrelevant background information. We then implement an end-to-end weakly supervised object detection method: a Vision Transformer (ViT) processes tongue images in patches, and a multiple instance loss identifies tooth-marked regions with only image-level annotations. WSVM achieves high accuracy in both tooth-marked tongue classification and detection, and visualization experiments further demonstrate its effectiveness in pinpointing these regions. This automated approach enhances the objectivity and accuracy of tooth-marked tongue diagnosis, providing significant clinical value by assisting TCM practitioners in making precise diagnoses and treatment recommendations. Code is available at https://github.com/yc-zh/WSVM.
Volume 106, Article 107766. Citations: 0
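The multiple-instance formulation treats each tongue image as a bag of patch instances carrying only one image-level label. A minimal sketch of a common bag-level loss (max-pooled patch logits with binary cross-entropy); the paper's exact multiple instance loss may differ:

```python
import math

def mil_bag_loss(patch_logits, bag_label):
    """Binary MIL loss: the bag is scored by its highest patch logit,
    so a single confident tooth-mark patch flags the whole image.

    bag_label: 1 = tooth-marked tongue, 0 = plain tongue."""
    bag_prob = 1.0 / (1.0 + math.exp(-max(patch_logits)))
    eps = 1e-7
    return -(bag_label * math.log(bag_prob + eps)
             + (1 - bag_label) * math.log(1.0 - bag_prob + eps))
```

Training with this loss needs no patch-level boxes: the max-pooling makes high-scoring patches responsible for positive bags, which is also what lets the trained model localize tooth-marked regions at inference time.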
Med-LVDM: Medical latent variational diffusion model for medical image translation
Biomedical Signal Processing and Control Pub Date : 2025-03-05 DOI: 10.1016/j.bspc.2025.107735
Xiaoyan Kui , Bo Liu , Zanbo Sun , Qinsong Li , Min Zhang , Wei Liang , Beiji Zou
Learning-based methods for medical image translation have proven effective in addressing the challenge of obtaining complete multimodal medical images in clinical practice, particularly when patients are allergic to contrast agents or critically ill. Recently, diffusion models have exhibited superior performance in various image-generation tasks and are expected to replace generative adversarial networks (GANs) for medical image translation. However, existing methods suffer from unintuitive training objectives and complex network structures that curtail their efficacy in this domain. To address this gap, we propose a novel medical latent variational diffusion model (Med-LVDM) for efficient medical image translation. First, we introduce a new parametric representation based on the variational diffusion model (VDM) and optimize the training objective to the weighted mean square error between the synthetic and target images, which is intuitive and requires fewer model parameters. Then, we map the diffusion training and sampling process to the latent space, significantly reducing computational complexity and enhancing the feasibility of clinical applications. Finally, to capture global information without focusing solely on local features, we use U-ViT as the backbone of Med-LVDM, which effectively adapts to a latent space representing abstract rather than concrete pixel-level information. Extensive qualitative and quantitative results on multi-contrast MRI and cross-modality MRI-CT datasets demonstrate superior translation quality compared with state-of-the-art methods. In particular, Med-LVDM achieved its highest SSIM and PSNR of 92.37% and 26.23 dB on the BraTS2018 dataset, 90.18% and 24.55 dB on the IXI dataset, and 91.61% and 25.52 dB on the MRI-CT dataset.
Volume 106, Article 107735. Citations: 0
Predicting FOXA1 gene mutation status in prostate cancer through multi-modal deep learning
Biomedical Signal Processing and Control Pub Date : 2025-03-05 DOI: 10.1016/j.bspc.2025.107739
Simin Lin , Longxin Deng , Ziwei Hu , Chengda Lin , Yongxin Mao , Yuntao Liu , Wei Li , Yue Yang , Rui Zhou , Yancheng Lai , Huang He , Tao Tan , Xinlin Zhang , Tong Tong , Na Ta , Rui Chen
Prostate cancer is the foremost cause of cancer-related mortality among men globally, with incidence and mortality rates increasing alongside the aging population. The FOXA1 gene plays a pivotal role in prostate cancer pathology, serving as a potential prognostic indicator and a potent therapeutic target across various stages of the disease. Mutations in FOXA1 have been shown to amplify, supplant, and reconfigure Androgen Receptor function, thereby fostering prostate cancer proliferation. FOXA1 is the most common molecular mutation type in Asian prostate cancer patients, with a mutation rate reaching 41% in China, and it is also an important molecular subtype in Western populations. Targeted therapy for FOXA1 is developing rapidly, so effective identification of FOXA1 mutations is of great clinical significance. Gene mutation detection is usually carried out by molecular biological methods, which are expensive and time-consuming. To address this problem, we propose a multi-modal deep learning network that predicts FOXA1 mutation status using only Hematoxylin-Eosin (H&E) stained pathological images and clinical data. Following five-fold cross-validation, our model achieved an optimal Area Under the receiver operating characteristic Curve (AUC) of 0.808, with an average predicted AUC of 0.74, surpassing other comparative models. Furthermore, we observed a discernible correlation between FOXA1 mutations and ISUP grade.
Volume 106, Article 107739. Citations: 0