Computer Methods and Programs in Biomedicine: Latest Articles

Exhaustive biclustering driven by self-learning evolutionary approach for biomedical data
IF 4.9 · Medicine (CAS Tier 2)
Computer Methods and Programs in Biomedicine, Vol. 269, Article 108846 · Pub Date: 2025-05-29 · DOI: 10.1016/j.cmpb.2025.108846
Adrián Segura-Ortiz, Adán José-García, Laetitia Jourdan, José García-Nieto

Background and Objective: Biclustering is a key data analysis technique that identifies submatrices with coherent patterns, widely applied in biomedical fields such as gene co-expression analysis. Despite its importance, traditional partial representations in evolutionary biclustering algorithms face significant limitations, such as redundancy and limited adaptability to domain-specific objectives. This study aims to overcome these challenges by introducing MOEBA-BIO, a new evolutionary biclustering framework for biomedical data.

Methods: MOEBA-BIO is designed as a flexible framework based on the evolutionary metaheuristics scheme. It includes a self-configurator that dynamically adjusts the algorithm's objectives and parameters based on contextual domain knowledge. The framework employs a complete representation, enabling the integration of new domain-specific objectives and the self-determination of the number of biclusters, addressing the limitations of traditional representations. The source code is available at https://github.com/AdrianSeguraOrtiz/MOEBA-BIO.

Results: Experimental results demonstrate that MOEBA-BIO overcomes the limitations of classical partial representations. Furthermore, its application to simulated and real-world gene expression datasets highlights its ability to specialize in specific biological domains, improving the accuracy and functional enrichment of biclusters compared with other state-of-the-art techniques.

Conclusions: MOEBA-BIO represents a significant advancement in biclustering applied to bioinformatics. Its framework, combining adaptability, self-configuration, and integration of domain-specific objectives, addresses the main limitations of traditional methods and offers robust solutions for complex biomedical datasets.

Citations: 0
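For readers unfamiliar with the notion of a "coherent submatrix", the snippet below is a minimal NumPy sketch of the classical Cheng-Church mean squared residue, a standard coherence score for biclusters. It is purely illustrative: MOEBA-BIO's actual objectives, representation, and self-configuration are defined in the paper and repository above, not here.

```python
import numpy as np

def mean_squared_residue(data, rows, cols):
    """Cheng-Church mean squared residue of the bicluster data[rows][:, cols].

    Lower values indicate a more coherent (additive-pattern) submatrix.
    """
    sub = data[np.ix_(rows, cols)]
    row_means = sub.mean(axis=1, keepdims=True)
    col_means = sub.mean(axis=0, keepdims=True)
    overall_mean = sub.mean()
    residue = sub - row_means - col_means + overall_mean
    return float((residue ** 2).mean())

# Toy expression matrix: rows = genes, columns = conditions.
rng = np.random.default_rng(0)
expr = rng.normal(size=(50, 20))
# Plant a coherent (additive) pattern in a 10x5 block.
expr[:10, :5] = rng.normal(size=(10, 1)) + rng.normal(size=(1, 5))

print(mean_squared_residue(expr, rows=range(10), cols=range(5)))          # low (coherent)
print(mean_squared_residue(expr, rows=range(40, 50), cols=range(15, 20))) # higher (noise)
```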
A bidirectional reasoning approach for blood glucose control via invertible neural networks
IF 4.9 · Medicine (CAS Tier 2)
Computer Methods and Programs in Biomedicine, Vol. 269, Article 108844 · Pub Date: 2025-05-27 · DOI: 10.1016/j.cmpb.2025.108844
Jingchi Jiang, Rujia Shen, Yang Yang, Boran Wang, Yi Guan

Background and Objective: Despite the profound advancements that deep learning models have achieved across many domains, their propensity to learn spurious correlations significantly impedes their applicability to tasks that require causal and counterfactual reasoning.

Methods: We propose a Bidirectional Neural Network that consolidates forward causal reasoning and inverse counterfactual reasoning into a single framework. This integration is achieved through multi-stacked affine coupling layers, which make the network invertible and thereby enable bidirectional reasoning within one architecture. To improve trainability and ensure bidirectional differentiability of the parameters, we introduce an orthogonal weight normalization technique. In addition, the counterfactual reasoning capacity of the Bidirectional Neural Network is embedded within the policy function of reinforcement learning, effectively addressing reward sparsity in the blood glucose control scenario.

Results: We evaluate the framework on two pivotal tasks: causal-based blood glucose forecasting and counterfactual-based blood glucose control. The empirical results confirm that the model not only generalizes better in causal reasoning but also significantly surpasses comparative models on out-of-distribution data. Furthermore, in blood glucose control tasks, integrating counterfactual reasoning markedly improves decision efficacy, sample efficiency, and convergence speed.

Conclusion: We expect the Bidirectional Neural Network to open new pathways in the exploration of causal and counterfactual reasoning and to provide methods for complex decision-making processes. Code is available at https://github.com/HITshenrj/BNN.

Citations: 0
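The Methods section above attributes the network's invertibility to stacked affine coupling layers. The following is a minimal PyTorch sketch of a single affine coupling layer, showing why the forward and inverse passes are both exact; the layer sizes and the two-way split are illustrative assumptions, and the authors' full architecture (multi-stacked layers, orthogonal weight normalization, RL integration) lives in their repository.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One affine coupling layer: analytically invertible by construction.

    The input is split in two halves; one half is transformed with a scale
    and shift predicted from the other half, so the inverse is exact.
    """
    def __init__(self, dim, hidden=64):
        super().__init__()
        half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * half),  # predicts log-scale s and shift t
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        s, t = self.net(x1).chunk(2, dim=-1)
        y2 = x2 * torch.exp(s) + t          # forward ("causal") direction
        return torch.cat([x1, y2], dim=-1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        s, t = self.net(y1).chunk(2, dim=-1)
        x2 = (y2 - t) * torch.exp(-s)       # inverse ("counterfactual") direction
        return torch.cat([y1, x2], dim=-1)

x = torch.randn(4, 8)
layer = AffineCoupling(dim=8)
assert torch.allclose(layer.inverse(layer(x)), x, atol=1e-5)
```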
Projection-based reduced order modelling for unsteady parametrized optimal control problems in 3D cardiovascular flows
IF 4.9 · Medicine (CAS Tier 2)
Computer Methods and Programs in Biomedicine, Vol. 269, Article 108813 · Pub Date: 2025-05-24 · DOI: 10.1016/j.cmpb.2025.108813
Surabhi Rathore, Pasquale C. Africa, Francesco Ballarin, Federico Pichi, Michele Girfoglio, Gianluigi Rozza

Background and Objective: Accurately defining outflow boundary conditions in patient-specific models poses significant challenges due to complex vascular morphologies, physiological conditions, and high computational demands. These challenges hinder the computation of realistic and reliable cardiovascular (CV) haemodynamics that incorporate clinical data such as 4D magnetic resonance imaging. The objective is to control the outflow boundary conditions so as to optimize CV haemodynamics and minimize the discrepancy between target and computed flow velocity profiles. This paper presents a projection-based reduced order modelling (ROM) framework for unsteady parametrized optimal control problems (OCP(μ)s) arising in CV applications.

Methods: Numerical solutions of OCP(μ)s require substantial computational resources, highlighting the need for robust and efficient ROMs that enable real-time and many-query simulations. We investigate a projection-based reduction technique that relies on the offline-online paradigm, yielding significant computational cost savings. The fluid flow is governed by the unsteady Navier-Stokes equations with a physical parametric dependence, namely the Reynolds number. The Galerkin finite element method is used to compute the high-fidelity solutions in the offline phase. We implement a nested proper orthogonal decomposition (nested-POD) for fast simulation of OCP(μ)s that comprises two stages: temporal compression to reduce dimensionality in time, followed by parametric-space compression on the precomputed POD modes.

Results: We tested the methodology on vascular models, namely an idealized bifurcation geometry and a patient-specific coronary artery bypass graft, incorporating stress control at the outflow boundary and observing consistent speed-up with respect to high-fidelity strategies. We observed the inter-dependency between the state, adjoint, and control solutions and presented detailed flow field characteristics, providing valuable insights into factors such as atherosclerosis risk.

Conclusion: The projection-based ROM framework provides an efficient and accurate approach for simulating parametrized CV flows. By enabling real-time, patient-specific modelling, this advancement supports personalized medical interventions and improves predictions of disease progression in vascular regions.

Citations: 0
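The nested-POD described in the Methods compresses in time first and then across the parameter space. Below is a rough NumPy sketch of that two-stage idea using truncated SVD on random placeholder snapshots; the dimensions, ranks, and snapshot data are assumptions for illustration and do not correspond to the authors' finite element solver or geometries.

```python
import numpy as np

def pod_basis(snapshots, rank):
    """POD basis of a snapshot matrix (columns = snapshots) via truncated SVD."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return u[:, :rank]

# Hypothetical setup: n_dof spatial degrees of freedom, n_t time steps,
# snapshots collected for several parameter values (e.g. Reynolds numbers).
n_dof, n_t, params = 200, 40, [1.0, 2.0, 3.0, 4.0]
rng = np.random.default_rng(1)
snapshots_per_param = [rng.normal(size=(n_dof, n_t)) for _ in params]

# Stage 1: temporal compression, one POD per parameter value.
temporal_rank = 5
temporal_bases = [pod_basis(S, temporal_rank) for S in snapshots_per_param]

# Stage 2: parametric-space compression on the precomputed temporal modes.
stacked_modes = np.hstack(temporal_bases)          # n_dof x (temporal_rank * n_params)
global_basis = pod_basis(stacked_modes, rank=6)    # final reduced basis

print(global_basis.shape)  # (200, 6)
```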
Cross-Fusion Adaptive Feature Enhancement Transformer: Efficient high-frequency integration and sparse attention enhancement for brain MRI super-resolution
IF 4.9 · Medicine (CAS Tier 2)
Computer Methods and Programs in Biomedicine, Vol. 268, Article 108815 · Pub Date: 2025-05-24 · DOI: 10.1016/j.cmpb.2025.108815
Zhiying Yang, Hanguang Xiao, Xinyi Wang, Feizhong Zhou, Tianhao Deng, Shihong Liu

Background and Objectives: High-resolution magnetic resonance imaging (MRI) is essential for diagnosing and treating brain diseases. Transformer-based approaches show strong potential in MRI super-resolution because they capture long-range dependencies effectively. However, existing Transformer-based super-resolution methods face several challenges: (1) they focus primarily on low-frequency information and under-use high-frequency information; (2) they lack effective mechanisms to integrate low-frequency and high-frequency information; and (3) they struggle to eliminate redundant information during reconstruction. To address these issues, we propose the Cross-Fusion Adaptive Feature Enhancement Transformer (CAFET).

Methods: The model exploits the complementary strengths of CNNs and Transformers. It consists of four key blocks: a high-frequency enhancement block for extracting high-frequency information; a hybrid attention block for global context and local fitting, combining channel attention and shifted rectangular window attention; a large-window fusion attention block for integrating local high-frequency features with global low-frequency features; and an adaptive sparse overlapping attention block for dynamically retaining key information and strengthening the aggregation of cross-window features.

Results: Extensive experiments validate the effectiveness of the proposed method. On the BraTS and IXI datasets at ×2 upsampling, CAFET achieves maximum PSNR improvements of 2.4 dB and 1.3 dB over state-of-the-art methods, along with SSIM improvements of up to 0.16% and 1.42%. At ×4 upsampling, it achieves maximum PSNR improvements of 1.04 dB and 0.3 dB over the current leading methods, along with SSIM improvements of up to 0.25% and 1.66%.

Conclusions: The method reconstructs high-quality super-resolution brain MRI images and demonstrates significant clinical potential.

Citations: 0
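The PSNR gains quoted in the Results are in decibels. As a reference for how such numbers are computed, here is the standard PSNR definition in NumPy; the random arrays stand in for a high-resolution slice and a super-resolved output and are not related to the BraTS or IXI experiments.

```python
import numpy as np

def psnr(reference, reconstruction, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two images of the same shape."""
    mse = np.mean((reference.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((data_range ** 2) / mse)

rng = np.random.default_rng(0)
hr = rng.random((128, 128))                      # stand-in for a high-resolution MRI slice
sr = hr + 0.01 * rng.standard_normal(hr.shape)   # stand-in for a super-resolved output
print(f"PSNR: {psnr(hr, sr):.2f} dB")
```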
Towards the use of multiple ROIs for radiomics-based survival modelling: Finding a strategy of aggregating lesions
IF 4.9 · Medicine (CAS Tier 2)
Computer Methods and Programs in Biomedicine, Vol. 269, Article 108840 · Pub Date: 2025-05-23 · DOI: 10.1016/j.cmpb.2025.108840
Agata Małgorzata Wilk, Andrzej Swierniak, Andrea d'Amico, Rafał Suwiński, Krzysztof Fujarewicz, Damian Borys

Background: Radiomic features, derived from a region of interest (ROI) in medical images, are valuable as prognostic factors. Selecting an appropriate ROI is critical, and many recent studies have focused on leveraging multiple ROIs by segmenting analogous regions across patients, such as the primary tumour and the peritumoral area or subregions of the tumour. These can be incorporated into models straightforwardly as additional features. A more complex scenario arises, however, in regionally disseminated disease, when multiple distinct lesions are present.

Aim: This study evaluates the feasibility of integrating radiomic data from multiple lesions into survival models. We explore strategies for incorporating these ROIs and hypothesize that including all available lesions can improve model performance.

Methods: While each lesion produces a feature vector, the desired result is a unified prediction. We propose methods to aggregate either the feature vectors, to form a single representative vector, or the modelling results, to compute a consolidated risk score. As a proof of concept, we apply these strategies to predict distant metastasis risk in a cohort of 115 non-small cell lung cancer patients, 60% of whom exhibit regionally advanced disease. Two feature sets (radiomics extracted from PET and from PET interpolated to CT resolution) are tested across various survival models using a Monte Carlo cross-validation framework.

Results: Across both feature sets, incorporating all available lesions, rather than limiting the analysis to the primary tumour, consistently improved the c-index, irrespective of the survival model used. The highest c-index obtained by a primary-tumour-only model was 0.611 for the PET dataset and 0.614 for the PET_CT dataset, while using all lesions achieved c-indices of 0.632 and 0.634.

Conclusion: Lesions beyond the primary tumour carry information that should be utilized in radiomics-based models to enhance predictive ability.

Citations: 0
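The Methods describe two aggregation routes: combining per-lesion feature vectors into one representative vector, or combining per-lesion model outputs into one risk score. The sketch below illustrates both routes with placeholder choices (mean for features, max for risks, a linear risk function); the paper's actual aggregation strategies and survival models may differ.

```python
import numpy as np

def aggregate_features(lesion_features):
    """Feature-level aggregation: average per-lesion radiomic vectors
    into one representative vector per patient."""
    return np.mean(lesion_features, axis=0)

def aggregate_risks(lesion_features, weights):
    """Result-level aggregation: score each lesion with a fitted model
    (here a placeholder linear risk), then consolidate the per-lesion risks."""
    risks = lesion_features @ weights
    return float(np.max(risks))   # e.g. the most aggressive lesion drives the prediction

# Hypothetical patient with 3 lesions and 5 radiomic features each.
rng = np.random.default_rng(0)
lesions = rng.normal(size=(3, 5))
coef = rng.normal(size=5)   # placeholder for coefficients of a trained survival model

patient_vector = aggregate_features(lesions)   # feed this to the survival model
patient_risk = aggregate_risks(lesions, coef)  # or combine per-lesion risk scores
print(patient_vector.shape, round(patient_risk, 3))
```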
Label knowledge guided transformer for automatic radiology report generation
IF 4.9 · Medicine (CAS Tier 2)
Computer Methods and Programs in Biomedicine, Vol. 269, Article 108877 · Pub Date: 2025-05-23 · DOI: 10.1016/j.cmpb.2025.108877
Rui Wang, Jianguo Liang

Background and Objective: Automatic radiology report generation is a key research area at the intersection of computer science and medicine, aiming to enable computers to generate reports directly from radiology images. The field currently faces a significant data bias: words describing diseases are overshadowed by words describing normal regions in the reports.

Methods: To address this, we propose a label knowledge guided transformer for generating radiology reports. The model incorporates a Multi Feature Extraction module and a Dual-branch Collaborative Attention module. The Multi Feature Extraction module leverages medical knowledge graphs and feature clustering algorithms to optimize label feature extraction in both the prediction and the encoding of label information, making it the first module specifically designed to reduce redundant label features. The Dual-branch Collaborative Attention module uses two parallel attention mechanisms to compute visual features and label features simultaneously, and prevents label features from being merged directly into visual features, thereby balancing the model's attention between the two.

Results: We evaluate the model on the IU X-Ray and MIMIC-CXR datasets under six natural language generation metrics. The model achieves state-of-the-art performance: compared with the baseline models, the label knowledge guided transformer achieves an average improvement of 23.3% on IU X-Ray and 20.7% on MIMIC-CXR.

Conclusion: The model captures abnormal features effectively, mitigates the adverse effects of data bias, and shows significant potential to improve the quality and accuracy of automatically generated radiology reports.

Citations: 0
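As a rough illustration of the dual-branch idea, the PyTorch sketch below runs visual attention and label attention in parallel and only fuses their outputs afterwards, so label features are never injected directly into the visual stream. The module name, dimensions, and the concatenate-then-project fusion are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class DualBranchAttention(nn.Module):
    """Two parallel attention branches over visual and label features.

    The branches are computed side by side and concatenated afterwards,
    so label features are never written directly into the visual stream.
    """
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.visual_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.label_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, decoder_queries, visual_feats, label_feats):
        v, _ = self.visual_attn(decoder_queries, visual_feats, visual_feats)
        l, _ = self.label_attn(decoder_queries, label_feats, label_feats)
        return self.fuse(torch.cat([v, l], dim=-1))

queries = torch.randn(2, 20, 256)   # 2 reports, 20 decoding positions
visual = torch.randn(2, 49, 256)    # e.g. 7x7 image patch features
labels = torch.randn(2, 14, 256)    # e.g. 14 disease-label embeddings
out = DualBranchAttention()(queries, visual, labels)
print(out.shape)  # torch.Size([2, 20, 256])
```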
Interpretable machine learning models based on body composition and inflammatory nutritional index (BCINI) to predict early postoperative recurrence of colorectal cancer: Multi-center study
IF 4.9 · Medicine (CAS Tier 2)
Computer Methods and Programs in Biomedicine, Vol. 269, Article 108874 · Pub Date: 2025-05-22 · DOI: 10.1016/j.cmpb.2025.108874
Yongjie Zhou, Jinhong Zhao, Fei Zou, Yongming Tan, Wei Zeng, Jiahui Jiang, Jiale Hu, Qiao Zeng, Lianggeng Gong, Lan Liu, Linhua Zhong

Background and Objective: Colorectal cancer (CRC) ranks among the most prevalent cancers worldwide, and early postoperative recurrence remains a major cause of mortality. Body composition and inflammatory-nutritional indices (BCINI) can reflect patients' physiological state, but their association with early recurrence (ER) after CRC resection remains unclear. This study aimed to establish and validate interpretable machine learning (ML) models based on BCINI to predict ER after CRC resection.

Methods: Data from three hospitals were collected, including CT-based body composition metrics and blood test variables. After variable selection, six ML algorithms, namely XGBoost, Complement Naive Bayes (CNB), support vector machine (SVM), k-nearest neighbors (KNN), random forest (RF), and Gaussian Naive Bayes (GNB), were used to construct ER prediction models. The optimal model was selected by receiver operating characteristic (ROC) curve analysis and externally validated on independent datasets to assess generalizability; accuracy and clinical utility were evaluated with calibration curves and decision curve analysis. SHapley Additive exPlanations (SHAP) were employed to visualize the prediction process for clinical interpretability.

Results: XGBoost outperformed the other methods, with areas under the ROC curve (AUC) of 0.837 and 0.777 in the internal training and validation sets, respectively. It also achieved the lowest Brier score (0.131) on the calibration curves, surpassing the five other ML algorithms. External validation confirmed its generalizability, with AUCs of 0.783 and 0.773 in two independent datasets. Predictive performance was consistent across age subgroups (<60 years: AUC 0.762–0.834; ≥60 years: AUC 0.777–0.800) and tumor location subgroups (colon: AUC 0.785–0.845; rectum: AUC 0.751–0.799).

Conclusions: The interpretable ML model based on BCINI shows promise in predicting ER of CRC and may support clinical decision-making, enabling early detection and intervention to improve patient outcomes.

Citations: 0
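The workflow named in the Methods, an XGBoost classifier explained with SHAP, follows a common pattern. The sketch below shows that pattern on synthetic data; the feature count, hyperparameters, and labels are placeholders and have no relation to the BCINI cohort.

```python
import numpy as np
import xgboost as xgb
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the BCINI table: body-composition and
# inflammatory-nutritional variables with a binary early-recurrence label.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=1.0, size=400) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = xgb.XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                          eval_metric="logloss")
model.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

# SHAP values attribute each prediction to the individual input features.
explainer = shap.TreeExplainer(model)
shap_values = np.asarray(explainer.shap_values(X_te))
mean_abs = np.abs(shap_values).mean(axis=tuple(range(shap_values.ndim - 1)))
print("Mean |SHAP| per feature:", mean_abs.round(3))
```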
The knowledge distillation-assisted multimodal model for osteoporosis screening
IF 4.9 · Medicine (CAS Tier 2)
Computer Methods and Programs in Biomedicine, Vol. 269, Article 108848 · Pub Date: 2025-05-22 · DOI: 10.1016/j.cmpb.2025.108848
Teng Su, Qing Yang, Meng Si, Yuanyuan Sun, Xinyu Ji, Yuyan Zhang, Bing Ji

Background and Objective: Osteoporosis is characterized by reduced bone mass and deterioration of bone structure, yet screening rates prior to fractures remain low. Given its high prevalence and severe consequences, an effective osteoporosis screening model is highly desirable. Constructing such models presents two main challenges. First, selecting representative slices from CT image sequences is difficult, making it crucial to filter the most indicative slices. Second, samples lacking complete modal data cannot be used directly in multimodal fusion, leaving available data under-utilized and limiting the performance of the multimodal screening model.

Methods: We propose a reinforcement learning-driven, knowledge distillation-assisted multimodal model for osteoporosis screening that integrates demographic characteristics, routine laboratory indicators, and CT images. The framework includes two novel components: (1) a deep reinforcement learning-based image selection module (DRLIS) that selects representative image slices from CT sequences; and (2) a knowledge distillation-assisted multimodal model (KDAMM) that transfers information from single-modal teacher networks to the multimodal model, effectively utilizing samples with incomplete modalities. The code is published at https://github.com/AImedcinesdu212/Osteoporosis-Prediction and https://github.com/Hidden-neurosis/osreoporosis.git.

Results: The proposed multimodal osteoporosis screening model achieves an accuracy of 88.65% and an AUC of 0.9542, surpassing existing models by 2.85% in accuracy and 0.0212 in AUC. We also demonstrate the effectiveness of each novel component within the framework, and SHAP values are calculated to assess the importance of demographic characteristics and routine laboratory test data.

Conclusion: This paper presents a knowledge distillation-assisted multimodal model for opportunistic osteoporosis screening that incorporates demographic characteristics, routine laboratory indicators (including blood tests and urinalysis), and CT images. Extensive experiments on self-collected datasets validate that the framework achieves state-of-the-art performance.

Citations: 0
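KDAMM transfers knowledge from single-modal teachers to the multimodal student. A generic way to express such transfer is the standard soft-target distillation loss sketched below in PyTorch; the temperature, loss weighting, and the exact way KDAMM handles incomplete modalities are assumptions here and are specified in the paper and repositories.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard soft-target distillation: KL between temperature-softened
    teacher and student distributions, mixed with the usual hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy example: a single-modal teacher (e.g. trained on CT only) guides the
# multimodal student on samples where the other modalities are missing.
student_logits = torch.randn(8, 2, requires_grad=True)
teacher_logits = torch.randn(8, 2)
labels = torch.randint(0, 2, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(float(loss))
```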
Visual-language foundation models in medical imaging: A systematic review and meta-analysis of diagnostic and analytical applications
IF 4.9 · Medicine (CAS Tier 2)
Computer Methods and Programs in Biomedicine, Vol. 268, Article 108870 · Pub Date: 2025-05-21 · DOI: 10.1016/j.cmpb.2025.108870
Yiyao Sun, Xinran Wen, Yan Zhang, Lijun Jin, Chunna Yang, Qianhui Zhang, Mingchen Jiang, Zhaoyang Xu, Wei Guo, Juan Su, Xiran Jiang

Background and Objective: Visual-language foundation models (VLMs) have attracted attention for their numerous advantages and significant potential in AI-aided diagnosis and treatment, driving widespread application in medical tasks. This study analyzes and summarizes the value and prospects of VLMs, highlighting their opportunities in healthcare.

Methods: This systematic review and meta-analysis, registered with PROSPERO (CRD42024575746), included studies from PubMed, Embase, Web of Science, and IEEE from inception to December 31, 2024. The inclusion criteria covered state-of-the-art VLM developments and applications in medical imaging. Metrics such as AUC, Dice coefficient, BLEU score, and accuracy were pooled for tasks including classification, segmentation, report generation, and visual question answering (VQA). Reporting quality and risk of bias were assessed using the QUADAS-AI checklist.

Results: A total of 106 eligible studies were identified for the systematic review, of which 94 were included in the meta-analysis. The pooled AUC for downstream classification tasks was 0.86 (0.85–0.87); the pooled Dice coefficient for segmentation was 0.73 (0.68–0.78); the pooled BLEU score for report generation was 0.31 (0.20–0.43); and the pooled accuracy for VQA was 0.76 (0.71–0.81). Subgroup analyses were stratified by imaging modality (radiological, pathological, and surface imaging) and publication year (before or after 2023) to explore heterogeneity within VLM research and to analyze diagnostic performance under different conditions.

Conclusions: VLMs based on medical imaging have demonstrated strong performance and significant potential in computer-assisted clinical diagnosis. Stricter reporting standards that address the unique challenges of VLM research could enhance study quality.

Citations: 0
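The pooled estimates with confidence intervals reported above are typical outputs of a random-effects meta-analysis. The NumPy sketch below implements the common DerSimonian-Laird estimator on made-up per-study AUCs; whether the review used this exact estimator is not stated in the abstract, and the input numbers are purely illustrative.

```python
import numpy as np

def pool_random_effects(estimates, std_errors):
    """DerSimonian-Laird random-effects pooling of per-study estimates.

    Returns the pooled estimate and its 95% confidence interval.
    """
    est = np.asarray(estimates, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    w = 1.0 / se**2                                   # fixed-effect weights
    fixed = np.sum(w * est) / np.sum(w)
    q = np.sum(w * (est - fixed) ** 2)                # Cochran's Q
    df = len(est) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_star = 1.0 / (se**2 + tau2)                     # random-effects weights
    pooled = np.sum(w_star * est) / np.sum(w_star)
    pooled_se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical per-study AUCs and standard errors (not the review's data).
aucs = [0.84, 0.88, 0.83, 0.90, 0.86]
ses = [0.02, 0.03, 0.025, 0.02, 0.03]
print(pool_random_effects(aucs, ses))
```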
Temporal convolutional neural network-based feature extraction and asynchronous channel information fusion method for heart abnormality detection in phonocardiograms
IF 4.9 · Medicine (CAS Tier 2)
Computer Methods and Programs in Biomedicine, Vol. 269, Article 108871 · Pub Date: 2025-05-21 · DOI: 10.1016/j.cmpb.2025.108871
Jae-Man Shin, Seongyong Park, Keewon Shin, Woo-Young Seo, Hyun-Seok Kim, Dong-Kyu Kim, Baehun Moon, Seul-Gi Cha, Won-Jung Shin, Sung-Hoon Kim

Background and Objective: Auscultation-based cardiac abnormality detection is a valuable screening approach in pediatric populations, particularly in resource-limited settings. However, its clinical utility is often limited by phonocardiogram (PCG) signal variability and the difficulty of distinguishing pathological from innocent murmurs.

Methods: We propose a framework that leverages temporal convolutional network (TCN)-based feature extraction and information fusion to integrate asynchronously acquired PCG recordings at the patient level. A probabilistic representation of the pathological state is first extracted from segmented PCG signals using a TCN-based model; these segment-level representations are then averaged to generate record- or patient-level features. The framework accommodates recordings of varying durations and different auscultation locations. Transfer learning techniques are incorporated to address domain adaptation challenges in cardiac abnormality detection.

Results: The proposed method was evaluated on two large, independent public PCG datasets, demonstrating robust performance at both the record and patient levels. Initial performance on an unseen external dataset was modest, likely due to demographic characteristics and signal acquisition differences, but transfer learning significantly improved the model, yielding an area under the receiver operating characteristic curve of 0.931±0.027 and an area under the precision-recall curve of 0.867±0.064 in external validation. Combining the internal and external datasets further enhanced generalizability.

Conclusion: The framework accommodates multi-channel, variable-length PCG recordings, making it a flexible and accurate solution for detecting pediatric cardiac abnormalities, particularly in low-resource settings. The source code is publicly available on GitHub (https://github.com/baporlab/pcg_pathological_murmur_detection).

Citations: 0
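The fusion step described in the Methods averages segment-level probabilities up to the record and patient levels. The sketch below shows that aggregation on hypothetical segment probabilities; in the actual framework these probabilities come from the TCN-based model, and further details are in the linked repository.

```python
import numpy as np

def patient_level_probability(segment_probs_per_record):
    """Average segment-level abnormality probabilities within each recording,
    then average the record-level scores across a patient's recordings.

    Recordings may come from different auscultation sites, be acquired
    asynchronously, and contain different numbers of segments.
    """
    record_probs = [float(np.mean(p)) for p in segment_probs_per_record]
    return float(np.mean(record_probs)), record_probs

# One patient, three recordings of different lengths (e.g. different sites).
recordings = [
    [0.10, 0.20, 0.15],            # recording A: 3 segments
    [0.70, 0.80, 0.75, 0.90],      # recording B: 4 segments
    [0.40, 0.35],                  # recording C: 2 segments
]
patient_prob, per_record = patient_level_probability(recordings)
print(per_record, round(patient_prob, 3))
```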