Artificial Intelligence in Medicine: Latest Articles

Pathway information on methylation analysis using deep neural network (PROMINENT): An interpretable deep learning method with pathway prior for phenotype prediction using gene-level DNA methylation
IF 6.2 · CAS Region 2, Medicine
Artificial Intelligence in Medicine · Pub Date: 2025-08-29 · DOI: 10.1016/j.artmed.2025.103236
Soyeon Kim , Laizhi Zhang , Yidi Qin , Rebecca I. Caldino Bohn , Hyun Jung Park
Background: DNA methylation is a key epigenetic marker that influences gene expression and phenotype regulation, and is affected by both genetic and environmental factors. Traditional linear regression methods such as elastic nets have been employed to assess the cumulative effects of multiple DNA methylation markers on phenotypes. However, these methods often fail to capture the complex nonlinear nature of the data. Recent deep learning approaches, such as MethylNet, have improved prediction accuracy but lack interpretability and efficiency.

Findings: To address these limitations, we introduce Pathway Information on Methylation Analysis using a Deep Neural Network (PROMINENT), a novel interpretable deep learning method that integrates gene-level DNA methylation data with biological pathway information for phenotype prediction. PROMINENT enhances interpretability and prediction accuracy by incorporating gene- and pathway-level priors from databases such as Gene Ontology (GO) and the Kyoto Encyclopedia of Genes and Genomes (KEGG). It employs SHapley Additive exPlanations (SHAP) to prioritize significant genes and pathways. Evaluated on datasets covering childhood asthma, idiopathic pulmonary fibrosis (IPF), and first-episode psychosis (FEP), PROMINENT consistently outperformed existing methods in prediction accuracy and computational efficiency. PROMINENT also identified crucial genes and pathways involved in disease mechanisms.

Conclusions: PROMINENT represents a significant advancement in leveraging DNA methylation data for phenotype prediction, offering both high accuracy and interpretability within reasonable computational time. This method holds promise for elucidating the epigenetic underpinnings of complex diseases and enhancing the utility of DNA methylation data in biomedical research.

Citations: 0
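The pathway-prior idea described above, connecting gene-level inputs to pathway nodes only where a database such as GO or KEGG records membership, is commonly implemented as a masked linear layer. A minimal NumPy sketch with made-up membership data; the paper's actual architecture may differ:

```python
import numpy as np

# Toy gene-to-pathway membership: rows = pathways, cols = genes.
# In PROMINENT this prior would come from GO/KEGG; here it is invented.
mask = np.array([
    [1, 1, 0, 0, 0],   # pathway A contains genes 0 and 1
    [0, 0, 1, 1, 1],   # pathway B contains genes 2, 3, 4
])

rng = np.random.default_rng(0)
W = rng.normal(size=mask.shape)      # dense weights, pathways x genes
W_masked = W * mask                  # zero out edges absent from the prior

x = rng.normal(size=5)               # gene-level methylation features, one sample
pathway_activations = np.tanh(W_masked @ x)
print(pathway_activations.shape)     # one activation per pathway
```

Because only mask-permitted weights are nonzero, each pathway node aggregates exactly the genes the database assigns to it, which is what makes per-pathway attributions (e.g., via SHAP) biologically readable.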
MMSupcon: An image fusion-based multi-modal supervised contrastive method for brain tumor diagnosis
IF 6.2 · CAS Region 2, Medicine
Artificial Intelligence in Medicine · Pub Date: 2025-08-28 · DOI: 10.1016/j.artmed.2025.103253
Haoyu Wang , Jing Zhang , Siying Wu , Haoran Wei , Xun Chen , Yunwei Ou , Xiaoyan Sun
The diagnosis of brain tumors is pivotal for effective treatment, with MRI serving as a commonly used non-invasive diagnostic modality in clinical practice. Fundamentally, brain tumor diagnosis is a pattern recognition task that requires integrating information from multi-modal MRI images. However, existing fusion strategies are hindered by the scarcity of multi-modal imaging samples. In this paper, we propose a new training paradigm tailored to multi-modal imaging in brain tumor diagnosis, called the multi-modal supervised contrastive learning method (MMSupcon). This method significantly enhances diagnostic accuracy through two key components: multi-modal medical image fusion and a multi-modal supervised contrastive loss. First, the fusion component integrates complementary imaging modalities to generate information-rich samples. Second, by introducing fused samples to guide original samples in learning feature consistency or inconsistency among classes, our loss component effectively preserves the integrity of cross-modal information while maintaining the distinctiveness of individual modalities. Finally, MMSupcon is validated on a real-world brain tumor dataset collected from Beijing Tiantan Hospital, achieving state-of-the-art performance. Additional experiments on two public BraTS glioma classification datasets also demonstrate substantial performance improvements. The source code is released at https://github.com/hywang02/MMSupcon.

Citations: 0
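MMSupcon builds on supervised contrastive learning, where embeddings of samples sharing a label are pulled together and others pushed apart. A plain-NumPy sketch of the standard supervised contrastive loss (the Khosla et al. formulation); the paper's multi-modal variant additionally uses fused images as guidance, which is not shown here:

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over L2-normalized embeddings."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature            # pairwise cosine similarities / T
    n = len(labels)
    loss = 0.0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue                       # no same-class anchor partners
        others = [j for j in range(n) if j != i]
        log_denom = np.log(np.sum(np.exp(sim[i, others])))
        # average negative log-probability of picking a positive for anchor i
        loss += -np.mean([sim[i, j] - log_denom for j in positives])
    return loss / n

rng = np.random.default_rng(1)
feats = rng.normal(size=(6, 8))            # 6 embeddings of dimension 8
labels = np.array([0, 0, 1, 1, 2, 2])
loss_value = supcon_loss(feats, labels)
print(loss_value)
```

Each anchor's term is the log-sum-exp over all other samples minus the similarity to a positive, so the loss is driven down by tight same-class clusters.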
Privacy-preserving federated transfer learning for enhanced liver lesion segmentation in PET–CT imaging
IF 6.2 · CAS Region 2, Medicine
Artificial Intelligence in Medicine · Pub Date: 2025-08-28 · DOI: 10.1016/j.artmed.2025.103245
Rajesh Kumar , Shaoning Zeng , Jay Kumar , Zakria , Xinfeng Mao
Positron Emission Tomography–Computed Tomography (PET–CT) imaging is critical for liver lesion diagnosis. However, data scarcity, privacy concerns, and cross-institutional imaging heterogeneity impede accurate deep learning model deployment. We propose a Federated Transfer Learning (FTL) framework that integrates federated learning's privacy-preserving collaboration with transfer learning's pre-trained model adaptation, enhancing liver lesion segmentation in PET–CT imaging. By leveraging a Feature Co-learning Block (FCB) and privacy-enhancing technologies, our approach ensures robust segmentation without sharing sensitive patient data. Our contributions are: (1) a privacy-preserving FTL framework combining federated learning and adaptive transfer learning; (2) a multi-modal FCB for improved PET–CT feature integration; (3) extensive evaluation across diverse institutions with privacy-enhancing technologies such as Differential Privacy (DP) and Homomorphic Encryption (HE). Experiments on simulated multi-institutional PET–CT datasets demonstrate superior performance compared to baselines, with robust privacy guarantees. The FTL framework reduces data requirements and enhances generalizability, advancing liver lesion diagnostics.

Citations: 0
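At the core of any federated scheme like this one is server-side aggregation of client model updates without exchanging raw patient data. A minimal sketch of size-weighted federated averaging (FedAvg-style) over hypothetical per-institution parameters; the DP noise and HE that the paper layers on top are omitted:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Size-weighted mean of per-client parameter lists (FedAvg aggregation)."""
    total = sum(client_sizes)
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]

# Two hypothetical institutions, each holding one weight matrix and one bias.
w_a = [np.full((2, 2), 1.0), np.full(2, 0.0)]   # 100 local samples
w_b = [np.full((2, 2), 3.0), np.full(2, 1.0)]   # 300 local samples
global_w = fedavg([w_a, w_b], client_sizes=[100, 300])
print(global_w[0])   # 0.25 * 1.0 + 0.75 * 3.0 = 2.5 everywhere
```

Only parameter tensors cross institutional boundaries; the weighting by local dataset size keeps the aggregate from being dominated by small cohorts.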
Physical foundations for trustworthy medical imaging: A survey for artificial intelligence researchers
IF 6.2 · CAS Region 2, Medicine
Artificial Intelligence in Medicine · Pub Date: 2025-08-26 · DOI: 10.1016/j.artmed.2025.103251
Miriam Cobo , David Corral Fontecha , Wilson Silva , Lara Lloret Iglesias
Artificial intelligence in medical imaging has grown rapidly in the past decade, driven by advances in deep learning and widespread access to computing resources. Applications cover diverse imaging modalities, including those based on electromagnetic radiation (e.g., X-rays), subatomic particles (e.g., nuclear imaging), and acoustic waves (ultrasound). Each modality's features and limitations are defined by its underlying physics. However, many artificial intelligence practitioners lack a solid understanding of the physical principles involved in medical image acquisition. This gap hinders leveraging the full potential of deep learning, as incorporating physics knowledge into artificial intelligence systems promotes trustworthiness, especially in limited-data scenarios. This work reviews the fundamental physical concepts behind medical imaging and examines their influence on recent developments in artificial intelligence, particularly generative models and reconstruction algorithms. Finally, we describe physics-informed machine learning approaches to improve feature learning in medical imaging.

Citations: 0
TIPs: Tooth instance and pulp segmentation based on hierarchical extraction and fusion of anatomical priors from cone-beam CT
IF 6.2 · CAS Region 2, Medicine
Artificial Intelligence in Medicine · Pub Date: 2025-08-23 · DOI: 10.1016/j.artmed.2025.103247
Tao Zhong , Yang Ning , Xueyang Wu , Li Ye , Chichi Li , Yu Zhang , Yu Du
Accurate instance segmentation of tooth and pulp from cone-beam computed tomography (CBCT) images is essential but highly challenging due to the pulp's small structures and indistinct boundaries. To address these critical challenges, we propose TIPs, designed for Tooth Instance and Pulp segmentation. TIPs initially employs a backbone model to segment a binary mask of the tooth from CBCT images, which is then used to derive a position prior for the tooth and a shape prior for the pulp. Subsequently, we propose Hierarchical Fusion Mamba models to leverage the strengths of both anatomical priors and CBCT images by extracting and integrating shallow and deep features from Convolutional Neural Networks (CNNs) and State Space Sequence Models (SSMs), respectively. This process achieves tooth instance and pulp segmentation, which are then combined to obtain the final pulp instance segmentation. Extensive experiments on CBCT scans from 147 patients demonstrate that TIPs significantly outperforms state-of-the-art methods in segmentation accuracy. Furthermore, we have encapsulated this framework into an openly accessible tool for one-click use. To our knowledge, this is the first toolbox capable of segmenting tooth and pulp instances, with its performance validated on two external datasets comprising 59 samples from the Toothfairy2 dataset and 48 samples from the STS dataset. These results demonstrate the potential of TIPs as a practical tool to streamline clinical workflows in digital dentistry, enhancing the precision and efficiency of dental diagnostics and treatment planning.

Citations: 0
Multiplex aggregation combining sample reweight composite network for pathology image segmentation
IF 6.2 · CAS Region 2, Medicine
Artificial Intelligence in Medicine · Pub Date: 2025-08-22 · DOI: 10.1016/j.artmed.2025.103239
Dawei Fan , Zhuo Chen , Yifan Gao , Jiaming Yu , Kaibin Li , Yi Wei , Yanping Chen , Riqing Chen , Lifang Wei
In digital pathology, nuclei segmentation is a critical task for pathological image analysis, holding significant importance for diagnosis and research. However, challenges such as blurred boundaries between nuclei and background regions, domain shifts between pathological images, and the uneven distribution of nuclei pose significant obstacles to segmentation. To address these issues, we propose an innovative causal-inference-inspired Diversified aggregation convolution Network named CDNet, which integrates a Diversified Aggregation Convolution (DAC) module, a Causal Inference Module (CIM) based on causal discovery principles, and a comprehensive loss function. DAC mitigates the issue of unclear boundaries between nuclei and background regions, and CIM enhances the model's cross-domain generalization ability. A novel Stable-Weighted Combined loss function is designed, combining a chunk-computed Dice Loss with Focal Loss and a Causal Inference Loss to address the uneven distribution of nuclei. Experimental evaluations on the MoNuSeg, GLySAC, and MoNuSAC datasets demonstrate that CDNet significantly outperforms other models and exhibits strong generalization capabilities. Specifically, CDNet outperforms the second-best model by 0.79% (mIoU) and 1.32% (DSC) on MoNuSeg, by 2.65% (mIoU) and 2.13% (DSC) on GLySAC, and by 1.54% (mIoU) and 1.10% (DSC) on MoNuSAC. Code is publicly available at https://github.com/7FFDW/CDNet.

Citations: 0
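Composite segmentation losses of the kind CDNet uses typically mix an overlap term (Dice) with a hard-example term (Focal). A generic NumPy sketch of such a combination; the paper's Stable-Weighted Combined loss additionally includes a Causal Inference Loss and a chunked Dice computation, not reproduced here:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """1 - Dice coefficient over soft predictions; penalizes poor overlap."""
    inter = np.sum(pred * target)
    return 1.0 - (2 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-6):
    """Cross-entropy down-weighted on easy pixels via the (1-pt)^gamma factor."""
    p = np.clip(pred, eps, 1 - eps)
    pt = np.where(target == 1, p, 1 - p)   # probability assigned to the true class
    return np.mean(-((1 - pt) ** gamma) * np.log(pt))

def combined_loss(pred, target, w_dice=0.5, w_focal=0.5):
    return w_dice * dice_loss(pred, target) + w_focal * focal_loss(pred, target)

target = np.array([[0, 0, 1, 1]], dtype=float)
good = np.array([[0.05, 0.1, 0.9, 0.95]])   # confident, correct prediction
bad  = np.array([[0.9, 0.8, 0.2, 0.1]])     # confidently wrong prediction
print(combined_loss(good, target), combined_loss(bad, target))
```

Dice keeps the optimization focused on region overlap even when foreground pixels are rare, while Focal concentrates gradient on the hardest pixels, a useful pairing when nuclei are unevenly distributed.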
Unprepared and overwhelmed: A case for clinician-focused AI education
IF 6.2 · CAS Region 2, Medicine
Artificial Intelligence in Medicine · Pub Date: 2025-08-22 · DOI: 10.1016/j.artmed.2025.103252
Nadia Siddiqui , Yazan Bouchi , Ellen Kim , Jonathan D. Hron , John Park , John Kang
This perspective illustrates the need for improved AI education for clinicians, highlighting gaps in current approaches and technical content. It advocates for the creation of AI guides specifically designed for clinicians, integrating case-based learning approaches and led by clinical informaticians. We emphasize the importance of modern medical educational strategies, and reflect on the relevance and applicability of AI education, to ensure clinicians are prepared for safe, effective, and efficient AI-driven healthcare.

1–2 sentence description: This position article reflects on the current landscape of AI educational guides for clinicians, identifying gaps in instructional approaches and technical content. We propose the development of case-based AI education modules led by clinical informatics physicians in collaboration with professional societies.

Citations: 0
EvidenceMap: Learning evidence analysis to unleash the power of small language models for biomedical question answering
IF 6.2 · CAS Region 2, Medicine
Artificial Intelligence in Medicine · Pub Date: 2025-08-19 · DOI: 10.1016/j.artmed.2025.103246
Chang Zong , Jian Wan , Siliang Tang , Lei Zhang
When addressing professional questions in the biomedical domain, humans typically gather multiple pieces of information as evidence and engage in multifaceted analysis to provide high-quality answers. Current LLM-based question answering methods lack a detailed definition of, and learning process for, evidence analysis, leading to the risk of error propagation and hallucination when using evidence. Although increasing the parameter count of LLMs can alleviate these issues, it also presents challenges for training and deployment with limited resources. In this study, we propose EvidenceMap, which enables a lightweight pre-trained language model to explicitly learn multiple aspects of biomedical evidence, including supportive evaluation, logical correlation, and content summarization, thereby latently guiding a generative model (around 3B parameters) to produce textual responses. Experimental results demonstrate that our method, which learns evidence analysis by fine-tuning a model with only 66M parameters, exceeds a RAG method with an 8B LLM by 19.9% and 5.7% in reference-based quality and accuracy, respectively. The code and dataset for reproducing our framework and experiments are available at https://github.com/ZUST-BIT/EvidenceMap.

Citations: 0
Difficulty-aware coupled contour regression network with IoU loss for efficient IVUS delineation
IF 6.2 · CAS Region 2, Medicine
Artificial Intelligence in Medicine · Pub Date: 2025-08-18 · DOI: 10.1016/j.artmed.2025.103240
Yuan Yang , Xu Yu , Wei Yu , Shengxian Tu , Su Zhang , Wei Yang
Delineation of the lumen and external elastic lamina contours is crucial for quantitative analysis of intravascular ultrasound (IVUS) images. However, the various artifacts in IVUS images pose substantial challenges for accurate delineation. Existing mask-based methods often produce anatomically implausible contours in artifact-affected images, while contour-based methods suffer from over-smoothing within artifact regions. In this paper, we directly regress the contour pairs instead of performing mask-based segmentation. A coupled contour representation is adopted to learn a low-dimensional contour signature space, where the embedded anatomical prior enables the model to avoid producing unreasonable results. Further, a PIoU loss is proposed to capture the overall shape of the contour points and maximize the similarity between the regressed contours and manually delineated contours with various irregular shapes, alleviating the over-smoothing problem. For images with severe artifacts, a difficulty-aware training strategy is designed for contour regression, which gradually guides the model to focus on hard samples and improves contour localization accuracy. We evaluate the proposed framework on a large IVUS dataset consisting of 7204 frames from 185 pullbacks. The mean Dice similarity coefficients for the lumen and external elastic lamina are 0.951 and 0.967, significantly outperforming other state-of-the-art (SOTA) models. All regressed contours in the test images are anatomically plausible. On the public IVUS-2011 dataset, the proposed method attains performance comparable to SOTA models with the highest processing speed, at 100 fps. The code is available at https://github.com/SMU-MedicalVision/ContourRegression.

Citations: 0
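The idea of a low-dimensional contour signature space can be illustrated with PCA: training contours are projected onto a few leading components, so any signature decodes back to a shape from the plausible family. A toy sketch on synthetic circular contours; the paper's coupled representation is learned jointly for the contour pair, not plain per-contour PCA:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic "lumen contours": 50 contours of 64 points each, flattened to
# 128-D vectors, generated as noisy circles of varying radius. These are
# stand-ins for manually delineated IVUS contours.
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
radii = rng.uniform(2.0, 4.0, size=50)
contours = np.stack([
    np.concatenate([r * np.cos(theta), r * np.sin(theta)])
    + rng.normal(0, 0.02, 128)
    for r in radii
])

# PCA via SVD: the leading right-singular vectors span the signature space.
mean = contours.mean(axis=0)
U, S, Vt = np.linalg.svd(contours - mean, full_matrices=False)
k = 4
signatures = (contours - mean) @ Vt[:k].T      # 128-D contour -> 4-D signature
reconstructed = signatures @ Vt[:k] + mean     # decode back to a full contour

err = np.abs(reconstructed - contours).max()
print(err)   # small: a few numbers suffice for this contour family
```

Regressing in the signature space rather than pixel space is what constrains outputs to anatomically plausible shapes, since the decoder can only produce combinations of shapes seen in training.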
BIGPN: Biologically informed graph propagational network for plasma proteomic profiling of neurodegenerative biomarkers
IF 6.2 · CAS Region 2, Medicine
Artificial Intelligence in Medicine · Pub Date: 2025-08-15 · DOI: 10.1016/j.artmed.2025.103241
Sunghong Park , Dong-gi Lee , Juhyeon Kim , Masaud Shah , Hyunjung Shin , Hyun Goo Woo
Neurodegenerative diseases involve progressive neuronal dysfunction, requiring the identification of specific pathological features for accurate diagnosis. Although cerebrospinal fluid analysis and neuroimaging are commonly employed, their invasiveness and high cost limit widespread clinical use. In contrast, blood-based biomarkers offer a non-invasive, cost-effective, and accessible alternative. Recent advances in plasma proteomics combined with machine learning (ML) have further improved diagnostic accuracy; however, the integration of underlying biological information remains largely overlooked. Notably, many ML-based plasma proteomic profiling approaches ignore protein-protein interactions (PPIs) and the hierarchical structure of molecular pathways. To address these limitations, we propose the Biologically Informed Graph Propagational Network (BIGPN), a novel ML model for plasma proteomic profiling of neurodegenerative biomarkers. BIGPN employs a graph neural network-based architecture to harness a PPI network, propagating the independent effects of proteins through the network and capturing higher-order interactions with global awareness of PPIs. BIGPN then applies a multi-level pathway structure to extract biologically meaningful feature representations, ensuring that the model reflects structured biological mechanisms, and provides clear explainability of the pathway structure through probabilistically represented importance parameters. Experimental validation on the UK Biobank dataset demonstrated the superior performance of BIGPN in neurodegenerative risk prediction, outperforming comparison methods. Furthermore, the explainability of BIGPN facilitated detailed analyses of the discriminative significance of synergistic effects, the predictive importance of proteins, and longitudinal changes in biomarker profiles, reinforcing its clinical relevance. Overall, BIGPN's integration of PPIs and pathway structure addresses critical gaps in ML-based plasma proteomic profiling, offering a powerful approach for improved neurodegenerative disease diagnosis.

Citations: 0
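Propagating per-protein effects through a PPI network, as described above, follows the standard graph-propagation pattern: node signals are multiplied by a normalized adjacency matrix so each step mixes in neighbor information. A toy NumPy sketch with a hypothetical 4-protein network; BIGPN's learned propagation and pathway layers are more elaborate:

```python
import numpy as np

# Toy symmetric PPI adjacency over 4 proteins (edges are invented).
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Symmetrically normalized adjacency with self-loops, as in common
# GCN-style propagation: P = D^{-1/2} (A + I) D^{-1/2}.
A_hat = A + np.eye(4)
d = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
P = D_inv_sqrt @ A_hat @ D_inv_sqrt

x = np.array([1.0, 0.0, 0.0, 0.0])   # effect concentrated on protein 0
h = x.copy()
for _ in range(2):                   # two steps reach neighbors-of-neighbors
    h = P @ h
print(h)
```

After two propagation steps the signal from protein 0 has reached protein 3, which it does not touch directly, capturing exactly the kind of higher-order interaction the abstract describes.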