Medical Image Analysis: Latest Articles

Coordinate-based neural representation enabling zero-shot learning for fast 3D multiparametric quantitative MRI
IF 10.7 · CAS Tier 1 · Medicine
Medical Image Analysis · Pub Date: 2025-03-06 · DOI: 10.1016/j.media.2025.103530
Guoyan Lao, Ruimin Feng, Haikun Qi, Zhenfeng Lv, Qiangqiang Liu, Chunlei Liu, Yuyao Zhang, Hongjiang Wei
Quantitative magnetic resonance imaging (qMRI) offers tissue-specific physical parameters with significant potential for neuroscience research and clinical practice. However, lengthy scan times for 3D multiparametric qMRI acquisition limit its clinical utility. Here, we propose SUMMIT, an innovative imaging methodology that includes data acquisition and an unsupervised reconstruction for simultaneous multiparametric qMRI. SUMMIT first encodes multiple important quantitative properties into highly undersampled k-space. It further leverages an implicit neural representation incorporated with a dedicated physics model to reconstruct the desired multiparametric maps without needing external training datasets. SUMMIT delivers co-registered T1, T2, T2*, and subvoxel quantitative susceptibility mapping. Extensive simulations, phantom, and in vivo brain imaging demonstrate SUMMIT's high accuracy. Notably, SUMMIT uniquely unravels microstructural alterations in patients with white matter hyperintense lesions with high sensitivity and specificity. Additionally, the proposed unsupervised approach for qMRI reconstruction introduces a novel zero-shot learning paradigm for multiparametric imaging applicable to various medical imaging modalities.

Medical Image Analysis, Volume 102, Article 103530 · Citations: 0
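SUMMIT's reconstruction is built on an implicit neural representation, i.e. a network that maps spatial coordinates to signal values. A common ingredient of such coordinate networks is a Fourier-feature encoding of the input coordinates; the sketch below is a generic illustration of that encoding only (the function name, band count, and shapes are assumptions for illustration, not SUMMIT's implementation):

```python
import numpy as np

def fourier_encode(coords, num_bands=4):
    """Lift 3D coordinates to sin/cos features at octave frequencies,
    the standard input encoding for coordinate-based networks."""
    coords = np.asarray(coords, dtype=float)     # shape (N, 3)
    freqs = 2.0 ** np.arange(num_bands) * np.pi  # (num_bands,)
    args = coords[:, :, None] * freqs            # (N, 3, num_bands)
    enc = np.concatenate([np.sin(args), np.cos(args)], axis=-1)
    return enc.reshape(coords.shape[0], -1)      # (N, 3 * 2 * num_bands)

pts = np.random.rand(5, 3)
print(fourier_encode(pts).shape)  # (5, 24)
```

An MLP consuming these features (rather than raw coordinates) can then represent high-frequency image content, which is what makes per-scan, training-data-free fitting feasible.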
AI-based association analysis for medical imaging using latent-space geometric confounder correction
IF 10.7 · CAS Tier 1 · Medicine
Medical Image Analysis · Pub Date: 2025-03-06 · DOI: 10.1016/j.media.2025.103529
Xianjing Liu, Bo Li, Meike W. Vernooij, Eppo B. Wolvius, Gennady V. Roshchupkin, Esther E. Bron
This study addresses the challenges of confounding effects and interpretability in artificial-intelligence-based medical image analysis. Whereas existing literature often resolves confounding by removing confounder-related information from latent representations, this strategy risks degrading image reconstruction quality in generative models, thus limiting their applicability in feature visualization. To tackle this, we propose a different strategy that retains confounder-related information in latent representations while finding an alternative confounder-free representation of the image data.

Our approach views the latent space of an autoencoder as a vector space, where imaging-related variables, such as the learning target (t) and confounder (c), each have a vector capturing their variability. The confounding problem is addressed by searching for a confounder-free vector that is orthogonal to the confounder-related vector but maximally collinear with the target-related vector. To achieve this, we introduce a novel correlation-based loss that not only performs vector searching in the latent space but also encourages the encoder to generate latent representations linearly correlated with the variables. Subsequently, we interpret the confounder-free representation by sampling and reconstructing images along the confounder-free vector.

The efficacy and flexibility of our proposed method are demonstrated across three applications, accommodating multiple confounders and utilizing diverse image modalities. Results affirm the method's effectiveness in reducing confounder influences, preventing wrong or misleading associations, and offering a unique visual interpretation for in-depth investigations by clinical and epidemiological researchers. The code is released at https://gitlab.com/radiology/compopbio/ai_based_association_analysis.

Medical Image Analysis, Volume 102, Article 103529 · Citations: 0
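The confounder-free vector described above, orthogonal to the confounder direction yet maximally collinear with the target direction, can be illustrated with a one-step Gram-Schmidt projection. This is a minimal geometric sketch of that idea, not the paper's correlation-based loss or its learned vectors:

```python
import numpy as np

def confounder_free_direction(t_vec, c_vec):
    """Subtract the confounder component from the target direction:
    the result is orthogonal to c_vec and, among all vectors orthogonal
    to c_vec, maximally collinear with t_vec (Gram-Schmidt projection)."""
    c_hat = c_vec / np.linalg.norm(c_vec)
    residual = t_vec - (t_vec @ c_hat) * c_hat
    return residual / np.linalg.norm(residual)

t = np.array([1.0, 1.0, 0.0])  # target-related direction
c = np.array([1.0, 0.0, 0.0])  # confounder-related direction
v = confounder_free_direction(t, c)
print(v)      # [0. 1. 0.]
print(v @ c)  # 0.0 (orthogonal to the confounder)
```

Sampling latent codes along `v` and decoding them is then what yields the confounder-free visualizations the abstract mentions.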
The ULS23 challenge: A baseline model and benchmark dataset for 3D universal lesion segmentation in computed tomography
IF 10.7 · CAS Tier 1 · Medicine
Medical Image Analysis · Pub Date: 2025-03-03 · DOI: 10.1016/j.media.2025.103525
M.J.J. de Grauw, E.Th. Scholten, E.J. Smit, M.J.C.M. Rutten, M. Prokop, B. van Ginneken, A. Hering
Size measurements of tumor manifestations on follow-up CT examinations are crucial for evaluating treatment outcomes in cancer patients. Efficient lesion segmentation can speed up these radiological workflows. While numerous benchmarks and challenges address lesion segmentation in specific organs like the liver, kidneys, and lungs, the larger variety of lesion types encountered in clinical practice demands a more universal approach. To address this gap, we introduced the ULS23 benchmark for 3D universal lesion segmentation in chest-abdomen-pelvis CT examinations. The ULS23 training dataset contains 38,693 lesions across this region, including challenging pancreatic, colon and bone lesions. For evaluation purposes, we curated a dataset comprising 775 lesions from 284 patients. Each of these lesions was identified as a target lesion in a clinical context, ensuring diversity and clinical relevance within this dataset. The ULS23 benchmark is publicly accessible at https://uls23.grand-challenge.org, enabling researchers worldwide to assess the performance of their segmentation methods. Furthermore, we have developed and publicly released our baseline semi-supervised 3D lesion segmentation model. This model achieved an average Dice coefficient of 0.703 ± 0.240 on the challenge test set. We invite ongoing submissions to advance the development of future ULS models.

Medical Image Analysis, Volume 102, Article 103525 · Citations: 0
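The baseline's reported score is an average Dice coefficient, which for binary masks is twice the overlap divided by the total foreground volume of both masks. A minimal sketch (the mask shapes and the `eps` guard against empty masks are illustrative assumptions):

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary 3D masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

a = np.zeros((4, 4, 4), dtype=bool); a[1:3, 1:3, 1:3] = True  # 8 voxels
b = np.zeros((4, 4, 4), dtype=bool); b[1:3, 1:3, 1:4] = True  # 12 voxels
print(round(dice(a, b), 3))  # 0.8  -> 2*8 / (8+12)
```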
Hashimoto’s thyroiditis recognition from multi-modal data via global cross-attention and distance-aware training
IF 10.7 · CAS Tier 1 · Medicine
Medical Image Analysis · Pub Date: 2025-03-02 · DOI: 10.1016/j.media.2025.103515
Quankeng Huang, Wenchao Jiang, Junhang Li, Jianxuan Wen, Ji He, Wei Song
Ultrasound images and biological indicators, which reveal Hashimoto's thyroiditis (HT) characteristics in thyroid tissue from different perspectives, play crucial roles in HT recognition. Ultrasound images of patients with HT typically present a heterogeneous background with potential decreases in echogenicity, and clinicians are prone to misdiagnosing HT when visually evaluating these characteristics. In addition, patients with HT may exhibit fluctuations in relevant biological indicators, but there is no absolute relationship between any single biological indicator and HT. To address these challenges, we propose HTR-Net, a novel HT recognition network that combines ultrasound images and biological indicators through multi-modality information embedding. Specifically, HTR-Net introduces a global cross-attention (GCA) module, which enhances recognition of the heterogeneous background with potential decreases in echogenicity. A distance-aware mismatched augmentation (DMA) strategy is also designed to expand the limited biological indicator data while ensuring reasonable values for the augmented indicators, thus enhancing model performance. To address the non-absolute relationship between HT and any single biological indicator, we propose a distance-aware loss (DL) function to constrain feature mapping for effective information extraction from indicators, thereby enhancing the model's capability to detect anomalous sets of biological indicators. To validate the proposed method, we construct a multi-center HT dataset and conduct extensive experiments. The experimental results demonstrate that the proposed HTR-Net achieves state-of-the-art (SOTA) performance.

Medical Image Analysis, Volume 102, Article 103515 · Citations: 0
Prompt-based polyp segmentation during endoscopy
IF 10.7 · CAS Tier 1 · Medicine
Medical Image Analysis · Pub Date: 2025-02-28 · DOI: 10.1016/j.media.2025.103510
Xinzhen Ren, Wenju Zhou, Naitong Yuan, Fang Li, Yetian Ruan, Huiyu Zhou
Accurate judgment and identification of polyp size is crucial in endoscopic diagnosis. However, the indistinct boundaries of polyps lead to missegmentation and missed cancer diagnoses. In this paper, a prompt-based polyp segmentation method (PPSM) is proposed to assist in early-stage cancer diagnosis during endoscopy, combining endoscopists' experience with artificial intelligence technology. Firstly, a prompt-based polyp segmentation network (PPSN) is presented, which contains a prompt encoding module (PEM), a feature extraction encoding module (FEEM), and a mask decoding module (MDM). The PEM encodes prompts to guide the FEEM in feature extraction and the MDM in mask generation, so that the PPSN can segment polyps efficiently. Secondly, endoscopists' ocular attention data (gazes) are used as prompts, which enhance the PPSN's segmentation accuracy and can be obtained effectively in real-world settings. To reinforce the PPSN's stability, non-uniform dot matrix prompts are generated to compensate for frame loss during eye tracking. Moreover, a data augmentation method based on the segment anything model (SAM) is introduced to enrich the prompt dataset and improve the PPSN's adaptability. Experiments demonstrate the PPSM's accuracy and real-time capability, and results from cross-training and cross-testing on four datasets show its generalization. Based on the research results, a disposable electronic endoscope with a real-time auxiliary diagnosis function for early cancer, together with an image processor, has been developed. Part of the code and the method for generating the prompt dataset are available at https://github.com/XinZhenRen/PPSM.

Medical Image Analysis, Volume 102, Article 103510 · Citations: 0
Graph-based prototype inverse-projection for identifying cortical sulcal pattern abnormalities in congenital heart disease
IF 10.7 · CAS Tier 1 · Medicine
Medical Image Analysis · Pub Date: 2025-02-28 · DOI: 10.1016/j.media.2025.103538
Hyeokjin Kwon, Seungyeon Son, Sarah U. Morton, David Wypij, John Cleveland, Caitlin K Rollins, Hao Huang, Elizabeth Goldmuntz, Ashok Panigrahy, Nina H. Thomas, Wendy K. Chung, Evdokia Anagnostou, Ami Norris-Brilliant, Bruce D. Gelb, Patrick McQuillen, George A. Porter Jr., Martin Tristani-Firouzi, Mark W. Russell, Amy E. Roberts, Jane W. Newburger, Kiho Im
Examining the altered arrangement and patterning of sulcal folds offers insights into the mechanisms of neurodevelopmental differences in psychiatric and neurological disorders. Previous sulcal pattern analysis used spectral graph matching of sulcal pit-based graph structures to assess deviations from normative sulcal patterns. However, challenges exist, including the absence of a standard criterion for defining a typical reference set, the time-consuming cost of graph matching, user-defined feature weight sets, and assumptions about uniform node distribution. We developed a deep learning-based sulcal pattern analysis to address these challenges by adapting prototype-based graph neural networks to sulcal pattern graphs. Additionally, we proposed a prototype inverse-projection for better interpretability. Unlike other prototype-based models, our approach inversely projects prototypes onto individual node representations to calculate the inverse-projection weights, enabling efficient visualization of prototypes and focusing the model on selective regions. We evaluated our method through a classification task between healthy controls (n = 174, age = 15.4 ± 1.9 years [mean ± standard deviation]) and patients with congenital heart disease (n = 345, age = 15.8 ± 4.7 years) from four cohort studies and a public dataset. Our approach demonstrated superior classification performance compared to other state-of-the-art models, supported by extensive ablative studies. Furthermore, we visualized and examined the learned prototypes to enhance understanding. We believe our method has the potential to be a sensitive and understandable tool for sulcal pattern analysis.

Medical Image Analysis, Volume 102, Article 103538 · Citations: 0
Integrating language into medical visual recognition and reasoning: A survey
IF 10.7 · CAS Tier 1 · Medicine
Medical Image Analysis · Pub Date: 2025-02-27 · DOI: 10.1016/j.media.2025.103514
Yinbin Lu, Alan Wang
Vision-Language Models (VLMs) are regarded as efficient paradigms that build a bridge between visual perception and textual interpretation. For medical visual tasks, they can benefit from expert observation and physician knowledge extracted from textual context, thereby improving the visual understanding of models. Motivated by the fact that extensive medical reports are commonly attached to medical imaging, medical VLMs have attracted more and more interest, serving not only as a self-supervised learning objective in the pretraining stage but also as a means to introduce auxiliary information into medical visual perception. To strengthen the understanding of such a promising direction, this survey aims to provide an in-depth exploration and review of medical VLMs for various visual recognition and reasoning tasks. Firstly, we present an introduction to medical VLMs. Then, we provide preliminaries and delve into how to exploit language in medical visual tasks from diverse perspectives. Further, we investigate publicly available VLM datasets and discuss the challenges and future perspectives. We expect that this comprehensive discussion of state-of-the-art medical VLMs will help researchers recognize their significant potential.

Medical Image Analysis, Volume 102, Article 103514 · Citations: 0
Beyond the eye: A relational model for early dementia detection using retinal OCTA images
IF 10.7 · CAS Tier 1 · Medicine
Medical Image Analysis · Pub Date: 2025-02-26 · DOI: 10.1016/j.media.2025.103513
Shouyue Liu, Ziyi Zhang, Yuanyuan Gu, Jinkui Hao, Yonghuai Liu, Huazhu Fu, Xinyu Guo, Hong Song, Shuting Zhang, Yitian Zhao
Early detection of dementia, such as Alzheimer's disease (AD) or mild cognitive impairment (MCI), is essential to enable timely intervention and potential treatment. Accurate detection of AD/MCI is challenging due to the high complexity, cost, and often invasive nature of current diagnostic techniques, which limit their suitability for large-scale population screening. Given the shared embryological origins and physiological characteristics of the retina and brain, retinal imaging is emerging as a potentially rapid and cost-effective alternative for the identification of individuals with or at high risk of AD. In this paper, we present a novel PolarNet+ that uses retinal optical coherence tomography angiography (OCTA) to discriminate early-onset AD (EOAD) and MCI subjects from controls. Our method first maps OCTA images from Cartesian coordinates to polar coordinates, allowing approximate sub-region calculation to implement the clinician-friendly early treatment of diabetic retinopathy study (ETDRS) grid analysis. We then introduce a multi-view module to serialize and analyze the images along three dimensions for comprehensive, clinically useful information extraction. Finally, we abstract the sequence embedding into a graph, transforming the detection task into a general graph classification problem. A regional relationship module is applied after the multi-view module to explore the relationships between the sub-regions. Such regional relationship analyses validate known eye-brain links and reveal new discriminative patterns. The proposed model is trained, tested, and validated on four retinal OCTA datasets, including 1,671 participants with AD, MCI, and healthy controls. Experimental results demonstrate the performance of our model in detecting AD and MCI with an AUC of 88.69% and 88.02%, respectively. Our results provide evidence that retinal OCTA imaging, coupled with artificial intelligence, may serve as a rapid and non-invasive approach for large-scale screening of AD and MCI. The code is available at https://github.com/iMED-Lab/PolarNet-Plus-PyTorch, and the dataset is also available upon request.

Medical Image Analysis, Volume 102, Article 103513 · Citations: 0
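PolarNet+ first remaps OCTA images from Cartesian to polar coordinates so that rings and sectors of the ETDRS grid become axis-aligned. A nearest-neighbour version of that remapping, purely illustrative and not the paper's implementation (grid sizes are arbitrary choices here), might look like:

```python
import numpy as np

def to_polar(img, n_r=64, n_theta=128):
    """Resample a square image onto an (r, theta) grid centred on the
    image, using nearest-neighbour lookup of the source pixels."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.linspace(0.0, min(cy, cx), n_r)
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]  # shape (n_r, n_theta)

img = np.arange(100.0).reshape(10, 10)
print(to_polar(img).shape)  # (64, 128)
```

In the polar image, radial ETDRS rings correspond to row bands and angular sectors to column bands, which is what makes the sub-region analysis straightforward.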
Domain knowledge based comprehensive segmentation of Type-A aortic dissection with clinically-oriented evaluation
IF 10.7 · CAS Tier 1 · Medicine
Medical Image Analysis · Pub Date: 2025-02-26 · DOI: 10.1016/j.media.2025.103512
Shanshan Song, Hailong Qiu, Meiping Huang, Jian Zhuang, Qing Lu, Yiyu Shi, Xiaomeng Li, Wen Xie, Guang Tong, Xiaowei Xu
Type-A aortic dissection (TAAD) is a cardiac emergency in which rapid diagnosis, prognosis prediction, and surgical planning are critical for patient survival. A comprehensive understanding of the anatomic structures and related features of TAAD patients is the key to completing these tasks. However, due to the emergent nature of this disease and the advanced expertise required, manual segmentation of these anatomic structures is not routinely available in clinical practice. Automatic segmentation of TAAD is therefore a focus of cardiovascular imaging research, but existing works have two limitations: no comprehensive public dataset and a lack of clinically-oriented evaluation. To address these limitations, in this paper we propose ImageTAAD, the first comprehensive segmentation dataset of TAAD with clinically-oriented evaluation. The dataset comprises 120 cases, each annotated by medical experts with 35 foreground classes reflecting the clinical needs of diagnosis, prognosis prediction, and surgical planning for TAAD. In addition, we have identified four key clinical features for clinically-oriented evaluation. We also propose SegTAAD, a baseline method for comprehensive segmentation of TAAD. SegTAAD utilizes two pieces of domain knowledge: (1) the boundaries play a key role in the evaluation of clinical features and can enhance segmentation performance, and (2) the tear is located between the true lumen (TL) and the false lumen (FL). We have conducted intensive experiments with a variety of state-of-the-art (SOTA) methods, and the results show that our method achieves SOTA performance on the ImageTAAD dataset in terms of overall DSC score, 95% Hausdorff distance, and the four clinical features. In our study, we also found an interesting phenomenon: a higher DSC score does not necessarily indicate better accuracy in clinical feature extraction. All the dataset, code, and trained models have been published (Xiaowei, 2024).

Medical Image Analysis, Volume 102, Article 103512 · Citations: 0
Interpretable modality-specific and interactive graph convolutional network on brain functional and structural connectomes
IF 10.7 · CAS Tier 1 · Medicine
Medical Image Analysis · Pub Date: 2025-02-25 · DOI: 10.1016/j.media.2025.103509
Jing Xia, Yi Hao Chan, Deepank Girish, Jagath C. Rajapakse
Both brain functional connectivity (FC) and structural connectivity (SC) provide distinct neural mechanisms for cognition and neurological disease. In addition, interactions between SC and FC within distributed association regions are related to alterations in cognition or neurological diseases, considering the inherent linkage between neural function and structure. However, there is a scarcity of existing learning-based methods that leverage both modality-specific characteristics and high-order interactions between the two modalities for regression or classification. Hence, this study proposes an interpretable modality-specific and interactive graph convolutional network (MS-Inter-GCN) that incorporates modality-specific information, reflecting the unique neural mechanism of each modality, and structure-function interactions, capturing the underlying foundation provided by white-matter fiber tracts for high-level brain function. In MS-Inter-GCN, we generate modality-specific task-relevant embeddings separately from both FC and SC using a graph convolutional encoder-decoder module. Subsequently, we learn the interactive weights between corresponding regions of FC and SC, reflecting the coupling strength, by employing an interactive module on the embeddings of both modalities. A novel graph structure is constructed, which uses modality-specific task-relevant embeddings and inserts the interactive weights as edges connecting corresponding regions of the two modalities, and is then used for the regression or classification task. Finally, a post-hoc explainability technique, GNNExplainer, is used to identify salient regions and connections of each modality as well as salient interactions between FC and SC associated with tasks.

We apply the proposed framework to fluid cognition prediction and to Parkinson's disease (PD), Alzheimer's disease (AD), and schizophrenia (SZ) classification. Experimental results demonstrate that our method outperforms ten other state-of-the-art methods on multi-modal brain features on all tasks. The GNNExplainer identifies salient structural and functional regions and connections for fluid cognition, PD, AD, and SZ. It confirms that strong structure-function coupling within the executive and control networks, combined with weak coupling within the motor network, is associated with fluid cognition. Moreover, structure-function decoupling in specific brain regions serves as a marker for different diseases: decoupling of the prefrontal, superior parietal, and superior occipital cortices is a marker of PD; decoupling of the middle frontal and lateral parietal cortices, temporal pole, and subcortical regions is indicative of AD; and decoupling of the prefrontal, parietal, and temporal cortices, as well as the cerebellum, contributes to SZ.

Medical Image Analysis, Volume 102, Article 103509 · Citations: 0
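The interactive weights above quantify per-region structure-function coupling. One common, simple proxy for such coupling (not the learned weights in MS-Inter-GCN) is the row-wise Pearson correlation between a region's functional and structural connectivity profiles:

```python
import numpy as np

def regional_coupling(fc, sc):
    """Per-region structure-function coupling: Pearson correlation
    between each region's row of the functional (fc) and structural
    (sc) connectome matrices."""
    coupling = np.empty(fc.shape[0])
    for i in range(fc.shape[0]):
        coupling[i] = np.corrcoef(fc[i], sc[i])[0, 1]
    return coupling

rng = np.random.default_rng(0)
sc = rng.random((5, 5)); sc = (sc + sc.T) / 2       # toy symmetric SC
fc = sc + 0.1 * rng.standard_normal((5, 5))         # FC loosely tracks SC
print(regional_coupling(fc, sc).shape)  # (5,)
```

Regions where this correlation drops ("decoupling") are exactly the kind of markers the abstract reports for PD, AD, and SZ.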