AgGAN: Anatomy-guided generative adversarial network to synthesize arterial spin labeling images for cerebral blood flow measurement under simulated microgravity
Linkun Cai, Yawen Liu, Haijun Niu, Wei Zheng, Hao Wang, Han Lv, Pengling Ren, Zhenchang Wang
Medical Image Analysis, vol. 107 Pt A, Article 103817. DOI: 10.1016/j.media.2025.103817. Published 2025-09-18.

Microgravity-induced alterations in cerebral blood flow (CBF) may contribute to cognitive decline and neurodegeneration in astronauts. Accurate CBF quantification under microgravity conditions is fundamental for maintaining astronaut health and ensuring the success of human space missions. Arterial spin labeling (ASL) perfusion magnetic resonance imaging (MRI) is currently the only non-invasive, non-radioactive technique for quantitatively assessing global and regional CBF. However, deploying MRI scanners aboard a space station remains challenging due to technical, logistical, and payload limitations. To address this challenge, we propose an end-to-end Anatomy-guided Generative Adversarial Network (AgGAN) as a non-invasive, cost-effective, and accurate tool for estimating CBF by synthesizing ASL images under simulated microgravity conditions from the corresponding baseline images. Specifically, inspired by radiologists' diagnostic patterns, we develop a position-aware module to incorporate brain anatomical priors, and a region-adaptive feature extraction module to capture features of irregular brain regions. We also introduce a region-aware focal loss to enhance the synthesis quality of anatomically complex regions. Furthermore, we propose a structure boundary-aware loss that encourages the synthesis network to learn boundary details, effectively avoiding exacerbation of the partial volume effect and improving the accuracy of CBF quantification. Experimental results demonstrate the superiority of the proposed AgGAN in ASL image synthesis under simulated microgravity, with excellent subjective image quality ratings. These findings highlight the potential of our model for CBF prediction in astronauts during spaceflight. Our dataset and code are available at https://github.com/Cai-Linkun/AgGAN.
Non-iterative and uncertainty-aware MRI-based liver fat estimation using an unsupervised deep learning method
Juan P. Meneses, Cristian Tejos, Enes Makalic, Sergio Uribe
Medical Image Analysis, vol. 107, Article 103811. DOI: 10.1016/j.media.2025.103811. Published 2025-09-17.

Liver proton density fat fraction (PDFF), the ratio between fat-only and overall proton densities, is an extensively validated biomarker associated with several diseases. In recent years, numerous deep learning (DL)-based methods for estimating PDFF have been proposed to optimize acquisition and post-processing times without sacrificing accuracy compared to conventional methods. However, the lack of interpretability and the often poor generalizability of these DL-based models undermine the adoption of such techniques in clinical practice.

In this work, we propose an Artificial Intelligence-based Decomposition of water and fat with Echo Asymmetry and Least-squares (AI-DEAL) method, designed to estimate both PDFF and the associated uncertainty maps. Once trained, AI-DEAL performs a one-shot MRI water-fat separation by first calculating the nonlinear confounder variables, R2* and the off-resonance field. It then employs a weighted least squares approach to compute water-only and fat-only signals, along with their corresponding covariance matrix, which are subsequently used to derive the PDFF and its associated uncertainty.

We validated our method using in vivo liver CSE-MRI, a fat-water phantom, and a numerical phantom. AI-DEAL demonstrated PDFF biases of 0.25% and −0.12% at two liver ROIs, outperforming state-of-the-art deep learning-based techniques. Although trained using in vivo data, our method exhibited PDFF biases of −3.43% in the fat-water phantom and −0.22% in the numerical phantom with no added noise. The latter bias remained approximately constant when noise was introduced. Furthermore, the estimated uncertainties showed good agreement with the observed errors and the variations within each ROI, highlighting their potential value for assessing the reliability of the resulting PDFF maps.
Long-term stabilized iris tracking with unsupervised constraints on dynamic AS-OCT
Lingxi Hu, Xiao Wu, Risa Higashita, Xiaoli Xing, Menglan Zhou, Song Lin, Xiaorong Li, Yi Yue, Zunjie Xiao, Yinglin Zhang, Chenglin Yao, Jinming Duan, Jiang Liu
Medical Image Analysis, vol. 107, Article 103787. DOI: 10.1016/j.media.2025.103787. Published 2025-09-16.

Primary angle-closure glaucoma (PACG) is responsible for half of all glaucoma-related blindness worldwide. The devastating disease is often clinically silent before causing irreversible visual damage. Glaucomatous optic neuropathy is the major diagnostic criterion for glaucoma. Patients with severe PACG have been clinically found to have significantly lower pupillary reflex velocity and higher iris rigidity. Anterior segment optical coherence tomography (AS-OCT) enables dynamic visualization of the iris anatomy that cannot be acquired by other imaging modalities. However, automatic quantification of dynamic iris motion on AS-OCT has not yet been implemented. The main challenges lie in the frequent jitter of high-resolution optical imaging, irregular temporal variations of elastic features, and relatively scarce datasets. In this paper, we propose an unsupervised constraint-based jitter refinement tracking (CJRTrack) framework for long-term AS-OCT video tracking. CJRTrack consists of three main modules: it first extracts a set of key regions from low-resolution images using an off-the-shelf point tracking algorithm. Given the initialized frames and points, an unsupervised multi-frame differentiable registration network estimates the localized deformation field patch for the corresponding high-resolution images. It then refines these predictions using a temporal topology constraint-based module, which explicitly ensures overall trajectory stabilization. Multi-scale evaluations on two independent AS-OCT datasets demonstrate that CJRTrack significantly outperforms existing tracking models in both accuracy and stability. The clinical applicability of the model is further assessed on a glaucoma dataset containing 543 diseased eyes. Jitter-corrected quantification is extracted and used to classify neuropathic damage in primary angle-closure patients.
{"title":"Recovering intrinsic conduction velocity and action potential duration from electroanatomic mapping data using curvature","authors":"Caroline Roney , Gernot Plank , Shohreh Honarbakhsh , Caterina Vidal Horrach , Simone Pezzuto , Edward Vigmond","doi":"10.1016/j.media.2025.103809","DOIUrl":"10.1016/j.media.2025.103809","url":null,"abstract":"<div><div>Electroanatomic mapping systems measure the spread of activation and recovery over the surface of the heart. Propagation in cardiac tissue is complicated by the tissue architecture which produces a spatially varying anisotropic conductivity, leading to complex wavefronts. Curvature of the wavefront is known to affect both conduction velocity (CV) and action potential duration (APD). In this study, we sought to better define the impact of wavefront curvature on these properties, as well as the influence of conductivity, in order to recover intrinsic tissue properties. The dependence of CV and APD on curvature were measured for positive and negative curvatures for several ionic models, and then verified in realistic 2D and 3D simulations. Clinical data were also analysed. Results indicate that the effects of APD and CV are well described by simple formulae, and if the structure of the fibre is known, the intrinsic propagation velocities can be recovered. Geometrical curvature, as determined strictly by wavefront shape and ignoring the fibre structure, leads to large regions of spurious high curvature. This is important for determining pathological zones of slow conduction. In the simulations studied, curvature modulated APD by at most 20 ms.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"107 ","pages":"Article 103809"},"PeriodicalIF":11.8,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145091922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Template-based semantic-guided orthodontic teeth alignment previewer","authors":"Qianwen Ji , Yizhou Chen , Xiaojun Chen","doi":"10.1016/j.media.2025.103802","DOIUrl":"10.1016/j.media.2025.103802","url":null,"abstract":"<div><div>Intuitive visualization of orthodontic prediction results is of great significance in helping patients make up their minds about orthodontics and maintain an optimistic attitude during treatment. To address this, we propose a semantically guided orthodontic simulation prediction framework that predicts orthodontic outcomes using only a frontal photograph. Our method comprises four key steps. Firstly, we perform semantic segmentation of oral and the teeth cavity, enabling the extraction of category-specific tooth contours from frontal images with misaligned teeth. Secondly, these extracted contours are employed to adapt the predefined teeth templates to reconstruct 3D models of the teeth. Thirdly, using the reconstructed tooth positions, sizes, and postures, we fit the dental arch curve to guide tooth movement, producing a 3D model of the teeth after simulated orthodontic adjustments. Ultimately, we apply a semantically guided diffusion model for structural control and generate orthodontic prediction images which are consistent with the style of input images by applying texture transformation. Notably, our tooth semantic segmentation model attains an average intersection of union of 0.834 for 24 tooth classes excluding the second and third molars. The average Chamfer distance between our reconstructed teeth models and their corresponding ground-truth counterparts measures at 1.272 mm<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span> in test cases. The teeth alignment, as predicted by our approach, exhibits a high degree of consistency with the actual post-orthodontic results in frontal images. This comprehensive qualitative and quantitative evaluation indicates the practicality and effectiveness of our framework in orthodontics and facial beautification.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"107 ","pages":"Article 103802"},"PeriodicalIF":11.8,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145097602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Model-unrolled fast MRI with weakly supervised lesion enhancement
Fangmao Ju, Yuzhu He, Fan Wang, Xianjun Li, Chen Niu, Chunfeng Lian, Jianhua Ma
Medical Image Analysis, vol. 107, Article 103806. DOI: 10.1016/j.media.2025.103806. Published 2025-09-15.

The utility of magnetic resonance imaging (MRI) in anomaly detection and disease diagnosis is well recognized. However, the current imaging protocol is often hindered by long scanning durations and a misalignment between the scanning process and the specific requirements of subsequent clinical assessments. While recent studies have actively explored accelerated MRI techniques, the majority have concentrated on improving overall image quality across all voxel locations, overlooking the specific abnormalities that hold clinical significance. To address this discrepancy, we propose a model-unrolled deep learning method, guided by weakly supervised lesion attention, for accelerated MRI oriented to downstream clinical needs. In particular, we construct a lesion-focused MRI reconstruction model, which incorporates customized learnable regularizations that can be learned efficiently using only image-level labels to improve potential lesion reconstruction while preserving overall image quality. We then design a dedicated iterative algorithm to solve this task-driven reconstruction model, which is further unfolded as a cascaded deep network for lesion-focused fast imaging. Comprehensive experiments on two public datasets, fastMRI and the Stanford Knee MRI Multi-Task Evaluation (SKM-TEA), demonstrate that our approach, referred to as Lesion-Focused MRI (LF-MRI), surpasses existing accelerated MRI methods by relatively large margins. Remarkably, LF-MRI leads to substantial improvements in areas showing pathology. The source code and pretrained models will be publicly available at https://github.com/ladderlab-xjtu/LF-MRI.
{"title":"Correlation Routing Network for Explainable Lesion Classification in Multi-Parametric Liver MRI","authors":"Fakai Wang , Zhehan Shen , Huimin Lin , Fuhua Yan","doi":"10.1016/j.media.2025.103790","DOIUrl":"10.1016/j.media.2025.103790","url":null,"abstract":"<div><div>Liver tumor diagnosis is a key task in abdominal imaging examination, and there are numerous researches on automatic classifying focal liver lesions (FLL). More are in CT and ultrasound and fewer utilize MRI despite unique diagnostic advantages. The obstacles lie in dataset curation, technical complexity, and clinical explainability in liver MRI. In this paper, we propose the Correlation Routing Network (CRN) which takes in 10 MRI sequences and predicts lesion types (HCC, Cholangioma, Metastasis, Hemangioma, FNH, Cyst) as well as imaging features, to achieve both high accuracy and explainability. The CRN model consists of encoding branches, correlation routing/relay modules, and the self-attention module. The independent encoding paradigm facilitates information disentangling, the correlation routing scheme helps redirection and decoupling effectively, and the self-attention enforces global feature sharing and prediction consistency. The model predicts detailed lesion imaging features, promoting explainable classification and clinical accountability. We also identify the signal relations and derive quantitative explainability. Our liver lesion classification model achieves malignant-benign accuracy of 97.2%, six-class accuracy of 88%, and averaged imaging feature accuracy of 84.9%, outperforming popular CNN and transformer-based models. We hope to spark insights for multimodal lesion classification and model explainability.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"107 ","pages":"Article 103790"},"PeriodicalIF":11.8,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145103903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hi-End-MAE: Hierarchical encoder-driven masked autoencoders are stronger vision learners for medical image segmentation
Fenghe Tang, Qingsong Yao, Wenxin Ma, Chenxu Wu, Zihang Jiang, S. Kevin Zhou
Medical Image Analysis, vol. 107, Article 103770. DOI: 10.1016/j.media.2025.103770. Published 2025-09-12.

Medical image segmentation remains a formidable challenge due to label scarcity. Pre-training Vision Transformers (ViTs) through masked image modeling (MIM) on large-scale unlabeled medical datasets presents a promising solution, providing both computational efficiency and model generalization for various downstream tasks. However, current ViT-based MIM pre-training frameworks predominantly emphasize local aggregation representations in output layers and fail to exploit the rich representations across different ViT layers that better capture the fine-grained semantic information needed for more precise medical downstream tasks. To fill this gap, we present Hierarchical Encoder-driven MAE (Hi-End-MAE), a simple yet effective ViT-based pre-training solution centered on two key innovations: (1) encoder-driven reconstruction, which encourages the encoder to learn more informative features to guide the reconstruction of masked patches; and (2) hierarchical dense decoding, which implements a hierarchical decoding structure to capture rich representations across different layers. We pre-train Hi-End-MAE on a large-scale dataset of 10K CT scans and evaluate its performance across nine public medical image segmentation benchmarks. Extensive experiments demonstrate that Hi-End-MAE achieves superior transfer learning capabilities across various downstream tasks, revealing the potential of ViTs in medical imaging applications. The code is available at: https://github.com/FengheTan9/Hi-End-MAE.
Switch-UMamba: Dynamic scanning vision Mamba UNet for medical image segmentation
Ziyao Zhang, Qiankun Ma, Tong Zhang, Jie Chen, Hairong Zheng, Wen Gao
Medical Image Analysis, vol. 107, Article 103792. DOI: 10.1016/j.media.2025.103792. Published 2025-09-10.

Recently, State Space Models (SSMs), particularly the Mamba-based framework, have demonstrated exceptional performance in medical image segmentation, owing to their capacity to capture long-range dependencies efficiently with linear computational complexity. Nonetheless, current Mamba-based models struggle to preserve the spatial context of 2D visual features, a consequence of their reliance on static 1D selective scanning patterns. In this study, we present Switch-UMamba, a hybrid UNet framework that integrates the local feature extraction power of Convolutional Neural Networks (CNNs) with the ability of SSMs to capture long-range dependencies. Switch-UMamba builds on the Switch Visual State Space (VSS) module to leverage Mixture-of-Scans (MoS), a new scanning mechanism that amalgamates diverse scanning policies by treating each scan head as an expert within the Mixture-of-Experts (MoE) framework. MoS employs a router to dynamically allocate appropriate scanning policies and corresponding scan heads for each sample. This sparsely activated dynamic scanning approach not only ensures rich and comprehensive acquisition of spatial information but also curtails computational expense. Comprehensive experimental evaluation on several medical image segmentation benchmarks indicates that Switch-UMamba achieves state-of-the-art performance without using any pretrained weights. Notably, our approach also outperforms other Mamba-based models with fewer parameters.
SurfGNN: A robust surface-based prediction model with interpretability for coactivation maps of spatial and cortical features
Zhuoshuo Li, Jiong Zhang, Youbing Zeng, Jiaying Lin, Dan Zhang, Jianjia Zhang, Duan Xu, Hosung Kim, Bingguang Liu, Mengting Liu
Medical Image Analysis, vol. 107, Article 103793. DOI: 10.1016/j.media.2025.103793. Published 2025-09-08.

Current brain surface-based prediction models often overlook the variability of regional attributes at the cortical feature level. While graph neural networks (GNNs) excel at capturing regional differences, they encounter challenges when dealing with complex, high-density graph structures. In this work, we treat the cortical surface mesh as a sparse graph and propose an interpretable prediction model, the Surface Graph Neural Network (SurfGNN). SurfGNN employs topology-sampling learning (TSL) and region-specific learning (RSL) structures to manage individual cortical features at lower and higher scales of the surface mesh, effectively tackling the challenges posed by overly abundant mesh nodes and addressing the heterogeneity of cortical regions. Building on this, a novel score-weighted fusion (SWF) method merges the nodal representations associated with each cortical feature for prediction. We apply our model to a neonatal brain age prediction task using a dataset of harmonized MR images from 481 subjects (503 scans). SurfGNN outperforms all existing state-of-the-art methods, demonstrating an improvement of at least 9.0% and achieving a mean absolute error (MAE) of 0.827 ± 0.056 postmenstrual weeks. Furthermore, it generates feature-level activation maps, indicating its capability to identify robust regional variations in the morphometric contributions of different features to the prediction. The code will be available at https://github.com/ZhuoshL/SurfGNN.