Few-shot medical image segmentation with high-fidelity prototypes
Song Tang, Shaxu Yan, Xiaozhi Qi, Jianxin Gao, Mao Ye, Jianwei Zhang, Xiatian Zhu
Medical Image Analysis, Volume 100, Article 103412 (1 February 2025). DOI: 10.1016/j.media.2024.103412
Abstract: Few-shot Semantic Segmentation (FSS) aims to adapt a pretrained model to new classes with as few as one labeled training sample per class. Although prototype-based approaches have achieved substantial success, existing models are limited to imaging scenarios with clearly distinct objects and relatively simple backgrounds, e.g., natural images. This makes such models suboptimal for medical imaging, where neither condition holds. To address this problem, we propose a novel Detail Self-refined Prototype Network (DSPNet) that constructs high-fidelity prototypes representing the object foreground and the background more comprehensively. Specifically, to construct global semantics while maintaining the captured detail semantics, we learn the foreground prototypes by modeling the multimodal structures with clustering and then fusing them in a channel-wise manner. Considering that the background often has no apparent semantic relation across the spatial dimensions, we integrate channel-specific structural information under sparse channel-aware regulation. Extensive experiments on three challenging medical image benchmarks show the superiority of DSPNet over previous state-of-the-art methods. The code and data are available at https://github.com/tntek/DSPNet.

{"title":"COLLATOR: Consistent spatial–temporal longitudinal atlas construction via implicit neural representation","authors":"Lixuan Chen , Xuanyu Tian , Jiangjie Wu , Guoyan Lao , Yuyao Zhang , Hongjiang Wei","doi":"10.1016/j.media.2024.103396","DOIUrl":"10.1016/j.media.2024.103396","url":null,"abstract":"<div><div>Longitudinal brain atlases that present brain development trend along time, are essential tools for brain development studies. However, conventional methods construct these atlases by independently averaging brain images from different individuals at discrete time points. This approach could introduce temporal inconsistencies due to variations in ontogenetic trends among samples, potentially affecting accuracy of brain developmental characteristic analysis. In this paper, we propose an implicit neural representation (INR)-based framework to improve the temporal consistency in longitudinal atlases. We treat temporal inconsistency as a 4-dimensional (4D) image denoising task, where the data consists of 3D spatial information and 1D temporal progression. We formulate the longitudinal atlas as an implicit function of the spatial–temporal coordinates, allowing structural inconsistency over the time to be considered as 3D image noise along age. Inspired by recent self-supervised denoising methods (e.g. Noise2Noise), our approach learns the noise-free and temporally continuous implicit function from inconsistent longitudinal atlas data. Finally, the time-consistent longitudinal brain atlas can be reconstructed by evaluating the denoised 4D INR function at critical brain developing time points. We evaluate our approach on three longitudinal brain atlases of different MRI modalities, demonstrating that our method significantly improves temporal consistency while accurately preserving brain structures. Additionally, the continuous functions generated by our method enable the creation of 4D atlases with higher spatial and temporal resolution. Code: <span><span>https://github.com/maopaom/COLLATOR</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"100 ","pages":"Article 103396"},"PeriodicalIF":10.7,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142789979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Corrigendum to “Detection and analysis of cerebral aneurysms based on X-ray rotational angiography - the CADA 2020 challenge” [Medical Image Analysis, April 2022, Volume 77, 102333]
Matthias Ivantsits, Leonid Goubergrits, Jan-Martin Kuhnigk, Markus Huellebrand, Jan Bruening, Tabea Kossen, Boris Pfahringer, Jens Schaller, Andreas Spuler, Titus Kuehne, Yizhuan Jia, Xuesong Li, Suprosanna Shit, Bjoern Menze, Ziyu Su, Jun Ma, Ziwei Nie, Kartik Jain, Yanfei Liu, Yi Lin, Anja Hennemuth
Medical Image Analysis, Volume 100, Article 103363 (1 February 2025). DOI: 10.1016/j.media.2024.103363

The Developing Human Connectome Project: A fast deep learning-based pipeline for neonatal cortical surface reconstruction
Qiang Ma, Kaili Liang, Liu Li, Saga Masui, Yourong Guo, Chiara Nosarti, Emma C. Robinson, Bernhard Kainz, Daniel Rueckert
Medical Image Analysis, Volume 100, Article 103394 (1 February 2025). DOI: 10.1016/j.media.2024.103394
Abstract: The Developing Human Connectome Project (dHCP) aims to explore developmental patterns of the human brain during the perinatal period. An automated processing pipeline has been developed to extract high-quality cortical surfaces from structural brain magnetic resonance (MR) images for the dHCP neonatal dataset. However, the current implementation of the pipeline requires more than 6.5 h to process a single MRI scan, making it expensive for large-scale neuroimaging studies. In this paper, we propose a fast deep learning (DL)-based pipeline for dHCP neonatal cortical surface reconstruction, incorporating DL-based brain extraction, cortical surface reconstruction and spherical projection, as well as GPU-accelerated cortical surface inflation and cortical feature estimation. We introduce a multiscale deformation network to learn diffeomorphic cortical surface reconstruction end-to-end from T2-weighted brain MRI. A fast unsupervised spherical mapping approach is integrated to minimize metric distortions between cortical surfaces and projected spheres. The entire workflow of our DL-based dHCP pipeline completes within only 24 s on a modern GPU, which is nearly 1000 times faster than the original dHCP pipeline. The qualitative assessment demonstrates that for 82.5% of the test samples, the cortical surfaces reconstructed by our DL-based pipeline achieve superior (54.2%) or equal (28.3%) surface quality compared to the original dHCP pipeline.

Ensemble and low-frequency mixing with diffusion models for accelerated MRI reconstruction
Yejee Shin, Geonhui Son, Dosik Hwang, Taejoon Eo
Medical Image Analysis, Volume 101, Article 103477 (31 January 2025). DOI: 10.1016/j.media.2025.103477
Abstract: Magnetic resonance imaging (MRI) is an important imaging modality in medical diagnosis, providing comprehensive anatomical information with detailed tissue structures. However, the long scan time required to acquire high-quality MR images is a major challenge, especially in urgent clinical scenarios. Although diffusion models have achieved remarkable performance in accelerated MRI, several challenges remain. In particular, they suffer from long inference times due to the high number of iterations in the reverse diffusion process, and they occasionally create artifacts or ‘hallucinate’ tissues that do not exist in the original anatomy. To address these problems, we propose ensemble and adaptive low-frequency mixing on the diffusion model, namely ELF-Diff, for accelerated MRI. The proposed method consists of three key components in the reverse diffusion step: (1) optimization based on unified data consistency; (2) low-frequency mixing; and (3) aggregation of multiple perturbations of the predicted images for the ensemble in each step. We evaluate ELF-Diff on two MRI datasets, FastMRI and SKM-TEA. ELF-Diff surpasses other existing diffusion models for MRI reconstruction. Furthermore, extensive experiments, including a subtask of pathology detection, further demonstrate the superior anatomical precision of our method. ELF-Diff outperforms existing state-of-the-art MRI reconstruction methods without being limited to specific undersampling patterns.

Topology-oriented foreground focusing network for semi-supervised coronary artery segmentation
Xiangxin Wang, Zhan Wu, Yujia Zhou, Huazhong Shu, Jean-Louis Coatrieux, Qianjin Feng, Yang Chen
Medical Image Analysis, Volume 101, Article 103465 (31 January 2025). DOI: 10.1016/j.media.2025.103465
Abstract: Automatic coronary artery (CA) segmentation on coronary computed tomography angiography (CCTA) images is critical for coronary-related disease diagnosis and pre-operative planning. However, such segmentation remains challenging due to the difficulty of maintaining the topological consistency of the CA, interference from irrelevant tubular structures, and insufficient labeled data. In this study, we propose a novel semi-supervised topology-oriented foreground focusing network (TOFF-Net) to comprehensively address these challenges. Specifically, we first propose an explicit vascular connectivity preservation (VCP) loss to capture topological information and effectively strengthen vascular connectivity. Then, we propose an irrelevant vessels removal (IVR) module, which integrates local CA details and global CA distribution to eliminate interference from irrelevant vessels. Moreover, we propose a foreground label migration and focusing (FLMF) module with Pioneer-Imitator learning as a semi-supervised strategy to exploit the unlabeled data. The FLMF effectively guides the attention of TOFF-Net to the foreground. Extensive results on our in-house dataset and two public datasets demonstrate that TOFF-Net achieves state-of-the-art CA segmentation performance with high topological consistency and few false-positive irrelevant tubular structures. The results also reveal that TOFF-Net presents considerable potential for parsing other types of vessels.

Self-supervised 3D medical image segmentation by flow-guided mask propagation learning
Adeleh Bitarafan, Mohammad Mozafari, Mohammad Farid Azampour, Mahdieh Soleymani Baghshah, Nassir Navab, Azade Farshad
Medical Image Analysis, Volume 101, Article 103478 (30 January 2025). DOI: 10.1016/j.media.2025.103478
Abstract: Despite significant progress in 3D medical image segmentation using deep learning, manual annotation remains a labor-intensive bottleneck. Self-supervised mask propagation (SMP) methods have emerged to alleviate this challenge, allowing intra-volume segmentation with just a single slice annotation. However, previous SMP methods often rely on 2D information and ignore volumetric contexts. While our previous work, Vol2Flow, attempts to address this concern, it exhibits limitations, including insufficient focus on local (i.e., slice-pair) information, neglect of global information (i.e., volumetric contexts) in the objective function, and error accumulation during slice-to-slice reconstruction. This paper introduces Flow2Mask, a novel SMP method developed to overcome the limitations of previous SMP approaches, particularly Vol2Flow. During training, Flow2Mask uses the proposed Local-to-Global (L2G) loss to learn inter-slice flow fields among all consecutive slices within a volume in an unsupervised manner. This dynamic loss is based on curriculum learning and gradually learns information within a volume from local to global contexts. Additionally, the Inter-Slice Smoothness (ISS) loss is introduced as a regularization term to encourage changes between slices to occur consistently and continuously. During inference, Flow2Mask leverages these 3D flow fields for inter-slice mask propagation in a 3D image, spreading the annotation from a single annotated slice to the entire volume. Moreover, we propose an automatic strategy to select the most representative slice as the initial annotation in the mask propagation process. Experimental evaluations on different abdominal datasets demonstrate that our proposed SMP method outperforms previous approaches and improves the overall mean DSC of Vol2Flow by +2.1%, +8.2%, and +4.0% on the Sliver, CHAOS, and 3D-IRCAD datasets, respectively. Furthermore, Flow2Mask yields substantial improvements for weakly-supervised and self-supervised few-shot segmentation methods when applied as a mask completion tool. The code and model for Flow2Mask are available at https://github.com/AdelehBitarafan/Flow2Mask, providing a valuable contribution to the field of medical image segmentation.

DSAM: A deep learning framework for analyzing temporal and spatial dynamics in brain networks
Bishal Thapaliya, Robyn Miller, Jiayu Chen, Yu Ping Wang, Esra Akbas, Ram Sapkota, Bhaskar Ray, Pranav Suresh, Santosh Ghimire, Vince D. Calhoun, Jingyu Liu
Medical Image Analysis, Volume 101, Article 103462 (29 January 2025). DOI: 10.1016/j.media.2025.103462
Abstract: Resting-state functional magnetic resonance imaging (rs-fMRI) is a noninvasive technique pivotal for understanding the neural mechanisms underlying intricate cognitive processes. Most rs-fMRI studies compute a single static functional connectivity matrix across brain regions of interest, or dynamic functional connectivity matrices with a sliding-window approach. These approaches risk oversimplifying brain dynamics and lack proper consideration of the goal at hand. While deep learning has gained substantial popularity for modeling complex relational data, its application to uncovering the spatiotemporal dynamics of the brain is still limited. In this study, we propose a novel interpretable deep learning framework that learns a goal-specific functional connectivity matrix directly from time series and employs a specialized graph neural network for the final classification. Our model, DSAM, leverages temporal causal convolutional networks to capture temporal dynamics in both low- and high-level feature representations, a temporal attention unit to identify important time points, a self-attention unit to construct the goal-specific connectivity matrix, and a novel variant of graph neural network to capture the spatial dynamics for downstream classification. To validate our approach, we conducted experiments on the Human Connectome Project dataset with 1075 samples to build and interpret the model for the classification of sex group, and on the Adolescent Brain Cognitive Development dataset with 8520 samples for independent testing. Compared with other state-of-the-art models, the results suggest that this approach goes beyond the assumption of a fixed connectivity matrix and provides evidence of goal-specific brain connectivity patterns, opening up the potential to gain deeper insights into how the human brain adapts its functional connectivity to the task at hand. Our implementation can be found at https://github.com/bishalth01/DSAM.

Dynamic graph based weakly supervised deep hashing for whole slide image classification and retrieval
Haochen Jin, Junyi Shen, Lei Cui, Xiaoshuang Shi, Kang Li, Xiaofeng Zhu
Medical Image Analysis, Volume 101, Article 103468 (23 January 2025). DOI: 10.1016/j.media.2025.103468
Abstract: Recently, a multi-scale representation attention based deep multiple instance learning method was proposed to directly extract patch-level image features from gigapixel whole slide images (WSIs), achieving promising performance on multiple popular WSI datasets. However, it still has two major limitations: (i) it does not consider the relations among patches, which possibly restricts model performance; and (ii) it cannot handle retrieval tasks, which are very important in clinical diagnosis. To overcome these limitations, in this paper we propose a novel end-to-end MIL-based deep hashing framework, composed of a multi-scale representation attention based deep network as the backbone, patch-based dynamic graphs, and hashing encoding layers, to simultaneously handle classification and retrieval tasks. Specifically, the multi-scale representation attention based deep network directly extracts patch-level features from WSIs while mining significant information in cell-, patch- and bag-level features. Additionally, we design a novel patch-based dynamic graph construction method to learn the relations among patches within each bag. Moreover, the hashing encoding layers encode patch- and WSI-level features into binary codes for patch- and WSI-level image retrieval. Extensive experiments on multiple popular datasets demonstrate that the proposed framework outperforms recent state-of-the-art methods on both classification and retrieval tasks. All source codes are available at https://github.com/hcjin0816/DG_WSDH.

Application-driven validation of posteriors in inverse problems
Tim J. Adler, Jan-Hinrich Nölke, Annika Reinke, Minu Dietlinde Tizabi, Sebastian Gruber, Dasha Trofimova, Lynton Ardizzone, Paul F. Jaeger, Florian Buettner, Ullrich Köthe, Lena Maier-Hein
Medical Image Analysis, Volume 101, Article 103474 (23 January 2025). DOI: 10.1016/j.media.2025.103474
Abstract: Current deep learning-based solutions for image analysis tasks are commonly incapable of handling problems for which multiple different plausible solutions exist. In response, posterior-based methods such as conditional Diffusion Models and Invertible Neural Networks have emerged; however, their translation is hampered by a lack of research on adequate validation. In other words, the way progress is measured often does not reflect the needs of the driving practical application. Closing this gap in the literature, we present the first systematic framework for the application-driven validation of posterior-based methods in inverse problems. As a methodological novelty, it adopts key principles from the field of object detection validation, which has a long history of addressing the question of how to locate and match multiple object instances in an image. Treating modes as instances enables us to perform mode-centric validation, using well-interpretable metrics from the application perspective. We demonstrate the value of our framework through instantiations for a synthetic toy example and two medical vision use cases: pose estimation in surgery and imaging-based quantification of functional tissue parameters for diagnostics. Our framework offers key advantages over common approaches to posterior validation in all three examples and could thus revolutionize performance assessment in inverse problems.
