Uncertainty estimation using boundary prediction for medical image super-resolution

Samiran Dey, Partha Basuchowdhuri, Debasis Mitra, Robin Augustine, Sanjoy Kumar Saha, Tapabrata Chakraborti

Computer Vision and Image Understanding, Volume 256, Article 104349. DOI: 10.1016/j.cviu.2025.104349. Published: 2025-03-22. https://www.sciencedirect.com/science/article/pii/S1077314225000724
Citations: 0
Abstract
Medical image super-resolution can be performed by several deep learning frameworks. However, as the safety of each patient is of primary concern, having models with a high degree of population-level accuracy is not enough. Instead of a one-size-fits-all approach, there is a need to measure the reliability and trustworthiness of such models from the point of view of personalized healthcare and precision medicine. Hence, in this paper, we propose a novel approach that uses residual image prediction to predict the range of super-resolved (SR) images that any generative super-resolution model may yield for a given low-resolution (LR) image. Providing multiple images within the suggested lower and upper bounds increases the probability of finding an exact match to the high-resolution (HR) image. To further compare models and provide reliability scores, we estimate the coverage and uncertainty of the models and check whether coverage can be improved at the cost of increased uncertainty. Experimental results on lung CT scans from the LIDC-IDRI and Radiopedia COVID-19 CT Images Segmentation datasets show that our models, BliMSR and MoMSGAN, provide the best HR and SR coverage at different levels of residual attention with comparatively lower uncertainty. We believe our model-agnostic approach to uncertainty estimation for generative medical imaging is the first of its kind and would help clinicians decide on the trustworthiness of any super-resolution model in a generalized manner while providing alternative SR images with enhanced details for better diagnosis of each individual patient.
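The abstract describes the core idea only at a high level: a predicted residual defines a lower and upper bound around the SR output, and coverage (how often the HR reference falls inside that band) is traded off against uncertainty (how wide the band is). The sketch below is a minimal, illustrative rendering of that trade-off; the bound construction (SR output plus or minus a non-negative residual magnitude) and the exact metric definitions are assumptions, not the paper's formulas, and all function names are hypothetical.

```python
# Illustrative sketch of bound-based coverage/uncertainty estimation for SR images.
# The construction and metrics below are assumptions inferred from the abstract,
# not the definitions used in the paper.
import numpy as np


def prediction_bounds(sr: np.ndarray, residual: np.ndarray):
    """Form a lower/upper SR band from an SR image and a non-negative
    predicted residual magnitude (hypothetical construction)."""
    return sr - residual, sr + residual


def coverage(hr: np.ndarray, lower: np.ndarray, upper: np.ndarray) -> float:
    """Fraction of HR pixels that fall inside the predicted band."""
    inside = (hr >= lower) & (hr <= upper)
    return float(inside.mean())


def uncertainty(lower: np.ndarray, upper: np.ndarray) -> float:
    """Mean band width; a wider band raises coverage at the cost of
    higher uncertainty."""
    return float((upper - lower).mean())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hr = rng.random((128, 128))                # stand-in high-resolution image
    sr = hr + rng.normal(0.0, 0.02, hr.shape)  # stand-in SR reconstruction
    res = np.full_like(hr, 0.05)               # stand-in predicted residual magnitude

    lo, up = prediction_bounds(sr, res)
    print(f"coverage={coverage(hr, lo, up):.3f}, "
          f"uncertainty={uncertainty(lo, up):.3f}")
```

Scaling the residual magnitude up or down in this toy setup reproduces the trade-off the abstract mentions: larger bands cover more HR pixels but report higher uncertainty.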
Journal Introduction:
The central focus of this journal is the computer analysis of pictorial information. Computer Vision and Image Understanding publishes papers covering all aspects of image analysis from the low-level, iconic processes of early vision to the high-level, symbolic processes of recognition and interpretation. A wide range of topics in the image understanding area is covered, including papers offering insights that differ from predominant views.
Research Areas Include:
• Theory
• Early vision
• Data structures and representations
• Shape
• Range
• Motion
• Matching and recognition
• Architecture and languages
• Vision systems