Title: Context-Aware Optimal Transport Learning for Retinal Fundus Image Enhancement
Authors: Vamsi Krishna Vasa, Peijie Qiu, Wenhui Zhu, Yujian Xiong, Oana Dumitrascu, Yalin Wang
Venue: arXiv - EE - Image and Video Processing
Publication date: 2024-09-12 (Journal Article)
DOI: arxiv-2409.07862 (https://doi.org/arxiv-2409.07862)
Code: https://github.com/Retinal-Research/Contextual-OT
Citations: 0
Abstract
Retinal fundus photography offers a non-invasive way to diagnose and monitor
a variety of retinal diseases, but is prone to inherent quality glitches
arising from systemic imperfections or operator/patient-related factors.
However, high-quality retinal images are crucial for carrying out accurate
diagnoses and automated analyses. Fundus image enhancement is typically
formulated as a distribution alignment problem, by finding a one-to-one mapping
between a low-quality image and its high-quality counterpart. This paper
proposes a context-informed optimal transport (OT) learning framework for
tackling unpaired fundus image enhancement. In contrast to standard generative
image enhancement methods, which struggle with handling contextual information
(e.g., over-tampered local structures and unwanted artifacts), the proposed
context-aware OT learning paradigm better preserves local structures and
minimizes unwanted artifacts. Leveraging deep contextual features, we derive
the proposed context-aware OT using the earth mover's distance and show that
the proposed context-OT has a solid theoretical guarantee. Experimental results
on a large-scale dataset demonstrate the superiority of the proposed method
over several state-of-the-art supervised and unsupervised methods in terms of
signal-to-noise ratio, structural similarity index, as well as two downstream
tasks. The code is available at
https://github.com/Retinal-Research/Contextual-OT.
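The abstract formulates enhancement as aligning the distributions of low- and high-quality images under the earth mover's distance (EMD). As a minimal, hypothetical illustration only (not the paper's context-aware formulation, which operates on deep contextual features), the sketch below computes the closed-form 1-D EMD between two discrete distributions on the same unit-spaced support, which equals the L1 distance between their cumulative distribution functions:

```python
def emd_1d(p, q):
    """Earth mover's distance between two 1-D discrete distributions
    on the same unit-spaced support: sum over bins of |CDF_p - CDF_q|.

    p, q: sequences of non-negative weights, each summing to 1.
    """
    assert len(p) == len(q), "distributions must share a support"
    assert abs(sum(p) - 1.0) < 1e-9 and abs(sum(q) - 1.0) < 1e-9

    cdf_p = cdf_q = 0.0
    total = 0.0
    for pi, qi in zip(p, q):
        cdf_p += pi          # running mass of p up to this bin
        cdf_q += qi          # running mass of q up to this bin
        total += abs(cdf_p - cdf_q)  # mass that must still be moved past this bin
    return total


# All mass moved from bin 0 to bin 2: unit mass times distance 2.
print(emd_1d([1.0, 0.0, 0.0], [0.0, 0.0, 1.0]))  # → 2.0
```

In higher dimensions (and between feature distributions, as in this paper) the EMD has no such closed form and is obtained by solving an optimal transport problem; the 1-D case is shown only because it makes the "minimum mass-moving cost" intuition concrete.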