Context-Aware Optimal Transport Learning for Retinal Fundus Image Enhancement

Vamsi Krishna Vasa, Peijie Qiu, Wenhui Zhu, Yujian Xiong, Oana Dumitrascu, Yalin Wang
{"title":"Context-Aware Optimal Transport Learning for Retinal Fundus Image Enhancement","authors":"Vamsi Krishna Vasa, Peijie Qiu, Wenhui Zhu, Yujian Xiong, Oana Dumitrascu, Yalin Wang","doi":"arxiv-2409.07862","DOIUrl":null,"url":null,"abstract":"Retinal fundus photography offers a non-invasive way to diagnose and monitor\na variety of retinal diseases, but is prone to inherent quality glitches\narising from systemic imperfections or operator/patient-related factors.\nHowever, high-quality retinal images are crucial for carrying out accurate\ndiagnoses and automated analyses. The fundus image enhancement is typically\nformulated as a distribution alignment problem, by finding a one-to-one mapping\nbetween a low-quality image and its high-quality counterpart. This paper\nproposes a context-informed optimal transport (OT) learning framework for\ntackling unpaired fundus image enhancement. In contrast to standard generative\nimage enhancement methods, which struggle with handling contextual information\n(e.g., over-tampered local structures and unwanted artifacts), the proposed\ncontext-aware OT learning paradigm better preserves local structures and\nminimizes unwanted artifacts. Leveraging deep contextual features, we derive\nthe proposed context-aware OT using the earth mover's distance and show that\nthe proposed context-OT has a solid theoretical guarantee. Experimental results\non a large-scale dataset demonstrate the superiority of the proposed method\nover several state-of-the-art supervised and unsupervised methods in terms of\nsignal-to-noise ratio, structural similarity index, as well as two downstream\ntasks. The code is available at\n\\url{https://github.com/Retinal-Research/Contextual-OT}.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Image and Video Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07862","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Retinal fundus photography offers a non-invasive way to diagnose and monitor a variety of retinal diseases, but it is prone to inherent quality glitches arising from systemic imperfections or operator/patient-related factors. High-quality retinal images, however, are crucial for accurate diagnoses and automated analyses. Fundus image enhancement is typically formulated as a distribution alignment problem: finding a one-to-one mapping between a low-quality image and its high-quality counterpart. This paper proposes a context-informed optimal transport (OT) learning framework for unpaired fundus image enhancement. In contrast to standard generative image enhancement methods, which struggle to handle contextual information and often over-tamper with local structures or introduce unwanted artifacts, the proposed context-aware OT learning paradigm better preserves local structures and minimizes unwanted artifacts. Leveraging deep contextual features, we derive the proposed context-aware OT using the earth mover's distance and show that it comes with a solid theoretical guarantee. Experimental results on a large-scale dataset demonstrate the superiority of the proposed method over several state-of-the-art supervised and unsupervised methods in terms of signal-to-noise ratio, structural similarity index, and two downstream tasks. The code is available at https://github.com/Retinal-Research/Contextual-OT.
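
To make the central idea concrete, below is a minimal sketch of an earth-mover's-distance loss computed over deep contextual features. It is an illustration under stated assumptions, not the paper's implementation: it assumes the "contextual features" are two equal-size sets of feature vectors (e.g., patch embeddings from a pretrained backbone) with uniform mass, in which case the EMD reduces to an optimal assignment problem, and it uses a hypothetical cosine cost. The paper's actual feature extractor, ground cost, and solver are not specified in this abstract.

```python
# Minimal EMD-style contextual loss sketch (assumptions noted above).
import numpy as np
from scipy.optimize import linear_sum_assignment


def contextual_emd(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Earth mover's distance between two equal-size sets of feature
    vectors with uniform mass, solved exactly as an optimal assignment.

    feats_a, feats_b: arrays of shape (N, D), e.g. flattened deep features.
    """
    # L2-normalize so the cost below is 1 - cosine similarity.
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    cost = 1.0 - a @ b.T                      # (N, N) pairwise ground cost
    rows, cols = linear_sum_assignment(cost)  # optimal transport plan
    return float(cost[rows, cols].mean())     # average per-feature cost


# Example usage with random stand-ins for low-/high-quality image features.
rng = np.random.default_rng(0)
f_low = rng.normal(size=(64, 256))
f_high = rng.normal(size=(64, 256))
print(contextual_emd(f_low, f_high))
```

With uniform marginals over two sets of equal size, the optimal transport plan is a permutation matrix (Birkhoff's theorem), so the exact assignment solver above recovers the true EMD; in practice, a differentiable approximation such as a Sinkhorn iteration would be used inside a training loop.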