Non-Mydriatic Fundus Images Enhancement Based on Conformal Mapping Extension

Songlin Yan, Xiujiao Chen, Jiehua Sun, Xiaoying Tang, Xuebin Chi
{"title":"Non-Mydriatic Fundus Images Enhancement Based on Conformal Mapping Extension","authors":"Songlin Yan, Xiujiao Chen, Jiehua Sun, Xiaoying Tang, Xuebin Chi","doi":"10.1109/CCIS53392.2021.9754644","DOIUrl":null,"url":null,"abstract":"Image enhancement is an important technique for improving observation, especially for non-mydriatic fundus images. Hence a new non-mydriatic fundus images enhancement pipeline is proposed here. Our fundamental procedure is from automatically generating the mask of the field of view (FOV) to restoring their original color. Briefly speaking, by extending the FOV region with conformal mapping, we can solve the boundary problems of image enhancement. And inspired by high dynamic range imaging (HDRI) theory, a new color restoration tactic is developed to correct the color deformation of enhanced images. To demonstrate the robustness of our algorithms, a hybrid test dataset is introduced. It not only contains some public datasets, e.g. DRIVE, Kaggle and Web (some unannotated images from a web), but also includes many private non-mydriatic datasets that were collected from the third affiliated hospital of our collaborative university. The masks were validated on DRIVE dataset by using 5 famous criteria. And we performed all enhanced results with 10 different objective image quality assessment (IQA) models. The experimental outputs of mask segmentation achieve the similarity coefficients: Cosine 99.594%, Sorensen-Dice 99.593%, Jaccard 99.19% and Pearson 98.714%, and Tanimoto 98.891%, respectively. The enhanced results from the IQA models are: BRISQE 38.9, BLIINDS2 49.87, BIQI 16.49, ILNIQE 43.39, NIQE 6.62, IFC 1.517, MS-SSIM 0.712, PSNR 21.33, SSIM 0.775, and VIF 0.2, respectively. Besides, we will opensource all programs and test codes on GitHub.","PeriodicalId":191226,"journal":{"name":"2021 IEEE 7th International Conference on Cloud Computing and Intelligent Systems (CCIS)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 7th International Conference on Cloud Computing and Intelligent Systems (CCIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCIS53392.2021.9754644","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Image enhancement is an important technique for improving observation, especially for non-mydriatic fundus images. We therefore propose a new enhancement pipeline for non-mydriatic fundus images. The fundamental procedure runs from automatically generating the field-of-view (FOV) mask to restoring the original color of the enhanced image. Briefly, by extending the FOV region with a conformal mapping, we resolve the boundary problems that arise during image enhancement. Inspired by high dynamic range imaging (HDRI) theory, a new color-restoration strategy is developed to correct the color distortion of enhanced images. To demonstrate the robustness of the algorithms, a hybrid test dataset is introduced. It contains public datasets, e.g., DRIVE, Kaggle, and Web (unannotated images collected from the web), as well as private non-mydriatic datasets collected from the third affiliated hospital of our collaborating university. The masks were validated on the DRIVE dataset using five widely used similarity criteria, and all enhanced results were evaluated with ten objective image quality assessment (IQA) models. The mask segmentation achieves the following similarity coefficients: Cosine 99.594%, Sorensen-Dice 99.593%, Jaccard 99.19%, Pearson 98.714%, and Tanimoto 98.891%. The IQA scores of the enhanced results are: BRISQUE 38.9, BLIINDS-II 49.87, BIQI 16.49, IL-NIQE 43.39, NIQE 6.62, IFC 1.517, MS-SSIM 0.712, PSNR 21.33, SSIM 0.775, and VIF 0.2. In addition, all programs and test code will be open-sourced on GitHub.
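The abstract reports five similarity coefficients (Cosine, Sorensen-Dice, Jaccard, Pearson, Tanimoto) for validating the FOV masks against the DRIVE reference masks. The sketch below illustrates how such coefficients can be computed for a pair of binary masks; it is a minimal illustrative example, not the authors' evaluation code, and the function name `mask_similarity` and the toy circular masks are assumptions introduced here for demonstration.

```python
import numpy as np

def mask_similarity(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Compare a predicted FOV mask against a reference mask.

    Both inputs are arrays of the same shape (binary masks here);
    returns the five similarity coefficients named in the abstract.
    """
    p = pred.astype(np.float64).ravel()
    t = truth.astype(np.float64).ravel()

    intersection = np.sum(p * t)
    dice = 2.0 * intersection / (p.sum() + t.sum())
    jaccard = intersection / (p.sum() + t.sum() - intersection)
    cosine = intersection / (np.linalg.norm(p) * np.linalg.norm(t))
    pearson = np.corrcoef(p, t)[0, 1]
    # Tanimoto coefficient for real-valued vectors; for strictly binary
    # masks it coincides with the Jaccard index.
    tanimoto = intersection / (np.dot(p, p) + np.dot(t, t) - intersection)

    return {
        "Cosine": cosine,
        "Sorensen-Dice": dice,
        "Jaccard": jaccard,
        "Pearson": pearson,
        "Tanimoto": tanimoto,
    }

if __name__ == "__main__":
    # Toy example: a circular FOV mask versus a slightly eroded version of it.
    h, w = 256, 256
    yy, xx = np.mgrid[:h, :w]
    cy, cx = h / 2, w / 2
    truth = (yy - cy) ** 2 + (xx - cx) ** 2 <= 120 ** 2
    pred = (yy - cy) ** 2 + (xx - cx) ** 2 <= 118 ** 2
    for name, value in mask_similarity(pred, truth).items():
        print(f"{name}: {value * 100:.3f}%")
```

In practice the predicted mask would come from the proposed segmentation step and the reference mask from the DRIVE annotations; the toy circles above merely exercise the formulas.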