Non-Mydriatic Fundus Images Enhancement Based on Conformal Mapping Extension
Songlin Yan, Xiujiao Chen, Jiehua Sun, Xiaoying Tang, Xuebin Chi
2021 IEEE 7th International Conference on Cloud Computing and Intelligent Systems (CCIS), published 2021-11-07
DOI: 10.1109/CCIS53392.2021.9754644 (https://doi.org/10.1109/CCIS53392.2021.9754644)
Abstract
Image enhancement is an important technique for improving observation, especially for non-mydriatic fundus images. We therefore propose a new enhancement pipeline for non-mydriatic fundus images. The pipeline runs from automatically generating the mask of the field of view (FOV) to restoring the original color of the enhanced image. In brief, extending the FOV region with a conformal mapping resolves the boundary problems of image enhancement, and, inspired by high dynamic range imaging (HDRI) theory, a new color-restoration strategy is developed to correct the color distortion of enhanced images. To demonstrate the robustness of our algorithms, a hybrid test dataset is introduced. It contains public datasets, e.g. DRIVE, Kaggle, and Web (unannotated images collected from the web), as well as private non-mydriatic datasets collected from the third affiliated hospital of our collaborating university. The masks were validated on the DRIVE dataset using five well-known similarity criteria, and all enhanced results were evaluated with ten different objective image quality assessment (IQA) models. The mask segmentation achieves the following similarity coefficients: Cosine 99.594%, Sorensen-Dice 99.593%, Jaccard 99.19%, Pearson 98.714%, and Tanimoto 98.891%. The IQA scores of the enhanced results are: BRISQUE 38.9, BLIINDS-II 49.87, BIQI 16.49, IL-NIQE 43.39, NIQE 6.62, IFC 1.517, MS-SSIM 0.712, PSNR 21.33, SSIM 0.775, and VIF 0.2. All programs and test code will be open-sourced on GitHub.
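The central idea of the pipeline is that enhancement filters misbehave near the circular FOV boundary, so the FOV content is first extended outward with an angle-preserving map. The abstract does not specify the exact conformal map; as a minimal sketch of the general idea only, the snippet below fills pixels outside a circular FOV by sampling their inversion across the FOV circle. The function name, the nearest-neighbour sampling, and the circle parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def extend_fov_by_inversion(img, mask, center, radius):
    """Fill pixels outside a circular FOV by sampling the image at their
    inversion across the FOV boundary circle (a reflection-like,
    angle-preserving map), so later filtering has valid data at the edge.

    img    : float array of shape (H, W) or (H, W, 3)
    mask   : boolean array (H, W), True inside the FOV
    center : (cx, cy) of the FOV circle in pixel coordinates
    radius : FOV radius in pixels
    """
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Pixel coordinates as complex numbers relative to the FOV center.
    z = (xs - center[0]) + 1j * (ys - center[1])
    r = np.abs(z)
    # Circle inversion: a point at distance r maps to distance R^2 / r
    # along the same ray, so exterior points land inside the FOV.
    with np.errstate(divide="ignore", invalid="ignore"):
        z_inv = np.where(r > 0, (radius ** 2) * z / (r ** 2), 0)
    xi = np.clip(np.round(np.real(z_inv) + center[0]).astype(int), 0, w - 1)
    yi = np.clip(np.round(np.imag(z_inv) + center[1]).astype(int), 0, h - 1)
    out = img.copy()
    outside = ~mask
    out[outside] = img[yi[outside], xi[outside]]
    return out
```

After enhancement, the extended region would simply be discarded by re-applying the FOV mask; the paper's own mapping and resampling choices may differ from this sketch.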
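The mask validation reports five similarity criteria (Cosine, Sorensen-Dice, Jaccard, Pearson, Tanimoto). Below is a minimal sketch of how these are typically computed between a predicted binary FOV mask and a ground-truth mask, using standard textbook definitions; the paper's exact formulations may differ (for strictly binary inputs, the Tanimoto coefficient as written here coincides with Jaccard).

```python
import numpy as np

def mask_similarity(pred, gt):
    """Standard similarity coefficients between two binary masks."""
    p = np.asarray(pred, dtype=float).ravel()
    g = np.asarray(gt, dtype=float).ravel()
    dot = float(np.dot(p, g))                      # size of the intersection
    cosine = dot / (np.linalg.norm(p) * np.linalg.norm(g))
    dice = 2.0 * dot / (p.sum() + g.sum())          # Sorensen-Dice
    jaccard = dot / (p.sum() + g.sum() - dot)       # intersection over union
    pearson = float(np.corrcoef(p, g)[0, 1])        # linear correlation
    tanimoto = dot / (np.dot(p, p) + np.dot(g, g) - dot)
    return {"cosine": cosine, "dice": dice, "jaccard": jaccard,
            "pearson": pearson, "tanimoto": tanimoto}
```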