{"title":"Towards AI based Ophthalmological Screening through Ultra-widefield Fundus Image to Conventional Fundus Image Translation","authors":"Pham Van Nguyen, D. Le, S. Song, Hyunseung Choo","doi":"10.1109/IMCOM53663.2022.9721717","DOIUrl":null,"url":null,"abstract":"Conventional fundus image (CFI) has been the most popular modality used in ophthalmological diagnosis. However, taking the CFI is costly and a burden for patients since it requires pupil to dilate. Recent research have shifted more attention towards ultra-widefield fundus image (UFI) which includes a larger area, is cheaper and easier to take. Although features of an CFI can be found in a corresponding UFI, the use of UFIs for eye diagnosis is still limited due to the low contrast and inconsistent background color. The recent advancements in deep learning promote UFI-to-CFI translation to be a promising direction for an early ophthalmological screening. Existing methods cannot deal with low-quality image samples and their outputs usually have low brightness. In this paper, we outperform other works by a novel framework which tackles above problems. In this framework, we deploy an object detector and an illumination estimator to refine input samples of an attention-aided cydeGAN model which is used to generate the CFI. Numerous experiments state that 98.8% of the generated CFIs are recognized as good quality which shows the suitability of our framework for an ophthalmological screening system.","PeriodicalId":367038,"journal":{"name":"2022 16th International Conference on Ubiquitous Information Management and Communication (IMCOM)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 16th International Conference on Ubiquitous Information Management and Communication (IMCOM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IMCOM53663.2022.9721717","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The conventional fundus image (CFI) has been the most popular modality used in ophthalmological diagnosis. However, taking a CFI is costly and burdensome for patients, since it requires pupil dilation. Recent research has shifted attention towards the ultra-widefield fundus image (UFI), which covers a larger area and is cheaper and easier to acquire. Although the features of a CFI can be found in the corresponding UFI, the use of UFIs for eye diagnosis is still limited due to their low contrast and inconsistent background color. Recent advances in deep learning make UFI-to-CFI translation a promising direction for early ophthalmological screening. Existing methods cannot handle low-quality image samples, and their outputs usually have low brightness. In this paper, we propose a novel framework that tackles these problems and outperforms previous work. In this framework, we deploy an object detector and an illumination estimator to refine the input samples of an attention-aided CycleGAN model, which generates the CFI. Extensive experiments show that 98.8% of the generated CFIs are recognized as good quality, which demonstrates the suitability of our framework for an ophthalmological screening system.
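The abstract describes a three-stage pipeline: an object detector localizes the relevant fundus region in the UFI, an illumination estimator guides brightness refinement of low-quality samples, and an attention-aided CycleGAN generator produces the CFI. Below is a minimal sketch of how such a pipeline could be orchestrated. All component names (`detect_fundus_region`, `estimate_illumination`, `refine`, `generate_cfi`) are hypothetical placeholders for illustration only; the paper's actual detector, estimator, and generator architectures are not specified here.

```python
import numpy as np

# Hedged sketch of the described UFI-to-CFI screening pipeline.
# The real framework would use a trained object detector, a learned
# illumination estimator, and an attention-aided CycleGAN generator;
# the placeholders below only mirror the data flow.

def detect_fundus_region(ufi: np.ndarray) -> np.ndarray:
    """Placeholder detector: crop the central region roughly covered by a CFI."""
    h, w, _ = ufi.shape
    ch, cw = h // 2, w // 2
    return ufi[ch - h // 4:ch + h // 4, cw - w // 4:cw + w // 4]

def estimate_illumination(patch: np.ndarray) -> float:
    """Placeholder illumination estimator: mean intensity mapped to [0, 1]."""
    return float(patch.mean()) / 255.0

def refine(patch: np.ndarray, illumination: float, target: float = 0.5) -> np.ndarray:
    """Rescale brightness so dark, low-quality samples are not passed through as-is."""
    gain = target / max(illumination, 1e-6)
    return np.clip(patch.astype(np.float32) * gain, 0, 255).astype(np.uint8)

def generate_cfi(patch: np.ndarray) -> np.ndarray:
    """Stand-in for the attention-aided CycleGAN generator (identity here)."""
    return patch

def ufi_to_cfi(ufi: np.ndarray) -> np.ndarray:
    patch = detect_fundus_region(ufi)            # localize the CFI-like region
    illumination = estimate_illumination(patch)  # score sample brightness
    refined = refine(patch, illumination)        # normalize before translation
    return generate_cfi(refined)                 # UFI-to-CFI translation

if __name__ == "__main__":
    dummy_ufi = np.random.randint(0, 256, (1536, 2048, 3), dtype=np.uint8)
    cfi = ufi_to_cfi(dummy_ufi)
    print(cfi.shape)
```

The key design point conveyed by the abstract is that input refinement (detection plus illumination-based correction) happens before translation, so the generator never sees the low-contrast, inconsistently lit raw UFIs that degrade existing methods.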