Retinal Image Restoration using Transformer and Cycle-Consistent Generative Adversarial Network

Alnur Alimanov, Md Baharul Islam
{"title":"Retinal Image Restoration using Transformer and Cycle-Consistent Generative Adversarial Network","authors":"Alnur Alimanov, Md Baharul Islam","doi":"10.1109/ISPACS57703.2022.10082822","DOIUrl":null,"url":null,"abstract":"Medical imaging plays a significant role in detecting and treating various diseases. However, these images often happen to be of too poor quality, leading to decreased efficiency, extra expenses, and even incorrect diagnoses. Therefore, we propose a retinal image enhancement method using a vision transformer and convolutional neural network. It builds a cycle-consistent generative adversarial network that relies on unpaired datasets. It consists of two generators that translate images from one domain to another (e.g., low- to high-quality and vice versa), playing an adversarial game with two discriminators. Generators produce indistinguishable images for discriminators that predict the original images from generated ones. Generators are a combination of vision transformer (ViT) encoder and con-volutional neural network (CNN) decoder. Discriminators include traditional CNN encoders. The resulting improved images have been tested quantitatively using such evaluation metrics as peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and qualitatively, i.e., vessel segmentation. The proposed method successfully reduces the adverse effects of blurring, noise, illumination disturbances, and color distortions while signifi-cantly preserving structural and color information. Experimental results show the superiority of the proposed method. Our testing PSNR is 31.138 dB for the first and 27.798 dB for the second dataset. Testing SSIM is 0.919 and 0.904, respectively. 
The code is available at https://github.com/AAleka/Transformer-Cycle-GAN","PeriodicalId":410603,"journal":{"name":"2022 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISPACS57703.2022.10082822","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Medical imaging plays a significant role in detecting and treating various diseases. However, these images are often of poor quality, leading to decreased efficiency, extra expenses, and even incorrect diagnoses. Therefore, we propose a retinal image enhancement method using a vision transformer and a convolutional neural network. It builds a cycle-consistent generative adversarial network that relies on unpaired datasets. It consists of two generators that translate images from one domain to another (e.g., low- to high-quality and vice versa), playing an adversarial game with two discriminators. Each generator learns to produce images that its discriminator cannot distinguish from real ones, while the discriminators learn to tell generated images apart from originals. The generators combine a vision transformer (ViT) encoder with a convolutional neural network (CNN) decoder; the discriminators are traditional CNN encoders. The resulting enhanced images have been evaluated quantitatively, using metrics such as peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM), and qualitatively, via vessel segmentation. The proposed method successfully reduces the adverse effects of blurring, noise, illumination disturbances, and color distortions while significantly preserving structural and color information. Experimental results show the superiority of the proposed method: testing PSNR is 31.138 dB on the first dataset and 27.798 dB on the second, with testing SSIM of 0.919 and 0.904, respectively. The code is available at https://github.com/AAleka/Transformer-Cycle-GAN
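The round-trip ("cycle-consistency") constraint described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the real generators are ViT-encoder/CNN-decoder networks trained adversarially, whereas here `G` and `F` are hypothetical stand-in functions, and the L1 reconstruction term is assumed from the standard CycleGAN formulation.

```python
import numpy as np

def l1_cycle_loss(x, x_reconstructed):
    """Mean absolute error between an image and its round-trip reconstruction F(G(x))."""
    return float(np.mean(np.abs(x - x_reconstructed)))

# Toy stand-ins for the two generators (assumptions, not the paper's networks):
# G maps low-quality -> high-quality, F maps high-quality -> low-quality.
def G(x):
    return np.clip(x * 1.1, 0.0, 1.0)  # toy "enhancement": brighten

def F(y):
    return np.clip(y / 1.1, 0.0, 1.0)  # toy inverse mapping: darken

x = np.random.default_rng(0).random((8, 8))  # stand-in low-quality image in [0, 1)
loss = l1_cycle_loss(x, F(G(x)))             # small but nonzero: clipping loses information
print(loss)
```

In training, this loss is minimized jointly with the two adversarial losses, so each generator is pushed to be (approximately) invertible by the other — which is what lets the method learn from unpaired low- and high-quality retinal images.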