DFC-Net: a dual-path frequency-domain cross-attention fusion network for retinal image quality assessment.

IF 2.9 · CAS Tier 2 (Medicine) · JCR Q2 · BIOCHEMICAL RESEARCH METHODS
Biomedical Optics Express · Pub Date: 2024-10-17 · eCollection Date: 2024-11-01 · DOI: 10.1364/BOE.531292
Xiaoyan Kui, Zeru Hai, Beiji Zou, Wei Liang, Liming Chen
{"title":"DFC-Net: a dual-path frequency-domain cross-attention fusion network for retinal image quality assessment.","authors":"Xiaoyan Kui, Zeru Hai, Beiji Zou, Wei Liang, Liming Chen","doi":"10.1364/BOE.531292","DOIUrl":null,"url":null,"abstract":"<p><p>Retinal image quality assessment (RIQA) is crucial for diagnosing various eye diseases and ensuring the accuracy of diagnostic analyses based on retinal fundus images. Traditional deep convolutional neural networks (CNNs) for RIQA face challenges such as over-reliance on RGB image brightness and difficulty in differentiating closely ranked image quality categories. To address these issues, we introduced the Dual-Path Frequency-domain Cross-attention Network (DFC-Net), which integrates RGB images and contrast-enhanced images using contrast-limited adaptive histogram equalization (CLAHE) as dual inputs. This approach improves structure detail detection and feature extraction. We also incorporated a frequency-domain attention mechanism (FDAM) to focus selectively on frequency components indicative of quality degradations and a cross-attention mechanism (CAM) to optimize the integration of dual inputs. Our experiments on the EyeQ and RIQA-RFMiD datasets demonstrated significant improvements, achieving a precision of 0.8895, recall of 0.8923, F1-score of 0.8909, and a Kappa score of 0.9191 on the EyeQ dataset. On the RIQA-RFMiD dataset, the precision was 0.702, recall 0.6729, F1-score 0.6869, and Kappa score 0.7210, outperforming current state-of-the-art approaches.</p>","PeriodicalId":8969,"journal":{"name":"Biomedical optics express","volume":"15 11","pages":"6399-6415"},"PeriodicalIF":2.9000,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11563343/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomedical optics express","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1364/BOE.531292","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/11/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"BIOCHEMICAL RESEARCH METHODS","Score":null,"Total":0}
Citations: 0

Abstract

Retinal image quality assessment (RIQA) is crucial for diagnosing various eye diseases and for ensuring the accuracy of diagnostic analyses based on retinal fundus images. Traditional deep convolutional neural networks (CNNs) for RIQA face challenges such as over-reliance on RGB image brightness and difficulty in differentiating closely ranked image quality categories. To address these issues, we introduced the Dual-Path Frequency-domain Cross-attention Network (DFC-Net), which takes as dual inputs the RGB image and a contrast-enhanced version produced with contrast-limited adaptive histogram equalization (CLAHE). This approach improves structural detail detection and feature extraction. We also incorporated a frequency-domain attention mechanism (FDAM) to focus selectively on frequency components indicative of quality degradations, and a cross-attention mechanism (CAM) to optimize the integration of the dual inputs. Our experiments on the EyeQ and RIQA-RFMiD datasets demonstrated significant improvements, achieving a precision of 0.8895, recall of 0.8923, F1-score of 0.8909, and a Kappa score of 0.9191 on the EyeQ dataset. On the RIQA-RFMiD dataset, the precision was 0.702, recall 0.6729, F1-score 0.6869, and Kappa score 0.7210, outperforming current state-of-the-art approaches.
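
To make the described pipeline concrete, below is a minimal Python sketch of the two ideas the abstract names explicitly: building the dual RGB/CLAHE inputs with OpenCV, and fusing two feature streams with a generic cross-attention layer in PyTorch. This is not the authors' released implementation; the CLAHE parameters, the LAB-channel choice, the feature dimensions, and the use of nn.MultiheadAttention are assumptions for illustration only.

```python
# A minimal sketch, assuming OpenCV and PyTorch; NOT the authors' implementation.
# It illustrates (1) building the dual RGB/CLAHE inputs and (2) a generic
# cross-attention fusion of two feature streams.
import cv2
import numpy as np
import torch
import torch.nn as nn


def make_dual_inputs(path: str, size: int = 224):
    """Return (rgb, clahe_rgb): the original and CLAHE-enhanced image,
    both resized to size x size and scaled to float32 in [0, 1]."""
    bgr = cv2.resize(cv2.imread(path), (size, size))
    # Assumption: CLAHE is applied to the lightness channel of LAB so that
    # contrast is equalized without distorting colour.
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    bgr_eq = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

    def to_rgb(img):
        return cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0

    return to_rgb(bgr), to_rgb(bgr_eq)


class CrossAttentionFusion(nn.Module):
    """Generic cross-attention: queries from the RGB path attend over
    keys/values from the CLAHE path (feature shapes are assumptions)."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat_rgb: torch.Tensor, feat_clahe: torch.Tensor):
        # feat_*: (batch, tokens, dim), e.g. flattened backbone feature maps.
        fused, _ = self.attn(query=feat_rgb, key=feat_clahe, value=feat_clahe)
        return self.norm(feat_rgb + fused)  # residual connection
```

For the reported metrics, precision/recall/F1 can be computed with sklearn.metrics.precision_recall_fscore_support and the Kappa score with sklearn.metrics.cohen_kappa_score; the abstract does not state whether the Kappa is quadratically weighted, so any such choice would be an assumption.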

Source journal: Biomedical Optics Express (BIOCHEMICAL RESEARCH METHODS; OPTICS)
CiteScore: 6.80 · Self-citation rate: 11.80% · Articles published per year: 633 · Review time: 1 month
Journal description: The journal's scope encompasses fundamental research, technology development, biomedical studies and clinical applications. BOEx focuses on leading-edge topics in the field, including: tissue optics and spectroscopy, novel microscopies, optical coherence tomography, diffuse and fluorescence tomography, photoacoustic and multimodal imaging, molecular imaging and therapies, nanophotonic biosensing, optical biophysics/photobiology, microfluidic optical devices, and vision research.