Pyramid Network With Quality-Aware Contrastive Loss for Retinal Image Quality Assessment

Guanghui Yue;Shaoping Zhang;Tianwei Zhou;Bin Jiang;Weide Liu;Tianfu Wang
{"title":"具有质量感知对比损失的金字塔网络用于视网膜图像质量评估","authors":"Guanghui Yue;Shaoping Zhang;Tianwei Zhou;Bin Jiang;Weide Liu;Tianfu Wang","doi":"10.1109/TMI.2024.3501405","DOIUrl":null,"url":null,"abstract":"Captured retinal images vary greatly in quality. Low-quality images increase the risk of misdiagnosis. This motivates to design effective retinal image quality assessment (RIQA) methods. Current deep learning-based methods usually classify the image into three levels of “Good”, “Usable”, and “Reject”, while ignoring the quantitative feedback for more detailed quality scores. This study proposes a unified RIQA framework, named QAC-Net, that can evaluate the quality of retinal images in both qualitative and quantitative manners. To improve the prediction accuracy, QAC-Net focuses on extracting discriminative features by using two strategies. On the one hand, it adopts a pyramid network structure that simultaneously inputs the scaled images to learn quality-aware features at different scales and purify the feature representation through a consistency loss. On the other hand, to improve feature representation, it utilizes a quality-aware contrastive (QAC) loss that considers quality relationships between different images. The QAC losses for qualitative and quantitative evaluation tasks have different forms in view of the task differences. Considering the shortage of datasets for the quantitative evaluation task, we construct a dataset with 2,300 authentically distorted retinal images, each of which is annotated with a numerical quality score through subjective experiments. Experimental results on public and our constructed datasets show that our QAC-Net is competent for the RIQA tasks with considerable performance.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 3","pages":"1416-1431"},"PeriodicalIF":0.0000,"publicationDate":"2024-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Pyramid Network With Quality-Aware Contrastive Loss for Retinal Image Quality Assessment\",\"authors\":\"Guanghui Yue;Shaoping Zhang;Tianwei Zhou;Bin Jiang;Weide Liu;Tianfu Wang\",\"doi\":\"10.1109/TMI.2024.3501405\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Captured retinal images vary greatly in quality. Low-quality images increase the risk of misdiagnosis. This motivates to design effective retinal image quality assessment (RIQA) methods. Current deep learning-based methods usually classify the image into three levels of “Good”, “Usable”, and “Reject”, while ignoring the quantitative feedback for more detailed quality scores. This study proposes a unified RIQA framework, named QAC-Net, that can evaluate the quality of retinal images in both qualitative and quantitative manners. To improve the prediction accuracy, QAC-Net focuses on extracting discriminative features by using two strategies. On the one hand, it adopts a pyramid network structure that simultaneously inputs the scaled images to learn quality-aware features at different scales and purify the feature representation through a consistency loss. On the other hand, to improve feature representation, it utilizes a quality-aware contrastive (QAC) loss that considers quality relationships between different images. The QAC losses for qualitative and quantitative evaluation tasks have different forms in view of the task differences. 
Considering the shortage of datasets for the quantitative evaluation task, we construct a dataset with 2,300 authentically distorted retinal images, each of which is annotated with a numerical quality score through subjective experiments. Experimental results on public and our constructed datasets show that our QAC-Net is competent for the RIQA tasks with considerable performance.\",\"PeriodicalId\":94033,\"journal\":{\"name\":\"IEEE transactions on medical imaging\",\"volume\":\"44 3\",\"pages\":\"1416-1431\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-11-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on medical imaging\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10756750/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on medical imaging","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10756750/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Captured retinal images vary greatly in quality, and low-quality images increase the risk of misdiagnosis. This motivates the design of effective retinal image quality assessment (RIQA) methods. Current deep learning-based methods usually classify an image into one of three levels, “Good”, “Usable”, and “Reject”, while ignoring quantitative feedback in the form of more detailed quality scores. This study proposes a unified RIQA framework, named QAC-Net, that can evaluate the quality of retinal images in both qualitative and quantitative manners. To improve prediction accuracy, QAC-Net focuses on extracting discriminative features through two strategies. On the one hand, it adopts a pyramid network structure that takes the scaled images as simultaneous inputs to learn quality-aware features at different scales and purifies the feature representation through a consistency loss. On the other hand, to further improve the feature representation, it utilizes a quality-aware contrastive (QAC) loss that considers the quality relationships between different images; the QAC losses for the qualitative and quantitative evaluation tasks take different forms to reflect the differences between the tasks. Considering the shortage of datasets for the quantitative evaluation task, we construct a dataset of 2,300 authentically distorted retinal images, each annotated with a numerical quality score through subjective experiments. Experimental results on public datasets and our constructed dataset show that QAC-Net is competent for both RIQA tasks and achieves considerable performance.
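
The abstract describes two training signals but gives no equations, so the sketch below illustrates one plausible reading of them in PyTorch: a cross-scale consistency loss that pulls the pyramid branches' features toward their mean, and a quality-aware contrastive loss for the quantitative task in which pair weights shrink as the gap between two images' subjective quality scores grows. The function names, tensor shapes, temperature, and the exponential weighting scheme are assumptions made for illustration and are not taken from the paper.

```python
# Minimal, illustrative sketch (not the authors' released code) of the two
# training signals mentioned in the abstract. Shapes, hyperparameters, and the
# soft-weighting scheme are assumptions for illustration only.
import torch
import torch.nn.functional as F


def consistency_loss(features_per_scale):
    """Encourage the pyramid branches (one feature vector per input scale)
    to agree with their mean representation."""
    # features_per_scale: list of tensors, each of shape (batch, dim)
    stacked = torch.stack(features_per_scale, dim=0)       # (scales, batch, dim)
    mean_feat = stacked.mean(dim=0, keepdim=True)          # (1, batch, dim)
    return F.mse_loss(stacked, mean_feat.expand_as(stacked))


def quality_aware_contrastive_loss(features, quality_scores,
                                   temperature=0.1, sigma=0.5):
    """Contrastive loss over a batch where the 'positiveness' of a pair is a
    soft weight derived from the gap between the two images' quality scores
    (a plausible form for the quantitative, score-regression setting)."""
    # features: (batch, dim), quality_scores: (batch,)
    z = F.normalize(features, dim=1)
    sim = torch.matmul(z, z.t()) / temperature              # (batch, batch)
    # Soft targets: pairs with similar quality scores count as positives.
    score_gap = torch.abs(quality_scores[:, None] - quality_scores[None, :])
    weights = torch.exp(-score_gap / sigma)                 # (batch, batch), in (0, 1]
    # Exclude self-pairs from both logits and targets.
    eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, -1e9)
    weights = weights.masked_fill(eye, 0.0)
    weights = weights / weights.sum(dim=1, keepdim=True).clamp_min(1e-8)
    log_prob = F.log_softmax(sim, dim=1)
    return -(weights * log_prob).sum(dim=1).mean()


if __name__ == "__main__":
    # Toy usage with random features standing in for the pyramid outputs.
    torch.manual_seed(0)
    feats = [torch.randn(8, 128) for _ in range(3)]         # three input scales
    scores = torch.rand(8)                                   # subjective scores in [0, 1]
    loss = consistency_loss(feats) + quality_aware_contrastive_loss(feats[-1], scores)
    print(float(loss))
```

For the qualitative (three-class) setting, the same idea would reduce to treating images of the same quality class as positives, which is presumably closer to the paper's classification-task variant; again, this is an assumption rather than the published formulation.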