Generalizable Underwater Image Quality Assessment With Curriculum Learning-Inspired Domain Adaption

IF 3.2 · CAS Tier 1 (Computer Science) · JCR Q2 (Engineering, Electrical & Electronic)
Shihui Wu;Qiuping Jiang;Guanghui Yue;Shiqi Wang;Guangtao Zhai
Journal: IEEE Transactions on Broadcasting, vol. 71, no. 1, pp. 252-263
DOI: 10.1109/TBC.2024.3511962
Published: 2024-12-27
Article page: https://ieeexplore.ieee.org/document/10817078/
Citations: 0

Abstract

The complex distortions in real-world underwater images create an urgent demand for accurate underwater image quality assessment (UIQA) approaches that predict underwater image quality consistently with human perception. Deep learning techniques have achieved great success in many applications, yet they usually require a substantial amount of human-labeled data, which is time-consuming and labor-intensive to collect. Developing a deep learning-based UIQA method that does not rely on any human-labeled underwater images for model training thus poses a great challenge. In this work, we propose a novel UIQA method based on domain adaptation (DA) from a curriculum learning perspective. The proposed method, called curriculum learning-inspired DA (CLIDA), aims to learn a robust and generalizable UIQA model by conducting DA between labeled natural images and unlabeled underwater images progressively, i.e., from easy to hard. The key is how to select easy samples from all underwater images in the target domain so that the difficulty of DA is well-controlled at each stage. To this end, we propose a simple yet effective easy sample selection (ESS) scheme to form an easy sample set at each stage. DA is then performed between the entire natural image set in the source domain (with labels) and the selected easy sample set in the target domain (with pseudo labels). Because only reliable easy examples are involved in DA at each stage, the difficulty of DA is well-controlled and the capability of the model is expected to be progressively enhanced. Extensive experiments verify the superiority of the proposed CLIDA method and the effectiveness of each key component of the CLIDA framework. The source code will be made available at https://github.com/zzeu001/CLIDA.
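The staged easy-to-hard scheme described in the abstract can be sketched in miniature. The following is a hypothetical toy, not the authors' implementation: the "model" is a 1-D linear scorer, "confidence" stands in for the paper's ESS criterion, and the source/target sets are scalars rather than images. All function and variable names here are illustrative assumptions.

```python
def confidence(w, x):
    """Toy stand-in for easy-sample selection: magnitude of the raw score."""
    return abs(w * x)

def pseudo_label(w, x):
    """Toy pseudo label from the current model's prediction."""
    return 1.0 if w * x > 0 else -1.0

def clida_curriculum(source, target, n_stages=3, lr=0.1):
    """Progressively adapt a linear scorer w using labeled source pairs (x, y)
    and an easy subset of unlabeled target samples that grows each stage."""
    w = 1.0
    for stage in range(1, n_stages + 1):
        # ESS stand-in: rank target samples by confidence, keep the easiest
        # fraction for this stage (the easy set expands as stages advance).
        ranked = sorted(target, key=lambda x: -confidence(w, x))
        easy = ranked[: max(1, stage * len(target) // n_stages)]
        pseudo = [(x, pseudo_label(w, x)) for x in easy]
        # DA stand-in: one pass of gradient steps over the labeled source set
        # plus the pseudo-labeled easy target set.
        for x, y in source + pseudo:
            w += lr * (y - w * x) * x
    return w

w = clida_curriculum(source=[(1.0, 1.0)], target=[0.5, 2.0])
```

The point of the sketch is the control flow, not the learner: only confidently pseudo-labeled target samples enter each adaptation stage, so the difficulty of each stage stays bounded while the adapted set grows.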
Source journal

IEEE Transactions on Broadcasting (Engineering & Technology – Telecommunications)
CiteScore: 9.40
Self-citation rate: 31.10%
Articles per year: 79
Review time: 6-12 weeks
Journal description: The Society’s Field of Interest is “Devices, equipment, techniques and systems related to broadcast technology, including the production, distribution, transmission, and propagation aspects.” In addition to this formal FOI statement, which is used to provide guidance to the Publications Committee in the selection of content, the AdCom has further resolved that “broadcast systems includes all aspects of transmission, propagation, and reception.”