Tianhai Chen, Xichen Yang, Tianshu Wang, Nengxin Li, Shun Zhu, Xiaobo Shen
{"title":"基于深度元学习的水下图像质量评价:数据集和客观方法","authors":"Tianhai Chen , Xichen Yang , Tianshu Wang , Nengxin Li , Shun Zhu , Xiaobo Shen","doi":"10.1016/j.cviu.2025.104380","DOIUrl":null,"url":null,"abstract":"<div><div>The degradation of underwater image quality due to complex environments affects the effectiveness of the application, making accurate quality assessment crucial. However, existing Underwater Image Quality Assessment (UIQA) methods lack sufficient reliable data. To address this, we construct the DART2024 dataset, containing 1,000 raw images and 10,000 distorted images generated by 10 enhancement methods, covering diverse underwater scenarios. We propose a novel UIQA method that weights original images via gradient maps, highlights details, constructs a multi-scale deep neural network with perception, fusion, and prediction modules to describe quality characteristics, and designs a meta-learning framework for rapid adaptation to unknown distortions. The experimental results show that DART2024 is credible and meets the training requirements. Our method outperforms SOTA approaches in accuracy, stability, and convergence speed on DART2024 and other underwater datasets. It also shows higher applicability on natural scene datasets. The dataset and source code for the proposed method can be made available at <span><span>https://github.com/dart-into/DART2024</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50633,"journal":{"name":"Computer Vision and Image Understanding","volume":"257 ","pages":"Article 104380"},"PeriodicalIF":3.5000,"publicationDate":"2025-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Underwater image quality evaluation via deep meta-learning: Dataset and objective method\",\"authors\":\"Tianhai Chen , Xichen Yang , Tianshu Wang , Nengxin Li , Shun Zhu , Xiaobo Shen\",\"doi\":\"10.1016/j.cviu.2025.104380\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The degradation of underwater image quality due to complex environments affects the effectiveness of the application, making accurate quality assessment crucial. However, existing Underwater Image Quality Assessment (UIQA) methods lack sufficient reliable data. To address this, we construct the DART2024 dataset, containing 1,000 raw images and 10,000 distorted images generated by 10 enhancement methods, covering diverse underwater scenarios. We propose a novel UIQA method that weights original images via gradient maps, highlights details, constructs a multi-scale deep neural network with perception, fusion, and prediction modules to describe quality characteristics, and designs a meta-learning framework for rapid adaptation to unknown distortions. The experimental results show that DART2024 is credible and meets the training requirements. Our method outperforms SOTA approaches in accuracy, stability, and convergence speed on DART2024 and other underwater datasets. It also shows higher applicability on natural scene datasets. 
The dataset and source code for the proposed method can be made available at <span><span>https://github.com/dart-into/DART2024</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":50633,\"journal\":{\"name\":\"Computer Vision and Image Understanding\",\"volume\":\"257 \",\"pages\":\"Article 104380\"},\"PeriodicalIF\":3.5000,\"publicationDate\":\"2025-05-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Vision and Image Understanding\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1077314225001031\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Vision and Image Understanding","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1077314225001031","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Underwater image quality evaluation via deep meta-learning: Dataset and objective method
Underwater image quality degrades in complex environments, which limits the effectiveness of downstream applications and makes accurate quality assessment crucial. However, existing Underwater Image Quality Assessment (UIQA) methods lack sufficient reliable data. To address this, we construct the DART2024 dataset, which contains 1,000 raw images and 10,000 distorted images generated by 10 enhancement methods, covering diverse underwater scenarios. We propose a novel UIQA method that weights original images via gradient maps to highlight details, constructs a multi-scale deep neural network with perception, fusion, and prediction modules to describe quality characteristics, and designs a meta-learning framework for rapid adaptation to unknown distortions. The experimental results show that DART2024 is credible and meets the training requirements. Our method outperforms state-of-the-art approaches in accuracy, stability, and convergence speed on DART2024 and other underwater datasets, and it also generalizes well to natural-scene datasets. The dataset and source code for the proposed method are available at https://github.com/dart-into/DART2024.
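The abstract only outlines the gradient-map weighting step; as a rough illustration, the sketch below shows one plausible way to emphasize detail regions before feeding images to a quality network. The function names (`gradient_map`, `gradient_weighted`) and the use of Sobel filters are assumptions for illustration; the paper does not specify the gradient operator, and the authors' implementation at the linked repository may differ.

```python
import torch
import torch.nn.functional as F

def gradient_map(img: torch.Tensor) -> torch.Tensor:
    """Approximate per-pixel gradient magnitude with Sobel filters (assumed operator).

    img: (N, C, H, W) tensor in [0, 1]. Returns an (N, 1, H, W) gradient map.
    """
    gray = img.mean(dim=1, keepdim=True)  # collapse channels to a luminance-like map
    sobel_x = torch.tensor([[-1., 0., 1.],
                            [-2., 0., 2.],
                            [-1., 0., 1.]]).view(1, 1, 3, 3)
    sobel_y = sobel_x.transpose(2, 3)     # vertical-gradient kernel
    gx = F.conv2d(gray, sobel_x, padding=1)
    gy = F.conv2d(gray, sobel_y, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def gradient_weighted(img: torch.Tensor) -> torch.Tensor:
    """Re-weight the image so high-gradient (detail) regions are emphasized."""
    g = gradient_map(img)
    w = g / (g.amax(dim=(2, 3), keepdim=True) + 1e-8)  # normalize weights to [0, 1]
    return img * (1.0 + w)  # boost detail regions, leave flat regions unchanged
```

The weighted image would then be passed to the multi-scale perception/fusion/prediction network described above; the exact weighting scheme is a design choice left to the released code.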
Journal description:
The central focus of this journal is the computer analysis of pictorial information. Computer Vision and Image Understanding publishes papers covering all aspects of image analysis from the low-level, iconic processes of early vision to the high-level, symbolic processes of recognition and interpretation. A wide range of topics in the image understanding area is covered, including papers offering insights that differ from predominant views.
Research Areas Include:
• Theory
• Early vision
• Data structures and representations
• Shape
• Range
• Motion
• Matching and recognition
• Architecture and languages
• Vision systems