{"title":"基于特征解缠和动态内容失真制导的水下图像质量评估","authors":"Junjie Zhu;Liquan Shen;Zhengyong Wang;Yihan Yu","doi":"10.1109/TCSVT.2025.3533598","DOIUrl":null,"url":null,"abstract":"Due to the complex underwater imaging process, underwater images contain a variety of unique distortions. While existing underwater image quality assessment (UIQA) methods have made progress by highlighting these distortions, they overlook the fact that image content also affects how distortions are perceived, as different content exhibits varying sensitivities to different types of distortions. Both the characteristics of the content itself and the properties of the distortions determine the quality of underwater images. Additionally, the intertwined nature of content and distortion features in underwater images complicates the accurate extraction of both. In this paper, we address these issues by comprehensively accounting for both content and distortion information and explicitly disentangling underwater image features into content and distortion components. To achieve this, we introduce a dynamic content-distortion guiding and feature disentanglement network (DysenNet), composed of three main components: the feature disentanglement sub-network (FDN), the dynamic content guidance module (DCM), and the dynamic distortion guidance module (DDM). Specifically, the FDN disentangles underwater features into content and distortion elements, allowing us to more clearly measure their respective contributions to image quality. The DCM generates dynamic multi-scale convolutional kernels tailored to the unique content of each image, enabling content-adaptive feature extraction for quality perception. The DDM, on the other hand, addresses both global and local underwater distortions by identifying distortion cues from both channel and spatial perspectives, focusing on regions and channels with severe degradation. Extensive experiments on UIQA datasets demonstrate the state-of-the-art performance of the proposed method.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"35 6","pages":"5602-5616"},"PeriodicalIF":11.1000,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Underwater Image Quality Assessment Using Feature Disentanglement and Dynamic Content-Distortion Guidance\",\"authors\":\"Junjie Zhu;Liquan Shen;Zhengyong Wang;Yihan Yu\",\"doi\":\"10.1109/TCSVT.2025.3533598\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Due to the complex underwater imaging process, underwater images contain a variety of unique distortions. While existing underwater image quality assessment (UIQA) methods have made progress by highlighting these distortions, they overlook the fact that image content also affects how distortions are perceived, as different content exhibits varying sensitivities to different types of distortions. Both the characteristics of the content itself and the properties of the distortions determine the quality of underwater images. Additionally, the intertwined nature of content and distortion features in underwater images complicates the accurate extraction of both. In this paper, we address these issues by comprehensively accounting for both content and distortion information and explicitly disentangling underwater image features into content and distortion components. 
To achieve this, we introduce a dynamic content-distortion guiding and feature disentanglement network (DysenNet), composed of three main components: the feature disentanglement sub-network (FDN), the dynamic content guidance module (DCM), and the dynamic distortion guidance module (DDM). Specifically, the FDN disentangles underwater features into content and distortion elements, allowing us to more clearly measure their respective contributions to image quality. The DCM generates dynamic multi-scale convolutional kernels tailored to the unique content of each image, enabling content-adaptive feature extraction for quality perception. The DDM, on the other hand, addresses both global and local underwater distortions by identifying distortion cues from both channel and spatial perspectives, focusing on regions and channels with severe degradation. Extensive experiments on UIQA datasets demonstrate the state-of-the-art performance of the proposed method.\",\"PeriodicalId\":13082,\"journal\":{\"name\":\"IEEE Transactions on Circuits and Systems for Video Technology\",\"volume\":\"35 6\",\"pages\":\"5602-5616\"},\"PeriodicalIF\":11.1000,\"publicationDate\":\"2025-01-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Circuits and Systems for Video Technology\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10852362/\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10852362/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Due to the complex underwater imaging process, underwater images contain a variety of unique distortions. While existing underwater image quality assessment (UIQA) methods have made progress by highlighting these distortions, they overlook the fact that image content also affects how distortions are perceived, as different content exhibits varying sensitivities to different types of distortions. Both the characteristics of the content itself and the properties of the distortions determine the quality of underwater images. Additionally, the intertwined nature of content and distortion features in underwater images complicates the accurate extraction of both. In this paper, we address these issues by comprehensively accounting for both content and distortion information and explicitly disentangling underwater image features into content and distortion components. To achieve this, we introduce a dynamic content-distortion guiding and feature disentanglement network (DysenNet), composed of three main components: the feature disentanglement sub-network (FDN), the dynamic content guidance module (DCM), and the dynamic distortion guidance module (DDM). Specifically, the FDN disentangles underwater features into content and distortion elements, allowing us to more clearly measure their respective contributions to image quality. The DCM generates dynamic multi-scale convolutional kernels tailored to the unique content of each image, enabling content-adaptive feature extraction for quality perception. The DDM, on the other hand, addresses both global and local underwater distortions by identifying distortion cues from both channel and spatial perspectives, focusing on regions and channels with severe degradation. Extensive experiments on UIQA datasets demonstrate the state-of-the-art performance of the proposed method.
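The abstract describes the FDN, DCM, and DDM only at block level. As a concrete illustration of how such a pipeline could be wired together, below is a minimal PyTorch sketch. Everything in it is an assumption for illustration: the class names (FDN, DCM, DDM, DysenNetSketch), layer widths, kernel sizes, and the pooling/regression head are not taken from the paper; the FDN here omits whatever disentanglement losses the authors use to enforce the content/distortion split; and the DCM generates a single-scale dynamic kernel rather than the multi-scale kernels the abstract mentions.

# Minimal, illustrative sketch only -- all sizes and names are assumptions, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FDN(nn.Module):
    # Feature disentanglement sub-network: split shared features into content
    # and distortion branches (training losses that enforce the split are omitted).
    def __init__(self, channels=64):
        super().__init__()
        self.shared = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU())
        self.content_head = nn.Conv2d(channels, channels, 3, padding=1)
        self.distortion_head = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        f = self.shared(x)
        return self.content_head(f), self.distortion_head(f)

class DCM(nn.Module):
    # Dynamic content guidance: predict a per-image depthwise kernel from pooled
    # content features and apply it (single-scale here; the paper uses multi-scale).
    def __init__(self, channels=64, k=3):
        super().__init__()
        self.k = k
        self.kernel_gen = nn.Linear(channels, channels * k * k)

    def forward(self, content):
        b, c, h, w = content.shape
        pooled = F.adaptive_avg_pool2d(content, 1).flatten(1)          # (B, C)
        kernels = self.kernel_gen(pooled).view(b * c, 1, self.k, self.k)
        # Fold batch into channels so each image is filtered by its own kernels.
        out = F.conv2d(content.reshape(1, b * c, h, w), kernels,
                       padding=self.k // 2, groups=b * c)
        return out.view(b, c, h, w)

class DDM(nn.Module):
    # Dynamic distortion guidance: channel attention then spatial attention,
    # emphasizing severely degraded channels and regions.
    def __init__(self, channels=64):
        super().__init__()
        self.channel_fc = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())
        self.spatial_conv = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, distortion):
        b, c, _, _ = distortion.shape
        ca = self.channel_fc(F.adaptive_avg_pool2d(distortion, 1).flatten(1))
        x = distortion * ca.view(b, c, 1, 1)
        sa = self.spatial_conv(torch.cat([x.mean(1, keepdim=True),
                                          x.amax(1, keepdim=True)], dim=1))
        return x * sa

class DysenNetSketch(nn.Module):
    # Hypothetical top level: disentangle, guide each branch, fuse, regress a score.
    def __init__(self, channels=64):
        super().__init__()
        self.fdn, self.dcm, self.ddm = FDN(channels), DCM(channels), DDM(channels)
        self.regressor = nn.Linear(channels * 2, 1)

    def forward(self, x):
        content, distortion = self.fdn(x)
        f = torch.cat([F.adaptive_avg_pool2d(self.dcm(content), 1).flatten(1),
                       F.adaptive_avg_pool2d(self.ddm(distortion), 1).flatten(1)], 1)
        return self.regressor(f)

if __name__ == "__main__":
    print(DysenNetSketch()(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 1])

One note on the DCM sketch: folding the batch into the channel dimension and running a grouped convolution is a standard way to apply a different, per-image kernel to each sample in a single batched call, which is one plausible reading of "dynamic convolutional kernels tailored to the unique content of each image."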
Journal Introduction:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.