In-training Restoration Models Matter: Data Augmentation for Degraded-reference Image Quality Assessment

Jiazhi Du, Dongwei Ren, Yue Cao, W. Zuo
Proceedings of the 2nd International Workshop on Robust Understanding of Low-quality Multimedia Data: Unitive Enhancement, Analysis and Evaluation, 2022-10-14. DOI: 10.1145/3552456.3555667
Full-Reference Image Quality Assessment (FR-IQA) metrics such as PSNR, SSIM, and LPIPS have been widely adopted for evaluating image restoration (IR) methods. However, pristine-quality images are usually not available, making the inferior No-Reference Image Quality Assessment (NR-IQA) metrics seem to be the only solution in practical applications. Fortunately, when evaluating image restoration methods, paired degraded and restored images are generally available. Thus, this paper takes a step forward by developing a Degraded-Reference IQA (DR-IQA) model while respecting its correspondence with FR-IQA metrics. To this end, we adopt a simple encoder-decoder as the DR-IQA model, which takes paired degraded and restored images as input to predict distortion maps guided by FR-IQA metrics. More importantly, due to the diversity and continuous development of image restoration models, it is difficult for a DR-IQA model learned from a specific restoration model to generalize well to other ones. To address this issue, we augment the DR-IQA training samples by adding results produced by in-training restoration models. Benefiting from the diversity of training samples, our learned DR-IQA model generalizes well to unseen restoration models. We test our DR-IQA models on various image restoration tasks, e.g., denoising, super-resolution, JPEG deblocking, and complicated degradations, where our method further closes the performance gap between FR-IQA metrics and state-of-the-art NR-IQA methods. Moreover, experiments also show the effectiveness of our method in performance comparison and model selection among image restoration models without ground-truth clean images. Source code will be made publicly available.
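To make the FR-IQA guidance concrete, the sketch below computes PSNR and the per-pixel squared-error map between a reference and a restored image. This is a minimal NumPy illustration, not the authors' code: the `distortion_map` function only shows the kind of FR-IQA-derived target a DR-IQA encoder-decoder could be trained to regress from the (degraded, restored) pair, so that no reference is needed at test time. All function and variable names here are hypothetical.

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Full-reference PSNR (dB) between a pristine image and a restoration result."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def distortion_map(reference, restored):
    """Per-pixel squared-error map: an example of an FR-IQA-guided training
    target that a DR-IQA model could learn to predict without the reference."""
    return (reference.astype(np.float64) - restored.astype(np.float64)) ** 2

# Toy usage with a synthetic 8-bit image and additive Gaussian noise.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
noise = rng.normal(0.0, 10.0, size=clean.shape)
noisy = np.clip(clean.astype(np.float64) + noise, 0, 255).astype(np.uint8)

print(f"PSNR: {psnr(clean, noisy):.2f} dB")
print(f"Distortion map shape: {distortion_map(clean, noisy).shape}")
```

In the paper's setting, maps like this (or SSIM/LPIPS-based variants) supervise the encoder-decoder, and augmenting training pairs with snapshots of restoration models taken during their own training diversifies the restoration artifacts the DR-IQA model sees.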