Scoring structure regularized gradient boosting network for blind image quality assessment

IF 3.7 · CAS Region 2 (Engineering & Technology) · JCR Q1, COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
Lei Wang, Qingbo Wu, Fanman Meng, Zhengning Wang, Chenhao Wu, Haoran Wei, King Ngi Ngan
{"title":"Scoring structure regularized gradient boosting network for blind image quality assessment","authors":"Lei Wang,&nbsp;Qingbo Wu,&nbsp;Fanman Meng,&nbsp;Zhengning Wang,&nbsp;Chenhao Wu,&nbsp;Haoran Wei,&nbsp;King Ngi Ngan","doi":"10.1016/j.displa.2024.102955","DOIUrl":null,"url":null,"abstract":"<div><div>Blind image quality assessment (BIQA) aims to quantitatively predict the subjective perception of the distorted image without accessing its corresponding clean version. Prevailing methods typically model BIQA as a regression task and strive to minimize the average prediction error in terms of the pointwise unstructured loss, such as Mean Square Error (MSE) or Mean Absolute Error (MAE), which ignores the perception toward the rank orders and perceptual differences between different images. This paper proposes a Scoring Structure regularized Gradient Boosting Network (SSGB-Net) to achieve a more comprehensive perception across all distorted images. More specifically, our SSGB-Net performs BIQA in three stages, pair-wise rectification and list-wise boosting, followed by point-wise prediction after linear transformation. First, we correct the initial scores by incorporating the structured pairwise loss, i.e., SoftRank, to preserve the perceptual rank orders of pairwise images. Then, we further boost the previous pairwise correction results with structured listwise loss, i.e., Norm-in-Norm, to maintain the perceptual difference across all images. Finally, the point-wise prediction measures the MSE between the transformed scores and the ground truth through a closed-form solution of the Exponential Moving Average (EMA) driven linear transformation. Based on these iterative corrections, our SSGB-Net can effectively balance multiple BIQA objectives and outperform many state-of-the-art methods in terms of Pearson Linear Correlation Coefficient (PLCC), Spearman Rank Correlation Coefficient (SRCC) and Root Mean Squared Error (RMSE).</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"87 ","pages":"Article 102955"},"PeriodicalIF":3.7000,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Displays","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0141938224003196","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

Blind image quality assessment (BIQA) aims to quantitatively predict the subjective perception of a distorted image without accessing its corresponding clean version. Prevailing methods typically model BIQA as a regression task and strive to minimize the average prediction error in terms of a pointwise unstructured loss, such as Mean Square Error (MSE) or Mean Absolute Error (MAE), which ignores the rank orders and perceptual differences between different images. This paper proposes a Scoring Structure regularized Gradient Boosting Network (SSGB-Net) to achieve a more comprehensive perception across all distorted images. More specifically, our SSGB-Net performs BIQA in three stages: pair-wise rectification and list-wise boosting, followed by point-wise prediction after a linear transformation. First, we correct the initial scores by incorporating the structured pairwise loss, i.e., SoftRank, to preserve the perceptual rank orders of pairwise images. Then, we further boost the previous pairwise correction results with the structured listwise loss, i.e., Norm-in-Norm, to maintain the perceptual differences across all images. Finally, the point-wise prediction measures the MSE between the transformed scores and the ground truth through a closed-form solution of the Exponential Moving Average (EMA)-driven linear transformation. Based on these iterative corrections, our SSGB-Net can effectively balance multiple BIQA objectives and outperform many state-of-the-art methods in terms of Pearson Linear Correlation Coefficient (PLCC), Spearman Rank Correlation Coefficient (SRCC), and Root Mean Squared Error (RMSE).
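As a rough illustration (not the authors' implementation), the sketch below approximates the three scoring-structure objectives named in the abstract with NumPy: a SoftRank-style pairwise surrogate over score differences, a Norm-in-Norm-style listwise term over mean-subtracted, norm-divided score lists, and a closed-form linear calibration whose scale and shift are smoothed by an exponential moving average. Function names, the sigmoid temperature `tau`, and the EMA factor `beta` are illustrative assumptions; the exact formulations in the paper may differ.

```python
# Hypothetical sketch of the three scoring-structure objectives (not the authors' code).
import numpy as np

def pairwise_softrank_loss(pred, mos, tau=1.0):
    """Pairwise stage: penalize image pairs whose predicted order disagrees
    with the ground-truth MOS order (a SoftRank-style sigmoid surrogate)."""
    diff_pred = pred[:, None] - pred[None, :]        # predicted score gaps
    diff_mos = mos[:, None] - mos[None, :]           # ground-truth MOS gaps
    target = (diff_mos > 0).astype(float)            # 1 if image i should outrank image j
    prob = 1.0 / (1.0 + np.exp(-diff_pred / tau))    # soft probability of the same order
    mask = np.abs(diff_mos) > 1e-8                   # ignore tied pairs
    ce = target * np.log(prob + 1e-12) + (1 - target) * np.log(1 - prob + 1e-12)
    return -(mask * ce).sum() / np.maximum(mask.sum(), 1)

def listwise_norm_in_norm_loss(pred, mos, p=2):
    """Listwise stage: compare the two score lists after subtracting the mean and
    dividing by the p-norm, preserving relative perceptual differences in the batch."""
    def normalize(x):
        x = x - x.mean()
        return x / (np.linalg.norm(x, ord=p) + 1e-12)
    return np.linalg.norm(normalize(pred) - normalize(mos), ord=p)

def ema_linear_calibration(pred, mos, state=None, beta=0.9):
    """Pointwise stage: closed-form least-squares scale/shift mapping predictions to
    the MOS range, smoothed across batches with an exponential moving average."""
    a, b = np.polyfit(pred, mos, deg=1)              # closed-form linear fit
    if state is not None:
        a = beta * state[0] + (1 - beta) * a         # EMA over the scale
        b = beta * state[1] + (1 - beta) * b         # EMA over the shift
    calibrated = a * pred + b
    mse = np.mean((calibrated - mos) ** 2)
    return (a, b), mse

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mos = rng.uniform(0, 100, size=16)               # toy ground-truth quality scores
    pred = 0.8 * mos + rng.normal(0, 5, size=16)     # toy network outputs
    print("pairwise loss:", pairwise_softrank_loss(pred, mos))
    print("listwise loss:", listwise_norm_in_norm_loss(pred, mos))
    (a, b), mse = ema_linear_calibration(pred, mos)
    print("calibrated MSE:", mse)
```

In the boosting view described by the abstract, each stage would correct the residual scores left by the previous one; here the three terms are simply shown side by side on a toy batch.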
Source journal
Displays (Engineering & Technology - Engineering: Electrical & Electronic)
CiteScore: 4.60
Self-citation rate: 25.60%
Annual articles: 138
Review time: 92 days
Journal description: Displays is the international journal covering the research and development of display technology, its effective presentation and perception of information, and applications and systems including the display-human interface. Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the Displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technologists and human factors engineers new to the field, will also occasionally be featured.