No-Reference VMAF: A Deep Neural Network-Based Approach to Blind Video Quality Assessment

IF 3.2 | CAS Region 1 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic)
Axel De Decker;Jan De Cock;Peter Lambert;Glenn Van Wallendael
{"title":"无参照 VMAF:基于深度神经网络的盲目视频质量评估方法","authors":"Axel De Decker;Jan De Cock;Peter Lambert;Glenn Van Wallendael","doi":"10.1109/TBC.2024.3399479","DOIUrl":null,"url":null,"abstract":"As the demand for high-quality video content continues to rise, accurately assessing the visual quality of digital videos has become more crucial than ever before. However, evaluating the perceptual quality of an impaired video in the absence of the original reference signal remains a significant challenge. To address this problem, we propose a novel No-Reference (NR) video quality metric called NR-VMAF. Our method is designed to replicate the popular Full-Reference (FR) metric VMAF in scenarios where the reference signal is unavailable or impractical to obtain. Like its FR counterpart, NR-VMAF is tailored specifically for measuring video quality in the presence of compression and scaling artifacts. The proposed model utilizes a deep convolutional neural network to extract quality-aware features from the pixel information of the distorted video, thereby eliminating the need for manual feature engineering. By adopting a patch-based approach, we are able to process high-resolution video data without any information loss. While the current model is trained solely on H.265/HEVC videos, its performance is verified on subjective datasets containing mainly H.264/AVC content. We demonstrate that NR-VMAF outperforms current state-of-the-art NR metrics while achieving a prediction accuracy that is comparable to VMAF and other FR metrics. Based on this strong performance, we believe that NR-VMAF is a viable approach to efficient and reliable No-Reference video quality assessment.","PeriodicalId":13159,"journal":{"name":"IEEE Transactions on Broadcasting","volume":"70 3","pages":"844-861"},"PeriodicalIF":3.2000,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"No-Reference VMAF: A Deep Neural Network-Based Approach to Blind Video Quality Assessment\",\"authors\":\"Axel De Decker;Jan De Cock;Peter Lambert;Glenn Van Wallendael\",\"doi\":\"10.1109/TBC.2024.3399479\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As the demand for high-quality video content continues to rise, accurately assessing the visual quality of digital videos has become more crucial than ever before. However, evaluating the perceptual quality of an impaired video in the absence of the original reference signal remains a significant challenge. To address this problem, we propose a novel No-Reference (NR) video quality metric called NR-VMAF. Our method is designed to replicate the popular Full-Reference (FR) metric VMAF in scenarios where the reference signal is unavailable or impractical to obtain. Like its FR counterpart, NR-VMAF is tailored specifically for measuring video quality in the presence of compression and scaling artifacts. The proposed model utilizes a deep convolutional neural network to extract quality-aware features from the pixel information of the distorted video, thereby eliminating the need for manual feature engineering. By adopting a patch-based approach, we are able to process high-resolution video data without any information loss. While the current model is trained solely on H.265/HEVC videos, its performance is verified on subjective datasets containing mainly H.264/AVC content. 
We demonstrate that NR-VMAF outperforms current state-of-the-art NR metrics while achieving a prediction accuracy that is comparable to VMAF and other FR metrics. Based on this strong performance, we believe that NR-VMAF is a viable approach to efficient and reliable No-Reference video quality assessment.\",\"PeriodicalId\":13159,\"journal\":{\"name\":\"IEEE Transactions on Broadcasting\",\"volume\":\"70 3\",\"pages\":\"844-861\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2024-06-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Broadcasting\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10564175/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Broadcasting","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10564175/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

As the demand for high-quality video content continues to rise, accurately assessing the visual quality of digital videos has become more crucial than ever before. However, evaluating the perceptual quality of an impaired video in the absence of the original reference signal remains a significant challenge. To address this problem, we propose a novel No-Reference (NR) video quality metric called NR-VMAF. Our method is designed to replicate the popular Full-Reference (FR) metric VMAF in scenarios where the reference signal is unavailable or impractical to obtain. Like its FR counterpart, NR-VMAF is tailored specifically for measuring video quality in the presence of compression and scaling artifacts. The proposed model utilizes a deep convolutional neural network to extract quality-aware features from the pixel information of the distorted video, thereby eliminating the need for manual feature engineering. By adopting a patch-based approach, we are able to process high-resolution video data without any information loss. While the current model is trained solely on H.265/HEVC videos, its performance is verified on subjective datasets containing mainly H.264/AVC content. We demonstrate that NR-VMAF outperforms current state-of-the-art NR metrics while achieving a prediction accuracy that is comparable to VMAF and other FR metrics. Based on this strong performance, we believe that NR-VMAF is a viable approach to efficient and reliable No-Reference video quality assessment.
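To make the patch-based pipeline concrete, the following is a minimal, hypothetical PyTorch sketch of the general idea: tile a full-resolution frame into non-overlapping patches (so no pixels are resized away), score each patch with a small CNN, and pool the patch predictions into one VMAF-like value in the 0-100 range. The network layers, the 128-pixel patch size, and the mean pooling are illustrative assumptions for this example, not the authors' NR-VMAF architecture.

```python
# Hypothetical sketch of patch-based no-reference quality prediction.
# The CNN below is an illustrative stand-in, not the NR-VMAF model.
import torch
import torch.nn as nn


class PatchQualityNet(nn.Module):
    """Toy CNN that maps one pixel patch to a scalar quality score."""

    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling over the patch
        )
        self.head = nn.Linear(64, 1)  # regress a patch-level score

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (N, 3, P, P) -> (N,) patch scores
        return self.head(self.features(patches).flatten(1)).squeeze(1)


def predict_frame_score(frame: torch.Tensor, model: nn.Module,
                        patch: int = 128) -> torch.Tensor:
    """Tile a frame into non-overlapping patches and pool the
    per-patch predictions into one VMAF-like frame score."""
    c, h, w = frame.shape
    # unfold extracts every P x P tile at full resolution, so no pixel
    # information is lost to downscaling (border remainders are cropped)
    tiles = (frame[:, : h - h % patch, : w - w % patch]
             .unfold(1, patch, patch)
             .unfold(2, patch, patch)          # (3, nH, nW, P, P)
             .permute(1, 2, 0, 3, 4)
             .reshape(-1, c, patch, patch))    # (N, 3, P, P)
    with torch.no_grad():
        scores = model(tiles)
    # simple mean pooling, clamped to VMAF's usual 0-100 range
    return scores.mean().clamp(0.0, 100.0)


model = PatchQualityNet().eval()
frame = torch.rand(3, 1080, 1920)  # one decoded 1080p frame
print(float(predict_frame_score(frame, model)))
```

For the full-reference baseline that such a model is trained to replicate, VMAF scores can be computed with FFmpeg's libvmaf filter, e.g. `ffmpeg -i distorted.mp4 -i reference.mp4 -lavfi libvmaf -f null -`.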
Source Journal
IEEE Transactions on Broadcasting (Engineering & Technology: Telecommunications)
CiteScore: 9.40
Self-citation rate: 31.10%
Annual articles: 79
Review time: 6-12 weeks
Journal description: The Society’s Field of Interest is “Devices, equipment, techniques and systems related to broadcast technology, including the production, distribution, transmission, and propagation aspects.” In addition to this formal FOI statement, which is used to provide guidance to the Publications Committee in the selection of content, the AdCom has further resolved that “broadcast systems includes all aspects of transmission, propagation, and reception.”