Saliency in objective video quality assessment: What is the ground truth?

Wei Zhang, Hantao Liu
DOI: 10.1109/MMSP.2016.7813333
Published in: 2016 IEEE 18th International Workshop on Multimedia Signal Processing (MMSP), September 2016
Citations: 0

Abstract

Finding ways to be able to objectively and reliably assess video quality as would be perceived by humans has become a pressing concern in the multimedia community. To enhance the performance of video quality metrics (VQMs), a research trend is to incorporate visual saliency aspects. Existing approaches have focused on utilizing a computational saliency model to improve a VQM. Since saliency models still remain limited in predicting where people look in videos, the benefits of inclusion of saliency in VQMs may heavily depend on the accuracy of the saliency model used. To gain an insight into the actual added value of saliency in VQMs, ground truth saliency obtained from eye-tracking instead of computational saliency is an essential prerequisite. However, collecting eye-tracking data within the context of video quality is confronted with a bias due to the involvement of massive stimulus repetition. In this paper, we introduce a new experimental methodology to alleviate such potential bias and consequently, to be able to deliver reliable intended data. We recorded eye movements from 160 human observers while they freely viewed 160 video stimuli distorted with different distortion types at various degradation levels. We analyse the extent to which ground truth saliency as well as computational saliency actually benefit existing state of the art VQMs. Our dataset opens new challenges for saliency modelling in video quality research and helps better gauge progress in developing saliency-based VQMs.
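The paper compares ground-truth (eye-tracking) saliency against computational saliency as a way to improve VQMs. A common way to fold saliency into a VQM, sketched below under our own assumptions rather than as the authors' exact method, is to pool a per-pixel quality map (e.g. an SSIM map) with the saliency map as pixel weights, so that distortions in fixated regions count more. The function name and the toy arrays are illustrative only.

```python
import numpy as np

def saliency_weighted_score(quality_map: np.ndarray,
                            saliency_map: np.ndarray,
                            eps: float = 1e-8) -> float:
    """Pool a per-pixel quality map into one score, weighting each
    pixel by its saliency (ground-truth fixation density or a model's
    prediction). With a uniform saliency map this reduces to the
    plain mean pooling used by an unmodified VQM.
    """
    w = saliency_map.astype(np.float64)
    q = quality_map.astype(np.float64)
    # Normalized weighted average; eps guards an all-zero saliency map.
    return float((w * q).sum() / (w.sum() + eps))

# Toy 2x2 quality map: corners are pristine (1.0), off-diagonal degraded.
q = np.array([[1.0, 0.5],
              [0.5, 1.0]])

# Uniform saliency: score equals the plain mean of the quality map.
uniform = np.ones_like(q)

# Saliency concentrated on the pristine pixels: score rises, because
# the degraded regions were (by assumption) not looked at.
focus = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
```

The paper's point is precisely that the benefit of this weighting depends on how well `saliency_map` matches where people actually look, which is why eye-tracking ground truth is needed as the upper-bound reference.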