Blind Video Quality Assessment via Space-Time Slice Statistics
Qi Zheng, Zhengzhong Tu, Zhijian Hao, Xiaoyang Zeng, A. Bovik, Yibo Fan
{"title":"基于时空切片统计的盲视频质量评估","authors":"Qi Zheng, Zhengzhong Tu, Zhijian Hao, Xiaoyang Zeng, A. Bovik, Yibo Fan","doi":"10.1109/ICIP46576.2022.9897565","DOIUrl":null,"url":null,"abstract":"User-generated contents (UGC) have gained increased attention in the video quality community recently. Perceptual video quality assessment (VQA) of UGC videos is of great significance for content providers to monitor, process, and deliver massive numbers of UGC videos. Blind video quality prediction of UGC videos is challenging since complex mixtures of spatial and temporal distortions contribute to the overall perceptual quality. In this paper, we develop a simple, effective, and efficient blind VQA framework (STS-QA) based on the statistical analysis of space-time slices (STS) of videos. Specifically, we extract spatio-temporal statistical features along different orientations of video STS, that capture directional global motion, then train a shallow quality predictor. The proposed framework can be used to easily extend any existing video/image quality model to account for temporal or motion regularities. Our experimental results on three publicly available UGC databases demonstrate that our proposed STS-QA model can significantly boost prediction performance compared to baselines. The code will be released at: https://github.com/uniqzheng/STS_BVQA.","PeriodicalId":387035,"journal":{"name":"2022 IEEE International Conference on Image Processing (ICIP)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Blind Video Quality Assessment via Space-Time Slice Statistics\",\"authors\":\"Qi Zheng, Zhengzhong Tu, Zhijian Hao, Xiaoyang Zeng, A. Bovik, Yibo Fan\",\"doi\":\"10.1109/ICIP46576.2022.9897565\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"User-generated contents (UGC) have gained increased attention in the video quality community recently. Perceptual video quality assessment (VQA) of UGC videos is of great significance for content providers to monitor, process, and deliver massive numbers of UGC videos. Blind video quality prediction of UGC videos is challenging since complex mixtures of spatial and temporal distortions contribute to the overall perceptual quality. In this paper, we develop a simple, effective, and efficient blind VQA framework (STS-QA) based on the statistical analysis of space-time slices (STS) of videos. Specifically, we extract spatio-temporal statistical features along different orientations of video STS, that capture directional global motion, then train a shallow quality predictor. The proposed framework can be used to easily extend any existing video/image quality model to account for temporal or motion regularities. Our experimental results on three publicly available UGC databases demonstrate that our proposed STS-QA model can significantly boost prediction performance compared to baselines. 
The code will be released at: https://github.com/uniqzheng/STS_BVQA.\",\"PeriodicalId\":387035,\"journal\":{\"name\":\"2022 IEEE International Conference on Image Processing (ICIP)\",\"volume\":\"6 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Conference on Image Processing (ICIP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICIP46576.2022.9897565\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Image Processing (ICIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIP46576.2022.9897565","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
User-generated content (UGC) has gained increasing attention in the video quality community recently. Perceptual video quality assessment (VQA) of UGC videos is of great significance for content providers, who must monitor, process, and deliver massive numbers of UGC videos. Blind video quality prediction of UGC videos is challenging since complex mixtures of spatial and temporal distortions contribute to the overall perceptual quality. In this paper, we develop a simple, effective, and efficient blind VQA framework (STS-QA) based on the statistical analysis of space-time slices (STS) of videos. Specifically, we extract spatio-temporal statistical features along different orientations of video STS that capture directional global motion, then train a shallow quality predictor. The proposed framework can easily extend any existing video/image quality model to account for temporal or motion regularities. Our experimental results on three publicly available UGC databases demonstrate that our proposed STS-QA model can significantly boost prediction performance compared to baselines. The code will be released at: https://github.com/uniqzheng/STS_BVQA.
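To make the pipeline concrete, here is a minimal Python sketch of the STS idea: fixing a row (or column) of the frame and stacking it over time yields a 2-D slice whose structure reflects motion along that orientation, and simple statistics of these slices feed a shallow regressor. The slice spacing, the MSCN-style statistics, and the SVR predictor below are illustrative assumptions, not the authors' exact feature set or training setup; the linked repository contains the official code.

```python
import numpy as np
from sklearn.svm import SVR

def extract_space_time_slices(video, step=16):
    """Cut space-time slices (STS) from a grayscale video volume.

    video: ndarray of shape (T, H, W). Returns horizontal slices
    (fixed row -> time x width) and vertical slices (fixed column ->
    time x height). The spacing `step` is an illustrative choice.
    """
    T, H, W = video.shape
    horizontal = [video[:, y, :] for y in range(0, H, step)]  # each (T, W)
    vertical = [video[:, :, x] for x in range(0, W, step)]    # each (T, H)
    return horizontal, vertical

def slice_statistics(sts):
    """Toy per-slice statistics: moments of globally mean-subtracted,
    contrast-normalized coefficients. A stand-in for a full natural
    scene statistics feature set."""
    s = sts.astype(np.float64)
    mscn = (s - s.mean()) / (s.std() + 1e-8)
    return np.array([mscn.mean(), mscn.std(), np.abs(mscn).mean()])

def video_features(video, step=16):
    """Pool the toy slice statistics over all STS orientations."""
    horizontal, vertical = extract_space_time_slices(video, step)
    feats = np.vstack([slice_statistics(s) for s in horizontal + vertical])
    return feats.mean(axis=0)

# Fit a shallow quality predictor on synthetic stand-in data:
# X holds per-video feature vectors, y holds mean opinion scores.
X = np.stack([video_features(np.random.rand(64, 240, 320)) for _ in range(8)])
y = np.random.rand(8) * 100
predictor = SVR(kernel="rbf").fit(X, y)
print(predictor.predict(X[:2]))
```

The key design point the sketch illustrates is that each slice orientation converts a different direction of global motion into static 2-D texture, so an image-domain statistical model can be reused on the slices without any explicit motion estimation.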