A Fast and Efficient No-Reference Video Quality Assessment Algorithm Using Video Action Recognition Features

N. Suresh, Pavan Manesh Mylavarapu, Naga Sailaja Mahankali, Sumohana S. Channappayya
{"title":"基于视频动作识别特征的快速高效无参考视频质量评估算法","authors":"N. Suresh, Pavan Manesh Mylavarapu, Naga Sailaja Mahankali, Sumohana S. Channappayya","doi":"10.1109/NCC55593.2022.9806466","DOIUrl":null,"url":null,"abstract":"This work addresses the problem of efficient noreference video quality assessment (NR-VQA). The motivation for this work is that even the best and fastest VQA algorithms do not achieve real-time performance. The speed of quality evaluation is impeded primarily by the spatio-temporal feature extraction stage. This impediment is common to both traditional as well as deep learning models. To address this issue, we explore the efficacy of features used in the action recognition problem for NR- VQA. Specifically, we leverage the efficiency offered by Gate Shift Module (GSM) in extracting spatio-temporal features. A simple yet effective improvement to the GSM model is proposed by adding the self-attention module. We first show that GSM features are indeed effective for NR-VQA. We then demonstrate a speed-up that is orders of magnitude faster than the current state-of-the-art VQA algorithms, albeit at the cost of overall performance. We evaluate the efficacy of our algorithm on both Standard Dynamic Range (SDR) and High Dynamic Range (HDR) datasets like KoNViD-1K, LIVE VQC, HDR.","PeriodicalId":403870,"journal":{"name":"2022 National Conference on Communications (NCC)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"A Fast and Efficient No-Reference Video Quality Assessment Algorithm Using Video Action Recognition Features\",\"authors\":\"N. Suresh, Pavan Manesh Mylavarapu, Naga Sailaja Mahankali, Sumohana S. Channappayya\",\"doi\":\"10.1109/NCC55593.2022.9806466\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This work addresses the problem of efficient noreference video quality assessment (NR-VQA). The motivation for this work is that even the best and fastest VQA algorithms do not achieve real-time performance. The speed of quality evaluation is impeded primarily by the spatio-temporal feature extraction stage. This impediment is common to both traditional as well as deep learning models. To address this issue, we explore the efficacy of features used in the action recognition problem for NR- VQA. Specifically, we leverage the efficiency offered by Gate Shift Module (GSM) in extracting spatio-temporal features. A simple yet effective improvement to the GSM model is proposed by adding the self-attention module. We first show that GSM features are indeed effective for NR-VQA. We then demonstrate a speed-up that is orders of magnitude faster than the current state-of-the-art VQA algorithms, albeit at the cost of overall performance. 
We evaluate the efficacy of our algorithm on both Standard Dynamic Range (SDR) and High Dynamic Range (HDR) datasets like KoNViD-1K, LIVE VQC, HDR.\",\"PeriodicalId\":403870,\"journal\":{\"name\":\"2022 National Conference on Communications (NCC)\",\"volume\":\"23 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-05-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 National Conference on Communications (NCC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NCC55593.2022.9806466\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 National Conference on Communications (NCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NCC55593.2022.9806466","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This work addresses the problem of efficient no-reference video quality assessment (NR-VQA). The motivation for this work is that even the best and fastest VQA algorithms do not achieve real-time performance. The speed of quality evaluation is impeded primarily by the spatio-temporal feature extraction stage. This impediment is common to both traditional and deep learning models. To address this issue, we explore the efficacy of features used in the action recognition problem for NR-VQA. Specifically, we leverage the efficiency offered by the Gate Shift Module (GSM) in extracting spatio-temporal features. A simple yet effective improvement to the GSM model is proposed by adding a self-attention module. We first show that GSM features are indeed effective for NR-VQA. We then demonstrate a speed-up that is orders of magnitude faster than the current state-of-the-art VQA algorithms, albeit at the cost of overall performance. We evaluate the efficacy of our algorithm on both Standard Dynamic Range (SDR) and High Dynamic Range (HDR) datasets such as KoNViD-1K, LIVE VQC, and HDR.
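To make the high-level idea concrete, the following is a minimal, illustrative PyTorch-style sketch of the general approach the abstract describes: gate-controlled temporal shifting of frame features (in the spirit of GSM) followed by a self-attention block that pools per-frame features into a single quality score. The module names, tensor shapes, gating formulation, and regression head are assumptions made for illustration only; they are not the authors' implementation.

```python
# Illustrative sketch only: a simplified gate-shift block plus self-attention
# pooling for no-reference quality regression. All design details here are
# assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn


class GateShiftBlock(nn.Module):
    """Learns a spatial gate deciding how much of each feature map is
    exchanged with neighbouring frames (simplified GSM-style idea)."""

    def __init__(self, channels):
        super().__init__()
        # 3x3 conv produces a per-location gate in [-1, 1]
        self.gate = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):  # x: (batch, time, channels, H, W)
        g = torch.tanh(self.gate(x.flatten(0, 1))).view_as(x)
        gated = g * x                                # portion to be shifted
        fwd = torch.roll(gated, shifts=1, dims=1)    # send to next frame
        bwd = torch.roll(gated, shifts=-1, dims=1)   # send to previous frame
        # Ungated part stays put; shifted parts mix in temporal context.
        return (x - gated) + 0.5 * (fwd + bwd)


class QualityHead(nn.Module):
    """Self-attention over per-frame features, then a scalar quality score."""

    def __init__(self, channels, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.fc = nn.Linear(channels, 1)

    def forward(self, x):  # x: (batch, time, channels, H, W)
        tokens = x.mean(dim=(-2, -1))                # spatial pool -> (B, T, C)
        attended, _ = self.attn(tokens, tokens, tokens)
        return self.fc(attended.mean(dim=1)).squeeze(-1)  # (B,) quality scores


if __name__ == "__main__":
    feats = torch.randn(2, 8, 64, 14, 14)            # dummy backbone features
    feats = GateShiftBlock(64)(feats)
    print(QualityHead(64)(feats).shape)              # torch.Size([2])
```

The intended efficiency argument is that the shift operation adds temporal context with almost no extra computation over a 2D backbone, while the self-attention step operates on compact per-frame tokens rather than full spatio-temporal volumes.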