{"title":"无参考深度压缩视频质量评估","authors":"M. Alizadeh, A. Mohammadi, M. Sharifkhani","doi":"10.1109/ICCKE.2018.8566395","DOIUrl":null,"url":null,"abstract":"A novel No-Reference Video Quality Assessment (NR-VQA), based on Convolutional Neural Network (CNN) for High Efficiency Video Codec (HEVC) is presented. Deep Compressed-domain Video Quality (DCVQ) measures the video quality, with compressed domain features such as motion vector, bit allocation, partitioning and quantization parameter. For the training of the network, P-MOS is used due to the limitation of existing datasets. The evaluation of the proposed method shows that it has “96%” correlation to subjective quality assessment (MOS). The method can work simultaneously with the decoding process and measures the quality in different resolutions.","PeriodicalId":283700,"journal":{"name":"2018 8th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"No-Reference Deep Compressed-Based Video Quality Assessment\",\"authors\":\"M. Alizadeh, A. Mohammadi, M. Sharifkhani\",\"doi\":\"10.1109/ICCKE.2018.8566395\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A novel No-Reference Video Quality Assessment (NR-VQA), based on Convolutional Neural Network (CNN) for High Efficiency Video Codec (HEVC) is presented. Deep Compressed-domain Video Quality (DCVQ) measures the video quality, with compressed domain features such as motion vector, bit allocation, partitioning and quantization parameter. For the training of the network, P-MOS is used due to the limitation of existing datasets. The evaluation of the proposed method shows that it has “96%” correlation to subjective quality assessment (MOS). The method can work simultaneously with the decoding process and measures the quality in different resolutions.\",\"PeriodicalId\":283700,\"journal\":{\"name\":\"2018 8th International Conference on Computer and Knowledge Engineering (ICCKE)\",\"volume\":\"26 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 8th International Conference on Computer and Knowledge Engineering (ICCKE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCKE.2018.8566395\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 8th International Conference on Computer and Knowledge Engineering (ICCKE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCKE.2018.8566395","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
A novel No-Reference Video Quality Assessment (NR-VQA) method based on a Convolutional Neural Network (CNN) is presented for video encoded with High Efficiency Video Coding (HEVC). The proposed Deep Compressed-domain Video Quality (DCVQ) metric measures video quality from compressed-domain features such as motion vectors, bit allocation, block partitioning, and the quantization parameter. Because of the limitations of existing datasets, the network is trained on P-MOS scores. Evaluation shows that the proposed method achieves a 96% correlation with subjective quality assessment (MOS). The method can run alongside the decoding process and measures quality at different resolutions.
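The abstract describes a CNN that regresses a quality score directly from compressed-domain feature maps rather than decoded pixels. The sketch below is a minimal PyTorch illustration of that idea, not the authors' network: the four-channel per-block feature map (motion-vector magnitude, allocated bits, partition depth, QP), the layer widths, and the name DCVQNet are all assumptions introduced here for illustration.

# Minimal sketch of a compressed-domain quality regressor (illustrative only;
# layer sizes, channel layout, and names are assumptions, not the paper's model).
import torch
import torch.nn as nn

class DCVQNet(nn.Module):
    def __init__(self, in_channels: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # pool over the block grid, independent of frame resolution
        )
        self.regressor = nn.Linear(64, 1)  # scalar quality score (e.g., a predicted MOS)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x).flatten(1)
        return self.regressor(f)

# Example: a 64x64 grid of coding blocks with 4 compressed-domain features per block.
model = DCVQNet()
feature_map = torch.randn(1, 4, 64, 64)
print(model(feature_map).shape)  # torch.Size([1, 1])

Global average pooling over the block grid is one plausible way such a network could accept inputs of different resolutions, consistent with the abstract's claim that the method measures quality at different resolutions.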