Authors: Yuxin Chen, Ming-Chang Wen
Venue: International Conference on Digital Image Processing (Journal Article)
Publication date: 2022-10-12
DOI: 10.1117/12.2644706
No-reference stereoscopic video quality assessment based on Tchebichef moment
We propose a no-reference (NR) stereoscopic video quality assessment (SVQA) model based on the Tchebichef moment. Specifically, we extract keyframes according to the mutual information between adjacent frames, and the extracted keyframes are then segmented into patches on which low-order Tchebichef moments are calculated. Because the Tchebichef moment has strong descriptive ability, and Tchebichef moments of different orders represent independent features with minimal information redundancy, we extract statistical features of the Tchebichef moments computed on these patches as spatial features. Considering the influence of spatiotemporal distortions on video quality, we use three-dimensional derivative-of-Gaussian filters to compute spatiotemporal energy responses and extract statistical features from these responses as spatiotemporal features. Finally, we combine the spatial and spatiotemporal features to predict the quality of stereoscopic videos. The proposed model is evaluated on the NAMA3DS1-COSPAD1, SVQA, and Waterloo IVC Phase I databases. The experimental results show that it achieves competitive performance compared with existing SVQA models.
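The core spatial descriptor above, the low-order Tchebichef moment of an image patch, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: Tchebichef polynomials are the orthonormal discrete Chebyshev polynomials, and one numerically simple way to obtain them (rather than the usual three-term recurrence) is QR orthonormalization of the monomial basis on the patch's pixel coordinates. Function names and the patch size are assumptions for the example.

```python
import numpy as np

def tchebichef_basis(N, p):
    """Orthonormal discrete Tchebichef polynomials t_0..t_p evaluated on
    x = 0..N-1, obtained by QR-orthonormalizing the monomial basis.
    Returns an (N, p+1) matrix whose columns are the polynomials."""
    x = np.arange(N, dtype=float)
    V = np.vander(x, p + 1, increasing=True)  # columns: 1, x, x^2, ...
    Q, R = np.linalg.qr(V)
    # Fix signs so each polynomial has a positive leading coefficient.
    Q = Q * np.sign(np.diag(R))
    return Q

def tchebichef_moments(patch, p=3):
    """Low-order 2-D Tchebichef moments of a square patch:
    T[m, n] = sum_x sum_y t_m(x) t_n(y) f(x, y)."""
    N = patch.shape[0]
    B = tchebichef_basis(N, p)
    return B.T @ patch.astype(float) @ B      # shape (p+1, p+1)

# Example: for a constant 8x8 patch, only the DC moment T[0, 0] is nonzero,
# since t_0 is the constant 1/sqrt(N) and higher orders are orthogonal to it.
patch = np.ones((8, 8))
T = tchebichef_moments(patch, p=3)
```

In the model described above, statistics of such moment matrices over all keyframe patches would then serve as the spatial feature vector; the NR quality score is regressed from these together with the spatiotemporal energy features.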