{"title":"基于稀疏表示和动态原子分类的视频质量评估","authors":"Zihui Zhang, Zongyao Hu","doi":"10.1109/ICCRD51685.2021.9386597","DOIUrl":null,"url":null,"abstract":"Finding that not all dictionary atoms are closely related to degradation in visual signal, we innovatively design a distortion sensitivity guided Dynamic Atom Classification (DAC) strategy to separate distorted signal. Then, we propose a novel DAC-based full-reference video quality assessment (VQA) method. The method includes two parts: spatial quality evaluation and temporal quality evaluation. Spatially, we train a distortion-aware dictionary, get sparse representation of video patches, and dynamically classify activated dictionary atoms. Every frame is separated into difference and basic components, and spatial similarity is aggregated by component similarities. Temporally, we calculate gradient similarity of frame difference to capture motion information. The experimental results indicate the effectiveness of the proposed algorithm compared with state-of-art VQA methods.","PeriodicalId":294200,"journal":{"name":"2021 IEEE 13th International Conference on Computer Research and Development (ICCRD)","volume":"6 4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Video Quality Assessment by Sparse Representation and Dynamic Atom Classification\",\"authors\":\"Zihui Zhang, Zongyao Hu\",\"doi\":\"10.1109/ICCRD51685.2021.9386597\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Finding that not all dictionary atoms are closely related to degradation in visual signal, we innovatively design a distortion sensitivity guided Dynamic Atom Classification (DAC) strategy to separate distorted signal. Then, we propose a novel DAC-based full-reference video quality assessment (VQA) method. The method includes two parts: spatial quality evaluation and temporal quality evaluation. Spatially, we train a distortion-aware dictionary, get sparse representation of video patches, and dynamically classify activated dictionary atoms. Every frame is separated into difference and basic components, and spatial similarity is aggregated by component similarities. Temporally, we calculate gradient similarity of frame difference to capture motion information. 
The experimental results indicate the effectiveness of the proposed algorithm compared with state-of-art VQA methods.\",\"PeriodicalId\":294200,\"journal\":{\"name\":\"2021 IEEE 13th International Conference on Computer Research and Development (ICCRD)\",\"volume\":\"6 4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-01-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE 13th International Conference on Computer Research and Development (ICCRD)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCRD51685.2021.9386597\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 13th International Conference on Computer Research and Development (ICCRD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCRD51685.2021.9386597","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Observing that not all dictionary atoms are closely related to degradation of the visual signal, we design a distortion-sensitivity-guided Dynamic Atom Classification (DAC) strategy to separate the distorted signal. Building on it, we propose a novel DAC-based full-reference video quality assessment (VQA) method. The method consists of two parts: spatial quality evaluation and temporal quality evaluation. Spatially, we train a distortion-aware dictionary, obtain sparse representations of video patches, and dynamically classify the activated dictionary atoms. Each frame is thereby decomposed into a difference component and a basic component, and spatial similarity is obtained by aggregating the component similarities. Temporally, we compute the gradient similarity of frame differences to capture motion information. Experimental results demonstrate the effectiveness of the proposed algorithm compared with state-of-the-art VQA methods.
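
Since only the abstract is available here, the Python sketch below merely illustrates, under stated assumptions, the kind of operations the pipeline describes: sparse coding of patches over a dictionary (via Orthogonal Matching Pursuit from scikit-learn), a sensitivity-guided split of activated atoms into difference and basic components, and a gradient-similarity term on frame differences. The dictionary D, the per-atom sensitivity scores, the threshold, the sparsity level, and all function names are hypothetical placeholders, not the paper's actual algorithm or parameters.

# Minimal, illustrative sketch (not the authors' implementation).
# D, atom_sensitivity, thresh, and k are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import orthogonal_mp


def split_components(patches, D, atom_sensitivity, thresh=0.5, k=8):
    """Sparse-code patches over D, then split the reconstruction into a
    'difference' component (distortion-sensitive atoms) and a 'basic'
    component (remaining atoms).

    patches          : (patch_dim, n_patches), columns are vectorized patches
    D                : (patch_dim, n_atoms), dictionary with atoms as columns
    atom_sensitivity : (n_atoms,), distortion-sensitivity score per atom
    """
    # Orthogonal Matching Pursuit yields sparse codes of shape (n_atoms, n_patches).
    codes = orthogonal_mp(D, patches, n_nonzero_coefs=k)
    sensitive = atom_sensitivity > thresh
    diff_component = D[:, sensitive] @ codes[sensitive, :]
    basic_component = D[:, ~sensitive] @ codes[~sensitive, :]
    return diff_component, basic_component


def component_similarity(ref_component, dis_component, c=1e-3):
    """SSIM-style pointwise similarity between reference and distorted components."""
    num = 2.0 * ref_component * dis_component + c
    den = ref_component ** 2 + dis_component ** 2 + c
    return float((num / den).mean())


def temporal_gradient_similarity(ref_prev, ref_cur, dis_prev, dis_cur, c=1e-3):
    """Gradient similarity of frame differences as a simple motion-aware term."""
    def grad_mag(diff):
        gy, gx = np.gradient(diff)
        return np.sqrt(gx ** 2 + gy ** 2)

    g_ref = grad_mag(ref_cur.astype(np.float64) - ref_prev.astype(np.float64))
    g_dis = grad_mag(dis_cur.astype(np.float64) - dis_prev.astype(np.float64))
    sim_map = (2.0 * g_ref * g_dis + c) / (g_ref ** 2 + g_dis ** 2 + c)
    return float(sim_map.mean())

According to the abstract, the component similarities and the temporal term are then pooled into a single quality score; the pooling weights and the exact sensitivity measure are not specified there, so they are omitted from this sketch.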