Revisiting the Efficiency of UGC Video Quality Assessment
Yilin Wang, Joong Gon Yim, N. Birkbeck, Junjie Ke, Hossein Talebi, Xi Chen, Feng Yang, Balu Adsumilli
2022 IEEE International Conference on Image Processing (ICIP), October 16, 2022. DOI: 10.1109/ICIP46576.2022.9897401
Abstract
UGC video quality assessment (UGC-VQA) is a challenging research topic due to high video diversity and limited public UGC quality datasets. State-of-the-art (SOTA) UGC quality models tend to be high-complexity models, and the trade-off among complexity, accuracy, and generalizability is rarely discussed. We propose a new perspective on UGC-VQA and show that model complexity may not be critical to performance, whereas a more diverse dataset is essential to train a better model. We illustrate this with a lightweight model, UVQ-lite, which has higher efficiency and better generalizability (less overfitting) than baseline SOTA models. We also propose a new way to analyze the sufficiency of the training set by leveraging UVQ's comprehensive features. Our results motivate a new perspective on the future of UGC-VQA research, which we believe is headed toward more efficient models and more diverse datasets.
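
The training-set sufficiency idea can be made concrete with a small sketch. The following is a hypothetical illustration, not the paper's published procedure: it assumes precomputed per-video feature vectors (standing in for UVQ's comprehensive features, such as content and distortion embeddings) and flags regions of feature space that a held-out set occupies but the training set leaves uncovered. The function name coverage_gap and the nearest-neighbor criterion are our own assumptions for illustration.

import numpy as np

def coverage_gap(train_feats: np.ndarray, test_feats: np.ndarray) -> np.ndarray:
    """For each held-out video, distance to its nearest training video.

    Large values flag feature-space regions the training set leaves
    uncovered, suggesting that more diverse training data is needed.
    """
    # Pairwise squared Euclidean distances via the identity
    # ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2, then take the row-wise min.
    d2 = (
        (test_feats ** 2).sum(axis=1, keepdims=True)
        - 2.0 * test_feats @ train_feats.T
        + (train_feats ** 2).sum(axis=1)
    )
    return np.sqrt(np.maximum(d2, 0.0).min(axis=1))

# Toy usage with random stand-ins for real per-video features.
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 128))  # 1000 training videos, 128-d features
test = rng.normal(size=(200, 128))    # 200 held-out videos
gaps = coverage_gap(train, test)
print(f"median gap: {np.median(gaps):.3f}, 95th pct: {np.percentile(gaps, 95):.3f}")

Under this reading, a heavy tail in the gap distribution would indicate that accuracy gains are more likely to come from adding diverse training videos than from increasing model complexity, which is consistent with the abstract's thesis.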