A multi-scale no-reference video quality assessment method based on transformer
Yingan Cui, Zonghua Yu, Yuqin Feng, Huaijun Wang, Junhuai Li
DOI: 10.1007/s00530-024-01403-y
Published: 2024-07-06
Abstract
Video quality assessment is essential for optimizing user experience, enhancing network efficiency, supporting video production and editing, improving advertising effectiveness, and strengthening security in video surveillance and other domains. Current research focuses largely on video detail distortion while overlooking the temporal relationships between video frames and the influence of content-dependent characteristics of the human visual system on perceived quality; in response, this paper proposes a multi-scale no-reference video quality assessment method based on the transformer. On the one hand, spatial features of the video are extracted by a network that combines a Swin Transformer with deformable convolution, and mixed pooling of the per-frame features preserves additional information. On the other hand, a pyramid aggregation module merges long-term and short-term memories, strengthening the model's ability to capture temporal changes. Experimental results on the public datasets KoNViD-1k, CVD2014, and LIVE-VQC demonstrate the effectiveness of the proposed method for video quality prediction.
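The two components named in the abstract can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the class names (MixedPooling, PyramidTemporalAggregation), the learnable mixing weight, the temporal window sizes, and the feature dimension are all illustrative assumptions; only the ideas of blending average and max pooling per frame, and of aggregating frame features at several temporal scales, come from the abstract.

```python
import torch
import torch.nn as nn


class MixedPooling(nn.Module):
    """Blend of average and max pooling over a per-frame feature map.

    A learnable weight trades smoothing (avg) against salient-detail
    preservation (max). Generic mixed-pooling sketch, not the paper's
    exact formulation.
    """

    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))
        self.avg = nn.AdaptiveAvgPool2d(1)
        self.max = nn.AdaptiveMaxPool2d(1)

    def forward(self, x):                       # x: (N, C, H, W)
        a = torch.sigmoid(self.alpha)           # keep the mix weight in (0, 1)
        pooled = a * self.avg(x) + (1 - a) * self.max(x)
        return pooled.flatten(1)                # (N, C)


class PyramidTemporalAggregation(nn.Module):
    """Merge short- and long-term temporal context.

    Average-pools the frame-feature sequence at several temporal scales
    (hypothetical window sizes) and fuses the results into one scalar
    quality score per clip.
    """

    def __init__(self, dim, windows=(2, 4, 8)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.AvgPool1d(kernel_size=w, stride=1, padding=w // 2) for w in windows
        )
        self.fuse = nn.Linear(dim * (len(windows) + 1), dim)
        self.head = nn.Linear(dim, 1)           # scalar quality score

    def forward(self, feats):                   # feats: (B, T, C)
        x = feats.transpose(1, 2)               # (B, C, T) for 1-D pooling
        scales = [feats]                        # scale 1: raw short-term features
        for pool in self.pools:                 # coarser scales: longer-term context
            y = pool(x)[..., : feats.shape[1]]  # crop padding back to T steps
            scales.append(y.transpose(1, 2))
        fused = self.fuse(torch.cat(scales, dim=-1))
        return self.head(fused.mean(dim=1))     # (B, 1) predicted quality


# Toy usage: pretend a spatial backbone produced (B*T, 256, 7, 7) feature maps.
B, T, C = 2, 16, 256
feat_maps = torch.randn(B * T, C, 7, 7)
per_frame = MixedPooling()(feat_maps).view(B, T, C)
score = PyramidTemporalAggregation(C)(per_frame)
print(score.shape)                              # torch.Size([2, 1])
```

In this sketch the multi-scale character comes entirely from the pooling windows; the paper's pyramid aggregation module may weight or gate the scales differently, which the abstract does not specify.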