Adaptive Color Transform in VVC Standard
Hong-Jheng Jhu, Xiaoyu Xiu, Yi-Wen Chen, Tsung-Chuan Ma, Xianglin Wang
2020 IEEE International Conference on Visual Communications and Image Processing (VCIP), December 2020
DOI: 10.1109/VCIP49819.2020.9301798
Abstract
This paper provides an in-depth overview of the adaptive color transform (ACT) tool adopted in the emerging versatile video coding (VVC) standard. With the ACT, prediction residuals in the original color space are adaptively converted into another color space to reduce the correlation among the three color components of video sequences in 4:4:4 chroma format. The residuals after color-space conversion are then transformed, quantized, and entropy-coded, following the VVC framework. The YCgCo-R transforms, which can be implemented with only shift and addition operations, are selected as the ACT core transforms to perform the color-space conversion. Additionally, to facilitate implementation, the ACT is disabled in cases where the three color components do not share the same block partition, e.g., under the separate-tree partition mode or the intra sub-partition prediction mode. Simulation results based on the VVC reference software show that the ACT may provide significant coding gains with negligible impact on encoding and decoding runtime.
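To make the shift-and-add nature of the core transform concrete, the following is a minimal sketch of the standard lifting-based YCgCo-R conversion referenced in the abstract. It is not taken from the VVC reference software; the function and variable names are illustrative, and it assumes the ">> 1" behaves as an arithmetic (floor) shift on negative values, matching the ">>" operator as used in video coding specifications.

```c
/* Sketch of a lifting-based YCgCo-R transform (forward and inverse),
 * applied here to one residual triple. Names are illustrative only. */
#include <assert.h>
#include <stdio.h>

/* Forward conversion: (R, G, B) residuals -> (Y, Cg, Co). */
static void ycgco_r_forward(int r, int g, int b, int *y, int *cg, int *co)
{
    int t;
    *co = r - b;          /* orange chroma */
    t   = b + (*co >> 1);
    *cg = g - t;          /* green chroma */
    *y  = t + (*cg >> 1); /* luma */
}

/* Inverse conversion: (Y, Cg, Co) -> (R, G, B). Reverses the lifting
 * steps exactly, so the round trip is lossless. */
static void ycgco_r_inverse(int y, int cg, int co, int *r, int *g, int *b)
{
    int t = y - (cg >> 1);
    *g = t + cg;
    *b = t - (co >> 1);
    *r = *b + co;
}

int main(void)
{
    int y, cg, co, r2, g2, b2;
    int r = 100, g = 50, b = 30; /* example residual values */

    ycgco_r_forward(r, g, b, &y, &cg, &co);
    ycgco_r_inverse(y, cg, co, &r2, &g2, &b2);

    assert(r == r2 && g == g2 && b == b2); /* reversibility check */
    printf("Y=%d Cg=%d Co=%d -> R=%d G=%d B=%d\n", y, cg, co, r2, g2, b2);
    return 0;
}
```

Because every step is a shift or an addition and each forward lifting step has an exact inverse, the conversion is both inexpensive to implement and perfectly reversible, which is why a transform of this family suits lossless as well as lossy coding of 4:4:4 residuals.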