{"title":"Lifting-based Illumination Adaptive Transform (LIAT) using mesh-based illumination modelling","authors":"Maryam Haghighat, R. Mathew, A. Naman, D. Taubman","doi":"10.1109/ICIP.2017.8296834","DOIUrl":null,"url":null,"abstract":"State-of-the-art video coding techniques employ block-based illumination compensation to improve coding efficiency. In this work, we propose a Lifting-based Illumination Adaptive Transform (LIAT) to exploit temporal redundancy among frames that have illumination variations, such as the frames of low frame rate video or multi-view video. LIAT employs a mesh-based spatially affine model to represent illumination variations between two frames. In LIAT, transformed frames are jointly compressed, together with illumination information, into a layered rate-distortion optimal codestream, using the JPEG2000 format. We show that the LIAT framework significantly improves compression efficiency of temporal subband transforms for both predictive and more general transforms with predict and update steps.","PeriodicalId":229602,"journal":{"name":"2017 IEEE International Conference on Image Processing (ICIP)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE International Conference on Image Processing (ICIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIP.2017.8296834","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
State-of-the-art video coding techniques employ block-based illumination compensation to improve coding efficiency. In this work, we propose a Lifting-based Illumination Adaptive Transform (LIAT) to exploit temporal redundancy among frames with illumination variations, such as the frames of low-frame-rate video or multi-view video. LIAT employs a mesh-based, spatially affine model to represent illumination variations between two frames. The transformed frames are compressed jointly with the illumination information into a layered, rate-distortion-optimal codestream in the JPEG2000 format. We show that the LIAT framework significantly improves the compression efficiency of temporal subband transforms, both for purely predictive transforms and for more general transforms with predict and update steps.
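To make the lifting structure concrete, here is a minimal Python sketch of one predict/update step driven by an affine illumination field interpolated from coarse mesh nodes. This is an illustration under stated assumptions, not the authors' implementation: the function names (`affine_field_from_mesh`, `liat_lift`, `liat_inverse`), the use of bilinear `scipy.ndimage.zoom` as a stand-in for the paper's mesh interpolation, and the 5/3-style half-weight update are all illustrative choices.

```python
# Minimal sketch (not the paper's implementation) of an
# illumination-compensated predict/update lifting step.
import numpy as np
from scipy.ndimage import zoom  # stand-in for the paper's mesh interpolation


def affine_field_from_mesh(node_params, shape):
    """Bilinearly upsample per-node (gain, offset) pairs to per-pixel fields.

    node_params : (Gy, Gx, 2) array of mesh-node gains and offsets.
    shape       : (H, W) target frame shape.
    """
    gy, gx, _ = node_params.shape
    h, w = shape
    gain = zoom(node_params[..., 0], (h / gy, w / gx), order=1)
    offset = zoom(node_params[..., 1], (h / gy, w / gx), order=1)
    return gain[:h, :w], offset[:h, :w]


def liat_lift(x0, x1, node_params):
    """One lifting step between frames x0 and x1.

    Predict: the high band is the residual after removing an
    illumination-adapted copy of x0, i.e. high = x1 - (a*x0 + b).
    Update:  the low band uses a standard 5/3-style half-weight
    update (illumination weighting omitted for simplicity).
    """
    a, b = affine_field_from_mesh(node_params, x0.shape)
    high = x1 - (a * x0 + b)
    low = x0 + 0.5 * high
    return low, high


def liat_inverse(low, high, node_params):
    """Invert the lifting: undo the update step, then the predict step."""
    a, b = affine_field_from_mesh(node_params, low.shape)
    x0 = low - 0.5 * high
    x1 = high + (a * x0 + b)
    return x0, x1


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x0 = rng.random((64, 64))
    # Synthesize x1 as an illumination-modified x0 plus small detail.
    mesh = np.stack([1.0 + 0.2 * rng.random((4, 4)),
                     0.1 * rng.random((4, 4))], axis=-1)
    a, b = affine_field_from_mesh(mesh, x0.shape)
    x1 = a * x0 + b + 0.01 * rng.random((64, 64))

    low, high = liat_lift(x0, x1, mesh)
    r0, r1 = liat_inverse(low, high, mesh)
    assert np.allclose(x0, r0) and np.allclose(x1, r1)  # perfect reconstruction
    print("high-band energy:", float(np.sum(high ** 2)))
```

Note the structural property the sketch relies on: because each lifting step is inverted by simply reversing its additions, any illumination model can be placed inside the predict step without breaking perfect reconstruction, and the high band shrinks whenever the affine model fits the actual illumination change well.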