A data-driven compression method for transient rendering
Yun Liang, Mingqin Chen, Zesheng Huang, D. Gutierrez, A. Muñoz, Julio Marco
ACM SIGGRAPH 2019 Posters, published 2019-07-28. DOI: 10.1145/3306214.3338582
Citations: 2
Abstract
Monte Carlo methods for transient rendering have become a powerful instrument for generating reliable data in transient imaging applications, whether for benchmarking, analysis, or as a source for data-driven approaches. However, due to the increased dimensionality of time-resolved renders, storage and data bandwidth become significant limiting constraints: a single time-resolved render of a scene can take several hundred megabytes. In this work we propose a learning-based approach that uses deep encoder-decoder architectures to learn lower-dimensional feature vectors of time-resolved pixels. We demonstrate that our method is capable of compressing transient renders by a factor of up to 32, and of recovering the full transient profile using a decoder. Additionally, we show how our learned features significantly mitigate variance in the recovered signal, addressing one of the pathological problems in transient rendering.
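As a rough illustration of the kind of encoder-decoder compression described above, the sketch below maps each time-resolved pixel (a vector of temporal bins) to a feature vector 32 times smaller and reconstructs the full profile with a decoder. This is a minimal sketch assuming PyTorch; the layer sizes, the 2048-bin transient length, and the training loss are illustrative assumptions, not the architecture used in the poster.

```python
# Minimal sketch of an encoder-decoder for time-resolved pixel profiles.
# Assumptions: PyTorch, 2048 temporal bins per pixel, 32x compression,
# fully connected layers; the actual architecture in the paper may differ.
import torch
import torch.nn as nn

class TransientAutoencoder(nn.Module):
    def __init__(self, num_bins=2048, compression=32):
        super().__init__()
        latent_dim = num_bins // compression  # e.g. 2048 / 32 = 64
        # Encoder: compress a full transient profile into a low-dimensional feature vector.
        self.encoder = nn.Sequential(
            nn.Linear(num_bins, 512),
            nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        # Decoder: reconstruct the full time-resolved profile from the feature vector.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512),
            nn.ReLU(),
            nn.Linear(512, num_bins),
        )

    def forward(self, x):
        z = self.encoder(x)           # compressed per-pixel features
        return self.decoder(z), z     # reconstruction and latent code

# Usage: compress and reconstruct a batch of time-resolved pixel profiles.
model = TransientAutoencoder()
pixels = torch.rand(16, 2048)                      # hypothetical batch of transient pixels
recon, features = model(pixels)
loss = nn.functional.mse_loss(recon, pixels)       # simple reconstruction loss for training
```

Training such a model on pairs of noisy and converged transient profiles would also let the decoder act as a denoiser, which is consistent with the variance reduction the abstract reports, though the exact training setup is not specified here.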