{"title":"IllumiDiff: Indoor Illumination Estimation from a Single Image with Diffusion Model.","authors":"Shiyuan Shen, Zhongyun Bao, Wenju Xu, Chunxia Xiao","doi":"10.1109/TVCG.2025.3553853","DOIUrl":null,"url":null,"abstract":"<p><p>Illumination estimation from a single indoor image is a promising yet challenging task. Existing indoor illumination estimation methods mainly regress lighting parameters or infer a panorama from a limited field-of-view image. Nevertheless, these methods fail to recover a panorama with both well-distributed illumination and detailed environment textures, leading to a lack of realism in rendering the embedded 3D objects with complex materials. This paper presents a novel multi-stage illumination estimation framework named IllumiDiff. Specifically, in Stage I, we first estimate illumination conditions from the input image, including the illumination distribution as well as the environmental texture of the scene. In Stage II, guided by the estimated illumination conditions, we design a conditional panoramic texture diffusion model to generate a high-quality LDR panorama. In Stage III, we leverage the illumination conditions to further reconstruct the LDR panorama to an HDR panorama. Extensive experiments demonstrate that our IllumiDiff can generate an HDR panorama with realistic illumination distribution and rich texture details from a single limited field-of-view indoor image. The generated panorama can produce impressive rendering results for the embedded 3D objects with various materials.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on visualization and computer graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TVCG.2025.3553853","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Illumination estimation from a single indoor image is a promising yet challenging task. Existing indoor illumination estimation methods mainly regress lighting parameters or infer a panorama from a limited field-of-view image. However, these methods fail to recover a panorama with both well-distributed illumination and detailed environment textures, leading to a lack of realism when rendering embedded 3D objects with complex materials. This paper presents a novel multi-stage illumination estimation framework named IllumiDiff. Specifically, in Stage I, we estimate illumination conditions from the input image, including the illumination distribution as well as the environmental texture of the scene. In Stage II, guided by the estimated illumination conditions, we design a conditional panoramic texture diffusion model to generate a high-quality LDR panorama. In Stage III, we leverage the illumination conditions to further reconstruct the LDR panorama into an HDR panorama. Extensive experiments demonstrate that IllumiDiff can generate an HDR panorama with a realistic illumination distribution and rich texture details from a single limited field-of-view indoor image. The generated panorama produces convincing rendering results for embedded 3D objects with various materials.
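To make the three-stage structure concrete, the following is a minimal Python sketch of how such a pipeline could be wired together. It is an illustration under stated assumptions, not the authors' implementation: every function name (estimate_illumination_conditions, generate_ldr_panorama, ldr_to_hdr), the 256x512 equirectangular resolution, and the HDR gain factor are hypothetical, and the stage bodies are numpy placeholders standing in for the learned networks described in the abstract.

```python
import numpy as np

PANO_H, PANO_W = 256, 512  # assumed equirectangular panorama resolution


def estimate_illumination_conditions(image: np.ndarray):
    """Stage I (placeholder): predict an illumination-distribution map and an
    environment-texture cue from the limited-FOV input. A real system would
    use trained networks; here we return dummy arrays of plausible shapes."""
    illum_distribution = np.zeros((PANO_H, PANO_W), dtype=np.float32)  # light-mask map
    env_texture = np.zeros((PANO_H, PANO_W, 3), dtype=np.float32)      # coarse texture guide
    return illum_distribution, env_texture


def generate_ldr_panorama(illum_distribution, env_texture):
    """Stage II (placeholder): a conditional diffusion model would iteratively
    denoise a panorama conditioned on the Stage-I outputs. The loop below is
    a no-op stand-in for the reverse diffusion process."""
    pano = np.random.rand(PANO_H, PANO_W, 3).astype(np.float32)  # initial noise
    for _ in range(4):  # stand-in for reverse diffusion steps
        pano = 0.5 * pano + 0.5 * env_texture  # dummy "denoising" toward the condition
    return np.clip(pano, 0.0, 1.0)  # LDR panorama in [0, 1]


def ldr_to_hdr(ldr_pano, illum_distribution):
    """Stage III (placeholder): lift the LDR panorama to HDR by boosting
    radiance where the estimated illumination distribution marks lights.
    The boost factor of 50 is an arbitrary illustrative choice."""
    gain = 1.0 + 50.0 * illum_distribution[..., None]
    return ldr_pano * gain  # unbounded HDR radiance values


def illumidiff(image: np.ndarray) -> np.ndarray:
    """End-to-end sketch of the three-stage pipeline."""
    illum, texture = estimate_illumination_conditions(image)  # Stage I
    ldr = generate_ldr_panorama(illum, texture)               # Stage II
    return ldr_to_hdr(ldr, illum)                             # Stage III


if __name__ == "__main__":
    fov_image = np.zeros((240, 320, 3), dtype=np.float32)  # dummy limited-FOV input
    hdr_pano = illumidiff(fov_image)
    print(hdr_pano.shape, hdr_pano.dtype)  # (256, 512, 3) float32
```

The design point the sketch captures is the abstract's decoupling: Stage I's outputs serve both as conditioning for the diffusion model in Stage II and as the cue for LDR-to-HDR reconstruction in Stage III, so illumination distribution and texture detail are handled by separate, cooperating stages rather than a single regression.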