{"title":"Anisotropic Spherical Gaussians Lighting Priors for Indoor Environment Map Estimation","authors":"Junhong Zhao;Bing Xue;Mengjie Zhang","doi":"10.1109/TIP.2025.3575902","DOIUrl":null,"url":null,"abstract":"High Dynamic Range (HDR) environment lighting is essential for augmented reality and visual editing applications, enabling realistic object relighting and seamless scene composition. However, the acquisition of accurate HDR environment maps remains resource-intensive, often requiring specialized devices such as light probes or 360° capture systems, and necessitating stitching during postprocessing. Existing deep learning-based methods attempt to estimate global illumination from partial-view images but often struggle with complex lighting conditions, particularly in indoor environments with diverse lighting variations. To address this challenge, we propose a novel method for estimating indoor HDR environment maps from single standard images, leveraging Anisotropic Spherical Gaussians (ASG) to model intricate lighting distributions as priors. Unlike traditional Spherical Gaussian (SG) representations, ASG can better capture anisotropic lighting properties, including complex shape, rotation, and spatial extent. Our approach introduces a transformer-based network with a two-stage training scheme to predict ASG parameters effectively. To leverage these predicted lighting priors for environment map generation, we introduce a novel generative projector that synthesizes environment maps with high-frequency textures. To train the generative projector, we propose a parameter-efficient adaptation method that transfers knowledge from SG-based guidance to ASG, enabling the model to preserve the generalizability of SG (e.g., spatial distribution and dominance of light sources) while enhancing its capacity to capture fine-grained anisotropic lighting characteristics. Experimental results demonstrate that our method yields environment maps with more precise lighting conditions and environment textures, facilitating the realistic rendering of lighting effects. The implementation code for ASG extraction can be found at <uri>https://github.com/junhong-jennifer-zhao/ASG-lighting</uri>","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"3635-3647"},"PeriodicalIF":0.0000,"publicationDate":"2025-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11028879/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
High Dynamic Range (HDR) environment lighting is essential for augmented reality and visual editing applications, enabling realistic object relighting and seamless scene composition. However, acquiring accurate HDR environment maps remains resource-intensive, often requiring specialized devices such as light probes or 360° capture systems, and necessitating stitching during post-processing. Existing deep learning-based methods attempt to estimate global illumination from partial-view images but often struggle with complex lighting conditions, particularly in indoor environments with diverse lighting variations. To address this challenge, we propose a novel method for estimating indoor HDR environment maps from single standard images, leveraging Anisotropic Spherical Gaussians (ASG) to model intricate lighting distributions as priors. Unlike traditional Spherical Gaussian (SG) representations, ASG can better capture anisotropic lighting properties, including complex shape, rotation, and spatial extent. Our approach introduces a transformer-based network with a two-stage training scheme to predict ASG parameters effectively. To leverage these predicted lighting priors for environment map generation, we introduce a novel generative projector that synthesizes environment maps with high-frequency textures. To train the generative projector, we propose a parameter-efficient adaptation method that transfers knowledge from SG-based guidance to ASG, enabling the model to preserve the generalizability of SG (e.g., spatial distribution and dominance of light sources) while enhancing its capacity to capture fine-grained anisotropic lighting characteristics. Experimental results demonstrate that our method yields environment maps with more precise lighting conditions and environment textures, facilitating the realistic rendering of lighting effects. The implementation code for ASG extraction can be found at https://github.com/junhong-jennifer-zhao/ASG-lighting.
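For readers unfamiliar with the representation named in the abstract: the commonly used ASG formulation (Xu et al., 2013) evaluates a lobe for a unit direction v as G(v) = max(v·z, 0) · c · exp(-λ(v·x)² - μ(v·y)²), where z is the lobe axis, x and y are orthonormal tangent axes, and the two bandwidths λ, μ give the anisotropy that a single-bandwidth SG cannot express. The NumPy sketch below illustrates that formula only; the function names (asg_lobe, direction_grid), the lat-long parameterization, and the single-lobe example are our illustrative assumptions, not the authors' released code, which additionally predicts these parameters with a network.

```python
import numpy as np

def asg_lobe(v, z, x, y, lam, mu, c):
    """Evaluate one ASG lobe at unit direction(s) v of shape (..., 3).

    z        : (3,) lobe axis (unit vector).
    x, y     : (3,) tangent/bitangent axes, orthonormal with z.
    lam, mu  : bandwidths along x and y; lam != mu yields anisotropy.
    c        : (3,) RGB amplitude.
    """
    # Smooth term S(v; z) = max(v . z, 0): zero on the back hemisphere.
    smooth = np.clip(v @ z, 0.0, None)
    # Anisotropic falloff with separate bandwidths along the tangent axes.
    aniso = np.exp(-lam * (v @ x) ** 2 - mu * (v @ y) ** 2)
    return (smooth * aniso)[..., None] * c  # broadcast scalar weight to RGB

def direction_grid(h, w):
    """Unit view directions for each pixel of an equirectangular map."""
    theta = (np.arange(h) + 0.5) / h * np.pi       # polar angle from +z
    phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi   # azimuth
    t, p = np.meshgrid(theta, phi, indexing="ij")
    return np.stack([np.sin(t) * np.cos(p),
                     np.sin(t) * np.sin(p),
                     np.cos(t)], axis=-1)          # (h, w, 3)

# Hypothetical example: one elongated warm lobe, e.g. a linear ceiling light.
dirs = direction_grid(64, 128)
env = asg_lobe(dirs,
               z=np.array([0.0, 0.0, 1.0]),        # lobe points "up"
               x=np.array([1.0, 0.0, 0.0]),
               y=np.array([0.0, 1.0, 0.0]),
               lam=2.0, mu=40.0,                   # broad along x, tight along y
               c=np.array([5.0, 4.5, 3.5]))        # HDR radiance, (64, 128, 3)
```

A full environment-map prior would sum K such lobes with per-lobe parameters; setting lam = mu recovers an isotropic SG-like lobe, which is why the SG-to-ASG knowledge transfer described in the abstract is natural.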