MMGS: Multi-Model Synergistic Gaussian Splatting for Sparse View Synthesis

IF 4.2, CAS Zone 3 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Changyue Shi, Chuxiao Yang, Xinyuan Hu, Yan Yang, Jiajun Ding, Min Tan
{"title":"多模型协同高斯溅射稀疏视图合成","authors":"Changyue Shi ,&nbsp;Chuxiao Yang ,&nbsp;Xinyuan Hu ,&nbsp;Yan Yang ,&nbsp;Jiajun Ding ,&nbsp;Min Tan","doi":"10.1016/j.imavis.2025.105512","DOIUrl":null,"url":null,"abstract":"<div><div>3D Gaussian Splatting (3DGS) generates a field composed of 3D Gaussians to represent a scene. As the number of input training views decreases, the range of possible solutions that fit only training views expands significantly, making it challenging to identify the optimal result for 3DGS. To this end, a synergistic method is proposed during training and rendering under sparse inputs. The proposed method consists of two main components: Synergistic Transition and Synergistic Rendering. During training, we utilize multiple Gaussian fields to synergize their contributions and determine whether each Gaussian primitive has fallen into an ambiguous region. These regions impede the process for Gaussian primitives to discover alternative positions. This work extends Stochastic Gradient Langevin Dynamic updating and proposes a reformulated version of it. With this reformulation, the Gaussian primitives stuck in ambiguous regions adjust their positions, enabling them to explore an alternative solution. Furthermore, a Synergistic Rendering strategy is implemented during the rendering process. With Gaussian fields trained in the first stage, this approach synergizes the parallel branches to improve the quality of the rendered outputs. With Synergistic Transition and Synergistic Rendering, our method achieves photo-realistic novel view synthesis results under sparse inputs. Extensive experiments demonstrate that our method outperforms previous methods across diverse datasets, including LLFF, Mip-NeRF360, and Blender.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"158 ","pages":"Article 105512"},"PeriodicalIF":4.2000,"publicationDate":"2025-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"MMGS: Multi-Model Synergistic Gaussian Splatting for Sparse View Synthesis\",\"authors\":\"Changyue Shi ,&nbsp;Chuxiao Yang ,&nbsp;Xinyuan Hu ,&nbsp;Yan Yang ,&nbsp;Jiajun Ding ,&nbsp;Min Tan\",\"doi\":\"10.1016/j.imavis.2025.105512\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>3D Gaussian Splatting (3DGS) generates a field composed of 3D Gaussians to represent a scene. As the number of input training views decreases, the range of possible solutions that fit only training views expands significantly, making it challenging to identify the optimal result for 3DGS. To this end, a synergistic method is proposed during training and rendering under sparse inputs. The proposed method consists of two main components: Synergistic Transition and Synergistic Rendering. During training, we utilize multiple Gaussian fields to synergize their contributions and determine whether each Gaussian primitive has fallen into an ambiguous region. These regions impede the process for Gaussian primitives to discover alternative positions. This work extends Stochastic Gradient Langevin Dynamic updating and proposes a reformulated version of it. With this reformulation, the Gaussian primitives stuck in ambiguous regions adjust their positions, enabling them to explore an alternative solution. Furthermore, a Synergistic Rendering strategy is implemented during the rendering process. 
With Gaussian fields trained in the first stage, this approach synergizes the parallel branches to improve the quality of the rendered outputs. With Synergistic Transition and Synergistic Rendering, our method achieves photo-realistic novel view synthesis results under sparse inputs. Extensive experiments demonstrate that our method outperforms previous methods across diverse datasets, including LLFF, Mip-NeRF360, and Blender.</div></div>\",\"PeriodicalId\":50374,\"journal\":{\"name\":\"Image and Vision Computing\",\"volume\":\"158 \",\"pages\":\"Article 105512\"},\"PeriodicalIF\":4.2000,\"publicationDate\":\"2025-03-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Image and Vision Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0262885625001003\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Image and Vision Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0262885625001003","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

3D Gaussian Splatting (3DGS) generates a field composed of 3D Gaussians to represent a scene. As the number of input training views decreases, the range of possible solutions that fit only the training views expands significantly, making it challenging to identify the optimal result for 3DGS. To this end, a synergistic method is proposed for both training and rendering under sparse inputs. The proposed method consists of two main components: Synergistic Transition and Synergistic Rendering. During training, multiple Gaussian fields synergize their contributions to determine whether each Gaussian primitive has fallen into an ambiguous region; such regions impede Gaussian primitives from discovering alternative positions. This work extends Stochastic Gradient Langevin Dynamics (SGLD) updating and proposes a reformulated version of it. With this reformulation, Gaussian primitives stuck in ambiguous regions adjust their positions, enabling them to explore alternative solutions. Furthermore, a Synergistic Rendering strategy is applied during the rendering process: with the Gaussian fields trained in the first stage, it synergizes the parallel branches to improve the quality of the rendered outputs. With Synergistic Transition and Synergistic Rendering, the method achieves photo-realistic novel view synthesis under sparse inputs. Extensive experiments demonstrate that it outperforms previous methods across diverse datasets, including LLFF, Mip-NeRF360, and Blender.
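The abstract builds on Stochastic Gradient Langevin Dynamics but does not state the update rule. As a reference point only, the standard SGLD step applied to a Gaussian primitive's position is sketched below; the symbols (mu_i for the primitive's position, eta for the step size, L for the training loss) are our own notation, and this is not the paper's reformulated version.

\[
\mu_i^{(t+1)} = \mu_i^{(t)} - \eta \,\nabla_{\mu_i} \mathcal{L}\big(\mu_i^{(t)}\big) + \sqrt{2\eta}\,\epsilon_t,
\qquad \epsilon_t \sim \mathcal{N}(0, I).
\]

The injected noise term \(\sqrt{2\eta}\,\epsilon_t\) is what allows a primitive stuck in a flat or ambiguous region of the loss landscape to move toward an alternative position; the reformulation proposed in the paper builds on this behavior.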
Source journal
Image and Vision Computing
Category: Engineering & Technology - Engineering: Electrical & Electronic
CiteScore: 8.50
Self-citation rate: 8.50%
Articles published: 143
Review time: 7.8 months
About the journal: Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.