HR-2DGS: Hybrid regularization for sparse-view 3D reconstruction with 2D Gaussian splatting

Impact Factor: 2.8 | CAS Tier 4 (Computer Science) | JCR Q2 (Computer Science, Software Engineering)
Yong Tang, Jiawen Yan, Yu Li, Yu Liang, Feng Wang, Jing Zhao
{"title":"HR-2DGS: Hybrid regularization for sparse-view 3D reconstruction with 2D Gaussian splatting","authors":"Yong Tang,&nbsp;Jiawen Yan,&nbsp;Yu Li,&nbsp;Yu Liang,&nbsp;Feng Wang,&nbsp;Jing Zhao","doi":"10.1016/j.cag.2025.104444","DOIUrl":null,"url":null,"abstract":"<div><div>Sparse-view 3D reconstruction has garnered widespread attention due to its demand for high-quality reconstruction under low-sampling data conditions. Existing NeRF-based methods rely on dense views and substantial computational resources, while 3DGS is limited by multi-view inconsistency and insufficient geometric detail recovery, making it challenging to achieve ideal results in sparse-view scenarios. This paper introduces HR-2DGS, a novel hybrid regularization framework based on 2D Gaussian Splatting (2DGS), which significantly enhances multi-view consistency and geometric recovery by dynamically fusing monocular depth estimates with rendered depth maps, incorporating hybrid normal regularization techniques. To further refine local details, we introduce a per-pixel depth normalization that leverages each pixel’s neighborhood statistics to emphasize fine-scale geometric variations. Experimental results on the LLFF and DTU datasets demonstrate that HR-2DGS outperforms existing methods in terms of PSNR, SSIM, and LPIPS, while requiring only 2.5GB of memory and a few minutes of training time for efficient training and real-time rendering.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104444"},"PeriodicalIF":2.8000,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers & Graphics-Uk","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0097849325002857","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0

Abstract

Sparse-view 3D reconstruction has garnered widespread attention due to the demand for high-quality reconstruction from limited input data. Existing NeRF-based methods rely on dense views and substantial computational resources, while 3DGS is limited by multi-view inconsistency and insufficient recovery of geometric detail, making it difficult to achieve good results in sparse-view scenarios. This paper introduces HR-2DGS, a novel hybrid regularization framework based on 2D Gaussian Splatting (2DGS), which significantly enhances multi-view consistency and geometric recovery by dynamically fusing monocular depth estimates with rendered depth maps and incorporating hybrid normal regularization. To further refine local details, we introduce a per-pixel depth normalization that leverages each pixel's neighborhood statistics to emphasize fine-scale geometric variations. Experimental results on the LLFF and DTU datasets demonstrate that HR-2DGS outperforms existing methods in PSNR, SSIM, and LPIPS while requiring only 2.5 GB of memory and a few minutes of training, enabling efficient training and real-time rendering.
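
The two depth-related ideas in the abstract lend themselves to a short illustration. Below is a minimal sketch, not the authors' implementation: it assumes a PyTorch setting, and the function names, window size, and loss weight are illustrative assumptions rather than values from the paper. It shows (1) fusing a monocular depth estimate with the rendered depth via a per-image affine alignment (monocular depth is only defined up to scale and shift) followed by an L1 consistency term, and (2) a per-pixel depth normalization that standardizes each pixel against its local neighborhood mean and standard deviation so that fine-scale geometric variation is emphasized.

import torch
import torch.nn.functional as F


def align_monocular_depth(mono: torch.Tensor, rendered: torch.Tensor) -> torch.Tensor:
    """Fit a scale s and shift t so that s * mono + t best matches the rendered
    depth in the least-squares sense (monocular depth is only affine-consistent)."""
    m = mono.flatten()
    r = rendered.flatten()
    A = torch.stack([m, torch.ones_like(m)], dim=1)        # (N, 2) design matrix
    sol = torch.linalg.lstsq(A, r.unsqueeze(1)).solution   # (2, 1): [s, t]
    s, t = sol[0, 0], sol[1, 0]
    return s * mono + t


def local_normalize(depth: torch.Tensor, window: int = 7, eps: float = 1e-6) -> torch.Tensor:
    """Standardize each pixel by the mean/std of its window x window neighborhood,
    so small local depth variations are emphasized rather than absolute scale."""
    d = depth[None, None]                                   # (1, 1, H, W)
    mean = F.avg_pool2d(d, window, stride=1, padding=window // 2)
    var = F.avg_pool2d(d * d, window, stride=1, padding=window // 2) - mean * mean
    std = var.clamp_min(0.0).sqrt()
    return ((d - mean) / (std + eps))[0, 0]


def hybrid_depth_loss(rendered: torch.Tensor, mono: torch.Tensor, w_local: float = 0.5) -> torch.Tensor:
    """Affine-aligned global depth consistency plus a locally normalized detail term.
    `w_local` is an illustrative weight, not a value taken from the paper."""
    aligned = align_monocular_depth(mono, rendered)
    global_term = F.l1_loss(rendered, aligned)
    local_term = F.l1_loss(local_normalize(rendered), local_normalize(aligned))
    return global_term + w_local * local_term

In a 2DGS-style training loop, a term like this would be added to the photometric loss on the rendered views; the actual fusion strategy and the hybrid normal regularization used in HR-2DGS may differ from this sketch.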


Source journal: Computers & Graphics-Uk (Engineering & Technology – Computer Science: Software Engineering)
CiteScore: 5.30
Self-citation rate: 12.00%
Articles published: 173
Review time: 38 days
Journal description: Computers & Graphics is dedicated to disseminating information on research and applications of computer graphics (CG) techniques. The journal encourages articles on:
1. Research and applications of interactive computer graphics. We are particularly interested in novel interaction techniques and applications of CG to problem domains.
2. State-of-the-art papers on late-breaking, cutting-edge research on CG.
3. Information on innovative uses of graphics principles and technologies.
4. Tutorial papers on both teaching CG principles and innovative uses of CG in education.