Junao Shen, Tian Feng, Haojie Dong, Jinkang Ji, Xinyu Wang, Tianjia Shao
Title: SuraGS: Toward efficient few-shot novel view synthesis via surface-aware Gaussian splatting
DOI: 10.1016/j.cag.2025.104349
Journal: Computers & Graphics, Volume 131, Article 104349 (Q2, Computer Science, Software Engineering; Impact Factor 2.8)
Published: 2025-08-05 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0097849325001906

Abstract: Neural radiance fields (NeRF) and 3D Gaussian splatting (3DGS) have benefited the task of novel view synthesis (NVS), which is critical to real-world applications. When the training views are extremely sparse (i.e., few-shot NVS), the synthesized view images may suffer significant quality degradation due to overfitting. Although conventional 3DGS-based methods for few-shot NVS have reached an impressive milestone in synthesis quality and efficiency, they have yet to account for the scene's surface or the redundancy of Gaussians, which causes blurring and artifacts over complex regions of the scene. In this paper, we propose an efficient method for few-shot novel view synthesis with surface-aware Gaussian splatting (SuraGS). Specifically, we design two surface-aware strategies: surface Gaussian optimization and behind-surface Gaussian pruning. The former drives the foremost Gaussian along each ray to adopt appropriate positions and orientations with respect to the scene's surface, and the latter leverages a pruning mask to explicitly remove redundant Gaussians. In addition, we design an auxiliary training strategy based on semi-supervised learning to counteract overfitting. Experiments on different datasets demonstrate that SuraGS outperforms state-of-the-art few-shot NVS methods on various metrics, with substantial efficiency improvements. In particular, the proposed method reduces the number of Gaussians by up to 75% at the inference stage and GPU memory usage by up to 65% at the training stage.
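The behind-surface pruning idea from the abstract can be sketched as follows. This is a hypothetical illustration under standard alpha-compositing assumptions, not the authors' implementation: for each ray, Gaussians are sorted by depth, the transmittance reaching each one is accumulated, and Gaussians that a nearly opaque surface has already occluded (transmittance below a threshold) are marked for pruning. The function name and threshold are illustrative.

```python
import numpy as np

def behind_surface_prune_mask(depths, alphas, t_min=1e-3):
    """For one ray, return a boolean keep-mask over Gaussians.

    Gaussians are sorted front-to-back by depth; the transmittance
    reaching Gaussian i is T_i = prod_{j<i} (1 - alpha_j). Gaussians
    with T_i < t_min lie behind the visible surface and are pruned.
    """
    order = np.argsort(depths)                       # front-to-back order
    a = np.asarray(alphas, dtype=float)[order]
    # Transmittance arriving at each Gaussian (before its own alpha).
    T = np.concatenate(([1.0], np.cumprod(1.0 - a)))[:-1]
    keep_sorted = T >= t_min
    keep = np.empty_like(keep_sorted)
    keep[order] = keep_sorted                        # restore input order
    return keep

# A nearly opaque Gaussian at depth 1.0 occludes those at depths 2.0 and 3.0.
mask = behind_surface_prune_mask(
    depths=[2.0, 1.0, 3.0], alphas=[0.5, 0.9995, 0.4])
```

In this toy example only the front Gaussian survives; the two behind the nearly opaque one receive negligible transmittance and are masked out, which mirrors how an explicit pruning mask could shrink the Gaussian count without changing the rendered surface.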
Journal introduction:
Computers & Graphics is dedicated to disseminating information on research and applications of computer graphics (CG) techniques. The journal encourages articles on:
1. Research and applications of interactive computer graphics. We are particularly interested in novel interaction techniques and applications of CG to problem domains.
2. State-of-the-art papers on late-breaking, cutting-edge research in CG.
3. Information on innovative uses of graphics principles and technologies.
4. Tutorial papers on both teaching CG principles and innovative uses of CG in education.