Edge-aware denoising framework for real-time mobile ray tracing

Impact Factor 2.2 · JCR Q2 (Computer Science, Software Engineering) · CAS Zone 4 (Computer Science)
Haosen Fu, Mingcong Ma, Junqiu Zhu, Lu Wang, Yanning Xu
DOI: 10.1016/j.gmod.2025.101301
Journal: Graphical Models, Volume 141, Article 101301
Published: 2025-08-27 (Journal Article)
Available at: https://www.sciencedirect.com/science/article/pii/S1524070325000487
Citations: 0

Abstract

With the proliferation of mobile hardware-accelerated ray tracing, visual quality at low sampling rates (1spp) significantly deteriorates due to high-frequency noise and temporal artifacts introduced by Monte Carlo path tracing. Traditional spatiotemporal denoising methods, such as Spatiotemporal Variance-Guided Filtering (SVGF), effectively suppress noise by fusing multi-frame information and using geometry buffer (G-buffer) guided filters. However, their reliance on per-frame variance computation and global filtering imposes prohibitive overhead for mobile devices. This paper proposes an edge-aware, data-driven real-time denoising architecture within the SVGF framework, tailored explicitly for mobile computational constraints. Our method introduces two key innovations that eliminate variance estimation overhead: (1) an adaptive filtering kernel sizing mechanism, which dynamically adjusts filtering scope based on local complexity analysis of the G-buffer; and (2) a data-driven weight table construction strategy, converting traditional computational processes into efficient real-time lookup operations. These innovations significantly enhance processing efficiency while preserving edge accuracy. Experimental results on the Qualcomm Snapdragon 768G platform demonstrate that our method achieves 55 FPS with 1spp input. This frame rate is 67.42% higher than mobile-optimized SVGF, provides better visual quality, and reduces power consumption by 16.80%. Our solution offers a practical and efficient denoising framework suitable for real-time ray tracing in mobile gaming and AR/VR applications.
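The abstract's two innovations, per-pixel kernel sizing from G-buffer complexity and a precomputed weight table replacing per-pixel edge-stopping math, can be illustrated with a short sketch. This is not the paper's implementation: the complexity metric, thresholds, LUT resolution, and all function names below are assumptions chosen for clarity, loosely following the SVGF-style exponential edge-stopping weight.

```python
import numpy as np

def kernel_radius_from_gbuffer(depth, normals, max_radius=4):
    """Pick a per-pixel filter radius from local G-buffer complexity:
    flat regions (small depth gradient, uniform normals) get wide
    kernels; geometric edges get narrow ones."""
    # Depth gradient magnitude as a cheap edge indicator.
    gy, gx = np.gradient(depth)
    depth_edge = np.hypot(gx, gy)
    # Normal variation: 1 - dot(n, neighbor n) grows near creases
    # (a one-pixel shift stands in for a neighborhood average here).
    shifted = np.roll(normals, 1, axis=0)
    normal_edge = 1.0 - np.clip(np.sum(normals * shifted, axis=-1), 0.0, 1.0)
    complexity = np.clip(depth_edge + normal_edge, 0.0, 1.0)
    # High complexity -> radius 1; low complexity -> max_radius.
    return np.maximum(1, np.round((1.0 - complexity) * max_radius)).astype(int)

def build_weight_lut(sigma, n_bins=256, max_diff=1.0):
    """Precompute the edge-stopping weight exp(-d / sigma) over a
    quantized difference range, so the filter loop does a table
    lookup instead of evaluating exp() per pixel pair."""
    d = np.linspace(0.0, max_diff, n_bins)
    return np.exp(-d / sigma)

def lookup_weight(lut, diff, max_diff=1.0):
    """Map an absolute feature difference to its precomputed weight."""
    idx = np.minimum((np.abs(diff) / max_diff * (len(lut) - 1)).astype(int),
                     len(lut) - 1)
    return lut[idx]
```

On a perfectly flat G-buffer region this sketch returns the maximum radius everywhere, and `lookup_weight` returns 1.0 for a zero feature difference, decaying toward exp(-max_diff / sigma) at the edge of the table.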
Source journal: Graphical Models (Engineering & Technology - Computer Science: Software Engineering)
CiteScore: 3.60
Self-citation rate: 5.90%
Articles per year: 15
Review time: 47 days
Journal description: Graphical Models is recognized internationally as a highly rated, top-tier journal focused on the creation, geometric processing, animation, and visualization of graphical models and on their applications in engineering, science, culture, and entertainment. GMOD provides its readers with thoroughly reviewed and carefully selected papers that disseminate exciting innovations, teach rigorous theoretical foundations, propose robust and efficient solutions, or describe ambitious systems or applications across a variety of topics. We invite papers in five categories: research (contributions of novel theoretical or practical approaches or solutions), survey (opinionated views of the state of the art and challenges in a specific topic), system (the architecture and implementation details of an innovative architecture for a complete system that supports model/animation design, acquisition, analysis, and visualization), application (description of a novel application of known techniques and evaluation of its impact), or lecture (an elegant and inspiring perspective on previously published results that clarifies them and teaches them in a new way). GMOD offers its authors an accelerated review, feedback from experts in the field, immediate online publication of accepted papers, no restriction on color and length (when justified by the content) in the online version, and broad promotion of published papers. A prestigious group of editors, selected from among the premier international researchers in their fields, oversees the review process.