Real-time Neural Denoising for Volume Rendering Using Dual-Input Feature Fusion Network

IF 2.9 · CAS Tier 4 (Computer Science) · JCR Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING
Chunxiao Xu, Xinran Xu, Jiatian Zhang, Yufei Liu, Yiheng Cao, Lingxiao Zhao
DOI: 10.1111/cgf.70276
Journal: Computer Graphics Forum, vol. 44, no. 6
Publication date: 2025-09-16 (Journal Article)
Full text: https://onlinelibrary.wiley.com/doi/10.1111/cgf.70276
Citations: 0

Abstract


Real-time Neural Denoising for Volume Rendering Using Dual-Input Feature Fusion Network

Direct volume rendering (DVR) is a widely used technique in the visualisation of volumetric data. As an important DVR technique, volumetric path tracing (VPT) simulates light transport to produce realistic rendering results, which provides enhanced perception and understanding for users, especially in the field of medical imaging. VPT, based on the Monte Carlo (MC) method, typically requires a large number of samples to generate noise-free results. However, in real-time applications, only a limited number of samples per pixel is allowed and significant noise can be created. This paper introduces a novel neural denoising approach that utilises a new feature fusion method for VPT. Our method uses a feature decomposition technique that separates radiance into components according to noise levels. Our new decomposition technique mitigates biases found in the contemporary decoupling denoising algorithm and shows better utilisation of samples. A lightweight dual-input network is designed to correlate these components with noise-free ground truth. Additionally, for denoising sequences of video frames, we develop a learning-based temporal method that calculates temporal weight maps, blending reprojected results of previous frames with spatially denoised current frames. Comparative results demonstrate that our network performs faster inference than existing methods and can produce denoised output of higher quality in real time.
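The temporal step described in the abstract blends a reprojected previous frame with the spatially denoised current frame using per-pixel temporal weights. A minimal sketch of that blend is below; note the paper predicts the weight map with a learned network, whereas the constant weight map and all names here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def temporal_blend(denoised_curr, reprojected_prev, weight):
    """Per-pixel blend: w * history + (1 - w) * current frame."""
    w = np.clip(weight, 0.0, 1.0)[..., None]  # add channel axis to broadcast over RGB
    return w * reprojected_prev + (1.0 - w) * denoised_curr

# Toy 2x2 RGB frames (values chosen for easy arithmetic)
curr = np.full((2, 2, 3), 0.8, dtype=np.float32)  # spatially denoised current frame
prev = np.full((2, 2, 3), 0.4, dtype=np.float32)  # previous frame after reprojection
w    = np.full((2, 2), 0.5, dtype=np.float32)     # temporal weight map (learned in the paper)

out = temporal_blend(curr, prev, w)
# each pixel: 0.5 * 0.4 + 0.5 * 0.8 = 0.6
```

A higher weight leans on the reprojected history (more temporal stability, risk of ghosting); a lower weight trusts the current frame (less lag, more residual noise) — which is presumably why the paper learns the map rather than fixing it.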

Source journal: Computer Graphics Forum (Engineering & Technology – Computer Science: Software Engineering)
CiteScore: 5.80
Self-citation rate: 12.00%
Annual publications: 175
Review time: 3-6 weeks
Journal description: Computer Graphics Forum is the official journal of Eurographics, published in cooperation with Wiley-Blackwell, and is a unique, international source of information for computer graphics professionals interested in graphics developments worldwide. It is now one of the leading journals for researchers, developers and users of computer graphics in both commercial and academic environments. The journal reports on the latest developments in the field throughout the world and covers all aspects of the theory, practice and application of computer graphics.