Delving into Invisible Semantics for Generalized One-shot Neural Human Rendering.

Yihong Lin, Xuemiao Xu, Huaidong Zhang, Cheng Xu, Weijie Li, Yi Xie, Jing Qin, Shengfeng He
{"title":"Delving into Invisible Semantics for Generalized One-shot Neural Human Rendering.","authors":"Yihong Lin, Xuemiao Xu, Huaidong Zhang, Cheng Xu, Weijie Li, Yi Xie, Jing Qin, Shengfeng He","doi":"10.1109/TVCG.2025.3563229","DOIUrl":null,"url":null,"abstract":"<p><p>Traditional human neural radiance fields often overlook crucial body semantics, resulting in ambiguous reconstructions, particularly in occluded regions. To address this problem, we propose the Super-Semantic Disentangled Neural Renderer (SSD-NeRF), which employs rich regional semantic priors to enhance human rendering accuracy. This approach initiates with a Visible-Invisible Semantic Propagation module, ensuring coherent semantic assignment to occluded parts based on visible body segments. Furthermore, a Region-Wise Texture Propagation module independently extends textures from visible to occluded areas within semantic regions, thereby avoiding irrelevant texture mixtures and preserving semantic consistency. Additionally, a view-aware curricular learning approach is integrated to bolster the model's robustness and output quality across different viewpoints. 
Extensive evaluations confirm that SSD-NeRF surpasses leading methods, particularly in generating quality and structurally semantic reconstructions of unseen or occluded views and poses.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on visualization and computer graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TVCG.2025.3563229","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Traditional human neural radiance fields often overlook crucial body semantics, resulting in ambiguous reconstructions, particularly in occluded regions. To address this problem, we propose the Super-Semantic Disentangled Neural Renderer (SSD-NeRF), which employs rich regional semantic priors to enhance human rendering accuracy. This approach begins with a Visible-Invisible Semantic Propagation module, which ensures coherent semantic assignment to occluded parts based on visible body segments. Furthermore, a Region-Wise Texture Propagation module independently extends textures from visible to occluded areas within each semantic region, thereby avoiding irrelevant texture mixtures and preserving semantic consistency. Additionally, a view-aware curriculum learning approach is integrated to bolster the model's robustness and output quality across different viewpoints. Extensive evaluations confirm that SSD-NeRF surpasses leading methods, particularly in generating high-quality, structurally semantic reconstructions of unseen or occluded views and poses.
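To give a flavor of the visible-to-invisible idea, the sketch below propagates body-part labels from visible 3D points to occluded ones by a simple nearest-visible-neighbor rule. This is a minimal illustration under assumed names and a nearest-neighbor criterion; the paper's actual Visible-Invisible Semantic Propagation module is a learned component, not this heuristic.

```python
import numpy as np

def propagate_labels(points, labels, visible):
    """Assign each occluded point the label of its nearest visible point.

    points:  (N, 3) float array of 3D positions.
    labels:  (N,) int body-part labels, valid only where visible is True.
    visible: (N,) bool visibility mask.
    Returns a full (N,) label array.
    """
    out = labels.copy()
    vis_pts = points[visible]
    vis_lab = labels[visible]
    for i in np.flatnonzero(~visible):
        # Euclidean distance to every visible point; copy the closest label.
        d = np.linalg.norm(vis_pts - points[i], axis=1)
        out[i] = vis_lab[np.argmin(d)]
    return out

# Toy example: two visible points with labels 0 and 1, one occluded point
# near the first; the occluded point inherits label 0.
pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
lab = np.array([0, 1, -1])
vis = np.array([True, True, False])
print(propagate_labels(pts, lab, vis))  # → [0 1 0]
```

In practice a KD-tree (e.g. `scipy.spatial.cKDTree`) would replace the per-point distance loop for large point sets.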
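The view-aware curriculum can be pictured as ordering training viewpoints from easy to hard. The sketch below uses angular distance from a reference camera as the difficulty measure; that criterion and all names here are assumptions for illustration, not the paper's actual schedule.

```python
import math

def view_angle(ref_dir, cam_dir):
    """Angle in radians between two unit view directions."""
    dot = sum(a * b for a, b in zip(ref_dir, cam_dir))
    # Clamp against floating-point drift before acos.
    return math.acos(max(-1.0, min(1.0, dot)))

def curriculum_order(ref_dir, cam_dirs):
    """Indices of cameras sorted easy (small angle) to hard (large angle)."""
    return sorted(range(len(cam_dirs)),
                  key=lambda i: view_angle(ref_dir, cam_dirs[i]))

# Toy example: reference looks along +z; the frontal view (index 1) comes
# first, the 45-degree view (index 2) next, the side view (index 0) last.
ref = (0.0, 0.0, 1.0)
cams = [(0.0, 1.0, 0.0), (0.0, 0.0, 1.0), (0.70710678, 0.0, 0.70710678)]
print(curriculum_order(ref, cams))  # → [1, 2, 0]
```

A training loop would then sample batches following this ordering, gradually admitting harder viewpoints as optimization progresses.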
