Fiber-level Woven Fabric Capture from a Single Photo

Zixuan Li, Pengfei Shen, Hanxiao Sun, Zibo Zhang, Yu Guo, Ligang Liu, Ling-Qi Yan, Steve Marschner, Milos Hasan, Beibei Wang
arXiv - CS - Graphics · arXiv:2409.06368 · published 2024-09-10

Abstract

Accurately rendering the appearance of fabrics is challenging due to their complex 3D microstructures and specialized optical properties. If we model the geometry and optics of fabrics down to the fiber level, we can achieve unprecedented rendering realism, but this raises the difficulty of authoring or capturing the fiber-level assets. Existing approaches obtain fiber-level geometry with special devices (e.g., CT scanners) or with complex hand-designed procedural pipelines that require manually tweaking a set of parameters. In this paper, we propose a unified framework to capture the fiber-level geometry and appearance of woven fabrics from a single low-cost microscope image. We first use a simple neural network to predict initial parameters of our geometric and appearance models. From this starting point, we further optimize the parameters of the procedural fiber geometry and an approximate shading model via differentiable rasterization to match the microscope photo more accurately. Finally, we refine the fiber appearance parameters via differentiable path tracing, converging to accurate fiber optical parameters suitable for physically-based light simulation that produces high-quality rendered results. We believe our method is the first to utilize differentiable rendering at the microscopic level, supporting physically-based scattering from explicit fiber assemblies. Our fabric parameter estimation achieves high-quality re-rendering of measured woven fabric samples in both distant and close-up views. These results can further be used for efficient rendering or converted to downstream representations. We also propose a patch-space procedural fiber-geometry generation method and a two-scale path-tracing framework for efficient rendering of fabric scenes.
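The pipeline above is an analysis-by-synthesis loop: a differentiable "render" maps fabric parameters to an image, and the parameters are updated to reduce an image-space loss against the captured photo, starting from a network-predicted initialization. The following toy sketch illustrates that loop only; all names are hypothetical, the 1D "weave profile" stands in for the real renderer, and finite differences stand in for the autodiff gradients the paper obtains from differentiable rasterization and path tracing.

```python
import math

def render(amp, offset, xs):
    # Hypothetical 1D "image": a sinusoidal weave profile whose amplitude
    # and offset play the role of geometric / appearance parameters.
    return [amp * math.sin(3.0 * x) + offset for x in xs]

def loss(amp, offset, xs, target):
    # Image-space L2 loss against the captured "photo".
    preds = render(amp, offset, xs)
    return sum((p - t) ** 2 for p, t in zip(preds, target)) / len(xs)

def grads(amp, offset, xs, target, eps=1e-5):
    # Central finite differences stand in for autodiff gradients.
    ga = (loss(amp + eps, offset, xs, target)
          - loss(amp - eps, offset, xs, target)) / (2 * eps)
    go = (loss(amp, offset + eps, xs, target)
          - loss(amp, offset - eps, xs, target)) / (2 * eps)
    return ga, go

xs = [2.0 * math.pi * i / 255 for i in range(256)]
target = render(1.5, 0.2, xs)   # the "microscope photo" (known ground truth)
amp, offset = 1.0, 0.0          # network-predicted initialization

for _ in range(500):            # gradient descent toward the photo
    ga, go = grads(amp, offset, xs, target)
    amp -= 0.1 * ga
    offset -= 0.1 * go
```

After the loop, `amp` and `offset` recover the ground-truth values used to synthesize the target, mirroring how the optimized procedural parameters reproduce the measured sample.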