CT2Hair: High-Fidelity 3D Hair Modeling using Computed Tomography

Yuefan Shen, Shunsuke Saito, Ziyan Wang, O. Maury, Chenglei Wu, J. Hodgins, Youyi Zheng, Giljoo Nam
{"title":"CT2Hair:高保真3D头发建模使用计算机断层扫描","authors":"Yuefan Shen, Shunsuke Saito, Ziyan Wang, O. Maury, Chenglei Wu, J. Hodgins, Youyi Zheng, Giljoo Nam","doi":"10.1145/3592106","DOIUrl":null,"url":null,"abstract":"We introduce CT2Hair, a fully automatic framework for creating high-fidelity 3D hair models that are suitable for use in downstream graphics applications. Our approach utilizes real-world hair wigs as input, and is able to reconstruct hair strands for a wide range of hair styles. Our method leverages computed tomography (CT) to create density volumes of the hair regions, allowing us to see through the hair unlike image-based approaches which are limited to reconstructing the visible surface. To address the noise and limited resolution of the input density volumes, we employ a coarse-to-fine approach. This process first recovers guide strands with estimated 3D orientation fields, and then populates dense strands through a novel neural interpolation of the guide strands. The generated strands are then refined to conform to the input density volumes. We demonstrate the robustness of our approach by presenting results on a wide variety of hair styles and conducting thorough evaluations on both real-world and synthetic datasets. Code and data for this paper are at github.com/facebookresearch/CT2Hair.","PeriodicalId":7077,"journal":{"name":"ACM Transactions on Graphics (TOG)","volume":"5 1","pages":"1 - 13"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"CT2Hair: High-Fidelity 3D Hair Modeling using Computed Tomography\",\"authors\":\"Yuefan Shen, Shunsuke Saito, Ziyan Wang, O. Maury, Chenglei Wu, J. Hodgins, Youyi Zheng, Giljoo Nam\",\"doi\":\"10.1145/3592106\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We introduce CT2Hair, a fully automatic framework for creating high-fidelity 3D hair models that are suitable for use in downstream graphics applications. Our approach utilizes real-world hair wigs as input, and is able to reconstruct hair strands for a wide range of hair styles. Our method leverages computed tomography (CT) to create density volumes of the hair regions, allowing us to see through the hair unlike image-based approaches which are limited to reconstructing the visible surface. To address the noise and limited resolution of the input density volumes, we employ a coarse-to-fine approach. This process first recovers guide strands with estimated 3D orientation fields, and then populates dense strands through a novel neural interpolation of the guide strands. The generated strands are then refined to conform to the input density volumes. We demonstrate the robustness of our approach by presenting results on a wide variety of hair styles and conducting thorough evaluations on both real-world and synthetic datasets. 
Code and data for this paper are at github.com/facebookresearch/CT2Hair.\",\"PeriodicalId\":7077,\"journal\":{\"name\":\"ACM Transactions on Graphics (TOG)\",\"volume\":\"5 1\",\"pages\":\"1 - 13\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Graphics (TOG)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3592106\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Graphics (TOG)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3592106","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

We introduce CT2Hair, a fully automatic framework for creating high-fidelity 3D hair models that are suitable for use in downstream graphics applications. Our approach utilizes real-world hair wigs as input, and is able to reconstruct hair strands for a wide range of hair styles. Our method leverages computed tomography (CT) to create density volumes of the hair regions, allowing us to see through the hair, unlike image-based approaches, which are limited to reconstructing the visible surface. To address the noise and limited resolution of the input density volumes, we employ a coarse-to-fine approach. This process first recovers guide strands with estimated 3D orientation fields, and then populates dense strands through a novel neural interpolation of the guide strands. The generated strands are then refined to conform to the input density volumes. We demonstrate the robustness of our approach by presenting results on a wide variety of hair styles and conducting thorough evaluations on both real-world and synthetic datasets. Code and data for this paper are at github.com/facebookresearch/CT2Hair.
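
The abstract describes a coarse-to-fine pipeline: estimate a 3D orientation field from the CT density volume, recover guide strands, densify them via neural interpolation, and refine the strands against the density volume. As a rough illustration of the first step only, the sketch below estimates a per-voxel orientation from a scalar density volume with a standard structure-tensor analysis. This is not taken from the CT2Hair code; the function name, parameters, and the structure-tensor choice are assumptions made for illustration, and the paper's actual estimator may differ.

# Minimal, hypothetical sketch (NumPy/SciPy only): per-voxel 3D orientation
# from a density volume via the structure tensor. NOT the CT2Hair code.
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_orientation_field(density: np.ndarray,
                               grad_sigma: float = 1.0,
                               window_sigma: float = 2.0) -> np.ndarray:
    """Return a (D, H, W, 3) array of vectors pointing along local density
    ridges (e.g. hair strands) in a (D, H, W) density volume."""
    # 1. Smooth the volume and compute spatial gradients (axes: z, y, x).
    smoothed = gaussian_filter(density.astype(np.float32), grad_sigma)
    gz, gy, gx = np.gradient(smoothed)

    # 2. Accumulate the 3x3 structure tensor over a local Gaussian window.
    grads = (gx, gy, gz)
    tensor = np.empty(density.shape + (3, 3), dtype=np.float32)
    for i in range(3):
        for j in range(3):
            tensor[..., i, j] = gaussian_filter(grads[i] * grads[j], window_sigma)

    # 3. The eigenvector with the smallest eigenvalue is the direction in
    #    which the density varies least, i.e. along the strand (sign is
    #    ambiguous, as orientation fields usually are).
    _, eigvecs = np.linalg.eigh(tensor)   # eigenvalues in ascending order
    return eigvecs[..., 0]                # eigenvector of smallest eigenvalue

if __name__ == "__main__":
    # Toy volume: a single straight "strand" along the last (x) axis.
    vol = np.zeros((32, 32, 32), dtype=np.float32)
    vol[16, 16, :] = 1.0
    field = estimate_orientation_field(vol)
    print(field[16, 16, 16])              # dominant component along x

In a pipeline like the one described, such an orientation field would seed guide-strand tracing before neural interpolation and refinement; for the authors' actual implementation, see the released code at github.com/facebookresearch/CT2Hair.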