Toward image-based facial hair modeling

Tomás Lay, A. Zinke, A. Weber, T. Vetter
{"title":"Toward image-based facial hair modeling","authors":"Tomás Lay, A. Zinke, A. Weber, T. Vetter","doi":"10.1145/1925059.1925077","DOIUrl":null,"url":null,"abstract":"In this paper we present a novel efficient and fully automated technique to synthesize realistic facial hair---such as beards and eyebrows---on 3D head models. The method requires registered texture images of a target model on which hair needs to be generated. In a first stage of our two-step approach a statistical measure for hair density is computed for each pixel of the texture. In addition, other geometric features such as 2D pixel orientations are extracted, which are subsequently used to generate a 3D model of the individual hair strands. Missing or incomplete information is estimated based on statistical models derived from a database of texture images of over 70 individuals. Using the new approach, characteristics of the hair extracted from a given head may be also transferred to another target.","PeriodicalId":235681,"journal":{"name":"Spring conference on Computer graphics","volume":"115 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Spring conference on Computer graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/1925059.1925077","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

In this paper we present a novel, efficient, and fully automated technique to synthesize realistic facial hair, such as beards and eyebrows, on 3D head models. The method requires registered texture images of a target model on which hair is to be generated. In the first stage of our two-step approach, a statistical measure of hair density is computed for each pixel of the texture. In addition, other geometric features, such as 2D pixel orientations, are extracted and subsequently used to generate a 3D model of the individual hair strands. Missing or incomplete information is estimated using statistical models derived from a database of texture images of over 70 individuals. With the new approach, hair characteristics extracted from a given head may also be transferred to another target.
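The abstract does not specify how the per-pixel density statistic or the 2D orientations are computed. The sketch below is only an illustration of the kind of per-pixel analysis the first stage describes, assuming a structure-tensor orientation estimate and a local gradient-energy proxy for hair density; neither is confirmed to be the paper's actual method.

```python
# Illustrative sketch only: the paper's actual density measure and
# orientation estimator are not given in this abstract. A structure
# tensor yields a dominant 2D orientation per pixel, and local
# gradient energy serves as a crude stand-in for hair density.
import numpy as np
from scipy import ndimage


def orientation_and_density(texture_gray, sigma=2.0):
    """texture_gray: 2D float array (grayscale registered texture image)."""
    # Image gradients.
    gx = ndimage.sobel(texture_gray, axis=1)
    gy = ndimage.sobel(texture_gray, axis=0)

    # Smoothed structure-tensor components.
    jxx = ndimage.gaussian_filter(gx * gx, sigma)
    jyy = ndimage.gaussian_filter(gy * gy, sigma)
    jxy = ndimage.gaussian_filter(gx * gy, sigma)

    # Dominant gradient direction; hair strands run roughly
    # perpendicular to it, so rotate by 90 degrees (modulo pi).
    grad_dir = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    strand_dir = (grad_dir + np.pi / 2.0) % np.pi

    # Local gradient energy, normalized to [0, 1], as a rough
    # per-pixel "hairiness" proxy.
    density = np.sqrt(jxx + jyy)
    density /= density.max() + 1e-8

    return strand_dir, density
```

In a pipeline like the one the abstract outlines, such per-pixel orientation and density maps would then drive 3D strand generation on the registered head model, with the statistical models filling in pixels where the local estimate is missing or unreliable.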