2008 IEEE International Conference on Shape Modeling and Applications — Latest Publications

SHape REtrieval contest 2008: Stability of watertight models
2008 IEEE International Conference on Shape Modeling and Applications · Pub Date: 2008-06-04 · DOI: 10.1109/SMI.2008.4547975
S. Biasotti, M. Attene
Abstract: In this report we present the results of the Stability on Watertight Models track. The aim of this track is to evaluate the stability of algorithms with respect to input perturbations that modify the representation of an object without significantly changing its overall shape. Examples of such perturbations include geometric noise, varying sampling patterns, small shape deformations, and topological noise.
Citations: 31
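The perturbation classes named in the abstract are easy to reproduce offline. The sketch below is a minimal illustration of the geometric-noise case only (it is not the track's benchmark code; the function name, the noise amplitude, and the NumPy-based mesh representation are assumptions): a stable retrieval descriptor should change little when such noise is applied.

```python
# Minimal sketch (not from the benchmark): apply the kind of geometric noise
# the track uses to test descriptor stability -- small random vertex offsets
# whose magnitude is a fraction of the bounding-box diagonal.
import numpy as np

def add_geometric_noise(vertices, amplitude=0.005, seed=0):
    """Perturb an (N, 3) vertex array without changing the overall shape much."""
    rng = np.random.default_rng(seed)
    bbox_diag = np.linalg.norm(vertices.max(axis=0) - vertices.min(axis=0))
    return vertices + rng.normal(scale=amplitude * bbox_diag, size=vertices.shape)

# A stable descriptor d(.) should satisfy d(mesh) ~= d(noisy mesh).
```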
SHape REtrieval contest 2008: 3D face scans
2008 IEEE International Conference on Shape Modeling and Applications · Pub Date: 2008-06-04 · DOI: 10.1109/SMI.2008.4547979
F. T. Haar, M. Daoudi, R. Veltkamp
Abstract: Three-dimensional face recognition is a challenging task with a large number of proposed solutions [1, 2]. With variations in pose and expression, the identification of a face scan based on 3D geometry is difficult. To improve on this task and to evaluate existing face-matching methods, large sets of 3D faces have been constructed, such as the FRGC [3], BU-3DFE [4], and GavabDB [5] databases. When used in the same experimental way, these publicly available sets allow for a fair comparison of different methods. Usually, researchers compare the recognition rates (or identification rates) of different methods. To identify a person, his or her 3D face scan is enrolled as a query in the database; if the most similar scan in the database (other than the query itself) belongs to the same person, he or she is identified correctly. For a set of queries, the recognition rate is computed as the average over zeros (no identification) and ones (correct identification). However, the recognition rate is a limited evaluation measure, because it considers merely the closest match of each query. If a database contains two scans per expression per subject and each scan is used as a query once, the similar scan is bound to appear at the top of the ranked list. Such an experiment boosts the recognition rate but gives no insight into the expression invariance of different methods. For that, an evaluation measure is required that takes a larger part of the ranked list into account. In this contest we compare different face-matching methods using a large number of performance measures. As a test set we used a processed subset of the GavabDB [5], which contains several expressions and pose variations per subject.
Database: For the retrieval contest of 3D faces we used a subset of the GavabDB [5]. The GavabDB consists of Minolta Vi-700 laser range scans of 61 different subjects. The subjects, of which 45 are male and 16 are female, are all Caucasian. Each subject was scanned nine times for different poses and expressions: six neutral-expression scans and three scans with an expression. The neutral scans include two different frontal scans, one scan while looking up (+35°), one scan while looking down (−35°), one scan from the right side (+90°), and one from the left side (−90°). The expression scans include one with a smile, one with a pronounced laugh, and an "arbitrary expression" freely chosen by the subject.
Citations: 28
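The rank-1 recognition rate described above can be computed directly from a pairwise dissimilarity matrix. The snippet below is a minimal sketch of that measure (the function name, the NumPy distance-matrix layout, and the label array are assumptions, not the contest's evaluation scripts); measures that look deeper into the ranked list, as advocated in the abstract, would replace the single argmin with a full ranking per query.

```python
# Sketch (assumed setup, not the contest's evaluation code): rank-1 recognition
# rate from a square dissimilarity matrix, excluding the query itself as a match.
import numpy as np

def recognition_rate(dist, labels):
    """dist: (N, N) pairwise dissimilarities; labels[i] = subject id of scan i."""
    dist = np.asarray(dist, dtype=float).copy()
    labels = np.asarray(labels)
    np.fill_diagonal(dist, np.inf)          # a scan may not match itself
    nearest = dist.argmin(axis=1)           # closest other scan per query
    hits = labels[nearest] == labels        # 1 = correct identification, 0 = miss
    return hits.mean()
```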
SHREC’08 entry: Forward neural network-based 3D model retrieval
2008 IEEE International Conference on Shape Modeling and Applications · Pub Date: 2008-06-04 · DOI: 10.1109/SMI.2008.4547992
Yujie Liu, Xiaolan Yao, Zongmin Li
Abstract: In this paper, a forward neural network (FNN) is used for 3D model retrieval. In addition, a descriptor based on exponentially decaying Euclidean distance transforms (EDT) is adapted to represent the features of a 3D model. As a machine-learning method, the FNN is trained on the PSB training data and then used to sort the test data set in this contest.
Citations: 2
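The pipeline in the abstract — train a feed-forward network on labelled PSB descriptors, then use it to order the contest's test set — can be illustrated roughly as follows. This is a hedged sketch, not the authors' implementation: the EDT descriptor is replaced by random placeholder vectors, scikit-learn's MLPClassifier stands in for their FNN, and ranking by similarity of predicted class distributions is one plausible way to "sort the test data", not necessarily theirs.

```python
# Sketch of the general idea (placeholder data, assumed ranking rule).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 64))        # stand-in for EDT-based descriptors
y_train = rng.integers(0, 10, size=200)     # stand-in for PSB class labels
X_test = rng.normal(size=(50, 64))          # stand-in for contest test descriptors

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
net.fit(X_train, y_train)                   # supervised training on labelled data

probs = net.predict_proba(X_test)           # per-model class distributions
query = probs[0]
ranking = np.argsort(np.linalg.norm(probs - query, axis=1))  # closest first
```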
SHREC’08 entry: Visual based 3D CAD retrieval using Fourier Mellin Transform
2008 IEEE International Conference on Shape Modeling and Applications · Pub Date: 2008-06-04 · DOI: 10.1109/SMI.2008.4547984
Xiaolan Li, A. Godil, A. I. Wagan
Abstract: The Fourier-Mellin Transform (FMT) has been used effectively in previous work for 2D image analysis, reconstruction, and retrieval. In this paper, we perform FMT-based 3D shape retrieval on the Purdue Shape Benchmark. The procedure comprises three steps: 1) generate silhouettes along the six principal directions of each 3D model; 2) compute a collection of FMT coefficients for all silhouettes, which are translation, scale, and rotation invariant; and 3) compute a match measure between the query's coefficient collection and those in the 3D shape repository. The main contribution of this paper is the novel approach of extracting 3D signatures with the Fourier-Mellin Transform. Our experimental results validate the effectiveness of the approach.
Citations: 6
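Step 2 relies on the classical Fourier-Mellin invariance chain: an FFT magnitude removes translation, a log-polar resampling turns rotation and scaling into circular shifts, and a second FFT magnitude removes those shifts. The sketch below illustrates that chain for a single silhouette image (the function name, sampling resolutions, and interpolation choices are assumptions; this is not the authors' exact coefficient set).

```python
# Illustrative Fourier-Mellin style invariant for one silhouette image.
import numpy as np
from scipy.ndimage import map_coordinates

def fourier_mellin_descriptor(img, n_r=32, n_theta=32):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))       # translation-invariant
    cy, cx = np.array(spectrum.shape) / 2.0
    log_r = np.exp(np.linspace(0.0, np.log(min(cy, cx)), n_r))  # log-spaced radii
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(log_r, theta, indexing="ij")
    coords = np.array([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
    logpolar = map_coordinates(spectrum, coords, order=1, mode="nearest")
    return np.abs(np.fft.fft2(logpolar)).ravel()                # rotation/scale-invariant

# A model's signature would concatenate the descriptors of its six silhouettes.
```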
Salient local visual features for shape-based 3D model retrieval
2008 IEEE International Conference on Shape Modeling and Applications · Pub Date: 2008-06-04 · DOI: 10.1109/SMI.2008.4547955
Ryutarou Ohbuchi, Kunio Osada, T. Furuya, T. Banno
Abstract: In this paper, we describe a shape-based 3D model retrieval method based on multi-scale local visual features. The features are extracted from 2D range images of the model viewed from uniformly sampled locations on a view sphere. The method is appearance-based and accepts any model that can be rendered as a range image. For each range image, a set of 2D multi-scale local visual features is computed using the Scale-Invariant Feature Transform (SIFT) [22] algorithm. To reduce the cost of distance computation and feature storage, the set of local features describing a 3D model is integrated into a histogram using the bag-of-features approach. Our experiments using two standard benchmarks, one for articulated shapes and the other for rigid shapes, show that the method achieves performance comparable or superior to some of the most powerful 3D shape retrieval methods.
Citations: 300
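The bag-of-features integration step mentioned above is the standard codebook-plus-histogram construction. A minimal sketch, assuming the SIFT descriptors have already been extracted from the rendered range images and stacked into NumPy arrays (function names and the codebook size are placeholders, not the authors' settings):

```python
# Sketch of the bag-of-features step: quantise local descriptors against a
# k-means codebook and accumulate one normalised histogram per 3D model.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(all_local_descriptors, k=256):
    """all_local_descriptors: (M, 128) stack of SIFT-like vectors from the corpus."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_local_descriptors)

def bag_of_features(model_descriptors, codebook):
    """model_descriptors: local features from all views of one model."""
    words = codebook.predict(model_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)   # one fixed-length vector per 3D model
```

Two models are then compared by a histogram distance, which is far cheaper than matching the raw sets of local features directly.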
SHape REtrieval contest 2008: Generic models
2008 IEEE International Conference on Shape Modeling and Applications · Pub Date: 2008-06-04 · DOI: 10.1109/SMI.2008.4547978
Ryutarou Ohbuchi
Abstract: The first of the SHREC series of 3D model retrieval contests, SHREC 2006 [5], organized by Prof. Veltkamp et al., has made an impact on the way researchers compare the performance of their 3D model retrieval methods. The task was to retrieve polygon-soup models found in the Princeton Shape Benchmark database [5], which have diverse shapes and semantics. While many researchers have used SHREC 2006 as their benchmark, there has been no "official" contest since 2006 that used the same SHREC 2006 format with up-to-date algorithms and methods. SHREC 2007 added new tracks, e.g., for 3D face models, watertight models, protein models, CAD models, partial matching, and relevance feedback; however, the SHREC 2006 format was missing. The SHREC 2008 Generic Models Track (GMT) tries to repeat SHREC 2006 so that state-of-the-art methods for polygon-soup models can be compared using a stable benchmark dataset and ground-truth classifications. A change from SHREC 2006 to the SHREC 2008 GMT is the acknowledgement of learning-based algorithms for 3D model retrieval. The SHREC 2008 GMT has two entry categories depending on whether supervised learning is used. We wanted to encourage various forms of learning algorithms, as we believe learning algorithms are as essential as features themselves for effective 3D model retrieval. At the same time, we did not want to discourage methods without supervised learning, so we created two sub-tracks, one for unsupervised methods and the other for supervised methods. To test the behavior of supervised methods on queries with "unseen" ground-truth classifications, we added a new set of queries in addition to the original set used in SHREC 2006.
Citations: 2
Example based skeletonization using harmonic one-forms
2008 IEEE International Conference on Shape Modeling and Applications · Pub Date: 2008-06-04 · DOI: 10.1109/SMI.2008.4547949
Ying He, Xian Xiao, S. H. Soon
Abstract: This paper presents a method to extract skeletons using examples. Our method is based on the observation that many deformations in real-world applications are isometric or nearly isometric. By taking advantage of the intrinsic property of the harmonic 1-form, i.e., that it is determined by the metric and independent of the resolution and embedding, our method can easily find a consistent mapping between the reference and example poses, which can be in different resolutions and triangulations. We first construct the skeleton-like Reeb graph of a harmonic function defined on the given poses. Then, by examining the changes of mean curvature, we identify the initial locations of the joints. Finally, we refine the joint locations by solving a constrained optimization problem. To demonstrate the efficacy of our method, we apply the extracted skeletons to pose-space deformation and skeleton transfer.
Citations: 4
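The starting point of the method — the skeleton-like Reeb graph of a harmonic function — can be illustrated with a much-simplified stand-in: solve the Laplace equation on the mesh's vertex graph with two pinned vertices and examine the level sets of the result. The sketch below uses a uniform graph Laplacian instead of the cotangent weights and harmonic 1-form machinery of the actual paper, and all function and argument names are assumptions.

```python
# Simplified stand-in: a harmonic scalar field on a mesh vertex graph, with
# two vertices pinned to 0 and 1 (Dirichlet constraints). Level sets of the
# field sweep the shape; their connected components trace a Reeb-like graph.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def harmonic_field(n_vertices, edges, src, dst):
    """edges: (E, 2) vertex-index pairs; src/dst: constrained vertex indices."""
    i, j = edges[:, 0], edges[:, 1]
    w = np.ones(len(edges))
    adj = sp.coo_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])),
                        shape=(n_vertices, n_vertices)).tocsr()
    L = (sp.diags(np.asarray(adj.sum(axis=1)).ravel()) - adj).tocsr()  # graph Laplacian
    f = np.zeros(n_vertices)
    f[dst] = 1.0                                                       # pinned values
    free = np.setdiff1d(np.arange(n_vertices), [src, dst])
    rhs = -L[free][:, [src, dst]] @ f[[src, dst]]
    f[free] = spsolve(L[free][:, free].tocsc(), rhs)                   # L f = 0 on free set
    return f
```

With src and dst chosen at opposite extremities of a shape (for example, the head and a foot of a human model), the isolines of the returned field already hint at the branching structure the paper's joints are placed along.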
OCTOR: OCcurrence selecTOR in pattern hierarchies
2008 IEEE International Conference on Shape Modeling and Applications · Pub Date: 2008-06-04 · DOI: 10.1109/SMI.2008.4547972
J. Jang, J. Rossignac
Abstract: Hierarchies of patterns of features, of sub-assemblies, or of CSG sub-expressions are used in architectural and mechanical CAD to eliminate laborious repetitions from the design process. Yet the placement, shape, or even existence of a selection of the repeated occurrences in a pattern must often be adjusted. The specification of a desired selection of occurrences in a hierarchy of patterns is often tedious (involving repetitive steps) or difficult (requiring interaction with an abstract representation of the hierarchy graph). The OCTOR system introduced here addresses these two drawbacks simultaneously, offering an effective and intuitive solution that requires only two mouse-clicks to specify any one of a wide range of possible selections. It does not require expanding the graph or storing an explicit list of the selected occurrences and is simple to compute. It is hence well suited for a variety of CAD applications, including CSG, feature-based design, assembly mock-up, and animation. We discuss a novel representation of a selection, a technology that makes it possible to use only two mouse-clicks for each selection, and the persistence of these selections when the hierarchy of patterns is edited.
Citations: 4
A 3D face matching framework
2008 IEEE International Conference on Shape Modeling and Applications · Pub Date: 2008-06-04 · DOI: 10.1109/SMI.2008.4547956
F. T. Haar, R. Veltkamp
Abstract: Many 3D face matching techniques have been developed to perform face recognition. Among these techniques are variants of 3D facial curve matching, which reduce the amount of face data to one or a few 3D curves. The face's central profile, for instance, proved to work well. However, the selection of the optimal set of 3D curves and the best way to match them is still underexposed. We propose a 3D face matching framework that allows profile- and contour-based face matching. Using this framework we evaluate profile and contour types, including those described in the literature, and select subsets of facial curves for effective and efficient face matching. Results on the 3D face retrieval track of SHREC'07 (the 3D SHape REtrieval Contest) show the highest mean average precision achieved so far, using only three facial curves of 45 samples each.
Citations: 43
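The curve-matching core of such a framework is small once the curves are extracted and the scans are aligned: each facial curve is resampled to a fixed number of points (45 in the reported setting) and two faces are compared by accumulating point-wise curve distances. The sketch below illustrates that step only (arc-length resampling and the summed L2 measure are assumptions; the paper's curve extraction and its best-performing distance are not reproduced here).

```python
# Sketch: fixed-length resampling of facial curves and a simple face distance.
import numpy as np

def resample_curve(points, n=45):
    """points: (M, 3) ordered samples along one facial curve."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.r_[0.0, np.cumsum(seg)] / max(seg.sum(), 1e-12)   # arc-length parameter
    u = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(u, t, points[:, d]) for d in range(3)])

def face_distance(curves_a, curves_b):
    """curves_a/b: lists of (45, 3) resampled curves from two aligned face scans."""
    return sum(np.linalg.norm(a - b, axis=1).mean() for a, b in zip(curves_a, curves_b))
```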
SHape REtrieval Contest (SHREC) 2008
2008 IEEE International Conference on Shape Modeling and Applications · Pub Date: 2008-06-04 · DOI: 10.1109/SMI.2008.4547974
R. Veltkamp, F. T. Haar
Abstract: The 3D shape retrieval contest SHREC has been organized since 2006. Its general objective is to evaluate the effectiveness of 3D shape retrieval algorithms. 3D media retrieval is overlooked in most commercial search engines, while at the same time it is expected to represent a huge amount of traffic and data stored on the Internet. Recent advances in technology have made available cost-effective scanning devices that could not even be imagined a decade ago. It is now possible to acquire 3D data of a physical object in a few seconds and produce a digital model of its geometry that can easily be shared on the Internet. At the same time, most PCs connected to the Internet are nowadays equipped with high-performance 3D graphics hardware that supports rendering, interaction, and processing capabilities, from home environments to enterprise scenarios.
Citations: 9