J-HGBU '11 Latest Articles

Estimation and utilization of articulations in recovering non-rigid structure from motion using motion subspaces
J-HGBU '11 Pub Date: 2011-12-01 DOI: 10.1145/2072572.2072587
M. Rohith, C. Kambhamettu
Abstract: Estimation of non-rigid structure from motion (NRSFM) has often been performed as a linear combination of basis shapes. However, when dealing with scenes containing human articulated motion (especially in the presence of clothing), the number of basis shapes required precludes accurate results. We model deformation as a combination of articulated and non-rigid surface deformation. We propose a novel algorithm for segmenting motion to find articulated components, and a hierarchical NRSFM algorithm that estimates articulated and non-rigid structure separately. Our method attempts to remove the articulated motion from the observation matrix, leaving behind only the non-rigid component, which can be represented with fewer bases. Results show that our method successfully segments motion in a variety of cases and that structure reconstruction is improved by the hierarchical framework.
Citations: 3
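The abstract's central observation is that the NRSFM observation matrix is low-rank, and that subtracting a dominant (articulated) motion component leaves a residual that needs fewer bases. A minimal numpy sketch of that rank argument on synthetic data (this illustrates the subspace idea only, not the paper's segmentation algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Observation matrix W (2F x P): F frames, P tracked points.
F, P = 30, 40
# Synthetic "articulated" component: a rank-2 motion shared by all points.
articulated = rng.standard_normal((2 * F, 2)) @ rng.standard_normal((2, P))
# Smaller non-rigid deformation: a rank-3 component.
nonrigid = 0.1 * rng.standard_normal((2 * F, 3)) @ rng.standard_normal((3, P))
W = articulated + nonrigid

def numerical_rank(M, tol=1e-8):
    """Count singular values above a tolerance."""
    return int(np.sum(np.linalg.svd(M, compute_uv=False) > tol))

# Project out the dominant subspace estimated from W's top singular
# vectors; the residual can be represented with fewer bases.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
residual = W - U[:, :2] @ np.diag(s[:2]) @ Vt[:2, :]

print(numerical_rank(W), numerical_rank(residual))  # rank drops from 5 to 3
```

Removing the best rank-2 approximation of a rank-5 matrix leaves exactly rank 3, which is the sense in which stripping articulated motion makes the remaining non-rigid structure cheaper to model.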
Person authentication using 3D human motion
J-HGBU '11 Pub Date: 2011-12-01 DOI: 10.1145/2072572.2072586
Felipe Gómez-Caballero, T. Shinozaki, S. Furui, K. Shinoda
Abstract: This paper presents a novel approach to identifying and/or verifying persons using three-dimensional dynamic and structural features extracted from human motion depicted in image streams. These features are extracted from body landmarks that are detected and tracked while the person performs specific requested movements; they represent the dynamics of specific parts of the body as well as the structural traits formed by the person's pose. Gaussian mixture model (GMM) based systems are tested on a dataset containing arm movements. Experimental results confirm that the proposed approach is promising for person authentication tasks.
Citations: 5
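A GMM-based verification system of the kind the abstract describes typically enrols one mixture model per subject and accepts a claimed identity when the probe sequence's log-likelihood under that subject's model clears a threshold. A hedged sketch with sklearn on hypothetical motion features (the feature layout, subject names, and threshold are illustrative assumptions, not the paper's values):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Hypothetical motion features (e.g., landmark velocities) per subject.
subjects = {
    "alice": rng.standard_normal((200, 6)) + np.array([2.0, 0, 0, 0, 0, 0]),
    "bob":   rng.standard_normal((200, 6)) + np.array([-2.0, 0, 0, 0, 0, 0]),
}

# Enrolment: fit one GMM per subject on that subject's movement features.
models = {name: GaussianMixture(n_components=2, random_state=0).fit(feats)
          for name, feats in subjects.items()}

def verify(claimed, probe, threshold=-12.0):
    """Accept the claimed identity if the average log-likelihood of the
    probe sequence under that subject's GMM exceeds the threshold."""
    return models[claimed].score(probe) > threshold

# A probe drawn from "alice"-like motion verifies as alice, not as bob.
probe_alice = rng.standard_normal((50, 6)) + np.array([2.0, 0, 0, 0, 0, 0])
print(verify("alice", probe_alice), verify("bob", probe_alice))
```

Identification, as opposed to verification, would instead score the probe under every enrolled model and return the highest-likelihood subject.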
The Florence 2D/3D hybrid face dataset
J-HGBU '11 Pub Date: 2011-12-01 DOI: 10.1145/2072572.2072597
Andrew D. Bagdanov, A. Bimbo, I. Masi
Abstract: This article describes a new dataset under construction at the Media Integration and Communication Center and the University of Florence. The dataset consists of high-resolution 3D scans of human faces along with several video sequences of varying resolution and zoom level. Each subject is recorded under various scenarios, settings and conditions. This dataset is being constructed specifically to support research on techniques that bridge the gap between 2D, appearance-based recognition techniques and fully 3D approaches. It is designed to simulate, in a controlled fashion, realistic surveillance conditions and to probe the efficacy of exploiting 3D models in real scenarios.
Citations: 137
Human motion classification and management based on mocap data analysis
J-HGBU '11 Pub Date: 2011-12-01 DOI: 10.1145/2072572.2072594
H. Kadu, May-Chen Kuo, C.-C. Jay Kuo
Abstract: Human motion understanding based on motion capture (mocap) data is investigated. The recent rapid development and application of mocap systems have produced a large corpus of mocap sequences, so an automated annotation technique that can classify basic motion types into multiple categories is needed. A novel technique for automated mocap data classification is developed in this work. Specifically, we adopt the tree-structured vector quantization (TSVQ) method to approximate human poses by codewords and the dynamics of a mocap sequence by a codeword sequence. To classify mocap data into different categories, we consider three approaches: 1) a spatial-domain approach based on the histogram of codewords, 2) a spatial-temporal approach via codeword sequence matching, and 3) a decision-fusion approach. We test the proposed algorithm on the CMU mocap database using the n-fold cross-validation procedure and obtain a correct classification rate of 97%.
Citations: 7
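The spatial-domain approach above (quantize poses into codewords, then compare codeword histograms) can be sketched in a few lines. Flat KMeans stands in for the paper's tree-structured vector quantizer, and the synthetic "walk"/"jump" pose distributions are invented for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Synthetic pose sequences: two motion classes with distinct pose statistics.
def make_sequence(center, n=100):
    return center + 0.3 * rng.standard_normal((n, 4))

walk = [make_sequence(np.array([1.0, 0, 0, 0])) for _ in range(5)]
jump = [make_sequence(np.array([0, 1.0, 0, 0])) for _ in range(5)]

# Quantize all training poses into codewords (KMeans stands in for TSVQ).
all_poses = np.vstack(walk + jump)
codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(all_poses)

def histogram(seq):
    """Normalized histogram of codeword occurrences in one sequence."""
    h = np.bincount(codebook.predict(seq), minlength=8).astype(float)
    return h / h.sum()

# Spatial-domain classifier: nearest mean codeword histogram per class.
class_hists = {"walk": np.mean([histogram(s) for s in walk], axis=0),
               "jump": np.mean([histogram(s) for s in jump], axis=0)}

def classify(seq):
    h = histogram(seq)
    return min(class_hists, key=lambda c: np.linalg.norm(h - class_hists[c]))

test_seq = make_sequence(np.array([1.0, 0, 0, 0]))
print(classify(test_seq))  # → walk
```

The histogram discards temporal order, which is exactly why the paper pairs it with a codeword-sequence-matching approach and fuses the two decisions.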
Incorporating uncertainty in a layered HMM architecture for human activity recognition
J-HGBU '11 Pub Date: 2011-12-01 DOI: 10.1145/2072572.2072584
Michael Glodek, Lutz Bigalke, Martin Schels, F. Schwenker
Abstract: In this study, we present the conditioned HMM (CHMM), which inherits its structure from the latent-dynamic conditional random field (LDCRF) proposed by Morency et al. but is also based on a Bayesian network [1, 2]. Within the model, a sequence of class labels influences a Markov chain of hidden states, which in turn emit observations. This structure allows several classes to make use of the same hidden state.
Citations: 22
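The CHMM builds on standard HMM machinery (a Markov chain of hidden states emitting observations). A minimal numpy sketch of the forward algorithm for a plain discrete HMM, which computes the likelihood of an observation sequence; the toy parameter values are illustrative, and this is the generic model, not the paper's conditioned variant:

```python
import numpy as np

# Toy 2-state HMM with 3 discrete observation symbols (illustrative values).
pi = np.array([0.6, 0.4])              # initial state distribution
A = np.array([[0.7, 0.3],              # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],         # per-state emission probabilities
              [0.1, 0.3, 0.6]])

def forward_likelihood(obs):
    """P(obs) via the forward algorithm: alpha recursion over hidden states."""
    alpha = pi * B[:, obs[0]]          # alpha_1(i) = pi_i * b_i(o_1)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # alpha_t = (alpha_{t-1} A) .* b(o_t)
    return alpha.sum()                 # marginalize over the final state

print(round(forward_likelihood([0, 1, 2]), 6))  # → 0.03628
```

The conditioned variant adds a class-label sequence that modulates the hidden-state chain, so that several activity classes can share hidden states while the same alpha-style recursions carry the uncertainty forward.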
The sounds of social life: naturalistic (acoustic) observation sampling
J-HGBU '11 Pub Date: 2011-12-01 DOI: 10.1145/2072572.2072574
M. Mehl, Fenne große Deters
Abstract: This paper reviews a novel methodology called the Electronically Activated Recorder, or EAR. The EAR is a portable audio recorder that periodically records snippets of ambient sound from participants' momentary environments. By tracking moment-to-moment ambient sounds, it yields acoustic logs of people's days as they naturally unfold; by sampling only a fraction of the time, it protects participants' privacy. As a naturalistic observation method, it provides an observer's account of daily life and is optimized for assessing the audible aspects of social environments, behaviors, and interactions. The paper discusses the EAR method conceptually and methodologically and identifies three ways in which it can enrich research in psychology and related fields. Specifically, it can (1) provide ecological, behavioral criteria that are independent of self-report, (2) calibrate psychological effects against frequencies of real-world behavior, and (3) help with the assessment of subtle and habitual behaviors that evade self-report.
Citations: 1
Computational study of human communication dynamics
J-HGBU '11 Pub Date: 2011-12-01 DOI: 10.1145/2072572.2072578
Louis-Philippe Morency
Abstract: Face-to-face communication is a highly dynamic process in which participants mutually exchange and interpret linguistic and gestural signals. Even when only one person speaks at a time, the other participants continuously exchange information amongst themselves and with the speaker through gesture, gaze, posture and facial expressions. To correctly interpret the high-level communicative signals, an observer needs to jointly integrate all spoken words, subtle prosodic changes and simultaneous gestures from all participants. In this paper, we present our ongoing research effort at the USC MultiComp Lab to create models of human communication dynamics that explicitly take into consideration the multimodal and interpersonal aspects of human face-to-face interactions. The computational framework presented in this paper has wide applicability, including the recognition of human social behaviors, the synthesis of natural animations for robots and virtual humans, improved multimedia content analysis, and the diagnosis of social and behavioral disorders (e.g., autism spectrum disorder).
Citations: 5
A survey of perception and computation of human beauty
J-HGBU '11 Pub Date: 2011-12-01 DOI: 10.1145/2072572.2072580
H. Gunes
Abstract: Perception of (facial or bodily) beauty has long been debated amongst philosophers, artists, psychologists and anthropologists. Ancient philosophers claimed that there is a timeless, aesthetic ideal concept of beauty based on proportions, symmetry, harmony, and geometry that goes well beyond the observer. Modern philosophers, on the other hand, have commonly suggested that beauty is in the eye of the beholder and that beauty canons depend on culture. Despite continuous interest and extensive research in the cognitive, evolutionary and social sciences, the modeling and analysis of human beauty and aesthetic canons remain open problems. This paper therefore aims to put the beauty trait under the spotlight by investigating the various aspects involved in its perception and computation.
Citations: 27