3D Gesture Recognition by Superquadrics

Ilya M. Afanasyev, M. Cecco
{"title":"3D Gesture Recognition by Superquadrics","authors":"Ilya M. Afanasyev, M. Cecco","doi":"10.5220/0004348404290433","DOIUrl":null,"url":null,"abstract":"Abstract: This paper presents 3D gesture recognition and localization method based on processing 3D data of hands in color gloves acquired by 3D depth sensor, like Microsoft Kinect. RGB information of every 3D datapoints is used to segment 3D point cloud into 12 parts (a forearm, a palm and 10 for fingers). The object (a hand with fingers) should be a-priori known and anthropometrically modeled by SuperQuadrics (SQ) with certain scaling and shape parameters. The gesture (pose) is estimated hierarchically by RANSAC-object search with a least square fitting the segments of 3D point cloud to corresponding SQ-models: at first – a pose of the hand (forearm & palm), and then positions of fingers. The solution is verified by evaluating the matching score, i.e. the number of inliers corresponding to the appropriate distances from SQ surfaces and 3D datapoints, which are satisfied to an assigned distance threshold.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"63 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Computer Vision Theory and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5220/0004348404290433","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

This paper presents a 3D gesture recognition and localization method based on processing 3D data of hands in color gloves, acquired by a 3D depth sensor such as the Microsoft Kinect. The RGB information of each 3D data point is used to segment the 3D point cloud into 12 parts (a forearm, a palm, and 10 fingers). The object (a hand with fingers) must be known a priori and anthropometrically modeled by superquadrics (SQ) with appropriate scaling and shape parameters. The gesture (pose) is estimated hierarchically by a RANSAC object search with least-squares fitting of the point-cloud segments to the corresponding SQ models: first the pose of the hand (forearm and palm), then the positions of the fingers. The solution is verified by evaluating the matching score, i.e. the number of inliers whose distances from the SQ surfaces satisfy an assigned distance threshold.
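The verification step above hinges on the superquadric inside-outside function and an inlier count against a distance threshold. The following is a minimal sketch of that idea, not the authors' implementation: it uses the standard SQ implicit function with scale parameters (a1, a2, a3) and shape parameters (e1, e2), and the common radial-distance approximation to the SQ surface; the function names, the threshold value, and the distance approximation are illustrative assumptions.

```python
import numpy as np

def sq_inside_outside(points, a, e):
    """Superquadric inside-outside function F(x, y, z).

    points : (N, 3) array of 3D points expressed in the SQ's local frame
    a      : (a1, a2, a3) scale parameters
    e      : (e1, e2) shape (squareness) parameters
    F == 1 on the surface, F < 1 inside, F > 1 outside.
    """
    x, y, z = np.abs(np.asarray(points, dtype=float)).T
    a1, a2, a3 = a
    e1, e2 = e
    return ((x / a1) ** (2 / e2) + (y / a2) ** (2 / e2)) ** (e2 / e1) \
        + (z / a3) ** (2 / e1)

def matching_score(points, a, e, threshold=0.1):
    """Count inliers: points whose approximate radial distance to the
    SQ surface is below the assigned threshold (hypothetical value)."""
    F = sq_inside_outside(points, a, e)
    # Radial Euclidean distance approximation: d = |r| * |1 - F**(-e1/2)|
    r = np.linalg.norm(points, axis=1)
    d = r * np.abs(1.0 - F ** (-e[0] / 2))
    return int(np.sum(d < threshold))
```

With e1 = e2 = 1 and a = (1, 1, 1) the SQ degenerates to a unit sphere, so points with unit norm score as inliers while distant points do not; in the paper's hierarchical scheme this score would be evaluated for each RANSAC pose hypothesis, first for the forearm-and-palm model and then for each finger model.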