A dataset of egocentric and exocentric view hands in interactive senses

Impact Factor: 1.0 · Q3, Multidisciplinary Sciences
Cui Cui, Mohd Shahrizal Sunar, Goh Eg Su
DOI: 10.1016/j.dib.2024.111003
Journal: Data in Brief
Published: 2024-10-09 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S235234092400965X
Citations: 0

Abstract

The dataset presents raw data from egocentric (first-person) and exocentric (third-person) perspectives, comprising 47,166 frame images. Egocentric and exocentric frames were extracted simultaneously from the original iPhone videos. The egocentric view captures close-range hand-gesture detail and the attention of the iPhone wearer, while the exocentric view captures the hand gestures of all participants from a top-down perspective. The data provides frame images of two, three, and four people engaged in interactive games such as Poker, Checkers, and Dice. The data was collected in real environments under natural, white, yellow, and dim lighting conditions. The dataset contains diverse hand gestures, including challenging instances with motion blur, extreme deformation, sharp shadows, and extremely dim light. Researchers working on artificial intelligence (AI) interaction games in extended reality can create sub-datasets from the metadata for either or both perspectives, facilitating AI understanding of hand gestures in human interactive games. Researchers can also extract hand gestures relevant to hand-object interaction studies, such as hands deformed by holding a chess piece, blurred hands gripping dice containers, and hands obscured by playing cards. Bounding boxes and hand contours can be annotated for semi-supervised and supervised hand detection, segmentation, and classification, improving an AI system's ability to distinguish each player's hand gestures. Unsupervised and self-supervised research can also be conducted directly on this dataset.
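The abstract suggests building per-view sub-datasets from the frame images. A minimal sketch of that workflow, assuming (hypothetically) that each frame's path encodes its view as a directory component such as `egocentric/` or `exocentric/` — the published dataset's actual layout may differ:

```python
from pathlib import Path
from collections import defaultdict

def split_by_view(frame_paths):
    """Group frame-image paths into egocentric / exocentric sub-datasets.

    Assumes the view name appears as a directory component in each path,
    e.g. 'egocentric/poker_2p/frame_0001.jpg'. This layout is an
    illustrative assumption, not the dataset's documented structure.
    """
    subsets = defaultdict(list)
    for p in frame_paths:
        parts = {part.lower() for part in Path(p).parts}
        if "egocentric" in parts:
            subsets["egocentric"].append(p)
        elif "exocentric" in parts:
            subsets["exocentric"].append(p)
    return dict(subsets)

# Illustrative file names only -- not real paths from the dataset.
frames = [
    "egocentric/poker_2p/frame_0001.jpg",
    "exocentric/poker_2p/frame_0001.jpg",
    "egocentric/dice_4p/frame_0342.jpg",
]
subsets = split_by_view(frames)
print({view: len(paths) for view, paths in subsets.items()})
# -> {'egocentric': 2, 'exocentric': 1}
```

From such a split, a researcher could sample either view alone, or pair simultaneous egocentric/exocentric frames for cross-view studies.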
Journal

Data in Brief (Multidisciplinary Sciences)
CiteScore: 3.10
Self-citation rate: 0.00%
Articles per year: 996
Review time: 70 days
Journal description: Data in Brief provides a way for researchers to easily share and reuse each other's datasets by publishing data articles that:
-Thoroughly describe your data, facilitating reproducibility.
-Make your data, which is often buried in supplementary material, easier to find.
-Increase traffic towards associated research articles and data, leading to more citations.
-Open up doors for new collaborations.
Because you never know what data will be useful to someone else, Data in Brief welcomes submissions that describe data from all research areas.