MultiSubjects: A multi-subject video dataset for single-person basketball action recognition from basketball gym

IF 4.3 | CAS Zone 3, Computer Science | JCR Q2, Computer Science, Artificial Intelligence
Zhijie Han, Wansong Qin, Yalu Wang, Qixiang Wang, Yongbin Shi
{"title":"多主体:用于篮球馆单人篮球动作识别的多主体视频数据集","authors":"Zhijie Han ,&nbsp;Wansong Qin ,&nbsp;Yalu Wang ,&nbsp;Qixiang Wang ,&nbsp;Yongbin Shi","doi":"10.1016/j.cviu.2024.104193","DOIUrl":null,"url":null,"abstract":"<div><div>Computer vision technology is becoming a research focus in the field of basketball. Despite the abundance of datasets centered on basketball games, there remains a significant gap in the availability of a large-scale, multi-subject, and fine-grained dataset for the recognition of basketball actions in real-world sports scenarios, particularly for amateur players. Such datasets are crucial for advancing the application of computer vision tasks in the real world. To address this gap, we deployed multi-view cameras in a civilian basketball gym, constructed a real basketball data acquisition platform, and acquired a challenging multi-subject video dataset, named MultiSubjects. The MultiSubjects v1.0 dataset features a variety of ages, body types, attire, genders, and basketball actions, providing researchers with a high-quality and diverse resource of basketball action data. We collected a total of 1,000 distinct subjects from video data between September and December 2023, classified and labeled three basic basketball actions, and assigned a unique identity ID to each subject, provided a total of 6,144 video clips, 436,460 frames, and labeled 6,144 instances of actions with clear temporal boundaries using 436,460 human body bounding boxes. Additionally, complete frame-wise skeleton keypoint coordinates for the entire action are provided. We used some representative video action recognition algorithms as well as skeleton-based action recognition algorithms on the MultiSubjects v1.0 dataset and analyzed the results. The results confirm that the quality of our dataset surpasses that of popular video action recognition datasets, it also presents that skeleton-based action recognition remains a challenging task. The link to our dataset is: <span><span>https://huggingface.co/datasets/Henu-Software/Henu-MultiSubjects</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50633,"journal":{"name":"Computer Vision and Image Understanding","volume":"249 ","pages":"Article 104193"},"PeriodicalIF":4.3000,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"MultiSubjects: A multi-subject video dataset for single-person basketball action recognition from basketball gym\",\"authors\":\"Zhijie Han ,&nbsp;Wansong Qin ,&nbsp;Yalu Wang ,&nbsp;Qixiang Wang ,&nbsp;Yongbin Shi\",\"doi\":\"10.1016/j.cviu.2024.104193\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Computer vision technology is becoming a research focus in the field of basketball. Despite the abundance of datasets centered on basketball games, there remains a significant gap in the availability of a large-scale, multi-subject, and fine-grained dataset for the recognition of basketball actions in real-world sports scenarios, particularly for amateur players. Such datasets are crucial for advancing the application of computer vision tasks in the real world. To address this gap, we deployed multi-view cameras in a civilian basketball gym, constructed a real basketball data acquisition platform, and acquired a challenging multi-subject video dataset, named MultiSubjects. 
The MultiSubjects v1.0 dataset features a variety of ages, body types, attire, genders, and basketball actions, providing researchers with a high-quality and diverse resource of basketball action data. We collected a total of 1,000 distinct subjects from video data between September and December 2023, classified and labeled three basic basketball actions, and assigned a unique identity ID to each subject, provided a total of 6,144 video clips, 436,460 frames, and labeled 6,144 instances of actions with clear temporal boundaries using 436,460 human body bounding boxes. Additionally, complete frame-wise skeleton keypoint coordinates for the entire action are provided. We used some representative video action recognition algorithms as well as skeleton-based action recognition algorithms on the MultiSubjects v1.0 dataset and analyzed the results. The results confirm that the quality of our dataset surpasses that of popular video action recognition datasets, it also presents that skeleton-based action recognition remains a challenging task. The link to our dataset is: <span><span>https://huggingface.co/datasets/Henu-Software/Henu-MultiSubjects</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":50633,\"journal\":{\"name\":\"Computer Vision and Image Understanding\",\"volume\":\"249 \",\"pages\":\"Article 104193\"},\"PeriodicalIF\":4.3000,\"publicationDate\":\"2024-10-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Vision and Image Understanding\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1077314224002741\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Vision and Image Understanding","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1077314224002741","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Computer vision technology is becoming a research focus in the field of basketball. Despite the abundance of datasets centered on basketball games, there remains a significant gap in the availability of a large-scale, multi-subject, fine-grained dataset for recognizing basketball actions in real-world sports scenarios, particularly for amateur players. Such datasets are crucial for advancing the application of computer vision tasks in the real world. To address this gap, we deployed multi-view cameras in a civilian basketball gym, constructed a real basketball data acquisition platform, and acquired a challenging multi-subject video dataset named MultiSubjects. The MultiSubjects v1.0 dataset features a variety of ages, body types, attire, genders, and basketball actions, providing researchers with a high-quality and diverse resource of basketball action data. From video data recorded between September and December 2023, we collected 1,000 distinct subjects, classified and labeled three basic basketball actions, and assigned a unique identity ID to each subject. The dataset provides 6,144 video clips totaling 436,460 frames, with 6,144 action instances labeled with clear temporal boundaries using 436,460 human body bounding boxes. Additionally, complete frame-wise skeleton keypoint coordinates are provided for each action. We evaluated representative video action recognition algorithms as well as skeleton-based action recognition algorithms on the MultiSubjects v1.0 dataset and analyzed the results. The results confirm that the quality of our dataset surpasses that of popular video action recognition datasets, and they also show that skeleton-based action recognition remains a challenging task. The link to our dataset is: https://huggingface.co/datasets/Henu-Software/Henu-MultiSubjects.
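To make the annotation structure described in the abstract concrete, the following Python sketch shows one hypothetical way to fetch the dataset repository and represent a single annotated clip. It assumes only what the abstract states (per-subject identity IDs, three action labels, temporal boundaries, per-frame bounding boxes, and frame-wise skeleton keypoints); the field names, keypoint count, and on-disk layout are illustrative assumptions rather than the authors' documented schema, and the download call uses the standard huggingface_hub client.

# Illustrative sketch: download the MultiSubjects repository with the standard
# huggingface_hub client and model one annotated clip with a HYPOTHETICAL
# schema built only from quantities named in the abstract. Field names,
# keypoint count, and file layout are assumptions, not the released format.
from dataclasses import dataclass
from typing import List, Tuple

from huggingface_hub import snapshot_download


@dataclass
class FrameAnnotation:
    frame_index: int
    bbox_xyxy: Tuple[float, float, float, float]  # human body bounding box (assumed x1, y1, x2, y2)
    keypoints_xy: List[Tuple[float, float]]       # frame-wise skeleton keypoints (count assumed, e.g. 17 COCO-style)


@dataclass
class ClipAnnotation:
    subject_id: int      # unique identity ID assigned to each subject
    action_label: str    # one of the three basic basketball actions
    start_frame: int     # temporal boundary of the labeled action instance
    end_frame: int
    frames: List[FrameAnnotation]


if __name__ == "__main__":
    # Fetch a local copy of the dataset repository (large: 6,144 clips, 436,460 frames).
    local_dir = snapshot_download(
        repo_id="Henu-Software/Henu-MultiSubjects",
        repo_type="dataset",
    )
    print("Dataset downloaded to:", local_dir)

The dataclasses above are only a reading aid for the annotation quantities listed in the abstract; consult the Hugging Face repository for the actual file formats.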
Source journal
Computer Vision and Image Understanding
CAS category: Engineering Technology – Engineering: Electrical & Electronic
CiteScore: 7.80
Self-citation rate: 4.40%
Articles published: 112
Review time: 79 days
Journal description: The central focus of this journal is the computer analysis of pictorial information. Computer Vision and Image Understanding publishes papers covering all aspects of image analysis, from the low-level, iconic processes of early vision to the high-level, symbolic processes of recognition and interpretation. A wide range of topics in the image understanding area is covered, including papers offering insights that differ from predominant views.
Research areas include:
• Theory
• Early vision
• Data structures and representations
• Shape
• Range
• Motion
• Matching and recognition
• Architecture and languages
• Vision systems