{"title":"MultiSubjects: A multi-subject video dataset for single-person basketball action recognition from basketball gym","authors":"","doi":"10.1016/j.cviu.2024.104193","DOIUrl":null,"url":null,"abstract":"<div><div>Computer vision technology is becoming a research focus in the field of basketball. Despite the abundance of datasets centered on basketball games, there remains a significant gap in the availability of a large-scale, multi-subject, and fine-grained dataset for the recognition of basketball actions in real-world sports scenarios, particularly for amateur players. Such datasets are crucial for advancing the application of computer vision tasks in the real world. To address this gap, we deployed multi-view cameras in a civilian basketball gym, constructed a real basketball data acquisition platform, and acquired a challenging multi-subject video dataset, named MultiSubjects. The MultiSubjects v1.0 dataset features a variety of ages, body types, attire, genders, and basketball actions, providing researchers with a high-quality and diverse resource of basketball action data. We collected a total of 1,000 distinct subjects from video data between September and December 2023, classified and labeled three basic basketball actions, and assigned a unique identity ID to each subject, provided a total of 6,144 video clips, 436,460 frames, and labeled 6,144 instances of actions with clear temporal boundaries using 436,460 human body bounding boxes. Additionally, complete frame-wise skeleton keypoint coordinates for the entire action are provided. We used some representative video action recognition algorithms as well as skeleton-based action recognition algorithms on the MultiSubjects v1.0 dataset and analyzed the results. The results confirm that the quality of our dataset surpasses that of popular video action recognition datasets, it also presents that skeleton-based action recognition remains a challenging task. The link to our dataset is: <span><span>https://huggingface.co/datasets/Henu-Software/Henu-MultiSubjects</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50633,"journal":{"name":"Computer Vision and Image Understanding","volume":null,"pages":null},"PeriodicalIF":4.3000,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Vision and Image Understanding","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1077314224002741","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Computer vision technology is becoming a research focus in the field of basketball. Despite the abundance of datasets centered on basketball games, there remains a significant gap in the availability of a large-scale, multi-subject, fine-grained dataset for recognizing basketball actions in real-world sports scenarios, particularly those of amateur players. Such datasets are crucial for advancing the application of computer vision tasks in the real world. To address this gap, we deployed multi-view cameras in a civilian basketball gym, constructed a real basketball data acquisition platform, and acquired a challenging multi-subject video dataset named MultiSubjects. The MultiSubjects v1.0 dataset covers a variety of ages, body types, attire, genders, and basketball actions, providing researchers with a high-quality and diverse resource of basketball action data. From video data collected between September and December 2023, we gathered 1,000 distinct subjects, classified and labeled three basic basketball actions, and assigned a unique identity ID to each subject. The dataset provides 6,144 video clips comprising 436,460 frames, with 6,144 action instances labeled with clear temporal boundaries using 436,460 human-body bounding boxes. Additionally, complete frame-wise skeleton keypoint coordinates are provided for each action. We evaluated several representative video action recognition algorithms as well as skeleton-based action recognition algorithms on the MultiSubjects v1.0 dataset and analyzed the results. The results confirm that the quality of our dataset surpasses that of popular video action recognition datasets; they also show that skeleton-based action recognition on it remains a challenging task. The dataset is available at: https://huggingface.co/datasets/Henu-Software/Henu-MultiSubjects.
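To make the annotation structure described in the abstract more concrete, the sketch below defines a hypothetical per-clip record (identity ID, action label, temporal boundaries, per-frame bounding boxes, and per-frame skeleton keypoints) and shows how one might fetch a local copy of the repository with the `huggingface_hub` client. The field names and record layout are illustrative assumptions derived from the abstract, not the dataset's documented schema, and the download step assumes the repository is publicly accessible; verify both against the Hugging Face dataset card before use.

```python
# Illustrative sketch only: the record layout mirrors the annotations described
# in the abstract, NOT the dataset's actual on-disk schema.
from dataclasses import dataclass, field
from typing import List, Tuple

from huggingface_hub import snapshot_download  # pip install huggingface_hub


@dataclass
class ClipAnnotation:
    """Hypothetical annotation record for one MultiSubjects video clip."""
    subject_id: int                     # unique identity ID of the player
    action_label: str                   # one of the three basic basketball actions
    temporal_boundary: Tuple[int, int]  # (start_frame, end_frame) of the action
    # Per-frame human-body bounding boxes as (x, y, w, h).
    bboxes: List[Tuple[float, float, float, float]] = field(default_factory=list)
    # Per-frame skeleton keypoints as lists of (x, y) coordinates.
    keypoints: List[List[Tuple[float, float]]] = field(default_factory=list)


def download_dataset(local_dir: str = "MultiSubjects") -> str:
    """Download a snapshot of the dataset repository (assumes public access)."""
    return snapshot_download(
        repo_id="Henu-Software/Henu-MultiSubjects",
        repo_type="dataset",
        local_dir=local_dir,
    )


if __name__ == "__main__":
    path = download_dataset()
    print(f"Dataset files downloaded to: {path}")
```

A dataclass is used here simply to make the described annotation fields explicit; in practice the clips, bounding boxes, and keypoints would be parsed from whatever annotation files the repository actually ships.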
Journal Introduction:
The central focus of this journal is the computer analysis of pictorial information. Computer Vision and Image Understanding publishes papers covering all aspects of image analysis from the low-level, iconic processes of early vision to the high-level, symbolic processes of recognition and interpretation. A wide range of topics in the image understanding area is covered, including papers offering insights that differ from predominant views.
Research Areas Include:
• Theory
• Early vision
• Data structures and representations
• Shape
• Range
• Motion
• Matching and recognition
• Architecture and languages
• Vision systems