Computer Vision Applications in Sports: Latest Publications

Aerodynamic analysis via foreground segmentation
Computer Vision Applications in Sports Pub Date : 2017-01-29 DOI: 10.17863/CAM.8293
P. Carey, Stuart Bennett, Joan Lasenby, T. Purnell
{"title":"Aerodynamic analysis via foreground segmentation","authors":"P. Carey, Stuart Bennett, Joan Lasenby, T. Purnell","doi":"10.17863/CAM.8293","DOIUrl":"https://doi.org/10.17863/CAM.8293","url":null,"abstract":"Results from wind-tunnel testing of athletes cannot always be repeated on the track, but reducing aerodynamic drag is critical for racing. Drag force is highly correlated with an athlete's frontal area, so in this paper we describe a system to segment an athlete from the very challenging background found in a standard racing environment. Given an accurate segmentation, a front-on view, and the athlete's position (for scaling), one can effectively count the pixels and thereby measure the moving area. The method described does not rely on alteration of the track lighting, background, or athlete's appearance. An image-matting algorithm more used in the film industry is combined with an innovative model-based pre-process to allow the whole measurement to be automated. Area results have better than one percent error compared to handextracted measurements over a representative period, while frame-by-frame measurements capture expected cyclic variation. A near real-time implementation permits rapid iteration of aerodynamic experiments during training.","PeriodicalId":261646,"journal":{"name":"Computer Vision Applications in Sports","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129730265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
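A minimal sketch of the pixel-counting step described in the abstract above, assuming a front-on pinhole camera with a known focal length (in pixels) and a known athlete-to-camera distance for scaling. The matting algorithm and model-based pre-process that produce the foreground mask are not shown, and the function name and numbers below are illustrative only, not the paper's implementation (Python):

import numpy as np

def frontal_area_m2(mask: np.ndarray, distance_m: float, focal_px: float) -> float:
    """Estimate frontal area (m^2) from a binary foreground mask.

    mask       -- 2D array, nonzero where the athlete is foreground
    distance_m -- athlete-to-camera distance along the optical axis (metres)
    focal_px   -- camera focal length expressed in pixels

    Under a pinhole model, a pixel at depth Z covers a square of side
    Z / f metres, so each foreground pixel contributes (Z / f)^2 of area.
    """
    pixel_count = int(np.count_nonzero(mask))
    metres_per_pixel = distance_m / focal_px
    return pixel_count * metres_per_pixel ** 2

# Example with a synthetic 80x40-pixel silhouette seen from 20 m
# by a camera with a 4000-pixel focal length.
mask = np.zeros((480, 640), dtype=np.uint8)
mask[200:280, 300:340] = 1
print(f"frontal area ~ {frontal_area_m2(mask, 20.0, 4000.0):.3f} m^2")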
Comparison of a Virtual Game-Day Experience on Varying Devices
Computer Vision Applications in Sports Pub Date : 2017-01-29 DOI: 10.2352/ISSN.2470-1173.2017.16.CVAS-346
John W. V. Miller, Holly Baiotto, Anastacia MacAllister, Melynda Hoover, Gabe Evans, Jonathan Schlueter, Vijay Kalivarapu, E. Winer
{"title":"Comparison of a Virtual Game-Day Experience on Varying Devices","authors":"John W. V. Miller, Holly Baiotto, Anastacia MacAllister, Melynda Hoover, Gabe Evans, Jonathan Schlueter, Vijay Kalivarapu, E. Winer","doi":"10.2352/ISSN.2470-1173.2017.16.CVAS-346","DOIUrl":"https://doi.org/10.2352/ISSN.2470-1173.2017.16.CVAS-346","url":null,"abstract":"Collegiate athletics, particularly football, provide tremendous value to schools through branding, revenue, and publicity. As a result, extensive effort is put into recruiting talented students. When recruiting, home games are exceptional tools used to show a school's unique game-day atmosphere. However, this is not a viable option during the offseason or for off-site visits. This paper explores a solution to these challenges by using virtual reality (VR) to recreate the game-day experience. The Virtual Reality Application Center in conjunction with Iowa State University (ISU) athletics, created a VR application mimicking the game-day experience at ISU. This application was displayed using the world's highest resolution six-sided CAVETM, an Oculus Rift DK2 computer-driven head mounted display (HMD) and a Merge VR smart phone-driven HMD. A between-subjects user study compared presence between the different systems and a video control. In total, 82 students participated, indicating their presence using the Witmer and Singer questionnaire. Results revealed that while the CAVETM scored the highest in presence, the Oculus and Merge only experienced a slight drop compared to the CAVETM. This result suggests that the mobile ultra-low-cost Merge is a viable alternative to the CAVE TM and Oculus for delivering the game-day experience to ISU recruits.","PeriodicalId":261646,"journal":{"name":"Computer Vision Applications in Sports","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133492928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Virtual tracking shots for sports analysis
Computer Vision Applications in Sports Pub Date : 2017-01-29 DOI: 10.2352/ISSN.2470-1173.2017.16.CVAS-342
Stuart Bennett, Joan Lasenby, T. Purnell
{"title":"Virtual tracking shots for sports analysis","authors":"Stuart Bennett, Joan Lasenby, T. Purnell","doi":"10.2352/ISSN.2470-1173.2017.16.CVAS-342","DOIUrl":"https://doi.org/10.2352/ISSN.2470-1173.2017.16.CVAS-342","url":null,"abstract":"Reviewing athletic performance is a critical part of modern sports training, but snapshots only showing part of a course or exercise can be misleading, while travelling cameras are expensive. In this paper we describe a system merging the output of many autonomous inexpensive camera nodes distributed around a course to reliably synthesize tracking shots of multiple athletes training concurrently. Issues such as uncontrolled lighting, athlete occlusions and overtaking/pack-motion are dealt with, as is compensating for the quirks of cheap image sensors. The resultant system is entirely automated, inexpensive, scalable and provides output in near real-time, allowing coaching staff to give immediate and relevant feedback on a performance. Requiring no alteration to existing training exercises has boosted the system's uptake by coaches, with over 100,000 videos recorded to date.","PeriodicalId":261646,"journal":{"name":"Computer Vision Applications in Sports","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125335969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
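A minimal sketch, under assumptions not stated in the abstract above, of how per-camera athlete detections could be merged into a single virtual tracking shot: at each instant the most confident camera is kept, and consecutive picks from the same camera are merged into segments. The Detection type, confidence threshold and timing model are hypothetical; the system's synchronization, occlusion handling and video rendering are not covered here.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Detection:
    camera_id: int      # which node saw the athlete
    timestamp: float    # seconds since the start of the run
    confidence: float   # detector confidence in [0, 1]

def plan_tracking_shot(detections: List[Detection],
                       min_confidence: float = 0.5) -> List[Tuple[float, float, int]]:
    """Return (start, end, camera_id) segments for a virtual tracking shot.

    At each timestamp the best-scoring camera is chosen; consecutive
    timestamps that keep the same camera are merged into one segment.
    """
    # Keep the most confident camera per instant.
    by_time = {}
    for d in detections:
        if d.confidence < min_confidence:
            continue
        best = by_time.get(d.timestamp)
        if best is None or d.confidence > best.confidence:
            by_time[d.timestamp] = d

    segments = []
    current_cam: Optional[int] = None
    start = last = None
    for t in sorted(by_time):
        cam = by_time[t].camera_id
        if cam != current_cam:
            if current_cam is not None:
                segments.append((start, last, current_cam))
            current_cam, start = cam, t
        last = t
    if current_cam is not None:
        segments.append((start, last, current_cam))
    return segments

dets = [Detection(0, 0.0, 0.9), Detection(0, 0.5, 0.8),
        Detection(1, 0.5, 0.6), Detection(1, 1.0, 0.95)]
print(plan_tracking_shot(dets))  # [(0.0, 0.5, 0), (1.0, 1.0, 1)]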
Pose Estimation for Deriving Kinematic Parameters of Competitive Swimmers
Computer Vision Applications in Sports Pub Date : 2017-01-29 DOI: 10.2352/ISSN.2470-1173.2017.16.CVAS-345
D. Zecha, C. Eggert, R. Lienhart
{"title":"Pose Estimation for Deriving Kinematic Parameters of Competitive Swimmers","authors":"D. Zecha, C. Eggert, R. Lienhart","doi":"10.2352/ISSN.2470-1173.2017.16.CVAS-345","DOIUrl":"https://doi.org/10.2352/ISSN.2470-1173.2017.16.CVAS-345","url":null,"abstract":"In the field of competitive swimming a quantitative evaluation of kinematic parameters is a valuable tool for coaches but also a labor intensive task. We present a system which is able to automate the extraction of many kinematic parameters such as stroke frequency, kick rates and stroke-specific intra-cyclic parameters from video footage of an athlete. While this task can in principle be solved by human pose estimation, the problem is exacerbated by permanently changing self-occlusion and severe noise caused by air bubbles, splashes, light reflection and light refraction. Current approaches for pose estimation are unable to provide the necessary localization precision under these conditions in order to enable accurate estimates of all desired kinematic parameters. In this paper we reduce the problem of kinematic parameter derivation to detecting key frames with a deep neural network human pose estimator. We show that we can correctly detect key frames with a precision which is on par with the human annotation performance. From the correctly located key frames, aforementioned parameters can be successfully inferred.","PeriodicalId":261646,"journal":{"name":"Computer Vision Applications in Sports","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127015923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
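The abstract above reduces parameter extraction to key-frame detection. As a hedged illustration of the final step only: once key frames marking the same point in successive stroke cycles have been located, stroke frequency follows from their spacing in frames. The function and numbers below are illustrative assumptions, not the authors' implementation.

import numpy as np

def stroke_frequency(key_frame_indices, fps: float) -> float:
    """Estimate stroke frequency (cycles per minute) from key-frame indices.

    key_frame_indices -- frame numbers at which the same point in the stroke
                         cycle was detected (one entry per cycle)
    fps               -- video frame rate
    """
    idx = np.asarray(sorted(key_frame_indices), dtype=float)
    if idx.size < 2:
        raise ValueError("need at least two key frames to form one cycle")
    cycle_seconds = np.diff(idx) / fps           # duration of each cycle
    return 60.0 / float(np.mean(cycle_seconds))  # cycles per minute

# Key frames every ~50 frames in 50 fps footage -> about one stroke per second.
print(stroke_frequency([12, 62, 111, 161, 212], fps=50.0))  # ~60 cycles/min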
Digital Playbook - A Teaching Tool for American Football
Computer Vision Applications in Sports Pub Date : 2017-01-29 DOI: 10.2352/ISSN.2470-1173.2017.16.CVAS-347
M. Vorstandlechner, M. Gelautz, Christoph Putz
{"title":"Digital Playbook - A Teaching Tool for American Football","authors":"M. Vorstandlechner, M. Gelautz, Christoph Putz","doi":"10.2352/ISSN.2470-1173.2017.16.CVAS-347","DOIUrl":"https://doi.org/10.2352/ISSN.2470-1173.2017.16.CVAS-347","url":null,"abstract":"","PeriodicalId":261646,"journal":{"name":"Computer Vision Applications in Sports","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130007654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Goal!! Event detection in sports video
Computer Vision Applications in Sports Pub Date : 2017-01-29 DOI: 10.2352/ISSN.2470-1173.2017.16.CVAS-344
Grigorios Tsagkatakis, M. Jaber, P. Tsakalides
{"title":"Goal!! Event detection in sports video","authors":"Grigorios Tsagkatakis, M. Jaber, P. Tsakalides","doi":"10.2352/ISSN.2470-1173.2017.16.CVAS-344","DOIUrl":"https://doi.org/10.2352/ISSN.2470-1173.2017.16.CVAS-344","url":null,"abstract":"Understanding complex events from unstructured video, like scoring a goal in a football game, is an extremely challenging task due to the dynamics, complexity and variation of video sequences. In this work, we attack this problem exploiting the capabilities of the recently developed framework of deep learning. We consider independently encoding spatial and temporal information via convolutional neural networks and fusion of features via regularized Autoencoders. To demonstrate the capacities of the proposed scheme, a new dataset is compiled, composed of goal and no-goal sequences. Experimental results demonstrate that extremely high classification accuracy can be achieved, from a dramatically limited number of examples, by leveraging pretrained models with fine-tuned fusion of spatio-temporal features. Introduction Analyzing unstructured video streams is a challenging task for multiple reasons [10]. A first challenge is associated with the complexity of real world dynamics that are manifested in such video streams, including changes in viewpoint, illumination and quality. In addition, while annotated image datasets are prevalent, a smaller number of labeled datasets are available for video analytics. Last, the analysis of massive, high dimensional video streams is extremely demanding, requiring significantly higher computational resources compared to still imagery [11]. In this work, we focus on the analysis of a particular type of videos showing multi-person sport activities and more specifically football (soccer) games. Sport videos in general are acquired from different vantage points and the decision of selecting a single stream for broadcasting is taken by the director. As a result, the broadcasted video stream is characterized by varying acquisition conditions like zooming-in near the goalpost during a goal and zooming-out to cover the full field. In this complex situation, we consider the high level objective of detecting specific and semantically meaningful events like an opponent team scoring a goal. Succeeding in this task will allow the automatic transcription of games, video summarization and automatic statistical analysis. Despite the many challenges associated with video analytics, the human brain is able to extract meaning and provide contextual information in a limited amount of time and from a limited set of training examples. From a computational perspective, the process of event detection in a video sequence amounts to two foundamental steps, namely (i) spatio-temporal feature extraction and (ii) example classification. Typically, feature extraction approaches rely on highly engineered handcrafted features like the SIFT, which however are not able to generalize to more challenging cases. 
To achieve this objective, we consider the state-of-theart framework of deep learning [18] and more specifically the case of Convolutional Neural Networks (CNNs) [16], which has taken by storm almost all problems related to computer vision, ran","PeriodicalId":261646,"journal":{"name":"Computer Vision Applications in Sports","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126482068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
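A toy sketch of the two-stream idea described above: per-clip spatial and temporal feature vectors (e.g. taken from pretrained CNNs) are concatenated, compressed by a small encoder, and classified as goal or no-goal. This is not the authors' architecture; the regularized-autoencoder fusion and fine-tuning are simplified to a single bottleneck layer, and all dimensions and names are assumptions (Python/PyTorch).

import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Concatenate spatial and temporal clip features, compress them with a
    small bottleneck, then emit goal / no-goal logits."""

    def __init__(self, spatial_dim=2048, temporal_dim=2048, fused_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(spatial_dim + temporal_dim, fused_dim),
            nn.ReLU(),
        )
        self.classifier = nn.Linear(fused_dim, 2)  # goal / no-goal logits

    def forward(self, spatial_feat, temporal_feat):
        fused = self.encoder(torch.cat([spatial_feat, temporal_feat], dim=1))
        return self.classifier(fused)

# Features for a batch of 4 clips, standing in for pretrained CNN outputs.
spatial = torch.randn(4, 2048)
temporal = torch.randn(4, 2048)
logits = FusionClassifier()(spatial, temporal)
print(logits.shape)  # torch.Size([4, 2])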