Frames Extraction from Table Tennis Competition Videos for Action Classification Using Optical Flow and Fuzzy Rules

Chao-Jen Wang, Jieh-Ren Chang, H. Lin, Chiu-Ju Lu
{"title":"Frames Extraction from Table Tennis Competition Videos for Action Classification Using Optical Flow and Fuzzy Rules","authors":"Chao-Jen Wang, Jieh-Ren Chang, H. Lin, Chiu-Ju Lu","doi":"10.1109/ICASI57738.2023.10179577","DOIUrl":null,"url":null,"abstract":"To recognize actions using a neural network model, it is necessary to extract the correct frames from the video for the input of model. Extraction of frames is an important issue that could be poor recognition results or costs computation time. This study proposes a new extraction method that combines optical flow and fuzzy rules. First, optical flow is used to calculate the values of the x and y vectors of the motion in consecutive frames. After expert discussion, rules are formulated to define the optical flow values for each action as fuzzy semantic words and stored as a fuzzy rule base. For the experiment, serving action is further subdivided into tossing, hitting and receiving parts in table tennis video. Using fuzzy rules based on the x and y optical flow values of different actions, the current action type can be determined, and action frames can be extracted more accurately, improving the accuracy of table tennis action recognition, the final result of table tennis action recognition reached up to 69.8% accuracy.","PeriodicalId":281254,"journal":{"name":"2023 9th International Conference on Applied System Innovation (ICASI)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 9th International Conference on Applied System Innovation (ICASI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICASI57738.2023.10179577","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

To recognize actions with a neural network model, the correct frames must first be extracted from the video as input to the model. Frame extraction is an important issue: poor extraction can degrade recognition results or waste computation time. This study proposes a new extraction method that combines optical flow and fuzzy rules. First, optical flow is used to calculate the x and y motion vectors between consecutive frames. After expert discussion, rules that define the optical flow values of each action as fuzzy semantic terms are formulated and stored in a fuzzy rule base. For the experiment, the serving action in table tennis videos is further subdivided into tossing, hitting, and receiving parts. Using fuzzy rules based on the x and y optical flow values of different actions, the current action type can be determined and action frames can be extracted more accurately, improving table tennis action recognition; the final recognition accuracy reached 69.8%.
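The sketch below illustrates the general idea described in the abstract, not the authors' implementation: dense optical flow is computed for each pair of consecutive frames, the mean x and y flow components are taken as motion features, and simple crisp thresholds stand in for the paper's fuzzy rule base to label frames as toss, hit, or receive. The Farneback optical flow from OpenCV, the threshold values, the action labels, and the video filename are all assumptions for illustration.

```python
# Minimal sketch, assuming OpenCV dense optical flow and crisp threshold
# "rules" in place of the paper's fuzzy rule base. Thresholds and labels
# are illustrative placeholders, not values from the paper.
import cv2
import numpy as np


def mean_flow(prev_gray, curr_gray):
    """Mean x and y optical-flow components between two grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    return float(np.mean(flow[..., 0])), float(np.mean(flow[..., 1]))


def classify(fx, fy):
    """Crisp stand-in for the fuzzy rules: map (fx, fy) to an action label.
    In image coordinates y grows downward, so upward motion gives negative fy."""
    if fy < -0.5 and abs(fx) < 0.5:
        return "toss"                            # dominant upward motion
    if abs(fx) > 1.0:
        return "hit" if fx > 0 else "receive"    # strong horizontal motion
    return "idle"


def label_frames(video_path):
    """Label each frame of a video by the motion between it and its predecessor."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        cap.release()
        return []
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    labels = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        fx, fy = mean_flow(prev_gray, gray)
        labels.append(classify(fx, fy))
        prev_gray = gray
    cap.release()
    return labels


if __name__ == "__main__":
    # "serve_clip.mp4" is a hypothetical input clip.
    print(label_frames("serve_clip.mp4")[:30])
```

In the paper's approach the crisp thresholds above would be replaced by fuzzy membership functions over the x and y flow values and a rule base agreed on by experts, which lets borderline frames be handled gracefully rather than by hard cutoffs.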