Anatomical Planes-Based Representation for Recognizing Two-Person Interactions from Partially Observed Video Sequences: A Feasibility Study

R. Alazrai, Mohammad Hababeh, B. Alsaify, M. Daoud
{"title":"Anatomical Planes-Based Representation for Recognizing Two-Person Interactions from Partially Observed Video Sequences: A Feasibility Study","authors":"R. Alazrai, Mohammad Hababeh, B. Alsaify, M. Daoud","doi":"10.1109/ICEEE52452.2021.9415910","DOIUrl":null,"url":null,"abstract":"This paper presents a new approach for two-person interaction recognition from partially observed video sequences. The proposed approach employs the 3D joint positions captured by a Microsoft Kinect sensor to construct a view-invariant anatomical planes-based descriptor, called the two-person motion-pose geometric descriptor (TP-MPGD), that quantifies the activities performed by two interacting persons at each video frame. Using the TP-MPGDs extracted from the frames of the input videos, we construct a two-phase classification framework to recognize the class of the interaction performed by two persons. The performance of the proposed approach has been evaluated using a publicly available interaction dataset that comprises the 3D joint positions data recorded using the Kinect sensor for 21 pairs of subjects while performing eight interactions. Moreover, we have developed five different evaluation scenarios, including one evaluation scenario that is based on fully observed video sequences and four other evaluation scenarios that are based on partially observed video sequences. The classification accuracies obtained for each of the five evaluation scenarios demonstrate the feasibility of our proposed approach to recognize two-person interactions from fully observed and partially observed video sequences.","PeriodicalId":429645,"journal":{"name":"2021 8th International Conference on Electrical and Electronics Engineering (ICEEE)","volume":"93 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 8th International Conference on Electrical and Electronics Engineering (ICEEE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICEEE52452.2021.9415910","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This paper presents a new approach for recognizing two-person interactions from partially observed video sequences. The proposed approach employs the 3D joint positions captured by a Microsoft Kinect sensor to construct a view-invariant, anatomical planes-based descriptor, called the two-person motion-pose geometric descriptor (TP-MPGD), that quantifies the activities performed by two interacting persons at each video frame. Using the TP-MPGDs extracted from the frames of the input videos, we construct a two-phase classification framework to recognize the class of interaction performed by the two persons. The performance of the proposed approach is evaluated using a publicly available interaction dataset that comprises 3D joint position data recorded with the Kinect sensor for 21 pairs of subjects performing eight interactions. Moreover, we develop five evaluation scenarios: one based on fully observed video sequences and four based on partially observed video sequences. The classification accuracies obtained for each of the five scenarios demonstrate the feasibility of the proposed approach for recognizing two-person interactions from both fully and partially observed video sequences.
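The abstract does not spell out how the anatomical planes are derived from the skeleton data, but a common construction estimates each person's sagittal, coronal, and transverse planes from the torso joints and then expresses the partner's joints in that body-centered frame, which is what makes such a descriptor view-invariant. The sketch below illustrates this idea in Python with NumPy; the joint indices, function names, and the prefix-truncation used to mimic partial observation are assumptions for illustration, not the paper's actual TP-MPGD definition.

```python
import numpy as np

# Minimal sketch (not the paper's actual TP-MPGD definition): estimate a
# person's sagittal, coronal, and transverse planes from torso joints and
# express the partner's joints in that body-centered frame, removing the
# dependence on the camera viewpoint. Joint indices below assume the
# 20-joint Kinect v1 skeleton and are placeholders.
HIP_CENTER, SPINE, SHOULDER_LEFT, SHOULDER_RIGHT = 0, 1, 4, 8

def anatomical_planes(joints):
    """Return a (3, 3) array whose rows are the unit normals of the
    sagittal, coronal, and transverse planes of one skeleton.

    joints: (20, 3) array of 3D joint positions for one person.
    """
    up = joints[SPINE] - joints[HIP_CENTER]                  # body's vertical axis
    across = joints[SHOULDER_RIGHT] - joints[SHOULDER_LEFT]  # left-to-right axis
    transverse = up / np.linalg.norm(up)                     # transverse-plane normal
    # Gram-Schmidt step: force the shoulder axis to be orthogonal to the
    # spine axis so the three normals form a proper orthonormal frame.
    sagittal = across - across.dot(transverse) * transverse
    sagittal /= np.linalg.norm(sagittal)                     # sagittal-plane normal
    coronal = np.cross(transverse, sagittal)                 # coronal-plane normal
    return np.stack([sagittal, coronal, transverse])

def frame_descriptor(joints_a, joints_b):
    """Per-frame, view-invariant feature for a pair of skeletons: each
    person's joints expressed in the other person's anatomical frame."""
    planes_a = anatomical_planes(joints_a)
    planes_b = anatomical_planes(joints_b)
    rel_ab = (joints_b - joints_a[HIP_CENTER]) @ planes_a.T  # B as seen from A
    rel_ba = (joints_a - joints_b[HIP_CENTER]) @ planes_b.T  # A as seen from B
    return np.concatenate([rel_ab.ravel(), rel_ba.ravel()])

def partial_observation(frames, observed_ratio):
    """Keep only the leading fraction of a sequence, mimicking the
    partially observed evaluation scenarios (prefix truncation is an
    assumption; the paper may segment sequences differently)."""
    n = max(1, int(len(frames) * observed_ratio))
    return frames[:n]

# Example: descriptor for one synthetic frame of two skeletons.
rng = np.random.default_rng(0)
skel_a = rng.normal(size=(20, 3))
skel_b = rng.normal(size=(20, 3)) + np.array([1.0, 0.0, 0.0])
print(frame_descriptor(skel_a, skel_b).shape)  # (120,)
```

Under these assumptions, stacking the per-frame descriptors over however many frames have been observed would yield the sequence-level representation passed to a classifier, which is consistent with evaluating the same pipeline on both fully and partially observed sequences.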