Rim Trabelsi, Jagannadan Varadarajan, Yong Pei, Le Zhang, I. Jabri, A. Bouallègue, P. Moulin
{"title":"基于鲁棒多模态线索的二元人类交互识别","authors":"Rim Trabelsi, Jagannadan Varadarajan, Yong Pei, Le Zhang, I. Jabri, A. Bouallègue, P. Moulin","doi":"10.1145/3132515.3132517","DOIUrl":null,"url":null,"abstract":"Activity analysis methods usually tend to focus on elementary human actions but ignore to analyze complex scenarios. In this paper, we focus particularly on classifying interactions between two persons in a supervised fashion. We propose a robust multi-modal proxemic descriptor based on 3D joint locations, depth and color videos. The proposed descriptor incorporates inter-person and intra-person joint distances calculated from 3D skeleton data and multi-frame dense optical flow features obtained from the application of temporal Convolutional neural networks (CNN) on depth and color images. The descriptors from the three modalities are derived from sparse key-frames surrounding high activity content and fused using a linear SVM classifier. Through experiments on two publicly available RGB-D interaction datasets, we show that our method can efficiently classify complex interactions using only short video snippet, outperforming existing state-of-the-art results.","PeriodicalId":395519,"journal":{"name":"Proceedings of the Workshop on Multimodal Understanding of Social, Affective and Subjective Attributes","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Robust Multi-Modal Cues for Dyadic Human Interaction Recognition\",\"authors\":\"Rim Trabelsi, Jagannadan Varadarajan, Yong Pei, Le Zhang, I. Jabri, A. Bouallègue, P. Moulin\",\"doi\":\"10.1145/3132515.3132517\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Activity analysis methods usually tend to focus on elementary human actions but ignore to analyze complex scenarios. In this paper, we focus particularly on classifying interactions between two persons in a supervised fashion. We propose a robust multi-modal proxemic descriptor based on 3D joint locations, depth and color videos. The proposed descriptor incorporates inter-person and intra-person joint distances calculated from 3D skeleton data and multi-frame dense optical flow features obtained from the application of temporal Convolutional neural networks (CNN) on depth and color images. The descriptors from the three modalities are derived from sparse key-frames surrounding high activity content and fused using a linear SVM classifier. 
Through experiments on two publicly available RGB-D interaction datasets, we show that our method can efficiently classify complex interactions using only short video snippet, outperforming existing state-of-the-art results.\",\"PeriodicalId\":395519,\"journal\":{\"name\":\"Proceedings of the Workshop on Multimodal Understanding of Social, Affective and Subjective Attributes\",\"volume\":\"23 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-10-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Workshop on Multimodal Understanding of Social, Affective and Subjective Attributes\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3132515.3132517\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Workshop on Multimodal Understanding of Social, Affective and Subjective Attributes","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3132515.3132517","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Robust Multi-Modal Cues for Dyadic Human Interaction Recognition
Activity analysis methods usually focus on elementary human actions while neglecting more complex scenarios. In this paper, we focus on classifying interactions between two persons in a supervised fashion. We propose a robust multi-modal proxemic descriptor based on 3D joint locations, depth, and color videos. The proposed descriptor incorporates inter-person and intra-person joint distances calculated from 3D skeleton data, together with multi-frame dense optical flow features obtained by applying temporal convolutional neural networks (CNNs) to depth and color images. The descriptors from the three modalities are derived from sparse key-frames surrounding high-activity content and fused using a linear SVM classifier. Through experiments on two publicly available RGB-D interaction datasets, we show that our method can efficiently classify complex interactions using only short video snippets, outperforming existing state-of-the-art results.
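To make the descriptor construction concrete, the sketch below illustrates one plausible reading of the abstract: pairwise intra-person and inter-person 3D joint distances computed from two skeletons, followed by a linear-SVM fusion of concatenated per-modality features. This is not the authors' implementation; function names such as `joint_distance_descriptor` and `fuse_and_train`, the flattening scheme, and the feature inputs are illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): a simple proxemic descriptor
# from inter-person and intra-person 3D joint distances, plus a hypothetical
# fusion of skeleton, depth-flow, and color-flow features with a linear SVM.
import numpy as np
from sklearn.svm import LinearSVC


def joint_distance_descriptor(skeleton_a, skeleton_b):
    """Build a proxemic descriptor from two skeletons of shape (J, 3),
    where J is the number of joints and each row is an (x, y, z) location."""
    # Intra-person distances: pairwise joint distances within each skeleton.
    intra_a = np.linalg.norm(skeleton_a[:, None, :] - skeleton_a[None, :, :], axis=-1)
    intra_b = np.linalg.norm(skeleton_b[:, None, :] - skeleton_b[None, :, :], axis=-1)
    # Inter-person distances: every joint of person A vs. every joint of person B.
    inter = np.linalg.norm(skeleton_a[:, None, :] - skeleton_b[None, :, :], axis=-1)
    # Keep only the upper triangle of the symmetric intra-person matrices
    # to avoid duplicate entries, then flatten everything into one vector.
    iu = np.triu_indices(skeleton_a.shape[0], k=1)
    return np.concatenate([intra_a[iu], intra_b[iu], inter.ravel()])


def fuse_and_train(skeleton_feats, depth_feats, color_feats, labels):
    """Hypothetical late-fusion step: concatenate per-key-frame features from
    the three modalities and train a linear SVM, as the abstract describes."""
    X = np.concatenate([skeleton_feats, depth_feats, color_feats], axis=1)
    clf = LinearSVC(C=1.0)
    clf.fit(X, labels)
    return clf
```

The CNN-based optical-flow features for the depth and color streams would be precomputed separately and passed in as `depth_feats` and `color_feats`; only the skeleton-derived proxemic term is spelled out here.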