Jungin Park, Sangryul Jeon, Seungryong Kim, Jiyoung Lee, Sunok Kim, K. Sohn

Proceedings of the 1st Workshop and Challenge on Comprehensive Video Understanding in the Wild, 2018-10-15. DOI: 10.1145/3265987.3265989
Learning to Detect, Associate, and Recognize Human Actions and Surrounding Scenes in Untrimmed Videos
While recognizing human actions and recognizing surrounding scenes address different aspects of video understanding, the two tasks are strongly correlated and can complement each other. In this paper, we propose an approach for joint action and scene recognition, formulated in an end-to-end learning framework based on temporal attention modules and their fusion. By applying temporal attention modules to a generic feature network, action and scene features are extracted efficiently and then composed into a single feature vector through the proposed fusion module. Our experiments on the CoVieW18 dataset show that our model detects temporal attention with only weak supervision and remarkably improves multi-task action and scene classification accuracy.
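The abstract describes applying temporal attention to per-frame features from a generic backbone and fusing the attended action and scene features into one vector. The paper's exact architecture is not given here; the following is a minimal sketch of that idea, where the learned scoring vectors, the feature dimensions, and the concatenation-based fusion are all illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def temporal_attention(frame_feats, w):
    """Pool per-frame features with learned temporal attention.

    frame_feats: (T, D) generic features, one row per frame.
    w: (D,) hypothetical learned scoring vector for this task head.
    Returns the (D,) attended feature and the (T,) attention weights.
    """
    scores = frame_feats @ w        # relevance score per frame, shape (T,)
    alpha = softmax(scores)         # attention weights sum to 1 over time
    return alpha @ frame_feats, alpha

def fuse(action_feat, scene_feat):
    # Simple concatenation fusion into a single joint feature vector;
    # the paper's fusion module may differ.
    return np.concatenate([action_feat, scene_feat])

rng = np.random.default_rng(0)
frames = rng.standard_normal((8, 16))          # 8 frames, 16-dim features
a_feat, a_w = temporal_attention(frames, rng.standard_normal(16))
s_feat, s_w = temporal_attention(frames, rng.standard_normal(16))
joint = fuse(a_feat, s_feat)                   # single vector for both tasks
print(joint.shape)
```

Separate attention heads let each task focus on different frames of the untrimmed video, while the fused vector feeds shared multi-task classifiers.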