{"title":"基于几何特征的人体肢体语言理解动作检测","authors":"Neha Shirbhate, K. Talele","doi":"10.1109/IC3I.2016.7918034","DOIUrl":null,"url":null,"abstract":"In human interaction, understanding human behaviors is a challenging problem in todays world. Action recognition has become a very important topic in detecting the emotional activity with many fundamental applications, such as in robotics, video surveillance, human-computer interaction. In this paper, we are proposing a system that uses semantic rules to define emotional activities. First, we apply morphological operation on pre-processing frame. Then by segmentation process, image is partitioned into multiple regions multiple regions which intended to extract the object. Once extract the object, action representation derives behavior of object in specific time. Using temporal and spatial properties of the objects, emotions are classified using semantics-based approach. Further the actions are classified as sitting posture and standing posture. Here, sitting posture concludes activity to be recognized as either relaxed or hands on forehead(tensed). While standing posture concludes activity recognized as loitering or fidgetting. We have opted for semantics-based approach instead of machine learning enables us to detect the actions without requiring to train the system. This also makes the system better performance-wise; and enables action detection in real time.","PeriodicalId":305971,"journal":{"name":"2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)","volume":"98 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Human body language understanding for action detection using geometric features\",\"authors\":\"Neha Shirbhate, K. Talele\",\"doi\":\"10.1109/IC3I.2016.7918034\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In human interaction, understanding human behaviors is a challenging problem in todays world. Action recognition has become a very important topic in detecting the emotional activity with many fundamental applications, such as in robotics, video surveillance, human-computer interaction. In this paper, we are proposing a system that uses semantic rules to define emotional activities. First, we apply morphological operation on pre-processing frame. Then by segmentation process, image is partitioned into multiple regions multiple regions which intended to extract the object. Once extract the object, action representation derives behavior of object in specific time. Using temporal and spatial properties of the objects, emotions are classified using semantics-based approach. Further the actions are classified as sitting posture and standing posture. Here, sitting posture concludes activity to be recognized as either relaxed or hands on forehead(tensed). While standing posture concludes activity recognized as loitering or fidgetting. We have opted for semantics-based approach instead of machine learning enables us to detect the actions without requiring to train the system. 
This also makes the system better performance-wise; and enables action detection in real time.\",\"PeriodicalId\":305971,\"journal\":{\"name\":\"2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)\",\"volume\":\"98 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IC3I.2016.7918034\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IC3I.2016.7918034","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Human body language understanding for action detection using geometric features
Understanding human behavior during interaction is a challenging problem in today's world. Action recognition has become an important topic in detecting emotional activity, with many fundamental applications in robotics, video surveillance, and human-computer interaction. In this paper, we propose a system that uses semantic rules to define emotional activities. First, we apply morphological operations to each pre-processed frame. A segmentation step then partitions the image into multiple regions in order to extract the object of interest. Once the object is extracted, an action representation derives the object's behavior over a specific time window. Using the temporal and spatial properties of the object, emotions are classified with a semantics-based approach. The actions are further divided into sitting and standing postures: a sitting posture is recognized as either relaxed or hands on forehead (tensed), while a standing posture is recognized as either loitering or fidgeting. Opting for a semantics-based approach instead of machine learning enables us to detect actions without training the system; this also improves performance and enables action detection in real time.
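
For concreteness, the sketch below illustrates the kind of pipeline the abstract describes: background subtraction with morphological clean-up, contour-based object extraction, and semantic rules over geometric (spatial) and centroid-track (temporal) features. It is a minimal illustration assuming Python and OpenCV 4; the specific thresholds (the 1.8 aspect-ratio cut-off, the pixel displacement limits) and all function names are our assumptions for illustration, not the paper's actual parameters or code.

    import cv2
    import numpy as np

    def extract_subject(frame, subtractor, kernel):
        """Foreground mask -> morphological clean-up -> largest contour."""
        mask = subtractor.apply(frame)
        # Opening removes speckle noise; closing fills small holes in the silhouette.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        return max(contours, key=cv2.contourArea)  # assume the person is the largest blob

    def classify_posture(contour):
        """Spatial rule: a standing subject's bounding box is much taller than wide."""
        _, _, w, h = cv2.boundingRect(contour)
        return "standing" if h / float(w) > 1.8 else "sitting"  # assumed threshold

    def classify_action(posture, centroids):
        """Temporal rule: net centroid displacement over the observation window."""
        displacement = np.linalg.norm(centroids[-1] - centroids[0])
        if posture == "standing":
            # Sustained net movement -> loitering; small in-place jitter -> fidgeting.
            return "loitering" if displacement > 50 else "fidgeting"
        # Sitting: the relaxed vs. hands-on-forehead (tensed) split would really need
        # a hand/head region test on the upper contour; motion is a crude stand-in.
        return "relaxed" if displacement < 10 else "tensed"

    cap = cv2.VideoCapture(0)  # assumed live camera source
    subtractor = cv2.createBackgroundSubtractorMOG2()
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    centroids = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        contour = extract_subject(frame, subtractor, kernel)
        if contour is None:
            continue
        x, y, w, h = cv2.boundingRect(contour)
        centroids.append(np.array([x + w / 2.0, y + h / 2.0]))
        centroids = centroids[-30:]  # ~1 s sliding window at 30 fps
        if len(centroids) == 30:
            posture = classify_posture(contour)
            print(posture, classify_action(posture, centroids))
    cap.release()

Because every decision here is a constant-time geometric test rather than a model inference, a rule-based design of this kind needs no training data and can run in real time, which matches the abstract's stated reason for preferring semantic rules over machine learning.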