{"title":"基于欧几里得距离积分的骨骼人体动作识别","authors":"Yi Gao, Zhaokun Liu, Xinmeng Wu, Guangyuan Wu, Jiahui Zhao, Xiaofan Zhao","doi":"10.1145/3512576.3512585","DOIUrl":null,"url":null,"abstract":"With the growing popularity of somatosensory interaction devices, human action recognition becomes more and more attractive in many application scenarios. Nowadays, it takes a long time for some behavior recognition methods to train a model. To solve the problem, this paper proposes a human skeleton model for human action recognition. The joint coordinates are extracted using OpenPose and the thermodynamic diagram, and then we extract partial features and overall features, as well as Euclidean distances between joints. All the data collected above makes up the motion features of independent video frames. What's more, we leverage Multilayer Perceptron (MLP) classifier to classify all the motion features. On the KTH and ICPR datasets, we test the accuracy of the mode verified by changing several parameters. When the image resolution is 320*240, weight is 12 and stride is 1, the highest accuracy rate of single-person behavior recognition is 0.821. When weight is 10 and stride is 1, the highest accuracy rate of multi-person behavior recognition is 0.812, and the high running speed enables the mode to be real-time.","PeriodicalId":278114,"journal":{"name":"Proceedings of the 2021 9th International Conference on Information Technology: IoT and Smart City","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Skeleton-based Human Action Recognition by the Integration of Euclidean distance\",\"authors\":\"Yi Gao, Zhaokun Liu, Xinmeng Wu, Guangyuan Wu, Jiahui Zhao, Xiaofan Zhao\",\"doi\":\"10.1145/3512576.3512585\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the growing popularity of somatosensory interaction devices, human action recognition becomes more and more attractive in many application scenarios. Nowadays, it takes a long time for some behavior recognition methods to train a model. To solve the problem, this paper proposes a human skeleton model for human action recognition. The joint coordinates are extracted using OpenPose and the thermodynamic diagram, and then we extract partial features and overall features, as well as Euclidean distances between joints. All the data collected above makes up the motion features of independent video frames. What's more, we leverage Multilayer Perceptron (MLP) classifier to classify all the motion features. On the KTH and ICPR datasets, we test the accuracy of the mode verified by changing several parameters. When the image resolution is 320*240, weight is 12 and stride is 1, the highest accuracy rate of single-person behavior recognition is 0.821. 
When weight is 10 and stride is 1, the highest accuracy rate of multi-person behavior recognition is 0.812, and the high running speed enables the mode to be real-time.\",\"PeriodicalId\":278114,\"journal\":{\"name\":\"Proceedings of the 2021 9th International Conference on Information Technology: IoT and Smart City\",\"volume\":\"15 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2021 9th International Conference on Information Technology: IoT and Smart City\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3512576.3512585\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2021 9th International Conference on Information Technology: IoT and Smart City","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3512576.3512585","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Skeleton-based Human Action Recognition by the Integration of Euclidean distance
With the growing popularity of somatosensory interaction devices, human action recognition has become increasingly attractive in many application scenarios. However, many existing behavior recognition methods require a long time to train a model. To address this problem, this paper proposes a human skeleton model for human action recognition. Joint coordinates are extracted with OpenPose from its confidence heatmaps, and we then compute partial features, overall features, and the Euclidean distances between joints. Together, these data make up the motion features of individual video frames. A Multilayer Perceptron (MLP) classifier is then used to classify the motion features. On the KTH and ICPR datasets, we evaluate the accuracy of the model while varying several parameters. When the image resolution is 320×240, the weight is 12, and the stride is 1, the highest single-person behavior recognition accuracy is 0.821. When the weight is 10 and the stride is 1, the highest multi-person behavior recognition accuracy is 0.812, and the model's high running speed allows it to operate in real time.
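As a rough illustration (not the paper's actual implementation), the sketch below shows how per-frame motion features built from joint coordinates and pairwise Euclidean distances between joints might be assembled and passed to an MLP classifier. The 18-joint OpenPose COCO layout, the frame_features helper, the random placeholder data, and the scikit-learn MLPClassifier settings are all assumptions made for this example; the paper's specific partial and overall features are omitted.

# Minimal sketch, assuming OpenPose-style (x, y) keypoints per frame.
# Features = flattened joint coordinates + pairwise Euclidean distances,
# classified with a Multilayer Perceptron. Illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier

N_JOINTS = 18  # assumed OpenPose COCO keypoint count

def frame_features(joints):
    """Build a feature vector for one frame.

    joints: (N_JOINTS, 2) array of (x, y) joint coordinates.
    Returns the flattened coordinates concatenated with the
    Euclidean distances between every pair of joints.
    """
    coords = joints.reshape(-1)                       # raw joint coordinates
    diffs = joints[:, None, :] - joints[None, :, :]   # pairwise coordinate differences
    dists = np.linalg.norm(diffs, axis=-1)            # Euclidean distance matrix
    iu = np.triu_indices(N_JOINTS, k=1)               # keep each joint pair once
    return np.concatenate([coords, dists[iu]])

# Hypothetical training data: one feature vector per video frame, with an action label.
rng = np.random.default_rng(0)
X = np.stack([frame_features(rng.random((N_JOINTS, 2))) for _ in range(200)])
y = rng.integers(0, 6, size=200)  # e.g. six action classes, as in KTH

clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))

In practice the random arrays above would be replaced by keypoints detected by OpenPose on each video frame, and the frames would be grouped into sliding windows according to the paper's weight and stride parameters before classification.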