Tanvir Fatima Naik Bukht, Hameedur Rahman, Ahmad Jalal
2023 4th International Conference on Advancements in Computational Sciences (ICACS), published 2023-02-20. DOI: 10.1109/ICACS55311.2023.10089752
A Novel Framework for Human Action Recognition Based on Features Fusion and Decision Tree
Image-based detection of human actions has recently emerged as an active research area in computer vision and pattern recognition; it is concerned with detecting a person's actions or behavior from a static image. In this article, we develop an action recognition technique based on a decision tree. To enhance the clarity of the video frames, the proposed method first applies an HSI color transformation and then uses filters to minimize noise. The silhouette is extracted using a statistical method, and SIFT and ORB are used for feature extraction. Next, a parallel process extracts the shape and texture features needed for fusion. Additionally, the best high-dimensional representation for classification is explored using feature vectors and t-distributed stochastic neighbour embedding (t-SNE). In the final step, the fused features are input into a decision tree, which sorts them into the relevant human actions. The recognition rate on the UT-Interaction data used in the experiments is 94.6%.
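The SIFT-and-ORB feature-extraction step described above can be sketched with OpenCV. This is a minimal illustration, not the authors' implementation: the function name and the mean-pooling fusion rule are assumptions, since the abstract does not specify how the two descriptor sets are combined into one vector.

```python
import cv2
import numpy as np

def extract_fused_features(gray):
    """Hypothetical sketch: pool SIFT and ORB descriptors of a grayscale
    frame into fixed-length vectors and concatenate them."""
    sift = cv2.SIFT_create()
    orb = cv2.ORB_create()
    _, sift_desc = sift.detectAndCompute(gray, None)  # N x 128 float
    _, orb_desc = orb.detectAndCompute(gray, None)    # M x 32 uint8
    # Mean-pool each descriptor set to a fixed length (a common
    # simplification; the paper's exact fusion rule is not given here).
    sift_vec = sift_desc.mean(axis=0) if sift_desc is not None else np.zeros(128)
    orb_vec = orb_desc.mean(axis=0) if orb_desc is not None else np.zeros(32)
    return np.concatenate([sift_vec, orb_vec])  # 128 + 32 = 160 dims

frame = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in frame
features = extract_fused_features(frame)
```

Concatenation of pooled descriptors is only one plausible way to realize the "features fusion" of the title; the guarded `None` checks matter because either detector may find no keypoints in a flat or noisy frame.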
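The t-SNE exploration and decision-tree classification stages can be sketched with scikit-learn. The feature vectors below are synthetic stand-ins (the real inputs would be the fused shape/texture features); note that t-SNE has no `transform()` for unseen samples, so it serves here to explore class separability rather than as a learned projection applied at test time.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Hypothetical 160-dim fused feature vectors for two action classes.
X = np.vstack([rng.normal(0, 1, (40, 160)),   # class 0, e.g. "hand shake"
               rng.normal(3, 1, (40, 160))])  # class 1, e.g. "push"
y = np.array([0] * 40 + [1] * 40)

# t-SNE embeds the high-dimensional fused features into 2-D so class
# structure can be inspected, as the abstract suggests.
emb = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(X)

# A decision tree then sorts the fused features into action classes.
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
train_acc = clf.score(X, y)
```

On real data the tree would be evaluated on a held-out split of the UT-Interaction sequences rather than on its own training set.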