{"title":"基于三维泽尼克矩的人体动作识别","authors":"Okay Arik, A. Bingöl","doi":"10.1109/SSD.2014.6808758","DOIUrl":null,"url":null,"abstract":"In this work, 3D Zernike moments have been used to classify 7 basic coarse human actions in markerless 3D video sequences. The time trajectories of the Zernike moments of the moving subject have been taken as features. Even though Zernike moment orders of about 15 to 20 are required to characterize and/or reconstruct a general 3D image with reasonable fidelity, it has been found that fewer number of moments are sufficient for satisfactory action classification, due to the accumulative nature of video data. In our work, we have obtained greater than 95% recognition accuracy using as low as 3rd order Zernike moments, over the 7 basic actions considered. Recognition accuracy increased to more than 98% with 5th order moments.","PeriodicalId":168063,"journal":{"name":"2014 IEEE 11th International Multi-Conference on Systems, Signals & Devices (SSD14)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Human action recognition using 3D zernike moments\",\"authors\":\"Okay Arik, A. Bingöl\",\"doi\":\"10.1109/SSD.2014.6808758\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this work, 3D Zernike moments have been used to classify 7 basic coarse human actions in markerless 3D video sequences. The time trajectories of the Zernike moments of the moving subject have been taken as features. Even though Zernike moment orders of about 15 to 20 are required to characterize and/or reconstruct a general 3D image with reasonable fidelity, it has been found that fewer number of moments are sufficient for satisfactory action classification, due to the accumulative nature of video data. In our work, we have obtained greater than 95% recognition accuracy using as low as 3rd order Zernike moments, over the 7 basic actions considered. Recognition accuracy increased to more than 98% with 5th order moments.\",\"PeriodicalId\":168063,\"journal\":{\"name\":\"2014 IEEE 11th International Multi-Conference on Systems, Signals & Devices (SSD14)\",\"volume\":\"85 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2014 IEEE 11th International Multi-Conference on Systems, Signals & Devices (SSD14)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SSD.2014.6808758\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE 11th International Multi-Conference on Systems, Signals & Devices (SSD14)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SSD.2014.6808758","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
In this work, 3D Zernike moments are used to classify 7 basic coarse human actions in markerless 3D video sequences. The time trajectories of the Zernike moments of the moving subject are taken as features. Although Zernike moment orders of about 15 to 20 are required to characterize and/or reconstruct a general 3D image with reasonable fidelity, fewer moments are found to be sufficient for satisfactory action classification, owing to the accumulative nature of video data. Over the 7 basic actions considered, we obtain recognition accuracy greater than 95% using Zernike moments of order as low as 3; accuracy increases to more than 98% with 5th-order moments.
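The abstract describes a simple pipeline: compute a low-order 3D moment descriptor of the subject's voxelized shape in every frame, treat the sequence of descriptors as a time trajectory, and feed that trajectory to a classifier. The sketch below illustrates this structure only; it is not the authors' implementation. The per-frame descriptor is a stand-in (low-order central geometric moments rather than a true 3D Zernike expansion), and the `voxelize`d input format, trajectory length of 20, and nearest-neighbour classifier are assumptions made to keep the example self-contained.

```python
# Minimal sketch of the trajectory-of-moments pipeline, assuming binary
# voxel-grid frames as input. NOT the authors' method: the descriptor
# below approximates a 3D shape moment vector with low-order geometric
# moments instead of the full 3D Zernike basis.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier


def shape_moments(voxels, order=3):
    """Low-order 3D moment descriptor of a binary voxel volume.

    A faithful version would project the volume onto the 3D Zernike
    polynomial basis inside the unit ball and keep the rotation-invariant
    norms; here central geometric moments per axis are used as a placeholder.
    """
    coords = np.argwhere(voxels > 0).astype(float)
    if coords.size == 0:
        return np.zeros(order * 3)
    coords -= coords.mean(axis=0)              # translation invariance
    coords /= (np.abs(coords).max() + 1e-9)    # normalize scale to the unit cube
    feats = []
    for p in range(1, order + 1):              # moment orders 1..order
        feats.extend((coords ** p).mean(axis=0))
    return np.asarray(feats)


def trajectory_features(voxel_frames, order=3, length=20):
    """Stack per-frame descriptors and resample to a fixed-length trajectory."""
    traj = np.stack([shape_moments(f, order) for f in voxel_frames])
    idx = np.linspace(0, len(traj) - 1, length).astype(int)
    return traj[idx].ravel()


# Hypothetical usage: train_seqs / test_seqs are lists of voxel-frame
# sequences, y_train holds the 7 action labels.
# X_train = np.stack([trajectory_features(s) for s in train_seqs])
# clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
# pred = clf.predict(np.stack([trajectory_features(s) for s in test_seqs]))
```

Resampling every sequence to a common trajectory length is one simple way to compare actions of different durations; the paper's reported accuracies refer to its own feature and classification choices, not to this sketch.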