{"title":"稀疏深度自编码器估计基于骨骼的步态异常指数","authors":"Trong-Nguyen Nguyen, H. Huynh, J. Meunier","doi":"10.1109/CCE.2018.8465714","DOIUrl":null,"url":null,"abstract":"This paper proposes an approach estimating a gait abnormality index based on skeletal information provided by a depth camera. Differently from related works where the extraction of hand-crafted features is required to describe gait characteristics, our method automatically performs that stage with the support of a deep auto-encoder. In order to get visually interpretable features, we embedded a constraint of sparsity into the model. Similarly to most gait-related studies, the temporal factor is also considered as a post-processing in our system. This method provided promising results when experimenting on a dataset containing nearly one hundred thousand skeleton samples.","PeriodicalId":118716,"journal":{"name":"2018 IEEE Seventh International Conference on Communications and Electronics (ICCE)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":"{\"title\":\"Estimating skeleton-based gait abnormality index by sparse deep auto-encoder\",\"authors\":\"Trong-Nguyen Nguyen, H. Huynh, J. Meunier\",\"doi\":\"10.1109/CCE.2018.8465714\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper proposes an approach estimating a gait abnormality index based on skeletal information provided by a depth camera. Differently from related works where the extraction of hand-crafted features is required to describe gait characteristics, our method automatically performs that stage with the support of a deep auto-encoder. In order to get visually interpretable features, we embedded a constraint of sparsity into the model. Similarly to most gait-related studies, the temporal factor is also considered as a post-processing in our system. This method provided promising results when experimenting on a dataset containing nearly one hundred thousand skeleton samples.\",\"PeriodicalId\":118716,\"journal\":{\"name\":\"2018 IEEE Seventh International Conference on Communications and Electronics (ICCE)\",\"volume\":\"23 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"11\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 IEEE Seventh International Conference on Communications and Electronics (ICCE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CCE.2018.8465714\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE Seventh International Conference on Communications and Electronics (ICCE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCE.2018.8465714","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Estimating skeleton-based gait abnormality index by sparse deep auto-encoder
This paper proposes an approach for estimating a gait abnormality index from skeletal information provided by a depth camera. Unlike related works, which require hand-crafted features to describe gait characteristics, our method performs that stage automatically with the support of a deep auto-encoder. To obtain visually interpretable features, we embed a sparsity constraint into the model. As in most gait-related studies, the temporal factor is also considered, as a post-processing step in our system. The method provided promising results in experiments on a dataset containing nearly one hundred thousand skeleton samples.
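To make the idea concrete, below is a minimal sketch (not the authors' implementation) of the general technique the abstract describes: an auto-encoder trained on skeleton frames with a sparsity penalty on its hidden activations, whose per-frame reconstruction error is aggregated over time to yield an abnormality score. The layer sizes, the L1 sparsity penalty, the 75-dimensional input (assuming 25 joints × 3 coordinates, as from a Kinect v2), and all function names here are illustrative assumptions; the paper's exact architecture and index definition may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoEncoder(nn.Module):
    """Auto-encoder over flattened skeleton frames (hypothetical sizes)."""

    def __init__(self, input_dim: int = 75, hidden_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, hidden_dim), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)          # sparse latent code
        return self.decoder(z), z

def sparse_ae_loss(x, x_hat, z, sparsity_weight: float = 1e-3):
    # Reconstruction term plus an L1 penalty that pushes hidden
    # activations toward zero (one common way to impose sparsity).
    return F.mse_loss(x_hat, x) + sparsity_weight * z.abs().mean()

def abnormality_index(model: SparseAutoEncoder, frames: torch.Tensor) -> float:
    """frames: (T, 75) tensor of one gait sequence.

    Per-frame reconstruction error, averaged over the sequence as a
    simple stand-in for the temporal post-processing step.
    """
    model.eval()
    with torch.no_grad():
        x_hat, _ = model(frames)
        per_frame = ((frames - x_hat) ** 2).mean(dim=1)
    return per_frame.mean().item()
```

Under this framing, the model would be trained only on normal gait sequences, so abnormal frames reconstruct poorly and raise the index; the sparsity penalty is what the abstract credits with making the learned features visually interpretable.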