S. Yunas, Abdullah S. Alharthi, K. Ozanyan
2020 IEEE Sensors Applications Symposium (SAS), March 2020
DOI: 10.1109/SAS48726.2020.9220037
Multi-modality sensor fusion for gait classification using deep learning
Human gait has been acquired and studied through modalities such as video cameras, inertial sensors, and floor sensors. Owing to environmental constraints such as illumination, noise, drift over extended periods, or restricted environments, the f-score of gait classification depends strongly on the usage scenario. This work addresses the issue by fusing data obtained from 1) ambulatory inertial sensors (AIS) and 2) plastic optical fiber-based floor sensors (FS). Four gait activities were executed by 11 subjects on the FS while wearing the AIS. By learning the best data representations from both modalities, the proposed sensor fusion method achieves classification f-scores of 88% with an artificial neural network (ANN) and 91% with a convolutional neural network (CNN).
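The fusion idea described above can be sketched as feature-level fusion: per-sample feature vectors from each modality are normalized and concatenated before being fed to a classifier. The snippet below is a minimal illustration only; the feature dimensions, the random data, and the single softmax layer standing in for the paper's ANN/CNN are all assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample feature vectors from the two modalities
# (shapes are illustrative, not taken from the paper).
n_samples, n_ais, n_fs, n_classes = 8, 6, 10, 4
ais_features = rng.normal(size=(n_samples, n_ais))  # ambulatory inertial sensors
fs_features = rng.normal(size=(n_samples, n_fs))    # optical-fiber floor sensors

def fuse_features(ais, fs):
    """Feature-level fusion: z-score each modality, then concatenate."""
    def zscore(x):
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)
    return np.concatenate([zscore(ais), zscore(fs)], axis=1)

fused = fuse_features(ais_features, fs_features)

# A single dense softmax layer stands in for the ANN/CNN classifier head.
w = rng.normal(size=(fused.shape[1], n_classes)) * 0.1
logits = fused @ w
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
predicted_gait = probs.argmax(axis=1)  # index of one of the four gait activities
```

Normalizing each modality before concatenation prevents the modality with the larger dynamic range from dominating the learned representation, which is one common motivation for fusing at the feature level rather than simply stacking raw sensor streams.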