{"title":"Gesture Recognition Based on Flexible Data Glove Using Deep Learning Algorithms","authors":"Kai Wang, Gang Zhao","doi":"10.1109/AINIT59027.2023.10212923","DOIUrl":null,"url":null,"abstract":"Gesture recognition based on wearable devices helps to build an intelligent human-computer interaction. However, the sensing units of current gesture acquisition devices are mostly rigid MEMS with poor user experience. Meanwhile, most existing studies directly stack gesture sensing data, ignoring the interaction of gesture signals within the same modal sensing channel and between different modal sensor channels in terms of spatiotemporal characteristics. To address the above problems, we use flexible data glove as gesture capture devices and propose a framework named self-attention temporal-spatial feature fusion for gesture recognition (STFGes) to recognize gestures by integrating multi-sensors data. In addition, we conduct comprehensive experiments to build a dataset that can be used for training and testing. The experimental results show that STFGes achieves 97.02% recognition accuracy for 10 dynamic daily Chinese Sign Language (CSL) and outperforms other methods.","PeriodicalId":276778,"journal":{"name":"2023 4th International Seminar on Artificial Intelligence, Networking and Information Technology (AINIT)","volume":"111 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 4th International Seminar on Artificial Intelligence, Networking and Information Technology (AINIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AINIT59027.2023.10212923","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Gesture recognition based on wearable devices helps build intelligent human-computer interaction. However, the sensing units of current gesture acquisition devices are mostly rigid MEMS components, which degrade the user experience. Moreover, most existing studies simply stack gesture sensing data, ignoring the spatiotemporal interactions of gesture signals both within a single sensing modality and across different sensor modalities. To address these problems, we use a flexible data glove as the gesture capture device and propose a framework named self-attention temporal-spatial feature fusion for gesture recognition (STFGes), which recognizes gestures by integrating multi-sensor data. In addition, we conduct comprehensive experiments to build a dataset for training and testing. The experimental results show that STFGes achieves 97.02% recognition accuracy on 10 dynamic daily Chinese Sign Language (CSL) gestures and outperforms other methods.
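To make the idea of self-attention temporal-spatial feature fusion concrete, the sketch below shows one plausible way such a module could be structured in PyTorch: one self-attention branch attends across timesteps, another attends across sensor channels, and the pooled features are concatenated before classification. This is only an illustrative assumption; the abstract does not disclose the actual STFGes architecture, and all layer sizes, the channel count, and module names here are hypothetical.

```python
# Hypothetical sketch of a self-attention temporal-spatial fusion classifier.
# NOT the authors' STFGes implementation; sizes and layout are assumptions.
import torch
import torch.nn as nn


class TemporalSpatialFusion(nn.Module):
    def __init__(self, num_channels=15, seq_len=100, d_model=64, num_classes=10):
        super().__init__()
        # Project each per-timestep multi-sensor reading into a d_model token.
        self.embed = nn.Linear(num_channels, d_model)
        # Temporal self-attention: tokens are timesteps, mixing information across time.
        self.temporal_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        # Spatial self-attention: tokens are sensor channels, mixing information across channels.
        self.spatial_embed = nn.Linear(seq_len, d_model)
        self.spatial_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        # Fuse the two pooled feature vectors and classify into gesture classes.
        self.classifier = nn.Sequential(
            nn.Linear(2 * d_model, d_model),
            nn.ReLU(),
            nn.Linear(d_model, num_classes),
        )

    def forward(self, x):
        # x: (batch, seq_len, num_channels) glove sensor sequences.
        t = self.embed(x)                           # (batch, seq_len, d_model)
        t, _ = self.temporal_attn(t, t, t)          # attend across timesteps
        t = t.mean(dim=1)                           # pool over time

        s = self.spatial_embed(x.transpose(1, 2))   # (batch, num_channels, d_model)
        s, _ = self.spatial_attn(s, s, s)           # attend across channels
        s = s.mean(dim=1)                           # pool over channels

        return self.classifier(torch.cat([t, s], dim=-1))


if __name__ == "__main__":
    model = TemporalSpatialFusion()
    dummy = torch.randn(8, 100, 15)   # batch of 8 gesture sequences
    print(model(dummy).shape)         # torch.Size([8, 10]) class logits
```

The two-branch design mirrors the abstract's distinction between interactions within a sensing channel over time and interactions between different sensor channels; how the real framework weights or fuses these branches is not specified in the abstract.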