{"title":"基于智能手机传感器的变压器人体活动识别","authors":"Y. Liang, Kaile Feng, Zizhuo Ren","doi":"10.1109/CCAI57533.2023.10201297","DOIUrl":null,"url":null,"abstract":"Capturing the spatial and temporal relationships of time-series signals is a significant obstacle for human activity recognition based on wearable devices. Traditional artificial intelligence algorithms cannot handle it well, with convolution-based models focusing on local feature extraction and recurrent networks lacking consideration of the spatial domain. This paper offers a deep learning architecture based on transformer to address the aforementioned issue with data collected from smart-phones embedded with three-axis accelerometers. The transformer model, as a deep learning network mainly applied to natural language processing (NLP), is good at processing time-series information, where the self-attention mechanism captures the dependencies of perceptual signals in the temporal and spatial domains, improving the overall comprehensibility. We implement convolutional neural networks (CNN) and long and short-term memory networks (LSTM) for evaluation while our proposed model achieves an average classification accuracy of 94.84%, which is an improvement compared to the traditional model.","PeriodicalId":285760,"journal":{"name":"2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Human Activity Recognition Based on Transformer via Smart-phone Sensors\",\"authors\":\"Y. Liang, Kaile Feng, Zizhuo Ren\",\"doi\":\"10.1109/CCAI57533.2023.10201297\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Capturing the spatial and temporal relationships of time-series signals is a significant obstacle for human activity recognition based on wearable devices. Traditional artificial intelligence algorithms cannot handle it well, with convolution-based models focusing on local feature extraction and recurrent networks lacking consideration of the spatial domain. This paper offers a deep learning architecture based on transformer to address the aforementioned issue with data collected from smart-phones embedded with three-axis accelerometers. The transformer model, as a deep learning network mainly applied to natural language processing (NLP), is good at processing time-series information, where the self-attention mechanism captures the dependencies of perceptual signals in the temporal and spatial domains, improving the overall comprehensibility. 
We implement convolutional neural networks (CNN) and long and short-term memory networks (LSTM) for evaluation while our proposed model achieves an average classification accuracy of 94.84%, which is an improvement compared to the traditional model.\",\"PeriodicalId\":285760,\"journal\":{\"name\":\"2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI)\",\"volume\":\"41 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CCAI57533.2023.10201297\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCAI57533.2023.10201297","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Human Activity Recognition Based on Transformer via Smart-phone Sensors
Capturing the spatial and temporal relationships in time-series signals is a significant obstacle for human activity recognition based on wearable devices. Traditional artificial intelligence algorithms handle this poorly: convolution-based models focus on local feature extraction, while recurrent networks neglect the spatial domain. This paper presents a transformer-based deep learning architecture to address this issue, using data collected from smart-phones equipped with three-axis accelerometers. The transformer, a deep learning model mainly applied in natural language processing (NLP), is well suited to time-series information: its self-attention mechanism captures the dependencies of the sensor signals in both the temporal and spatial domains, improving the model's overall understanding of the data. We implement convolutional neural network (CNN) and long short-term memory (LSTM) baselines for comparison; our proposed model achieves an average classification accuracy of 94.84%, an improvement over these traditional models.
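To make the approach concrete, the following is a minimal PyTorch sketch of a transformer-encoder classifier for fixed-length windows of three-axis accelerometer data. It is not the authors' exact architecture: the window length (128 samples), model width, head and layer counts, and the six-class output are illustrative assumptions only.

```python
# Illustrative sketch (not the paper's exact model): a transformer-encoder
# classifier over windows of three-axis accelerometer samples.
# Hyperparameters below are assumed values, not taken from the paper.
import torch
import torch.nn as nn


class HARTransformer(nn.Module):
    def __init__(self, num_classes: int = 6, window_len: int = 128,
                 in_channels: int = 3, d_model: int = 64,
                 n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        # Project each 3-axis sample into the model dimension.
        self.input_proj = nn.Linear(in_channels, d_model)
        # Learnable positional embedding preserves temporal order.
        self.pos_embed = nn.Parameter(torch.zeros(1, window_len, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=128,
            batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window_len, 3) raw accelerometer window
        h = self.input_proj(x) + self.pos_embed   # (batch, T, d_model)
        h = self.encoder(h)                       # self-attention over time steps
        h = h.mean(dim=1)                         # average-pool over the window
        return self.classifier(h)                 # activity logits


if __name__ == "__main__":
    model = HARTransformer()
    dummy = torch.randn(8, 128, 3)   # batch of 8 accelerometer windows
    print(model(dummy).shape)        # torch.Size([8, 6])
```

In a sketch like this, self-attention lets every time step attend to every other step in the window, which is how the temporal dependencies described above are captured; the per-sample linear projection mixes the three accelerometer axes, giving the model a view of the spatial (cross-axis) structure as well.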