Location Independent Gesture Recognition Using Channel State Information

Israel Elujide, Chunhai Feng, Aref Shiran, Jian Li, Yonghe Liu
{"title":"Location Independent Gesture Recognition Using Channel State Information","authors":"Israel Elujide, Chunhai Feng, Aref Shiran, Jian Li, Yonghe Liu","doi":"10.1109/CCNC49033.2022.9700590","DOIUrl":null,"url":null,"abstract":"Gesture recognition has been the subject of intensive research in recent years owing to its wide applications. Unlike traditional systems, which usually require wearable sensors, many recent works have achieved the desirable gesture recognition performance using wireless channel state information from commercially available WiFi devices. However, existing works generally require training new models for different locations due to the location-dependent nature of channel state information. This paper proposes a location-independent system that can recognize gestures performed in a new location without training a new model. Our approach uses disentanglement that extricates location and other extraneous information from those needed for gesture recognition. The implementation is based on an unsupervised invariance induction framework consisting of feature extraction, a multi-output latent space, gesture recognition, and decoder modules. The key idea in designing this system is to separate gesture-dependent features from location-dependent features. Specifically, the feature extraction module consisting of a long short-term memory network is employed to select representative features; it essentially serves as an encoder to generate the latent space. During the training process, the network learns to cluster features representation for the gesture recognition and decoder by minimizing the total loss of the gesture recognition and decoder modules. We test our system with a dataset collected from various subjects performing four different gestures in multiple locations in seven rooms with different layouts. The results show that our location-independent gesture recognition system can achieve 88.69% accuracy for new locations.","PeriodicalId":269305,"journal":{"name":"2022 IEEE 19th Annual Consumer Communications & Networking Conference (CCNC)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 19th Annual Consumer Communications & Networking Conference (CCNC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCNC49033.2022.9700590","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Gesture recognition has been the subject of intensive research in recent years owing to its wide range of applications. Unlike traditional systems, which usually require wearable sensors, many recent works have achieved desirable gesture recognition performance using wireless channel state information (CSI) from commercially available WiFi devices. However, because CSI is location dependent, existing works generally require training a new model for each location. This paper proposes a location-independent system that can recognize gestures performed in a new location without training a new model. Our approach uses disentanglement to separate location and other extraneous information from the information needed for gesture recognition. The implementation is based on an unsupervised invariance induction framework consisting of feature extraction, multi-output latent space, gesture recognition, and decoder modules. The key idea in designing this system is to separate gesture-dependent features from location-dependent features. Specifically, the feature extraction module, a long short-term memory (LSTM) network, selects representative features; it essentially serves as an encoder that generates the latent space. During training, the network learns to cluster feature representations for the gesture recognition and decoder modules by minimizing their total loss. We test our system on a dataset collected from multiple subjects performing four different gestures at multiple locations in seven rooms with different layouts. The results show that our location-independent gesture recognition system achieves 88.69% accuracy in new locations.
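To make the described architecture concrete, below is a minimal PyTorch sketch of the disentanglement idea: an LSTM encoder over CSI sequences, a latent space split into a gesture partition and a nuisance (location) partition, a classifier on the gesture partition, and a decoder trained jointly with the classifier. All names, layer sizes, the reconstruction target, and the loss weight lam are illustrative assumptions rather than the authors' implementation; the paper's decoder may, for instance, reconstruct the raw CSI input instead of the encoder summary used here.

import torch
import torch.nn as nn

class DisentangledGestureNet(nn.Module):
    def __init__(self, csi_dim=90, hidden_dim=128, z_gesture=64,
                 z_nuisance=64, num_gestures=4):
        super().__init__()
        # Feature extraction module: LSTM over the CSI time series.
        self.encoder = nn.LSTM(csi_dim, hidden_dim, batch_first=True)
        # Multi-output latent space: one head for gesture-dependent
        # features, one for location/other nuisance features.
        self.to_gesture = nn.Linear(hidden_dim, z_gesture)
        self.to_nuisance = nn.Linear(hidden_dim, z_nuisance)
        # Gesture recognition module: classifier on the gesture partition.
        self.classifier = nn.Linear(z_gesture, num_gestures)
        # Decoder module: reconstructs the encoder summary from both
        # partitions, so nuisance information has somewhere to go.
        self.decoder = nn.Sequential(
            nn.Linear(z_gesture + z_nuisance, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim))

    def forward(self, x):
        # x: (batch, time, csi_dim) CSI sequence.
        _, (h, _) = self.encoder(x)
        h = h[-1]                  # final hidden state as sequence summary
        z_g = self.to_gesture(h)   # gesture-dependent features
        z_n = self.to_nuisance(h)  # location-dependent features
        logits = self.classifier(z_g)
        recon = self.decoder(torch.cat([z_g, z_n], dim=-1))
        return logits, recon, h

# Training minimizes the total loss of the gesture recognition and
# decoder modules: L_total = L_classify + lam * L_reconstruct,
# where lam is an assumed weighting hyperparameter.
def total_loss(logits, labels, recon, target, lam=1.0):
    return (nn.functional.cross_entropy(logits, labels)
            + lam * nn.functional.mse_loss(recon, target))

Because the classifier sees only the gesture partition z_g while the decoder needs both partitions to reconstruct well, jointly minimizing the two losses pushes location-dependent information into z_n. This separation is what lets the classifier generalize to gestures performed in previously unseen locations.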