MetaGanFi: Cross-Domain Unseen Individual Identification Using WiFi Signals
Jin Zhang, Zhuangzhuang Chen, Chengwen Luo, Bo Wei, S. Kanhere, Jian-qiang Li
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., 152:1-152:21, 2022. DOI: 10.1145/3550306
Citations: 8
Abstract
Humans have unique gaits, and prior work shows increasing potential in using WiFi signals to capture an individual's gait signature. However, existing WiFi-based human identification (HI) systems are not ready for real-world deployment because of several strong assumptions, including identification of only known users and sufficient training data captured in predefined domains such as a fixed walking trajectory/orientation, WiFi layout (receiver locations), and multipath environment (deployment time and site). In this paper, we propose a WiFi-based HI system, MetaGanFi, which accurately identifies unseen individuals in uncontrolled domains with only one or a few samples. To achieve this, MetaGanFi introduces a domain-unification model, CCG-GAN, which uses a conditional cycle generative adversarial network to filter out irrelevant perturbations caused by interfering domains. Moreover, MetaGanFi introduces a domain-agnostic meta-learning model, DA-Meta, which can quickly adapt from one or a few data samples to accurately recognize unseen individuals. A comprehensive evaluation on a real-world dataset shows that MetaGanFi identifies unseen individuals with average accuracies of 87.25% and 93.50% in the 1-shot and 5-shot cases, respectively, under varying trajectories and multipath environments, and 86.84% and 91.25% in the 1-shot and 5-shot cases under varying WiFi layouts, while the overall inference process of domain unification and identification takes about 0.1 second per sample.
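The domain-unification idea behind CCG-GAN can be illustrated with a conditional cycle-GAN that translates CSI spectrograms from an arbitrary domain into a common reference domain, with a cycle-consistency term preserving the gait structure. The PyTorch sketch below is a minimal illustration under assumed input shapes and layer sizes, not the authors' implementation; `CondGenerator`, `Discriminator`, and `cycle_loss` are hypothetical names introduced here.

```python
# Illustrative sketch (not the paper's code) of a conditional cycle-GAN for
# domain unification of CSI spectrograms. All shapes and layer sizes are assumptions.
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Maps a CSI spectrogram plus a one-hot domain code to the target domain."""
    def __init__(self, in_ch=1, num_domains=4, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch + num_domains, base, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, in_ch, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, x, domain_onehot):
        # Broadcast the domain code over the spatial grid and concatenate as channels.
        b, _, h, w = x.shape
        cond = domain_onehot.view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.net(torch.cat([x, cond], dim=1))

class Discriminator(nn.Module):
    """Judges whether a spectrogram looks like a real reference-domain sample."""
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(base * 2, 1),
        )

    def forward(self, x):
        return self.net(x)

def cycle_loss(g, x, dom_src, dom_ref):
    # Translate to the reference domain and back, then penalize the reconstruction
    # error so identity-relevant gait structure survives the domain unification.
    return nn.functional.l1_loss(g(g(x, dom_ref), dom_src), x)
```

The adversarial loss from `Discriminator` would push unified samples toward the reference domain, while `cycle_loss` discourages the generator from discarding the gait signature along with the domain-specific perturbations.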
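For the few-shot identification stage, DA-Meta follows the general meta-learning recipe of adapting a meta-trained classifier to an unseen individual from the 1- or 5-shot support set before classifying query samples. The sketch below shows only a generic inner-loop adaptation step (a few gradient updates on the support set), assuming a PyTorch classifier; `adapt_and_classify` and its hyperparameters are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch (not the paper's code) of few-shot adaptation at inference time.
import copy
import torch
import torch.nn.functional as F

def adapt_and_classify(model, support_x, support_y, query_x, inner_lr=0.01, steps=5):
    """Fine-tune a copy of the meta-trained classifier on the few-shot support
    set of an unseen individual, then classify the query samples."""
    adapted = copy.deepcopy(model)          # keep the meta-trained weights intact
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    adapted.train()
    for _ in range(steps):                  # inner-loop updates on 1 or 5 shots
        opt.zero_grad()
        F.cross_entropy(adapted(support_x), support_y).backward()
        opt.step()
    adapted.eval()
    with torch.no_grad():                   # identify the unseen individual
        return adapted(query_x).argmax(dim=-1)
```

In a 5-shot scenario, `support_x` would hold the five domain-unified spectrograms of the new individual and `query_x` the samples to be identified; meta-training is what makes such a short adaptation sufficient.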