Self-supervised Human Pose Recovery for Through-wall Radar Based on Convolutional Neural Networks

Zhijie Zheng, S. Ye, Guangyou Fang
Published in: 2022 Photonics & Electromagnetics Research Symposium (PIERS), 2022-04-25
DOI: 10.1109/piers55526.2022.9793087

Abstract

Through-wall radar (TWR) can penetrate non-metallic occlusions and detect hidden human targets. However, because of its low imaging spatial resolution, most current methods can extract only low-level human detection information from TWR signals, such as the positions of human trunks. More complex human information, such as the complete pose outline, has remained intractable. In this paper, a novel self-supervised human pose recovery method for TWR based on convolutional neural networks (CNNs) is proposed. The method adopts a self-supervised teacher-student learning pipeline. During training, we attach a camera to the radar to simultaneously collect paired RGB images and TWR signals. A vision-based pretrained teacher network extracts human pose information from the RGB images and generates human outline masks as pseudo labels. A student network learns to extract the patterns in the corresponding TWR signals and to predict masks that are close to these pseudo labels. There is no external supervision in the training process, so the dataset does not need to be labeled manually. After training, the method can recover accurate human poses from TWR signals alone. Experiments are conducted in two different scenarios. In a scenario without wall occlusion, we collected synchronized radar signals and images for training and accuracy evaluation; the quantitative results are comparable to those of state-of-the-art methods in non-wall-occlusive scenarios. In a wall-occlusive scenario, we collected only radar signals for generalization evaluation; the accurate qualitative predictions show complete human pose recovery under wall occlusion.
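The core of the pipeline described above is that the teacher's outline mask serves as the training target for the student, so no manual labels are needed. The abstract does not specify the training objective; the sketch below assumes a per-pixel binary cross-entropy loss between the student's predicted mask probabilities and the teacher's binary pseudo-label mask, with toy stand-in arrays in place of the actual network outputs:

```python
import numpy as np

def binary_cross_entropy(pred, target, eps=1e-7):
    """Per-pixel BCE between a student prediction and a teacher pseudo-label.

    pred   : array of probabilities in [0, 1] (student mask output)
    target : array of {0, 1} values (teacher outline mask)
    """
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    return float(-np.mean(target * np.log(pred)
                          + (1.0 - target) * np.log(1.0 - pred)))

# Teacher side: the vision network would convert an RGB frame into a
# binary human-outline mask (pseudo label). Toy 8x8 stand-in here.
teacher_mask = np.zeros((8, 8))
teacher_mask[2:6, 3:5] = 1.0  # region marked as "human outline"

# Student side: a CNN on the synchronized TWR signal would output
# per-pixel probabilities for the same mask. Stand-in: a noisy copy.
rng = np.random.default_rng(0)
student_pred = np.clip(teacher_mask + 0.1 * rng.standard_normal((8, 8)),
                       0.0, 1.0)

# Self-supervised objective: push the student toward the pseudo label.
loss = binary_cross_entropy(student_pred, teacher_mask)
print(f"pseudo-label BCE loss: {loss:.4f}")
```

In the actual method this loss would be minimized over many synchronized radar/image pairs, after which the student predicts outline masks from TWR signals alone; the `binary_cross_entropy` helper and the toy masks here are illustrative assumptions, not the paper's implementation.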