DeformGait: Gait Recognition under Posture Changes using Deformation Patterns between Gait Feature Pairs

Chi Xu, Daisuke Adachi, Yasushi Makihara, Y. Yagi, Jianfeng Lu
DOI: 10.1109/IJCB48548.2020.9304902
Published in: 2020 IEEE International Joint Conference on Biometrics (IJCB), 2020-09-28
Citations: 2

Abstract

In this paper, we propose a unified convolutional neural network (CNN) framework for gait recognition that is robust against posture changes (e.g., those induced by walking speed changes). To mitigate the posture changes, we first register an input matching pair of gait features with different postures using a deformable registration network, which estimates a deformation field that transforms both inputs into their intermediate posture. The pair of registered features is then fed into a recognition network. Furthermore, the manner of deformation (i.e., the deformation pattern) can differ between same-subject pairs (e.g., posture deformation only) and different-subject pairs (e.g., body shape deformation in addition to posture deformation), which implies that the deformation pattern can serve as another cue for distinguishing same-subject pairs from different-subject pairs. We therefore introduce a second recognition network whose input is the deformation pattern. Finally, the deformable registration network and the two recognition networks, one for the registered features and one for the deformation patterns, constitute the whole framework, named DeformGait, which is trained in an end-to-end manner by minimizing a loss function designed appropriately for each of the verification and identification scenarios. Experiments on the publicly available dataset containing the largest speed variations demonstrate that the proposed method achieves state-of-the-art performance in both identification and verification scenarios.
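The core registration step — warping both gait features halfway along an estimated deformation field so that they meet at an intermediate posture — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names (`warp`, `register_to_intermediate`) are hypothetical, the deformation field is assumed given (in DeformGait it is predicted by a CNN), and nearest-neighbor sampling stands in for the differentiable interpolation a trainable network would use.

```python
import numpy as np

def warp(feature, field):
    """Warp a 2D feature map by a dense displacement field.

    feature: (H, W) array, e.g., a silhouette-based gait feature.
    field:   (H, W, 2) array of (dy, dx) displacements per pixel.
    Nearest-neighbor sampling keeps the sketch short; a real model
    would use differentiable bilinear sampling.
    """
    H, W = feature.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_y = np.clip(np.round(ys + field[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + field[..., 1]).astype(int), 0, W - 1)
    return feature[src_y, src_x]

def register_to_intermediate(feat_a, feat_b, field_ab):
    """Register a matching pair by moving each feature halfway along
    the estimated deformation field_ab (from A toward B), so both
    arrive at an intermediate posture."""
    half = 0.5 * field_ab
    reg_a = warp(feat_a, half)    # A moves halfway toward B
    reg_b = warp(feat_b, -half)   # B moves halfway back toward A
    return reg_a, reg_b
```

With a zero deformation field both features are returned unchanged, which matches the intuition that an identical-posture pair needs no registration; a nonzero field encodes how much one feature must deform to match the other, and that field itself is the "deformation pattern" fed to the second recognition network.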