GaitDFG: A Deformation Field-Guided Feature Learning Framework for Gait Recognition

Wei Huo; Ke Wang; Jun Tang; Yan Zhang; Feng Chen
IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 8, no. 2, pp. 285-294, published 2026-01-29
DOI: 10.1109/TBIOM.2026.3659149
URL: https://ieeexplore.ieee.org/document/11367732/
Gait recognition is a promising biometric technique that uses walking patterns for authentication. Motion representation has long been a central challenge for gait recognition. To address it, most recent methods have focused on multi-scale temporal modeling and fine-grained spatial information aggregation, which characterize motion information only implicitly. How to quantitatively represent the evolution of human body contours and dynamic motion differences remains an open problem. In this paper, we propose a novel motion representation for gait recognition derived from the deformation fields produced by classical non-rigid point-set registration. These deformation fields are seamlessly integrated into the proposed gait recognition framework, GaitDFG, to yield discriminative motion features. GaitDFG consists of three key components: a Silhouette Feature extraction Network (SFNet), a Deformation field Feature extraction Network (DFNet), and a Knowledge Distillation Module (KDM). SFNet captures dynamic appearance motion differences and aggregates contextual information between neighboring frames of the input silhouette sequence. Furthermore, a multi-scale spatial perception module in DFNet extracts the motion features of the deformation fields to uncover additional motion cues. Moreover, because computing deformation fields in real time is infeasible in real-world scenarios, we design a deformation field feature simulation module that mimics deformation field features at inference; it is learned from DFNet via knowledge distillation. Consequently, at inference we fuse silhouette features with the simulated deformation field features to perform gait recognition.
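The abstract names "classical non-rigid point-set registration" (Coherent Point Drift is a common choice) as the source of the deformation fields but does not specify the algorithm. The toy sketch below is not the paper's method: it estimates a spatially coherent deformation field between two silhouette contours via nearest-neighbour correspondence followed by Gaussian smoothing, purely to illustrate what such a field looks like as a motion representation.

```python
import numpy as np

def deformation_field(src, dst, sigma=0.1):
    """Estimate per-point displacement vectors mapping contour `src` toward
    `dst` (both (N, 2) point sets): nearest-neighbour matching followed by
    Gaussian smoothing -- a toy stand-in for non-rigid registration (e.g. CPD).
    """
    # Nearest-neighbour correspondence: for each source point, the closest target point.
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)   # (N, M) squared distances
    raw_disp = dst[d2.argmin(axis=1)] - src                   # raw displacement vectors
    # Smooth the displacements so the field is spatially coherent (non-rigid but regular).
    w = np.exp(-((src[:, None, :] - src[None, :, :]) ** 2).sum(-1) / (2 * sigma**2))
    field = (w @ raw_disp) / w.sum(axis=1, keepdims=True)
    return field  # (N, 2) deformation field over the source contour

# Two toy "silhouette contours": a unit circle and a horizontally stretched one,
# mimicking a between-frame contour change during walking.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
src = np.stack([np.cos(t), np.sin(t)], axis=1)
dst = np.stack([1.2 * np.cos(t), np.sin(t)], axis=1)
field = deformation_field(src, dst)
print(field.shape)  # (64, 2)
```

Unlike frame differencing, the field quantifies where and by how much each contour point moves, which is the kind of explicit motion cue DFNet is described as consuming.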
Extensive experiments validate the effectiveness of GaitDFG, which achieves state-of-the-art performance on standard gait recognition benchmarks, including CASIA-B (in-the-lab), GREW (in-the-wild), and CCPG (cloth-changing).
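The abstract does not give the KDM's loss; a common choice for feature-level distillation is an MSE feature-matching loss between the teacher (here, DFNet's deformation-field features) and the student (the simulation module), with the two feature streams concatenated at inference. The sketch below uses toy linear projections in place of the paper's convolutional networks; all names and shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature extractors: a frozen "teacher" (standing in for DFNet) and a
# learnable "student" (the deformation field feature simulation module).
D_IN, D_FEAT = 16, 8
W_teacher = rng.normal(size=(D_IN, D_FEAT))   # frozen teacher projection
W_student = rng.normal(size=(D_IN, D_FEAT))   # student projection, trained below

def distill_step(x, lr=0.1):
    """One gradient step on the MSE feature-matching distillation loss."""
    global W_student
    t_feat = x @ W_teacher                          # teacher deformation-field features
    s_feat = x @ W_student                          # simulated features
    grad = 2 * x.T @ (s_feat - t_feat) / s_feat.size  # exact gradient of the mean loss
    W_student -= lr * grad
    return float(((s_feat - t_feat) ** 2).mean())

x = rng.normal(size=(32, D_IN))                     # stand-in silhouette-derived inputs
losses = [distill_step(x) for _ in range(200)]      # loss shrinks as student mimics teacher

# Inference-time fusion: concatenate silhouette features with the simulated
# deformation field features -- no registration is computed at test time.
sil_feat = rng.normal(size=(32, D_FEAT))            # hypothetical SFNet output
fused = np.concatenate([sil_feat, x @ W_student], axis=1)
print(fused.shape)  # (32, 16)
```

This mirrors the stated design motivation: the expensive registration is only needed to train the teacher, while inference runs entirely on silhouettes.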