A Multi-Stage Adaptive Feature Fusion Neural Network for Multimodal Gait Recognition

Shinan Zou, Jianbo Xiong, Chao Fan, Chuanfu Shen, Shiqi Yu, Jin Tang
{"title":"A Multi-Stage Adaptive Feature Fusion Neural Network for Multimodal Gait Recognition","authors":"Shinan Zou;Jianbo Xiong;Chao Fan;Chuanfu Shen;Shiqi Yu;Jin Tang","doi":"10.1109/TBIOM.2024.3384704","DOIUrl":null,"url":null,"abstract":"Gait recognition is a biometric technology that has received extensive attention. Most existing gait recognition algorithms are unimodal, and a few multimodal gait recognition algorithms perform multimodal fusion only once. None of these algorithms may fully exploit the complementary advantages of the multiple modalities. In this paper, by considering the temporal and spatial characteristics of gait data, we propose a multi-stage feature fusion strategy (MSFFS), which performs multimodal fusions at different stages in the feature extraction process. Also, we propose an adaptive feature fusion module (AFFM) that considers the semantic association between silhouettes and skeletons. The fusion process fuses different silhouette areas with their more related skeleton joints. Since visual appearance changes and time passage co-occur in a gait period, we propose a multiscale spatial-temporal feature extractor (MSSTFE) to learn the spatial-temporal linkage features thoroughly. Specifically, MSSTFE extracts and aggregates spatial-temporal linkages information at different spatial scales. Combining the strategy and modules mentioned above, we propose a multi-stage adaptive feature fusion (MSAFF) neural network, which shows state-of-the-art performance in many experiments on three datasets. Besides, MSAFF is equipped with feature dimensional pooling (FD Pooling), which can significantly reduce the dimension of the gait representations without hindering the accuracy. The code can be found here. \n<uri>https://github.com/ShinanZou/MSAFF</uri>\n.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 4","pages":"539-549"},"PeriodicalIF":0.0000,"publicationDate":"2024-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on biometrics, behavior, and identity science","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10490158/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Gait recognition is a biometric technology that has received extensive attention. Most existing gait recognition algorithms are unimodal, and the few multimodal algorithms perform multimodal fusion only once; neither approach fully exploits the complementary advantages of the multiple modalities. In this paper, by considering the temporal and spatial characteristics of gait data, we propose a multi-stage feature fusion strategy (MSFFS), which performs multimodal fusion at different stages of the feature extraction process. We also propose an adaptive feature fusion module (AFFM) that considers the semantic association between silhouettes and skeletons: the fusion process combines each silhouette area with its most related skeleton joints. Since changes in visual appearance and the passage of time co-occur within a gait period, we propose a multiscale spatial-temporal feature extractor (MSSTFE) to learn spatial-temporal linkage features thoroughly; specifically, MSSTFE extracts and aggregates spatial-temporal linkage information at different spatial scales. Combining the strategy and modules above, we propose a multi-stage adaptive feature fusion (MSAFF) neural network, which achieves state-of-the-art performance in extensive experiments on three datasets. In addition, MSAFF is equipped with feature dimensional pooling (FD Pooling), which significantly reduces the dimension of the gait representations without hindering accuracy. The code is available at https://github.com/ShinanZou/MSAFF.
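To make the fusion idea concrete, below is a minimal sketch (not the authors' code) of an attention-style adaptive fusion between silhouette-part features and skeleton-joint features, in the spirit of the AFFM described above, followed by an illustrative feature-dimension pooling step. The tensor shapes, module names, and pooling scheme are assumptions for illustration only; the actual AFFM and FD Pooling implementations are in the official repository (https://github.com/ShinanZou/MSAFF).

```python
import torch
import torch.nn as nn


class AdaptiveFusionSketch(nn.Module):
    """Illustrative fusion of silhouette parts with their related skeleton joints."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.q = nn.Linear(dim, dim)   # queries from silhouette parts
        self.k = nn.Linear(dim, dim)   # keys from skeleton joints
        self.v = nn.Linear(dim, dim)   # values from skeleton joints
        self.out = nn.Linear(2 * dim, dim)

    def forward(self, sil_parts: torch.Tensor, skel_joints: torch.Tensor) -> torch.Tensor:
        # sil_parts:   [B, P, C] features of horizontally split silhouette areas
        # skel_joints: [B, J, C] features of skeleton joints
        scores = self.q(sil_parts) @ self.k(skel_joints).transpose(1, 2)
        attn = torch.softmax(scores / sil_parts.size(-1) ** 0.5, dim=-1)  # [B, P, J]
        fused_skel = attn @ self.v(skel_joints)                           # [B, P, C]
        # Each silhouette part is fused with a weighted sum of its related joints.
        return self.out(torch.cat([sil_parts, fused_skel], dim=-1))       # [B, P, C]


def fd_pooling(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Illustrative feature-dimension pooling: average adjacent channels,
    shrinking the representation from C to C // groups without parameters."""
    b, p, c = x.shape
    return x.view(b, p, c // groups, groups).mean(dim=-1)


if __name__ == "__main__":
    fuse = AdaptiveFusionSketch(dim=128)
    sil = torch.randn(4, 16, 128)    # 4 sequences, 16 silhouette parts
    skel = torch.randn(4, 17, 128)   # 17 COCO-style joints
    out = fd_pooling(fuse(sil, skel), groups=4)
    print(out.shape)                 # torch.Size([4, 16, 32])
```

In this sketch the attention weights let each silhouette area select the skeleton joints most semantically related to it, and the pooling step compresses the fused representation along the feature dimension, mirroring the dimensionality reduction role the abstract attributes to FD Pooling.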