A Visible-Infrared person re-identification algorithm based on skeleton Insight Criss-Cross network

Pan Jiaxing, Zhang Baohua, Zhang Jiale, Gu Yu, Shan Chongrui, Sun Yanxia, Wu Dongyang

Journal of Visual Communication and Image Representation, Volume 107, Article 104395. Published 2025-01-28. DOI: 10.1016/j.jvcir.2025.104395
Citations: 0
Abstract
The cross-modal feature space exhibits significant inter-class differences. When pedestrian skeleton information is used as the basis for discrimination in cross-modal person re-identification, a mismatch between skeleton features and ID attributes is inevitable. To address this problem, this paper proposes a novel Skeleton Insight Criss-Cross Network (SI-CCN), which consists of a Skeleton Insight Module (SIM) and a Criss-Cross Module (CCM). The former uses a hierarchical skeleton mechanism to extract key skeleton information from the pedestrian limb regions, obtains pixel-level features of the skeleton key points, and uses these key points as graph nodes to construct the pedestrian's skeletal pose structure. As a result, the SIM not only accurately captures the spatial information of each body part but also preserves the relative positions of the skeleton key points, forming a complete skeleton structure. The latter cooperatively optimizes high-dimensional skeleton features and low-dimensional identity features through a cross-learning mechanism: to capture diverse skeleton postures effectively, the attention distribution between the two streams is dynamically adjusted during feature extraction while identity details are integrated, improving the consistency of cross-modal features. Experiments on the SYSU-MM01 and RegDB cross-modal person re-identification datasets show that SI-CCN achieves Rank-1 accuracy of 81.94% and mAP of 76.92% on SYSU-MM01, and Rank-1 of 95.49% and mAP of 95.67% on RegDB, outperforming recent representative methods.
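The paper itself provides no code, but the two ideas in the abstract can be illustrated with a minimal sketch: treating skeleton key points as graph nodes connected by limb edges (the SIM idea), and a toy cross-learning step in which the skeleton and identity streams attend to each other with input-dependent weights (the CCM idea). The 14-joint layout, limb list, feature sizes, and function names below are all illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

# Hypothetical limb connections over a 14-joint skeleton (pairs of joint
# indices). This layout is an assumption for illustration only.
LIMBS = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 5), (5, 6), (6, 7),
         (1, 8), (8, 9), (9, 10), (1, 11), (11, 12), (12, 13)]

def skeleton_adjacency(num_joints, edges):
    """Symmetric adjacency with self-loops, row-normalized by degree."""
    A = np.eye(num_joints)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return A / A.sum(axis=1, keepdims=True)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_learning(skel_feat, id_feat):
    """Toy cross-learning step: each stream attends to the other, and the
    attention weights are recomputed per input (dynamic adjustment)."""
    w_s = softmax(skel_feat @ id_feat.T)   # skeleton -> identity attention
    w_i = softmax(id_feat @ skel_feat.T)   # identity -> skeleton attention
    return skel_feat + w_s @ id_feat, id_feat + w_i @ skel_feat

rng = np.random.default_rng(0)
A = skeleton_adjacency(14, LIMBS)          # (14, 14) skeleton graph

# One graph-convolution-style step propagates each joint's pixel-level
# feature to its skeleton neighbours, preserving relative joint structure.
node_feats = rng.random((14, 32))          # per-joint features
skel = A @ node_feats                      # (14, 32) skeleton stream
ident = rng.random((14, 32))               # identity stream (same shape here)

skel_out, id_out = cross_learning(skel, ident)
print(skel_out.shape, id_out.shape)
```

Row-normalizing the adjacency keeps the propagated features on the same scale as the inputs; the residual additions in `cross_learning` let each stream keep its own information while integrating details from the other, loosely mirroring the cooperative optimization the abstract describes.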
Journal Introduction:
The Journal of Visual Communication and Image Representation publishes papers on state-of-the-art visual communication and image representation, with emphasis on novel technologies and theoretical work in this multidisciplinary area of pure and applied research. The field of visual communication and image representation is considered in its broadest sense and covers both digital and analog aspects as well as processing and communication in biological visual systems.