{"title":"用于多目标跟踪的遮挡相关图卷积神经网络","authors":"Yubo Zhang , Liying Zheng , Qingming Huang","doi":"10.1016/j.imavis.2024.105317","DOIUrl":null,"url":null,"abstract":"<div><div>Multi-Object Tracking (MOT) has recently been improved by Graph Convolutional Neural Networks (GCNNs) for its good performance in characterizing interactive features. However, GCNNs prefer assigning smaller proportions to node features if a node has more neighbors, presenting challenges in distinguishing objects with similar neighbors which is common in dense scenes. This paper designs an Occlusion-Related GCNN (OR-GCNN) based on which an interactive similarity module is further built. Specifically, the interactive similarity module first uses learnable weights to calculate the edge weights between tracklets and detection objects, which balances the appearance cosine similarity and Intersection over Union (IoU). Then, the module determines the proportion of node features with the help of an occlusion weight comes from a MultiLayer Perceptron (MLP). These occlusion weights, the edge weights, and the node features are then served to our OR-GCNN to obtain interactive features. Finally, by integrating interactive similarity into a common MOT framework, such as BoT-SORT, one gets a tracker that efficiently alleviates the issues in dense MOT task. The experimental results on MOT16 and MOT17 benchmarks show that our model achieves the MOTA of 80.6 and 81.1 and HOTA of 65.3 and 65.1 on MOT16 and MOT17, respectively, which outperforms the state-of-the-art trackers, including ByteTrack, BoT-SORT, GCNNMatch, GNMOT, and GSM.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"152 ","pages":"Article 105317"},"PeriodicalIF":4.2000,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Occlusion-related graph convolutional neural network for multi-object tracking\",\"authors\":\"Yubo Zhang , Liying Zheng , Qingming Huang\",\"doi\":\"10.1016/j.imavis.2024.105317\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Multi-Object Tracking (MOT) has recently been improved by Graph Convolutional Neural Networks (GCNNs) for its good performance in characterizing interactive features. However, GCNNs prefer assigning smaller proportions to node features if a node has more neighbors, presenting challenges in distinguishing objects with similar neighbors which is common in dense scenes. This paper designs an Occlusion-Related GCNN (OR-GCNN) based on which an interactive similarity module is further built. Specifically, the interactive similarity module first uses learnable weights to calculate the edge weights between tracklets and detection objects, which balances the appearance cosine similarity and Intersection over Union (IoU). Then, the module determines the proportion of node features with the help of an occlusion weight comes from a MultiLayer Perceptron (MLP). These occlusion weights, the edge weights, and the node features are then served to our OR-GCNN to obtain interactive features. Finally, by integrating interactive similarity into a common MOT framework, such as BoT-SORT, one gets a tracker that efficiently alleviates the issues in dense MOT task. 
The experimental results on MOT16 and MOT17 benchmarks show that our model achieves the MOTA of 80.6 and 81.1 and HOTA of 65.3 and 65.1 on MOT16 and MOT17, respectively, which outperforms the state-of-the-art trackers, including ByteTrack, BoT-SORT, GCNNMatch, GNMOT, and GSM.</div></div>\",\"PeriodicalId\":50374,\"journal\":{\"name\":\"Image and Vision Computing\",\"volume\":\"152 \",\"pages\":\"Article 105317\"},\"PeriodicalIF\":4.2000,\"publicationDate\":\"2024-10-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Image and Vision Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0262885624004220\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Image and Vision Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0262885624004220","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Multi-Object Tracking (MOT) has recently been improved by Graph Convolutional Neural Networks (GCNNs), owing to their strong performance in characterizing interactive features. However, GCNNs assign smaller proportions to a node's own features as its number of neighbors grows, which makes it hard to distinguish objects with similar neighbors, a situation common in dense scenes. This paper designs an Occlusion-Related GCNN (OR-GCNN), on top of which an interactive similarity module is built. Specifically, the interactive similarity module first uses learnable weights to compute the edge weights between tracklets and detected objects, balancing appearance cosine similarity against Intersection over Union (IoU). The module then determines the proportion of node features with the help of an occlusion weight produced by a MultiLayer Perceptron (MLP). These occlusion weights, the edge weights, and the node features are then fed to the OR-GCNN to obtain interactive features. Finally, integrating the interactive similarity into a common MOT framework, such as BoT-SORT, yields a tracker that effectively alleviates these issues in dense MOT tasks. Experimental results on the MOT16 and MOT17 benchmarks show that our model achieves MOTA scores of 80.6 and 81.1 and HOTA scores of 65.3 and 65.1, respectively, outperforming state-of-the-art trackers including ByteTrack, BoT-SORT, GCNNMatch, GNMOT, and GSM.
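To make the two mechanisms in the abstract concrete, the sketch below illustrates (1) edge weights that blend appearance cosine similarity with IoU through a learnable coefficient, and (2) an occlusion weight from an MLP that fixes the proportion of a node's own feature kept during graph aggregation, so that nodes with many neighbors do not see their own features diluted. This is not the authors' released code; all names (alpha, OcclusionMLP, or_gcnn_layer) and the exact blending form are illustrative assumptions consistent with the abstract's description.

import torch
import torch.nn as nn
import torch.nn.functional as F


def edge_weights(track_feats, det_feats, ious, alpha):
    """Blend appearance cosine similarity with IoU via a learnable scalar.

    track_feats: (T, D) tracklet appearance embeddings
    det_feats:   (N, D) detection appearance embeddings
    ious:        (T, N) IoU between tracklet and detection boxes
    alpha:       learnable scalar, squashed to (0, 1) by a sigmoid
    """
    cos = F.normalize(track_feats, dim=1) @ F.normalize(det_feats, dim=1).T
    a = torch.sigmoid(alpha)
    return a * cos + (1.0 - a) * ious  # (T, N) edge-weight matrix


class OcclusionMLP(nn.Module):
    """Predicts a per-node occlusion weight in (0, 1) from its feature."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x):  # x: (N, D)
        return torch.sigmoid(self.net(x))  # (N, 1)


def or_gcnn_layer(x, adj, occ_w, linear):
    """One occlusion-related graph convolution step.

    Rather than letting a node's own contribution shrink as its neighbor
    count grows, the occlusion weight sets the proportion kept for the
    node itself; the remainder comes from degree-normalized neighbor
    aggregation over the weighted adjacency matrix.
    """
    neigh = adj @ x / adj.sum(dim=1, keepdim=True).clamp(min=1e-6)
    out = occ_w * x + (1.0 - occ_w) * neigh
    return F.relu(linear(out))

Under these assumptions, a plain GCN would use a fixed degree-based normalization, whereas here the learned occlusion weight decides per node how much self-information survives, which is what lets the model keep similar-looking neighbors apart in crowded scenes.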
Journal introduction:
Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to foster a deeper understanding of the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, and image databases.