Omni-Directional View Person Re-Identification Through 3D Human Reconstruction

Authors: Chenglizhao Chen; Chaoying Bai; Jia Song; Xu Yu; Shanchen Pang
Journal: IEEE Signal Processing Letters, vol. 32, pp. 796–800
DOI: 10.1109/LSP.2025.3529619
Published: 2025-01-13
URL: https://ieeexplore.ieee.org/document/10839551/
Citations: 0
Abstract
Person re-identification (ReID) aims to identify the same individual across different cameras. Most existing research focuses on horizontal perspectives, where cameras and individuals are positioned at similar heights. However, in real-world applications, cameras are usually mounted at varying heights (e.g., high-view or low-view) to achieve a broader field of view. Some studies have therefore explored high-view ReID, but they rely heavily on manually annotated large-scale datasets, which are extremely time-consuming to build and not publicly available. To address this, we propose a "controllable" data generation protocol that automatically generates omni-directional view data, extending any common ReID dataset into an extensive omni-directional one. By retraining existing state-of-the-art (SOTA) ReID models on the enhanced data, they can be made to handle ReID tasks across varying camera angles. To verify the effectiveness, real data are still needed for testing; thus, we constructed a small test dataset containing diverse camera angles. Extensive quantitative results demonstrate that our solution is generic and can be applied to any SOTA ReID model to achieve substantial performance gains, e.g., a 3%–12% improvement in mAP.
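The letter itself details the data generation protocol; as a rough illustration of what "controllable" omni-directional view synthesis from a 3D human reconstruction might involve, the sketch below samples camera viewpoints on a sphere around the subject, covering low-view through high-view angles. The function name and default parameters are illustrative assumptions, not the authors' actual pipeline or API.

```python
import math

def omni_camera_poses(radius=3.0, azimuths=8, elevations=(-20, 0, 20, 45, 70)):
    """Sample camera positions on a sphere around a person-centred origin.

    Illustrative sketch only: negative/low elevations mimic low-mounted
    cameras, high elevations mimic ceiling- or pole-mounted surveillance
    views. Each returned (x, y, z) position assumes the camera looks at
    the origin, where the reconstructed 3D human would be placed for
    rendering.
    """
    poses = []
    for elev_deg in elevations:
        elev = math.radians(elev_deg)
        for i in range(azimuths):
            azim = 2 * math.pi * i / azimuths  # evenly spaced around the subject
            x = radius * math.cos(elev) * math.cos(azim)
            y = radius * math.cos(elev) * math.sin(azim)
            z = radius * math.sin(elev)
            poses.append((x, y, z))
    return poses

poses = omni_camera_poses()
print(len(poses))  # 5 elevations x 8 azimuths = 40 viewpoints
```

Rendering the reconstructed mesh from each sampled pose would turn one horizontal-view identity image into dozens of view-augmented training samples, which matches the abstract's idea of extending a common ReID dataset into an omni-directional one.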
About the journal
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language, and audio processing. Papers published in the Letters can be presented within one year of their appearance in signal processing conferences such as ICASSP, GlobalSIP, and ICIP, as well as in several workshops organized by the Signal Processing Society.