{"title":"Hybrid Representation Learning for End-to-End Multi-Person Pose Estimation","authors":"Qiyuan Dai;Qiang Ling","doi":"10.1109/TCSVT.2025.3536381","DOIUrl":null,"url":null,"abstract":"Recent multi-person pose estimation methods design end-to-end pipelines under the DETR framework. However, these methods involve complex keypoint decoding processes because the DETR framework cannot be directly used for pose estimation, which results in constrained performance and ineffective information interaction between human instances. To tackle this issue, we propose a hybrid representation learning method for end-to-end multi-person pose estimation. Our method represents instance-level and keypoint-level information as hybrid queries based on point set prediction and can facilitate parallel interaction between instance-level and keypoint-level representations in a unified decoder. We also employ the instance segmentation task for auxiliary training to enrich the spatial context of hybrid representations. Furthermore, we introduce a pose-unified query selection (PUQS) strategy and an instance-gated module (IGM) to improve the keypoint decoding process. PUQS predicts local pose proposals to produce scale-aware instance initializations and can avoid the scale assignment mistake of one-to-one matching. IGM refines instance contents and filters out invalid information with the message of cross-instance interaction and can enhance the decoder’s capability to handle queries of instances. Compared with current end-to-end multi-person pose estimation methods, our method can detect human instances and body keypoints simultaneously through a concise decoding process. Extensive experiments on COCO Keypoint and CrowdPose benchmarks demonstrate that our method outperforms some state-of-the-art methods.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"35 7","pages":"6437-6451"},"PeriodicalIF":11.1000,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10857357/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Recent multi-person pose estimation methods design end-to-end pipelines under the DETR framework. However, because the DETR framework cannot be directly applied to pose estimation, these methods involve complex keypoint decoding processes, which results in constrained performance and ineffective information interaction between human instances. To tackle this issue, we propose a hybrid representation learning method for end-to-end multi-person pose estimation. Our method represents instance-level and keypoint-level information as hybrid queries based on point set prediction and facilitates parallel interaction between instance-level and keypoint-level representations in a unified decoder. We also employ the instance segmentation task as auxiliary training to enrich the spatial context of the hybrid representations. Furthermore, we introduce a pose-unified query selection (PUQS) strategy and an instance-gated module (IGM) to improve the keypoint decoding process. PUQS predicts local pose proposals to produce scale-aware instance initializations and can avoid scale-assignment errors in one-to-one matching. IGM refines instance contents and filters out invalid information using cross-instance interaction messages, enhancing the decoder's capability to handle instance queries. Compared with current end-to-end multi-person pose estimation methods, our method can detect human instances and body keypoints simultaneously through a concise decoding process. Extensive experiments on the COCO Keypoint and CrowdPose benchmarks demonstrate that our method outperforms some state-of-the-art methods.
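To make the abstract's description of gated cross-instance refinement more concrete, the sketch below shows one plausible way such a module could be wired in PyTorch. This is an illustration only, not the authors' IGM implementation: the class name `InstanceGatedRefinement`, the query dimensions, and the sigmoid-gate formulation are assumptions; the paper's actual design is described in the full text.

```python
# Hypothetical sketch (not the authors' code): instance queries interact via
# self-attention (cross-instance interaction), and a sigmoid gate decides how
# much of the resulting message updates each instance query.
import torch
import torch.nn as nn


class InstanceGatedRefinement(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Self-attention across instance queries models cross-instance interaction.
        self.cross_instance_attn = nn.MultiheadAttention(
            dim, num_heads, batch_first=True
        )
        # Gate computed from the query and its interaction message.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.norm = nn.LayerNorm(dim)

    def forward(self, instance_queries: torch.Tensor) -> torch.Tensor:
        # instance_queries: (batch, num_instances, dim)
        message, _ = self.cross_instance_attn(
            instance_queries, instance_queries, instance_queries
        )
        # Per-channel gate filters out invalid parts of the interaction message.
        g = self.gate(torch.cat([instance_queries, message], dim=-1))
        return self.norm(instance_queries + g * message)


if __name__ == "__main__":
    queries = torch.randn(2, 10, 256)  # 2 images, 10 instance queries each
    refined = InstanceGatedRefinement()(queries)
    print(refined.shape)  # torch.Size([2, 10, 256])
```

In this sketch the gate lets each instance keep or suppress information exchanged with other instances channel by channel, which is one simple way to realize the "refine and filter" behavior the abstract attributes to IGM.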
About the Journal:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.