Hybrid Representation Learning for End-to-End Multi-Person Pose Estimation

IF 11.1 · JCR Q1, ENGINEERING, ELECTRICAL & ELECTRONIC (CAS Tier 1, Engineering & Technology)
Qiyuan Dai;Qiang Ling
DOI: 10.1109/TCSVT.2025.3536381
Journal: IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 7, pp. 6437-6451
Published: 2025-01-29 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10857357/
Citations: 0

Abstract

Recent multi-person pose estimation methods design end-to-end pipelines under the DETR framework. However, these methods involve complex keypoint decoding processes because the DETR framework cannot be directly used for pose estimation, which results in constrained performance and ineffective information interaction between human instances. To tackle this issue, we propose a hybrid representation learning method for end-to-end multi-person pose estimation. Our method represents instance-level and keypoint-level information as hybrid queries based on point set prediction and can facilitate parallel interaction between instance-level and keypoint-level representations in a unified decoder. We also employ the instance segmentation task for auxiliary training to enrich the spatial context of hybrid representations. Furthermore, we introduce a pose-unified query selection (PUQS) strategy and an instance-gated module (IGM) to improve the keypoint decoding process. PUQS predicts local pose proposals to produce scale-aware instance initializations and can avoid scale-assignment mistakes in one-to-one matching. IGM refines instance contents and filters out invalid information using messages from cross-instance interaction, enhancing the decoder's ability to handle instance queries. Compared with current end-to-end multi-person pose estimation methods, our method can detect human instances and body keypoints simultaneously through a concise decoding process. Extensive experiments on COCO Keypoint and CrowdPose benchmarks demonstrate that our method outperforms some state-of-the-art methods.
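The abstract refers to one-to-one matching between predicted and ground-truth poses, the set-prediction step underlying DETR-style pipelines (and the place where PUQS's scale-assignment issue arises). As a hypothetical illustration only, not the paper's implementation, optimal one-to-one assignment over a small cost matrix can be sketched by brute force; practical systems use the Hungarian algorithm for the same result:

```python
from itertools import permutations

def one_to_one_match(cost):
    """Brute-force optimal one-to-one assignment (illustrative only).

    cost[i][j] is the cost of assigning prediction i to ground truth j
    (e.g. a keypoint distance or negative OKS term). Returns, for each
    prediction i, the index of its matched ground truth, minimizing the
    total cost over all bijections, the assignment a DETR-style
    set-prediction loss is computed against.
    """
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best)
```

For the factorial-time brute force above, any `n` beyond a handful of instances is impractical; the Hungarian algorithm (e.g. `scipy.optimize.linear_sum_assignment`) computes the same optimum in polynomial time.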
Source journal
CiteScore: 13.80
Self-citation rate: 27.40%
Articles per year: 660
Review time: 5 months
Journal description: The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.