Eigenpose: Occlusion-Robust 3D Human Mesh Reconstruction

Mi-Gyeong Gwon, Gi-Mun Um, Won-Sik Cheong, Wonjun Kim

IEEE Transactions on Image Processing, vol. 34, pp. 2379-2391
DOI: 10.1109/TIP.2025.3559788
Published: 2025-04-16
https://ieeexplore.ieee.org/document/10967032/

Abstract

A new approach for occlusion-robust 3D human mesh reconstruction from a single image is introduced in this paper. Since occlusion has emerged as a major problem in this field, there have been meaningful efforts to deal with various types of occlusion (e.g., person-to-person occlusion, person-to-object occlusion, and self-occlusion). Although many recent studies have shown remarkable progress, previous regression-based methods still have limitations in handling occlusion due to the lack of appearance information. To address this problem, we propose a novel method for human mesh reconstruction based on pose-relevant subspace analysis. Specifically, we first generate a set of eigenvectors, so-called eigenposes, by applying singular value decomposition (SVD) to the pose matrix, which contains diverse poses sampled from the training set. These eigenposes are then linearly combined, according to fusing coefficients learned by the proposed network, to construct the target body pose. Combining principal body postures (i.e., eigenposes) in this global manner greatly helps to resolve partial ambiguities caused by occlusion. Furthermore, we propose a joint injection module that efficiently incorporates the spatial information of visible joints into the encoded feature during the estimation of the fusing coefficients. Experimental results on benchmark datasets demonstrate the ability of the proposed method to robustly reconstruct the human mesh under various occlusions occurring in real-world scenarios. The code and model are publicly available at: https://github.com/DCVL-3D/Eigenpose_release.
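To make the decomposition concrete, below is a minimal NumPy sketch of the eigenpose construction the abstract describes: SVD of a matrix of training poses yields a basis of principal pose directions, and a new pose is a linear combination of that basis. The shapes, the basis size, the mean-centering step, and the names (pose_matrix, fusing_coeffs, etc.) are illustrative assumptions, not the authors' released implementation; in the actual method the fusing coefficients are regressed by a network from the input image rather than sampled.

```python
import numpy as np

# Hypothetical setup: K training poses, each flattened to D values
# (e.g., 24 SMPL joints x 3 axis-angle dims). Random stand-ins only.
K, D = 10000, 24 * 3
rng = np.random.default_rng(0)
pose_matrix = rng.standard_normal((K, D))  # placeholder for sampled training poses

# Center the poses and run SVD; the rows of Vt are the "eigenposes",
# i.e., the principal directions of pose variation in the training set.
# (Whether the paper mean-centers the pose matrix is an assumption here.)
mean_pose = pose_matrix.mean(axis=0)
_, _, Vt = np.linalg.svd(pose_matrix - mean_pose, full_matrices=False)
num_eigenposes = 32                    # assumed basis size
eigenposes = Vt[:num_eigenposes]       # shape (32, D)

# At inference the network would regress the fusing coefficients from the
# image; random values stand in for them here. The target pose is a global
# linear combination of eigenposes.
fusing_coeffs = rng.standard_normal(num_eigenposes)
target_pose = mean_pose + fusing_coeffs @ eigenposes  # shape (D,)
print(target_pose.shape)  # (72,)
```

Because every joint rotation is expressed through the same small set of global basis vectors, coefficients inferred from the visible body parts also constrain the occluded ones, which is the intuition behind the occlusion robustness claimed above.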