Mi-Gyeong Gwon;Gi-Mun Um;Won-Sik Cheong;Wonjun Kim
{"title":"特征:遮挡鲁棒三维人体网格重建","authors":"Mi-Gyeong Gwon;Gi-Mun Um;Won-Sik Cheong;Wonjun Kim","doi":"10.1109/TIP.2025.3559788","DOIUrl":null,"url":null,"abstract":"A new approach for occlusion-robust 3D human mesh reconstruction from a single image is introduced in this paper. Since occlusion has emerged as a major problem to be resolved in this field, there have been meaningful efforts to deal with various types of occlusions (e.g., person-to-person occlusion, person-to-object occlusion, self-occlusion, etc.). Although many recent studies have shown the remarkable progress, previous regression-based methods still have respective limitations to handle occlusion problems due to the lack of the appearance information. To address this problem, we propose a novel method for human mesh reconstruction based on the pose-relevant subspace analysis. Specifically, we first generate a set of eigenvectors, so-called eigenposes, by conducting the singular value decomposition (SVD) of the pose matrix, which contains diverse poses sampled from the training set. These eigenposes are then linearly combined to construct a target body pose according to fusing coefficients, which are learned through the proposed network. Such combination of principal body postures (i.e., eigenposes) in a global manner gives a great help to cope with partial ambiguities by occlusions. Furthermore, we also propose to exploit a joint injection module that efficiently incorporates the spatial information of visible joints into the encoded feature during the estimation process of fusing coefficients. Experimental results on benchmark datasets demonstrate the ability of the proposed method to robustly reconstruct the human mesh under various occlusions occurring in real-world scenarios. The code and model are publicly available at: <monospace><uri>https://github.com/DCVL-3D/Eigenpose_release</uri></monospace>.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"2379-2391"},"PeriodicalIF":0.0000,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Eigenpose: Occlusion-Robust 3D Human Mesh Reconstruction\",\"authors\":\"Mi-Gyeong Gwon;Gi-Mun Um;Won-Sik Cheong;Wonjun Kim\",\"doi\":\"10.1109/TIP.2025.3559788\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A new approach for occlusion-robust 3D human mesh reconstruction from a single image is introduced in this paper. Since occlusion has emerged as a major problem to be resolved in this field, there have been meaningful efforts to deal with various types of occlusions (e.g., person-to-person occlusion, person-to-object occlusion, self-occlusion, etc.). Although many recent studies have shown the remarkable progress, previous regression-based methods still have respective limitations to handle occlusion problems due to the lack of the appearance information. To address this problem, we propose a novel method for human mesh reconstruction based on the pose-relevant subspace analysis. Specifically, we first generate a set of eigenvectors, so-called eigenposes, by conducting the singular value decomposition (SVD) of the pose matrix, which contains diverse poses sampled from the training set. These eigenposes are then linearly combined to construct a target body pose according to fusing coefficients, which are learned through the proposed network. 
Such combination of principal body postures (i.e., eigenposes) in a global manner gives a great help to cope with partial ambiguities by occlusions. Furthermore, we also propose to exploit a joint injection module that efficiently incorporates the spatial information of visible joints into the encoded feature during the estimation process of fusing coefficients. Experimental results on benchmark datasets demonstrate the ability of the proposed method to robustly reconstruct the human mesh under various occlusions occurring in real-world scenarios. The code and model are publicly available at: <monospace><uri>https://github.com/DCVL-3D/Eigenpose_release</uri></monospace>.\",\"PeriodicalId\":94032,\"journal\":{\"name\":\"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society\",\"volume\":\"34 \",\"pages\":\"2379-2391\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-04-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10967032/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10967032/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Eigenpose: Occlusion-Robust 3D Human Mesh Reconstruction
A new approach for occlusion-robust 3D human mesh reconstruction from a single image is introduced in this paper. Since occlusion has emerged as a major problem to be resolved in this field, there have been meaningful efforts to deal with various types of occlusion (e.g., person-to-person occlusion, person-to-object occlusion, and self-occlusion). Although many recent studies have shown remarkable progress, previous regression-based methods still have limitations in handling occlusion due to the lack of appearance information in occluded regions. To address this problem, we propose a novel method for human mesh reconstruction based on pose-relevant subspace analysis. Specifically, we first generate a set of eigenvectors, so-called eigenposes, by applying singular value decomposition (SVD) to a pose matrix that contains diverse poses sampled from the training set. These eigenposes are then linearly combined to construct the target body pose according to fusing coefficients, which are learned through the proposed network. Combining principal body postures (i.e., eigenposes) in this global manner greatly helps resolve partial ambiguities caused by occlusions. Furthermore, we propose a joint injection module that efficiently incorporates the spatial information of visible joints into the encoded feature during the estimation of the fusing coefficients. Experimental results on benchmark datasets demonstrate the ability of the proposed method to robustly reconstruct the human mesh under various occlusions occurring in real-world scenarios. The code and model are publicly available at: https://github.com/DCVL-3D/Eigenpose_release.
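The core idea in the abstract (an eigenpose basis obtained via SVD of a pose matrix, recombined with learned fusing coefficients) can be illustrated with a minimal sketch. This is not the authors' implementation: the 72-dimensional SMPL-style pose vectors, the number of basis vectors K, the mean-pose centering, and all variable names are assumptions for illustration, and the fusing coefficients are random placeholders where the paper uses a network prediction from the input image.

```python
# Minimal sketch of the eigenpose idea (assumptions noted in the lead-in).
import numpy as np

rng = np.random.default_rng(0)

# Pose matrix: N training poses, each a D-dimensional parameter vector
# (stand-in random data; the paper samples diverse poses from the training set).
N, D, K = 1000, 72, 32
pose_matrix = rng.standard_normal((N, D)).astype(np.float32)

# Center the poses (a standard choice, assumed here) and take the SVD;
# the top-K right singular vectors serve as the eigenposes, i.e. the
# principal directions of pose variation.
mean_pose = pose_matrix.mean(axis=0)
_, _, vt = np.linalg.svd(pose_matrix - mean_pose, full_matrices=False)
eigenposes = vt[:K]                                   # shape (K, D)

# Fusing coefficients: predicted by the proposed network in the paper,
# random placeholders here.
fusing_coeffs = rng.standard_normal(K).astype(np.float32)

# Target body pose = mean pose + coefficient-weighted sum of eigenposes.
target_pose = mean_pose + fusing_coeffs @ eigenposes  # shape (D,)
print(target_pose.shape)
```

Because each eigenpose is a whole-body direction of variation, predicting K fusing coefficients constrains all joints jointly, which is what lets visible body parts inform the pose of occluded ones.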