{"title":"Monocular Human-Object Reconstruction in the Wild","authors":"Chaofan Huo, Ye Shi, Jingya Wang","doi":"arxiv-2407.20566","DOIUrl":null,"url":null,"abstract":"Learning the prior knowledge of the 3D human-object spatial relation is\ncrucial for reconstructing human-object interaction from images and\nunderstanding how humans interact with objects in 3D space. Previous works\nlearn this prior from datasets collected in controlled environments, but due to\nthe diversity of domains, they struggle to generalize to real-world scenarios.\nTo overcome this limitation, we present a 2D-supervised method that learns the\n3D human-object spatial relation prior purely from 2D images in the wild. Our\nmethod utilizes a flow-based neural network to learn the prior distribution of\nthe 2D human-object keypoint layout and viewports for each image in the\ndataset. The effectiveness of the prior learned from 2D images is demonstrated\non the human-object reconstruction task by applying the prior to tune the\nrelative pose between the human and the object during the post-optimization\nstage. To validate and benchmark our method on in-the-wild images, we collect\nthe WildHOI dataset from the YouTube website, which consists of various\ninteractions with 8 objects in real-world scenarios. We conduct the experiments\non the indoor BEHAVE dataset and the outdoor WildHOI dataset. The results show\nthat our method achieves almost comparable performance with fully 3D supervised\nmethods on the BEHAVE dataset, even if we have only utilized the 2D layout\ninformation, and outperforms previous methods in terms of generality and\ninteraction diversity on in-the-wild images.","PeriodicalId":501174,"journal":{"name":"arXiv - CS - Graphics","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2407.20566","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Learning a prior over the 3D human-object spatial relation is crucial for reconstructing human-object interactions from images and for understanding how humans interact with objects in 3D space. Previous works learn this prior from datasets collected in controlled environments, but because of the domain gap they struggle to generalize to real-world scenarios.
To overcome this limitation, we present a 2D-supervised method that learns the 3D human-object spatial relation prior purely from 2D images in the wild. Our method uses a flow-based neural network to learn the prior distribution of the 2D human-object keypoint layout and viewports for each image in the dataset.
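As a rough illustration of the kind of model described here, the sketch below implements a small RealNVP-style normalizing flow over flattened 2D keypoint layouts in PyTorch, trained by maximizing log-likelihood. All names, dimensions, and the handling of viewports (folded into the layout vector rather than conditioned on) are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal RealNVP-style flow over 2D human-object keypoint layouts (a sketch,
# not the authors' architecture). Each layout is flattened to a vector, e.g.
# 25 human + 5 object keypoints * (x, y) = 60 dims -- counts are assumptions.
import math
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Affine coupling: the first half of the vector parameterizes an
    elementwise scale/shift applied to the second half."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, (dim - self.half) * 2),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                      # bounded log-scales for stability
        z2 = x2 * torch.exp(s) + t
        return torch.cat([x1, z2], dim=-1), s.sum(dim=-1)  # output, log|det J|

class KeypointLayoutFlow(nn.Module):
    """p(layout): stack of couplings mapping layouts to a unit Gaussian."""
    def __init__(self, dim, n_layers=6):
        super().__init__()
        self.dim = dim
        self.layers = nn.ModuleList(AffineCoupling(dim) for _ in range(n_layers))
        # Fixed random permutations so every dimension gets transformed somewhere.
        self.perms = [torch.randperm(dim) for _ in range(n_layers)]

    def log_prob(self, x):
        log_det = x.new_zeros(x.shape[0])
        for layer, perm in zip(self.layers, self.perms):
            x, ld = layer(x[:, perm])
            log_det = log_det + ld
        base = -0.5 * (x ** 2).sum(dim=-1) - 0.5 * self.dim * math.log(2 * math.pi)
        return base + log_det

# Training step: maximize the likelihood of 2D layouts observed in the dataset.
flow = KeypointLayoutFlow(dim=60)
layouts = torch.randn(64, 60)                  # stand-in batch of flattened layouts
loss = -flow.log_prob(layouts).mean()
loss.backward()
```

In this setup the flow's `log_prob` doubles as a differentiable score for any candidate layout, which is what makes it reusable as an optimization prior.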
The effectiveness of the prior learned from 2D images is demonstrated on the human-object reconstruction task by applying the prior to tune the relative pose between the human and the object during the post-optimization stage.
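For concreteness, here is a minimal sketch (assumptions throughout, not the authors' released code) of how such a prior could drive the post-optimization stage: a 6-DoF relative object pose is tuned by gradient descent against a reprojection data term plus the negative log-likelihood of the projected layout under the flow. The pinhole camera, initial pose, loss weight, and the `KeypointLayoutFlow` instance from the previous sketch are all placeholder choices.

```python
# Sketch of the post-optimization stage: tune the human-object relative pose so
# the reprojected keypoint layout matches 2D detections AND stays likely under
# the flow prior. Camera model, weights, and initializations are assumptions.
import torch

def axis_angle_to_matrix(aa):
    """Rodrigues' formula: axis-angle vector (3,) -> rotation matrix (3, 3)."""
    theta = aa.norm() + 1e-8
    k = aa / theta
    zero = torch.zeros((), dtype=aa.dtype)
    K = torch.stack([torch.stack([zero, -k[2], k[1]]),
                     torch.stack([k[2], zero, -k[0]]),
                     torch.stack([-k[1], k[0], zero])])
    return torch.eye(3) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

def project(pts_3d, focal=1000.0):
    """Simple pinhole projection; a real pipeline would use the image intrinsics."""
    return focal * pts_3d[..., :2] / pts_3d[..., 2:3]

def tune_relative_pose(flow, human_kpts_3d, obj_kpts_local, kpts_2d_obs,
                       steps=200, w_prior=0.1):
    # 6-DoF relative pose of the object w.r.t. the (fixed) human.
    rot = (1e-3 * torch.randn(3)).requires_grad_()        # axis-angle
    trans = torch.tensor([0.0, 0.0, 2.0]).requires_grad_()
    opt = torch.optim.Adam([rot, trans], lr=1e-2)
    for _ in range(steps):
        R = axis_angle_to_matrix(rot)
        obj_kpts_3d = obj_kpts_local @ R.T + trans
        layout = project(torch.cat([human_kpts_3d, obj_kpts_3d], dim=0))
        data_term = ((layout - kpts_2d_obs) ** 2).sum()   # reprojection error
        # Flattened layout must match the dimensionality the flow was trained on.
        prior_term = -flow.log_prob(layout.reshape(1, -1)).squeeze()
        loss = data_term + w_prior * prior_term
        opt.zero_grad()
        loss.backward()
        opt.step()
    return rot.detach(), trans.detach()
```

The prior term is what pulls implausible configurations, such as an object floating far from the body, back toward layouts that were actually observed in 2D images.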
To validate and benchmark our method on in-the-wild images, we collect the WildHOI dataset from YouTube, which covers diverse interactions with 8 objects in real-world scenarios. We conduct experiments on the indoor BEHAVE dataset and the outdoor WildHOI dataset. The results show that our method achieves performance nearly comparable to that of fully 3D-supervised methods on the BEHAVE dataset, even though it uses only 2D layout information, and that it outperforms previous methods in generality and interaction diversity on in-the-wild images.