Monocular Human-Object Reconstruction in the Wild

Chaofan Huo, Ye Shi, Jingya Wang
arXiv:2407.20566 · arXiv - CS - Graphics · Published 2024-07-30
Citations: 0

Abstract

Learning the prior knowledge of the 3D human-object spatial relation is crucial for reconstructing human-object interaction from images and understanding how humans interact with objects in 3D space. Previous works learn this prior from datasets collected in controlled environments, but due to the diversity of domains, they struggle to generalize to real-world scenarios. To overcome this limitation, we present a 2D-supervised method that learns the 3D human-object spatial relation prior purely from 2D images in the wild. Our method utilizes a flow-based neural network to learn the prior distribution of the 2D human-object keypoint layout and viewports for each image in the dataset. The effectiveness of the prior learned from 2D images is demonstrated on the human-object reconstruction task by applying the prior to tune the relative pose between the human and the object during the post-optimization stage. To validate and benchmark our method on in-the-wild images, we collect the WildHOI dataset from the YouTube website, which consists of various interactions with 8 objects in real-world scenarios. We conduct the experiments on the indoor BEHAVE dataset and the outdoor WildHOI dataset. The results show that our method achieves almost comparable performance with fully 3D supervised methods on the BEHAVE dataset, even if we have only utilized the 2D layout information, and outperforms previous methods in terms of generality and interaction diversity on in-the-wild images.
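The post-optimization stage described above can be sketched with a toy stand-in. The paper learns the layout prior with a normalizing flow over full 2D human-object keypoint layouts; the sketch below substitutes a diagonal Gaussian over a single 2D human-object offset so the idea stays self-contained. All names (`GaussianLayoutPrior`, `refine_offset`) and values are illustrative, not the authors' implementation: the refined offset minimizes an energy that balances a 2D observation term against the prior's negative log-likelihood.

```python
import math

# Hypothetical stand-in for the flow-based layout prior: a diagonal Gaussian
# over the 2D human-object keypoint offset. (The paper learns this density
# with a normalizing flow over the full keypoint layout and viewport.)
class GaussianLayoutPrior:
    def __init__(self, mean, std):
        self.mean, self.std = mean, std

    def nll(self, offset):
        # Negative log-likelihood of a 2D offset, up to an additive constant.
        return sum(((o - m) / s) ** 2 / 2.0
                   for o, m, s in zip(offset, self.mean, self.std))

    def nll_grad(self, offset):
        # Gradient of the NLL with respect to the offset.
        return [(o - m) / (s * s)
                for o, m, s in zip(offset, self.mean, self.std)]


def refine_offset(observed, prior, weight=1.0, lr=0.1, steps=200):
    """Post-optimization: tune the human-object offset by gradient descent on
    E(x) = ||x - observed||^2 + weight * NLL_prior(x),
    trading off the 2D keypoint observation against the learned prior."""
    x = list(observed)
    for _ in range(steps):
        g_data = [2.0 * (xi - oi) for xi, oi in zip(x, observed)]
        g_prior = prior.nll_grad(x)
        x = [xi - lr * (gd + weight * gp)
             for xi, gd, gp in zip(x, g_data, g_prior)]
    return x


# Toy usage: a noisy detected offset gets pulled slightly toward the prior mean.
prior = GaussianLayoutPrior(mean=[0.0, 50.0], std=[10.0, 10.0])
observed = [30.0, 80.0]  # 2D offset estimated from keypoint detections
refined = refine_offset(observed, prior, weight=2.0)
```

For a quadratic energy like this the fixed point is available in closed form (a precision-weighted average of the observation and the prior mean), so the loop converges quickly; with a flow-based prior the NLL gradient would instead come from automatic differentiation through the flow.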