Pose-aware monocular localization of occluded pedestrians in 3D scene space

Mohammad Masoud Rahimi, Kourosh Khoshelham, Mark Stevenson, Stephan Winter

ISPRS Open Journal of Photogrammetry and Remote Sensing, Volume 2, Article 100006, December 2021. DOI: 10.1016/j.ophoto.2021.100006

Abstract

Localization of pedestrians in 3D scene space from single RGB images is critical for various downstream applications. Current monocular approaches employ either the bounding box of pedestrians or the visible parts of their bodies for localization. Both approaches introduce additional error into the location estimate in real-world scenarios: crowded environments with multiple occluded pedestrians. To overcome this limitation, this paper proposes a novel human pose-aware pedestrian localization framework that models the poses of occluded pedestrians, enabling accurate localization in both image and ground space. This is achieved with a lightweight neural network architecture that ensures fast and accurate prediction of missing body parts for downstream applications. Comprehensive experiments on two real-world datasets demonstrate the effectiveness of the framework compared to the state of the art, both in predicting pedestrians' missing body parts and in pedestrian localization.
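The ground-space localization step the abstract alludes to can be illustrated with standard monocular geometry: once a full pose (including occluded foot joints) has been predicted, the estimated foot pixel can be back-projected through a pinhole camera onto a flat ground plane. The sketch below is a generic illustration of that geometry, not the paper's actual method; the intrinsic matrix, camera height, and foot pixel are made-up example values.

```python
import numpy as np

def localize_on_ground(foot_px, K, cam_height):
    """Back-project a foot pixel onto the ground plane.

    Assumes a pinhole camera with intrinsics K, optical axis parallel to the
    ground, y axis pointing down, and optical center `cam_height` metres
    above a flat ground plane.
    """
    # Ray direction in camera coordinates for the given pixel.
    ray = np.linalg.inv(K) @ np.array([foot_px[0], foot_px[1], 1.0])
    # Scale the ray so its downward (y) component reaches the ground.
    scale = cam_height / ray[1]
    point = scale * ray
    # Lateral offset (x) and depth (z) of the pedestrian on the ground.
    return point[0], point[2]

# Illustrative intrinsics and foot keypoint (not from the paper).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
x, z = localize_on_ground((700.0, 500.0), K, cam_height=1.5)
```

This is also why occlusion matters for localization accuracy: if the feet are hidden and the lowest *visible* pixel of the body is used instead, the back-projected ray hits the ground too early and the pedestrian's depth is underestimated, which is the error the pose-completion step is designed to avoid.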
