ViPNet: An End-to-End 6D Visual Camera Pose Regression Network

Haohao Hu, Aoran Wang, Marc Sons, M. Lauer
{"title":"ViPNet:一个端到端的6D视觉相机姿态回归网络","authors":"Haohao Hu, Aoran Wang, Marc Sons, M. Lauer","doi":"10.1109/ITSC45102.2020.9294630","DOIUrl":null,"url":null,"abstract":"In this work, we present a visual pose regression network: ViPNet. It is robust and real-time capable on mobile platforms such as self-driving vehicles. We train a convolutional neural network to estimate the six degrees of freedom camera pose from a single monocular image in an end-to-end manner. In order to estimate camera poses with uncertainty, we use a Bayesian version of the ResNet-50 as our basic network. SEBlocks are applied in residual units to increase our model’s sensitivity to informative features. Our ViPNet is trained using a geometric loss function with trainable parameters, which can simplify the fine-tuning process significantly. We evaluate our ViPNet on the Cambridge Landmarks dataset and also on our Karl-Wilhelm-Plaza dataset, which is recorded with an experimental vehicle. As evaluation results, our ViPNet outperforms other end-to-end monocular camera pose estimation methods. Our ViPNet requires only 9-15ms to predict one camera pose, which allows us to run it with a very high frequency.","PeriodicalId":394538,"journal":{"name":"2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"ViPNet: An End-to-End 6D Visual Camera Pose Regression Network\",\"authors\":\"Haohao Hu, Aoran Wang, Marc Sons, M. Lauer\",\"doi\":\"10.1109/ITSC45102.2020.9294630\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this work, we present a visual pose regression network: ViPNet. It is robust and real-time capable on mobile platforms such as self-driving vehicles. We train a convolutional neural network to estimate the six degrees of freedom camera pose from a single monocular image in an end-to-end manner. In order to estimate camera poses with uncertainty, we use a Bayesian version of the ResNet-50 as our basic network. SEBlocks are applied in residual units to increase our model’s sensitivity to informative features. Our ViPNet is trained using a geometric loss function with trainable parameters, which can simplify the fine-tuning process significantly. We evaluate our ViPNet on the Cambridge Landmarks dataset and also on our Karl-Wilhelm-Plaza dataset, which is recorded with an experimental vehicle. As evaluation results, our ViPNet outperforms other end-to-end monocular camera pose estimation methods. 
Our ViPNet requires only 9-15ms to predict one camera pose, which allows us to run it with a very high frequency.\",\"PeriodicalId\":394538,\"journal\":{\"name\":\"2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-09-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ITSC45102.2020.9294630\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ITSC45102.2020.9294630","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

In this work, we present ViPNet, a visual pose regression network that is robust and real-time capable on mobile platforms such as self-driving vehicles. We train a convolutional neural network to estimate the six-degrees-of-freedom camera pose from a single monocular image in an end-to-end manner. To estimate camera poses together with their uncertainty, we use a Bayesian version of ResNet-50 as our base network, and SE blocks are applied in the residual units to increase the model's sensitivity to informative features. ViPNet is trained with a geometric loss function whose weighting parameters are themselves trainable, which significantly simplifies fine-tuning. We evaluate ViPNet on the Cambridge Landmarks dataset and on our Karl-Wilhelm-Plaza dataset, which was recorded with an experimental vehicle. In these evaluations, ViPNet outperforms other end-to-end monocular camera pose estimation methods and requires only 9–15 ms to predict one camera pose, which allows it to run at a very high frequency.
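The abstract names three mechanisms: a geometric loss with trainable weighting parameters, SE blocks inside the residual units, and a Bayesian ResNet-50 backbone for uncertainty estimation. The PyTorch sketches below reconstruct the standard forms of these techniques for illustration only; they are not the authors' code, and every class name, initial value, and hyperparameter (`s_x_init`, `reduction=16`, `n_samples=20`) is an assumption.

A minimal sketch of a pose loss with trainable weighting, assuming the common homoscedastic-uncertainty formulation in which learnable log-variances replace a hand-tuned balance factor between position and orientation error:

```python
import torch
import torch.nn as nn

class TrainableWeightedPoseLoss(nn.Module):
    """Position/orientation loss whose balance is learned, not hand-tuned:
    the log-variances s_x and s_q are ordinary parameters updated by the
    optimizer alongside the network weights (hypothetical reconstruction)."""

    def __init__(self, s_x_init: float = 0.0, s_q_init: float = -3.0):
        super().__init__()
        self.s_x = nn.Parameter(torch.tensor(s_x_init))  # position log-variance
        self.s_q = nn.Parameter(torch.tensor(s_q_init))  # orientation log-variance

    def forward(self, pred_xyz, pred_q, gt_xyz, gt_q):
        pred_q = pred_q / pred_q.norm(dim=-1, keepdim=True)  # unit quaternion
        loss_x = (pred_xyz - gt_xyz).norm(dim=-1).mean()
        loss_q = (pred_q - gt_q).norm(dim=-1).mean()
        # exp(-s) down-weights the noisier term; the +s regularizer
        # prevents the log-variances from growing without bound.
        return (loss_x * torch.exp(-self.s_x) + self.s_x
                + loss_q * torch.exp(-self.s_q) + self.s_q)
```

The SE block follows the standard squeeze-and-excitation pattern; where exactly it sits inside each residual unit is the paper's design choice, so this is only the generic building block:

```python
class SEBlock(nn.Module):
    """Squeeze-and-excitation: pool each channel to a scalar, pass the
    result through a bottleneck MLP, and rescale the channels with the
    resulting sigmoid gates."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.gate(x.mean(dim=(2, 3)))  # squeeze: (B, C) channel descriptors
        return x * w.view(b, c, 1, 1)      # excite: per-channel rescaling
```

For the Bayesian backbone, a common way to obtain pose uncertainty from a dropout-equipped network is Monte Carlo dropout at test time; the abstract does not spell out the sampling scheme, so treat this as one plausible reading:

```python
@torch.no_grad()
def predict_pose_with_uncertainty(model, image, n_samples: int = 20):
    """Monte Carlo dropout: several stochastic forward passes; the sample
    mean is the pose estimate and the spread its uncertainty."""
    model.eval()
    for m in model.modules():          # re-enable only the dropout layers,
        if isinstance(m, nn.Dropout):  # keeping batch-norm statistics frozen
            m.train()
    samples = torch.stack([model(image) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)
```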