UAV Sensor Fusion with Latent-Dynamic Conditional Random Fields in Coronal Plane Estimation

Amir M. Rahimi, Raphael Ruschel, B. S. Manjunath
{"title":"UAVSensor Fusion with Latent-Dynamic Conditional Random Fields in Coronal Plane Estimation","authors":"Amir M. Rahimi, Raphael Ruschel, B. S. Manjunath","doi":"10.1109/CVPR.2016.490","DOIUrl":null,"url":null,"abstract":"We present a real-time body orientation estimation in a micro-Unmanned Air Vehicle video stream. This work is part ofafully autonomous UAVsystem which can maneuver to face a single individual in challenging outdoor environments. Our body orientation estimation consists of the following steps: (a) obtaining a set ofvisual appearance models for each body orientation, where each model is tagged with a set of scene information (obtained from sensors), (b) exploiting the mutual information of on-board sensors using latent-dynamic conditional random fields (WCRF), (c) Characterizing each visual appearance model with the most discriminative sensor information, (d) fast estimation ofbody orientation during the test flights given theWCRF parameters and the corresponding sensor readings. The key aspects of our approach is to add sparsity to the sensor readings with latent variables followed by long range dependency analysis. Experimental results obtained over real-time video streams demonstrate a significant improvement in both speed (l5-fps) and accuracy (72%) compared to the state of the art techniques that only rely on visual data. Video demonstration ofour autonomous flights (both from ground view and aerial view) are included in the supplementary material.","PeriodicalId":89346,"journal":{"name":"Conference on Computer Vision and Pattern Recognition Workshops. IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Workshops","volume":"11 1","pages":"4527-4534"},"PeriodicalIF":0.0000,"publicationDate":"2016-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Conference on Computer Vision and Pattern Recognition Workshops. IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Workshops","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVPR.2016.490","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 9

Abstract

We present real-time body orientation estimation in a micro-Unmanned Air Vehicle (UAV) video stream. This work is part of a fully autonomous UAV system which can maneuver to face a single individual in challenging outdoor environments. Our body orientation estimation consists of the following steps: (a) obtaining a set of visual appearance models for each body orientation, where each model is tagged with a set of scene information (obtained from sensors); (b) exploiting the mutual information of on-board sensors using latent-dynamic conditional random fields (LDCRF); (c) characterizing each visual appearance model with the most discriminative sensor information; (d) fast estimation of body orientation during the test flights given the LDCRF parameters and the corresponding sensor readings. The key aspect of our approach is to add sparsity to the sensor readings with latent variables, followed by long-range dependency analysis. Experimental results obtained over real-time video streams demonstrate a significant improvement in both speed (15 fps) and accuracy (72%) compared to state-of-the-art techniques that rely only on visual data. Video demonstrations of our autonomous flights (from both the ground view and the aerial view) are included in the supplementary material.
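Step (b) is the modeling core: an LDCRF places a layer of latent states between the observations and the orientation labels, with each label owning a disjoint block of hidden states, so that temporal dependencies and label sub-structure are captured jointly. The sketch below shows how per-frame orientation posteriors could be read out of such a model once its parameters are trained. It is a minimal illustration, not the paper's implementation: the class count, number of hidden sub-states, feature dimension, and the random stand-in parameters are all assumptions.

```python
import numpy as np

N_ORIENT = 8    # orientation classes (45-degree coronal-plane bins, assumed)
K_HIDDEN = 3    # latent sub-states per class; an LDCRF assigns each label a
                # disjoint block of hidden states, here indices [l*K, (l+1)*K)
N_STATES = N_ORIENT * K_HIDDEN
D_FEAT = 16     # fused visual + sensor feature dimension (placeholder)

rng = np.random.default_rng(0)
W = rng.normal(size=(N_STATES, D_FEAT))    # stand-in for trained emission weights
T = rng.normal(size=(N_STATES, N_STATES))  # stand-in for trained transition scores

def logsumexp(a, axis):
    m = np.max(a, axis=axis, keepdims=True)
    return np.squeeze(m + np.log(np.sum(np.exp(a - m), axis=axis, keepdims=True)), axis=axis)

def orientation_posteriors(X):
    """Forward-backward over latent states, marginalized to labels.

    X: (n_frames, D_FEAT) fused per-frame features.
    Returns: (n_frames, N_ORIENT) per-frame orientation posteriors.
    """
    emit = X @ W.T                        # (n_frames, N_STATES) log-potentials
    n = len(X)
    alpha = np.zeros((n, N_STATES))
    beta = np.zeros((n, N_STATES))
    alpha[0] = emit[0]
    for t in range(1, n):                 # forward pass
        alpha[t] = emit[t] + logsumexp(alpha[t - 1][:, None] + T, axis=0)
    for t in range(n - 2, -1, -1):        # backward pass
        beta[t] = logsumexp(T + (emit[t + 1] + beta[t + 1])[None, :], axis=1)
    log_marg = alpha + beta
    log_marg -= logsumexp(log_marg, axis=1)[:, None]
    state_marg = np.exp(log_marg)         # P(h_t | x_{1:T})
    # Sum hidden-state marginals inside each label's disjoint block.
    return state_marg.reshape(n, N_ORIENT, K_HIDDEN).sum(axis=2)

# Usage: 30 frames of fused features -> most likely orientation per frame.
X = rng.normal(size=(30, D_FEAT))
print(orientation_posteriors(X).argmax(axis=1))
```

The disjoint hidden-state blocks are what distinguish an LDCRF from a plain linear-chain CRF: each orientation label carries its own latent sub-dynamics, which is one way to read the paper's point about adding sparsity to the sensor readings with latent variables.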
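Step (c) asks which sensor channels are most discriminative for the orientation label. One plausible way to realize this, sketched below under our own assumptions, is to rank channels by their estimated mutual information with the label; the sensor names and the synthetic data are hypothetical, and the abstract does not specify this exact procedure.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(1)
# Hypothetical sensor channels logged alongside each video frame.
sensors = ["compass_heading", "gimbal_pitch", "altitude", "sun_azimuth"]
S = rng.normal(size=(500, len(sensors)))   # synthetic sensor readings per frame
y = rng.integers(0, 8, size=500)           # orientation label per frame (8 bins)

# Estimate mutual information between each channel and the label,
# then rank channels from most to least discriminative.
mi = mutual_info_classif(S, y, random_state=0)
for name, score in sorted(zip(sensors, mi), key=lambda p: -p[1]):
    print(f"{name:16s} MI = {score:.3f}")
```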