2013 IEEE Workshop on Robot Vision (WORV): Latest Publications

Segment-based robotic mapping in dynamic environments
2013 IEEE Workshop on Robot Vision (WORV) · Pub Date: 2013-05-30 · DOI: 10.1109/WORV.2013.6521913
Ross T. Creed, R. Lakaemper
Abstract: This paper introduces a dynamic mapping algorithm based on line segments. The use of higher-level geometric features allows fast and robust identification of inconsistencies between incoming sensor data and an existing robotic map. Handling these inconsistencies with a partial-segment likelihood measure produces a mapping system that evolves with the changing features of a dynamic environment. The algorithm is tested in a large-scale simulation of a storage logistics center and in a real-world office environment, and is compared against the current state of the art.
Citations: 2
Why would I want a gyroscope on my RGB-D sensor?
2013 IEEE Workshop on Robot Vision (WORV) · Pub Date: 2013-01-01 · DOI: 10.1109/WORV.2013.6521916
H. Ovrén, Per-Erik Forssén, D. Tornqvist
Abstract: Many RGB-D sensors, e.g. the Microsoft Kinect, use rolling-shutter cameras. Such cameras produce geometrically distorted images when the sensor is moving. To mitigate these rolling-shutter distortions we propose a method that uses an attached gyroscope to rectify the depth scans. We also present a simple scheme to calibrate the relative pose and time synchronization between the gyro and a rolling-shutter RGB-D sensor. We examine the effectiveness of our rectification scheme by coupling it with the Kinect Fusion algorithm. By comparing Kinect Fusion models obtained from raw sensor scans and from rectified scans, we demonstrate improvement for three classes of sensor motion: panning motions cause slant distortions, and tilt motions cause vertically elongated or compressed objects. For wobble we also observe a loss of detail compared to the reconstruction using rectified depth scans. As our method relies on gyroscope readings, the amount of computation required is negligible compared to the cost of running Kinect Fusion.
Citations: 16
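The rectification idea summarized in the abstract above can be sketched in a few lines. The constant-angular-velocity model, the function names, and the coordinate convention below are our illustrative assumptions, not the authors' exact formulation: each image row is exposed at a slightly later time, so the gyro-measured rotation accumulated since the first row is undone for the 3D points of that row.

```python
import numpy as np

def rodrigues(axis_angle):
    """Rotation matrix from an axis-angle vector (Rodrigues' formula)."""
    theta = np.linalg.norm(axis_angle)
    if theta < 1e-12:
        return np.eye(3)
    k = axis_angle / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def rectify_rows(points_by_row, omega, row_time):
    """Map the 3D points of each rolling-shutter row back to the first
    row's reference frame. Assumes a constant angular velocity `omega`
    (rad/s, from the gyro) over the frame and a fixed per-row readout
    time `row_time` (s). Illustrative sketch only."""
    rectified = []
    for r, pts in enumerate(points_by_row):
        # Rotation accumulated between row 0 and row r.
        R = rodrigues(omega * (r * row_time))
        # Convention here: p_row0 = R @ p_rowr for each (N, 3) row array.
        rectified.append((R @ pts.T).T)
    return rectified
```

As the abstract notes, this is cheap: one 3x3 rotation per row, negligible next to Kinect Fusion itself.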
Visual-inertial navigation with guaranteed convergence
2013 IEEE Workshop on Robot Vision (WORV) · Pub Date: 2013 · DOI: 10.1109/WORV.2013.6521930
F. Di Corato, M. Innocenti, L. Pollini
Abstract: This contribution presents a constraint-based, loosely coupled Augmented Implicit Kalman Filter approach to vision-aided inertial navigation that uses epipolar constraints as the output map. The proposed approach estimates the standard navigation outputs (velocity, position, and attitude) together with the inertial sensor biases. An observability analysis is given to define the motion requirements for full observability of the system and asymptotic convergence of the parameter estimates. Simulations are presented to support the theoretical conclusions.
Citations: 4
Clustering of image features based on contact and occlusion among robot body and objects
2013 IEEE Workshop on Robot Vision (WORV) · Pub Date: 2013 · DOI: 10.1109/WORV.2013.6521939
T. Somei, Y. Kobayashi, A. Shimizu, T. Kaneko
Abstract: This paper presents a recognition framework for a robot without predefined knowledge of its environment. Image features (keypoints) are clustered based on statistical dependencies with respect to their motions and occlusions. Estimation of conditional probability is used to evaluate the statistical dependencies between the robot's configuration and the features in images. Features that move depending on the configuration of the robot can be regarded as part of the robot's body. Different kinds of occlusion can occur depending on the relative position of the robot hand and objects; these differences are expressed as different structures of the "dependency network" in the proposed framework. The proposed recognition was verified by experiments using a humanoid robot equipped with a camera and an arm. It was first confirmed that part of the robot body was autonomously extracted, without any a priori knowledge, using conditional probability. In the generation of the dependency network, different network structures were constructed depending on the position of the robot hand relative to an object.
Citations: 3
RMSD: A 3D real-time mid-level scene description system
2013 IEEE Workshop on Robot Vision (WORV) · Pub Date: 2013 · DOI: 10.1007/978-3-662-43859-6_2
K. Georgiev, R. Lakaemper
(No abstract available.)
Citations: 1
Real-time obstacle detection and avoidance in the presence of specular surfaces using an active 3D sensor
2013 IEEE Workshop on Robot Vision (WORV) · Pub Date: 2013 · DOI: 10.1109/WORV.2013.6521938
B. Peasley, Stan Birchfield
Abstract: This paper proposes a novel approach to obstacle detection and avoidance using a 3D sensor. We depart from the approach of previous researchers, who detect obstacles by projecting depth images from 3D sensors onto UV-disparity. Instead, our approach projects 3D points onto the ground plane, which is estimated during a calibration step. A 2D occupancy map is then used to determine the presence of obstacles, from which translational and rotational velocities are computed to avoid them. Two innovations are introduced to overcome the limitations of the sensor: an infinite-pole approach that hypothesizes infinitely tall, thin obstacles when the sensor yields invalid readings, and a control strategy that turns the robot away from scenes yielding a high percentage of invalid readings. Together, these extensions enable the system to overcome the inherent limitations of the sensor. Experiments in a variety of environments, including dynamic objects, obstacles of varying heights, and dimly lit conditions, show the ability of the system to perform robust obstacle avoidance in real time under realistic indoor conditions.
Citations: 44
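The core step the abstract above describes, projecting 3D points onto a calibrated ground plane and filling a 2D occupancy map, can be illustrated with a minimal sketch. The grid parameters, height thresholds, and function name are our illustrative choices, not values from the paper:

```python
import numpy as np

def occupancy_map(points, cell=0.05, size=4.0, h_min=0.05, h_max=1.5):
    """Mark grid cells occupied by 3D points whose height above the
    (already calibrated) ground plane lies in (h_min, h_max).
    `points` is an (N, 3) array in ground-plane coordinates:
    x right, z forward, y = height above the plane.
    All thresholds here are illustrative, not the paper's values."""
    n = int(size / cell)
    grid = np.zeros((n, n), dtype=bool)
    # Keep only points at obstacle height (floor and ceiling are ignored).
    mask = (points[:, 1] > h_min) & (points[:, 1] < h_max)
    # Project surviving points onto the ground plane (drop the height),
    # binning x (centered on the robot) and z (forward range) into cells.
    ix = ((points[mask, 0] + size / 2) / cell).astype(int)
    iz = (points[mask, 2] / cell).astype(int)
    ok = (ix >= 0) & (ix < n) & (iz >= 0) & (iz < n)
    grid[iz[ok], ix[ok]] = True
    return grid
```

A steering law would then scan this grid for the nearest occupied cells and choose translational and rotational velocities accordingly; the infinite-pole idea amounts to also marking columns of cells for rays with invalid depth.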
Probabilistic analysis of incremental light bundle adjustment
2013 IEEE Workshop on Robot Vision (WORV) · Pub Date: 2013 · DOI: 10.1109/WORV.2013.6521942
Vadim Indelman, Richard Roberts, F. Dellaert
Abstract: This paper presents a probabilistic analysis of the recently introduced incremental light bundle adjustment method (iLBA) [6]. In iLBA, the observed 3D points are algebraically eliminated, resulting in a cost function with only the camera poses as variables, and an incremental smoothing technique is applied to efficiently process incoming images. While we have already shown that, compared to conventional bundle adjustment (BA), iLBA yields a significant improvement in computational complexity with similar levels of accuracy, the probabilistic properties of iLBA have not been analyzed thus far. In this paper we consider the probability distribution that corresponds to the iLBA cost function and analyze how well it represents the true density of the camera poses given the image measurements. The latter can be calculated exactly in BA by marginalizing out the 3D points from the joint distribution of camera poses and 3D points. We present a theoretical analysis of the differences in the way that LBA and BA use measurement information. Using indoor and outdoor datasets, we show that the first two moments of the iLBA and true probability distributions are very similar in practice.
Citations: 6
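The comparison this abstract describes can be stated compactly (the notation below is ours, not the paper's): BA obtains the exact pose density by marginalizing the 3D points $L$ out of the joint over poses $X$ and points given measurements $Z$, while iLBA works with a pose-only distribution built from algebraic multi-view constraints $h_i$ (e.g. epipolar constraints) in which no 3D points appear as variables:

```latex
% Exact pose density in BA: marginalize out the 3D points L.
p_{BA}(X \mid Z) = \int p(X, L \mid Z)\, dL
% iLBA's pose-only surrogate, a product of Gaussian constraint factors:
p_{LBA}(X \mid Z) \propto \prod_i \exp\!\left( -\tfrac{1}{2}\,
    \lVert h_i(X, Z) \rVert_{\Sigma_i}^2 \right)
```

The paper's empirical claim is that the first two moments of these two distributions nearly agree on real datasets.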