{"title":"Segment-based robotic mapping in dynamic environments","authors":"Ross T. Creed, R. Lakaemper","doi":"10.1109/WORV.2013.6521913","DOIUrl":"https://doi.org/10.1109/WORV.2013.6521913","url":null,"abstract":"This paper introduces a dynamic mapping algorithm based on line segments. The use of higher level geometric features allows for fast and robust identification of inconsistencies between incoming sensor data and an existing robotic map. Handling of these inconsistencies using a partial-segment likelihood measure produces a system for robot mapping that evolves with the changing features of a dynamic environment. The algorithm is tested in a large scale simulation of a storage logistics center, a real world office environment, and compared against the current state of the art.","PeriodicalId":130461,"journal":{"name":"2013 IEEE Workshop on Robot Vision (WORV)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127152008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Why would i want a gyroscope on my RGB-D sensor?","authors":"H. Ovrén, Per-Erik Forssén, D. Tornqvist","doi":"10.1109/WORV.2013.6521916","DOIUrl":"https://doi.org/10.1109/WORV.2013.6521916","url":null,"abstract":"Many RGB-D sensors, e.g. the Microsoft Kinect, use rolling shutter cameras. Such cameras produce geometrically distorted images when the sensor is moving. To mitigate these rolling shutter distortions we propose a method that uses an attached gyroscope to rectify the depth scans. We also present a simple scheme to calibrate the relative pose and time synchronization between the gyro and a rolling shutter RGB-D sensor. We examine the effectiveness of our rectification scheme by coupling it with the the Kinect Fusion algorithm. By comparing Kinect Fusion models obtained from raw sensor scans and from rectified scans, we demonstrate improvement for three classes of sensor motion: panning motions causes slant distortions, and tilt motions cause vertically elongated or compressed objects. For wobble we also observe a loss of detail, compared to the reconstruction using rectified depth scans. As our method relies on gyroscope readings, the amount of computations required is negligible compared to the cost of running Kinect Fusion.","PeriodicalId":130461,"journal":{"name":"2013 IEEE Workshop on Robot Vision (WORV)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130861268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual-inertial navigation with guaranteed convergence","authors":"F. Di Corato, M. Innocenti, L. Pollini","doi":"10.1109/WORV.2013.6521930","DOIUrl":"https://doi.org/10.1109/WORV.2013.6521930","url":null,"abstract":"This contribution presents a constraints-based loosely-coupled Augmented Implicit Kalman Filter approach to vision-aided inertial navigation that uses epipolar constraints as output map. The proposed approach is capable of estimating the standard navigation output (velocity, position and attitude) together with inertial sensor biases. An observability analysis is proposed in order to define the motion requirements for full observability of the system and asymptotic convergence of the parameter estimations. Simulations are presented to support the theoretical conclusions.","PeriodicalId":130461,"journal":{"name":"2013 IEEE Workshop on Robot Vision (WORV)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122249037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Clustering of image features based on contact and occlusion among robot body and objects","authors":"T. Somei, Y. Kobayashi, A. Shimizu, T. Kaneko","doi":"10.1109/WORV.2013.6521939","DOIUrl":"https://doi.org/10.1109/WORV.2013.6521939","url":null,"abstract":"This paper presents a recognition framework for a robot without predefined knowledge on its environment. Image features (keypoints) are clustered based on statistical dependencies with respect to their motions and occlusions. Estimation of conditional probability is used to evaluate statistical dependencies among configuration of robot and features in images. Features that move depending on the configuration of the robot can be regarded as part of robot's body. Different kinds of occlusion can happen depending on relative position of robot hand and objects. Those differences can be expressed as different structures of `dependency network' in the proposed framework. The proposed recognition was verified by experiment using a humanoid robot equipped with camera and arm. It was first confirmed that part of the robot body was autonomously extracted without any a priori knowledge using conditional probability. In the generation of dependency network, different structures of networks were constructed depending on position of the robot hand relative to an object.","PeriodicalId":130461,"journal":{"name":"2013 IEEE Workshop on Robot Vision (WORV)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133906389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RMSD: A 3D real-time mid-level scene description system","authors":"K. Georgiev, R. Lakaemper","doi":"10.1007/978-3-662-43859-6_2","DOIUrl":"https://doi.org/10.1007/978-3-662-43859-6_2","url":null,"abstract":"","PeriodicalId":130461,"journal":{"name":"2013 IEEE Workshop on Robot Vision (WORV)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123935482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time obstacle detection and avoidance in the presence of specular surfaces using an active 3D sensor","authors":"B. Peasley, Stan Birchfield","doi":"10.1109/WORV.2013.6521938","DOIUrl":"https://doi.org/10.1109/WORV.2013.6521938","url":null,"abstract":"This paper proposes a novel approach to obstacle detection and avoidance using a 3D sensor. We depart from the approach of previous researchers who use depth images from 3D sensors projected onto UV-disparity to detect obstacles. Instead, our approach relies on projecting 3D points onto the ground plane, which is estimated during a calibration step. A 2D occupancy map is then used to determine the presence of obstacles, from which translation and rotation velocities are computed to avoid the obstacles. Two innovations are introduced to overcome the limitations of the sensor: An infinite pole approach is proposed to hypothesize infinitely tall, thin obstacles when the sensor yields invalid readings, and a control strategy is adopted to turn the robot away from scenes that yield a high percentage of invalid readings. Together, these extensions enable the system to overcome the inherent limitations of the sensor. Experiments in a variety of environments, including dynamic objects, obstacles of varying heights, and dimly-lit conditions, show the ability of the system to perform robust obstacle avoidance in real time under realistic indoor conditions.","PeriodicalId":130461,"journal":{"name":"2013 IEEE Workshop on Robot Vision (WORV)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123506605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Probabilistic analysis of incremental light bundle adjustment","authors":"Vadim Indelman, Richard Roberts, F. Dellaert","doi":"10.1109/WORV.2013.6521942","DOIUrl":"https://doi.org/10.1109/WORV.2013.6521942","url":null,"abstract":"This paper presents a probabilistic analysis of the recently introduced incremental light bundle adjustment method (iLBA) [6]. In iLBA, the observed 3D points are algebraically eliminated, resulting in a cost function with only the camera poses as variables, and an incremental smoothing technique is applied for efficiently processing incoming images. While we have already showed that compared to conventional bundle adjustment (BA), iLBA yields a significant improvement in computational complexity with similar levels of accuracy, the probabilistic properties of iLBA have not been analyzed thus far. In this paper we consider the probability distribution that corresponds to the iLBA cost function, and analyze how well it represents the true density of the camera poses given the image measurements. The latter can be exactly calculated in bundle adjustment (BA) by marginalizing out the 3D points from the joint distribution of camera poses and 3D points. We present a theoretical analysis of the differences in the way that LBA and BA use measurement information. Using indoor and outdoor datasets we show that the first two moments of the iLBA and the true probability distributions are very similar in practice.","PeriodicalId":130461,"journal":{"name":"2013 IEEE Workshop on Robot Vision (WORV)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134196796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}