2010 IEEE/RSJ International Conference on Intelligent Robots and Systems: Latest Publications

Decentralized cooperative simultaneous localization and mapping for dynamic and sparse robot networks
2010 IEEE/RSJ International Conference on Intelligent Robots and Systems Pub Date: 2011-05-09 DOI: 10.1109/ICRA.2011.5979783
K. Leung, T. Barfoot, H. Liu
{"title":"Decentralized cooperative simultaneous localization and mapping for dynamic and sparse robot networks","authors":"K. Leung, T. Barfoot, H. Liu","doi":"10.1109/ICRA.2011.5979783","DOIUrl":"https://doi.org/10.1109/ICRA.2011.5979783","url":null,"abstract":"Communication among robots is key to performance in cooperative multi-robot systems. In practice, communication connections for information exchange between all robots are not always guaranteed, which adds difficulty to state estimation. This paper examines the decentralized cooperative simultaneous localization and mapping (SLAM) problem under a sparsely-communicating and dynamic network. We mathematically prove how the centralized-equivalent estimate can be obtained by all robots in the network in a decentralized manner. Furthermore, a robot only needs to consider its own knowledge of the network topology to detect when the centralized-equivalent estimate is obtainable. Our approach is validated through more than 250 minutes of experiments using a team of real robots, with accurate groundtruth data of all robots and landmark features.","PeriodicalId":420658,"journal":{"name":"2010 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126816138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
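As an aside on the topology condition mentioned in the abstract above, the sketch below (Python, with an invented event format, not the paper's actual bookkeeping) illustrates one way a robot could test, from a log of communication events it knows about, whether information originating at every teammate can have reached it by a given time, i.e. whether a centralized-equivalent estimate could in principle be assembled.

```python
def centralized_equivalent_reachable(events, robots, target, t_query):
    """Time-respecting reachability check (illustrative only).

    events: list of (time, robot_a, robot_b) communication exchanges
    robots: iterable of all robot ids
    target: the robot asking the question
    t_query: time by which the estimate is wanted
    Returns (ok, missing): ok is True if information from every robot can
    have reached `target` by t_query; missing lists the robots whose
    information cannot have arrived yet.
    """
    missing = []
    for src in robots:
        knows = {src}                        # robots currently holding src's information
        for t, a, b in sorted(events):       # process exchanges in time order
            if t > t_query:
                break
            if a in knows or b in knows:     # an exchange spreads src's info to both parties
                knows.update((a, b))
        if target not in knows:
            missing.append(src)
    return len(missing) == 0, missing
```

For example, with robots = {'A', 'B', 'C'}, events = [(1.0, 'A', 'B'), (2.0, 'B', 'C')] and target 'C', the check succeeds at t_query = 2.0 because A's information relays through B.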
Outdoor navigation with a spherical amphibious robot
2010 IEEE/RSJ International Conference on Intelligent Robots and Systems Pub Date: 2010-12-03 DOI: 10.1109/IROS.2010.5651713
Viktor Kaznov, M. Seeman
{"title":"Outdoor navigation with a spherical amphibious robot","authors":"Viktor Kaznov, M. Seeman","doi":"10.1109/IROS.2010.5651713","DOIUrl":"https://doi.org/10.1109/IROS.2010.5651713","url":null,"abstract":"Traditionally, mobile robot design is based on wheels, tracks or legs with their respective advantages and disadvantages. Very few groups have explored designs with spherical morphology. During the past ten years, the number of robots with spherical shape and related studies has substantially increased, and a lot of work is done in this area of mobile robotics. Interest in robots with spherical morphology has also increased, in part due to NASA's search for an alternative design for a Mars rover since the wheel-based rover Spirit is now stuck for good in soft soil. This paper presents the spherical amphibious robot Groundbot, developed by Rotundus AB in Stockholm, Sweden, and describes in detail the navigation algorithm employed in this system.","PeriodicalId":420658,"journal":{"name":"2010 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114961972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 74
A vision-based boundary following framework for aerial vehicles
2010 IEEE/RSJ International Conference on Intelligent Robots and Systems Pub Date: 2010-12-03 DOI: 10.1109/IROS.2010.5652034
Anqi Xu, G. Dudek
{"title":"A vision-based boundary following framework for aerial vehicles","authors":"Anqi Xu, G. Dudek","doi":"10.1109/IROS.2010.5652034","DOIUrl":"https://doi.org/10.1109/IROS.2010.5652034","url":null,"abstract":"We present an integration of classical computer vision techniques to achieve real-time autonomous steering of an unmanned aircraft along the boundary of different regions. Using an unified conceptual framework, we illustrate solutions for tracking coastlines and for following roads surrounded by forests. In particular, we exploit color and texture properties to differentiate between region types in the aforementioned domains. The performance of our system is evaluated using different experimental approaches, which includes a fully automated in-field flight over a 1km coastline trajectory.","PeriodicalId":420658,"journal":{"name":"2010 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114972502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 35
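To make the color-based region differentiation concrete, here is a minimal sketch (Python with OpenCV; the HSV thresholds, the water/land labels, and the steering law are illustrative assumptions, not the paper's tuned pipeline) that segments one class of region and steers to keep the detected boundary near the image centerline.

```python
import cv2
import numpy as np

# Hypothetical HSV range for the "water" region; would need tuning per domain.
WATER_LO = np.array([90, 40, 40])
WATER_HI = np.array([130, 255, 255])

def boundary_steering(bgr_frame, gain=1.0):
    """Return a normalized steering command in [-1, 1] that tries to keep the
    water/land boundary on the image's vertical centerline."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    water = cv2.inRange(hsv, WATER_LO, WATER_HI) > 0      # True where a pixel looks like water
    cols = []
    for row in water:
        # first land/water transition in this row marks the boundary column
        idx = np.flatnonzero(np.diff(row.astype(np.int8)) != 0)
        if idx.size:
            cols.append(idx[0])
    if not cols:
        return 0.0                                         # no visible boundary: hold course
    half_width = water.shape[1] / 2
    err = (np.median(cols) - half_width) / half_width      # boundary offset from centerline
    return float(np.clip(gain * err, -1.0, 1.0))
```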
Smooth and collision-free navigation for multiple robots under differential-drive constraints
2010 IEEE/RSJ International Conference on Intelligent Robots and Systems Pub Date: 2010-12-03 DOI: 10.1109/IROS.2010.5652073
J. Snape, J. V. D. Berg, S. Guy, Dinesh Manocha
{"title":"Smooth and collision-free navigation for multiple robots under differential-drive constraints","authors":"J. Snape, J. V. D. Berg, S. Guy, Dinesh Manocha","doi":"10.1109/IROS.2010.5652073","DOIUrl":"https://doi.org/10.1109/IROS.2010.5652073","url":null,"abstract":"We present a method for smooth and collision-free navigation for multiple independent robots under differential-drive constraints. Our algorithm is based on the optimal reciprocal collision avoidance formulation and guarantees both smoothness in the trajectories of the robots and locally collision-free paths. We provide proofs of these guarantees and demonstrate the effectiveness of our method in experimental scenarios using iRobot Create mobile robots navigating amongst each other.","PeriodicalId":420658,"journal":{"name":"2010 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115458481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 97
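The abstract builds on optimal reciprocal collision avoidance (ORCA). The sketch below is not ORCA (which solves a small linear program per robot and maps the result through a differential-drive constraint), but a hedged sampling stand-in that conveys the underlying idea: among velocities near the preferred one, keep only those predicted to stay clear of all neighbors over a short horizon.

```python
import numpy as np

def choose_velocity(pos, pref_vel, neighbors, radius=0.2, horizon=2.0,
                    v_max=0.5, samples=200, seed=0):
    """Pick the sampled velocity closest to the preferred one that keeps the
    robot clear of all neighbors (assumed to hold their current velocities)
    over the horizon.  Neighbors are (position, velocity) pairs.  The
    differential-drive mapping used in the paper is omitted here."""
    rng = np.random.default_rng(seed)
    cands = rng.uniform(-v_max, v_max, size=(samples, 2))
    cands = np.vstack([pref_vel, cands])            # always try the preferred velocity first
    best, best_cost = np.zeros(2), np.inf
    for v in cands:
        if np.linalg.norm(v) > v_max:
            continue
        ok = True
        for n_pos, n_vel in neighbors:
            rel_p = np.asarray(n_pos) - np.asarray(pos)
            rel_v = np.asarray(n_vel) - v
            # time of closest approach of the two discs, clamped to the horizon
            t = np.clip(-rel_p.dot(rel_v) / (rel_v.dot(rel_v) + 1e-9), 0.0, horizon)
            if np.linalg.norm(rel_p + t * rel_v) < 2 * radius:
                ok = False
                break
        cost = np.linalg.norm(v - np.asarray(pref_vel))
        if ok and cost < best_cost:
            best, best_cost = v, cost
    return best
```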
Describing the environment using semantic labelled polylines from 2D laser scanned raw data: Application to autonomous navigation
2010 IEEE/RSJ International Conference on Intelligent Robots and Systems Pub Date: 2010-12-03 DOI: 10.1109/IROS.2010.5650846
N. Pavón, J. F. Melero, A. Ollero
{"title":"Describing the environment using semantic labelled polylines from 2D laser scanned raw data: Application to autonomous navigation","authors":"N. Pavón, J. F. Melero, A. Ollero","doi":"10.1109/IROS.2010.5650846","DOIUrl":"https://doi.org/10.1109/IROS.2010.5650846","url":null,"abstract":"This paper describes a real-time method that obtains a hybrid description of the environment (both metric and semantic) from raw data perceived by a 2D laser scanner. A set of linguistically labelled polylines allows to build a compact geometrical representation of the indoor location where a set of representative points (or features) are semantically described. These features are processed in order to find a list of traversable segments whose middle points are heuristically clustered. Finally, a set of safe paths are calculated from these clusters. Both the environment representation and the safe paths can be used by a controller to carry out navigation and exploration tasks. The method has been successfully tested in simulation and on a real robot.","PeriodicalId":420658,"journal":{"name":"2010 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115674657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
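A minimal sketch of the geometric half of such a pipeline, assuming an ordered run of 2D scan points: recursive splitting (in the spirit of Ramer-Douglas-Peucker) reduces the run to a polyline. The semantic labelling of the resulting vertices described in the abstract is a separate step not shown.

```python
import numpy as np

def split_into_polyline(points, tol=0.05):
    """Approximate an ordered run of 2D scan points by a polyline whose
    vertices deviate from the raw points by at most `tol` metres."""
    pts = np.asarray(points, dtype=float)
    if len(pts) <= 2:
        return pts.tolist()
    a, b = pts[0], pts[-1]
    ab = b - a
    # perpendicular distance of every point to the chord a-b
    d = np.abs(ab[0] * (pts[:, 1] - a[1]) - ab[1] * (pts[:, 0] - a[0]))
    d /= np.linalg.norm(ab) + 1e-12
    i = int(np.argmax(d))
    if d[i] < tol:
        return [a.tolist(), b.tolist()]            # chord is a good enough fit
    left = split_into_polyline(pts[:i + 1], tol)   # split at the worst point and recurse
    right = split_into_polyline(pts[i:], tol)
    return left[:-1] + right                       # drop the duplicated split vertex
```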
Non-rigid registration and rectification of 3D laser scans
2010 IEEE/RSJ International Conference on Intelligent Robots and Systems Pub Date: 2010-12-03 DOI: 10.1109/IROS.2010.5652278
J. Elseberg, D. Borrmann, K. Lingemann, A. Nüchter
{"title":"Non-rigid registration and rectification of 3D laser scans","authors":"J. Elseberg, D. Borrmann, K. Lingemann, A. Nüchter","doi":"10.1109/IROS.2010.5652278","DOIUrl":"https://doi.org/10.1109/IROS.2010.5652278","url":null,"abstract":"Three dimensional point clouds acquired by range scanners often do not represent the environment precisely due to noise and errors in the acquisition process. These latter systematical errors manifest as deformations of different kinds in the 3D range image. This paper presents a novel approach to correct deformations by an analysis of the structures present in the environment and correcting them by non-rigid transformations. The resulting algorithms are used for creating high-accuracy 3D indoor maps.","PeriodicalId":420658,"journal":{"name":"2010 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124364169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
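As a toy illustration of rectifying a scan with a non-rigid transformation (an assumption-laden simplification, not the authors' method): fit a smooth surface to points labelled as floor and bend the whole cloud so that the floor becomes planar.

```python
import numpy as np

def flatten_floor(points, floor_mask):
    """Fit a quadratic surface and a plane to the floor-labelled points, then
    subtract their difference from every point's height.  This is a smooth,
    non-rigid vertical correction that removes an apparent bow in the floor."""
    pts = np.asarray(points, dtype=float)          # shape (N, 3)
    x, y, z = pts[floor_mask, 0], pts[floor_mask, 1], pts[floor_mask, 2]

    def design(x, y, quadratic):
        cols = [np.ones_like(x), x, y]
        if quadratic:
            cols += [x * y, x ** 2, y ** 2]
        return np.stack(cols, axis=1)

    c2, *_ = np.linalg.lstsq(design(x, y, True), z, rcond=None)   # quadratic floor fit
    c1, *_ = np.linalg.lstsq(design(x, y, False), z, rcond=None)  # best-fit floor plane

    X, Y = pts[:, 0], pts[:, 1]
    warp = design(X, Y, True) @ c2 - design(X, Y, False) @ c1     # smooth deviation field
    out = pts.copy()
    out[:, 2] -= warp                                             # apply the correction
    return out
```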
A hand-gesture-based control interface for a car-robot
2010 IEEE/RSJ International Conference on Intelligent Robots and Systems Pub Date: 2010-12-03 DOI: 10.1109/IROS.2010.5650294
X. Wu, M. Su, Pa-Chun Wang
{"title":"A hand-gesture-based control interface for a car-robot","authors":"X. Wu, M. Su, Pa-Chun Wang","doi":"10.1109/IROS.2010.5650294","DOIUrl":"https://doi.org/10.1109/IROS.2010.5650294","url":null,"abstract":"In this paper, we introduce a hand-gesture-based control interface for navigating a car-robot. A 3-axis accelerometer is adopted to record a user's hand trajectories. The trajectory data is transmitted wirelessly via an RF module to a computer. The received trajectories are then classified to one of six control commands for navigating a car-robot. The classifier adopts the dynamic time warping (DTW) algorithm to classify hand trajectories. Simulation results show that the classifier could achieve 92.2% correct rate.","PeriodicalId":420658,"journal":{"name":"2010 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121809991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 43
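Since the abstract names dynamic time warping explicitly, a compact nearest-template DTW classifier is easy to sketch; the six command labels and the template format below are placeholders, not the paper's data.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two trajectories given as
    (T, 3) arrays of accelerometer samples."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])           # local sample distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(trajectory, templates):
    """Nearest-template classification: `templates` maps each command label
    (e.g. 'forward', 'left', ...) to one recorded reference trajectory."""
    return min(templates, key=lambda label: dtw_distance(trajectory, templates[label]))
```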
Motion generation based on reliable predictability using self-organized object features
2010 IEEE/RSJ International Conference on Intelligent Robots and Systems Pub Date: 2010-12-03 DOI: 10.1109/IROS.2010.5652609
S. Nishide, T. Ogata, J. Tani, Toru Takahashi, Kazunori Komatani, Hiroshi G. Okuno
{"title":"Motion generation based on reliable predictability using self-organized object features","authors":"S. Nishide, T. Ogata, J. Tani, Toru Takahashi, Kazunori Komatani, HIroshi G. Okuno","doi":"10.1109/IROS.2010.5652609","DOIUrl":"https://doi.org/10.1109/IROS.2010.5652609","url":null,"abstract":"Predictability is an important factor for determining robot motions. This paper presents a model to generate robot motions based on reliable predictability evaluated through a dynamics learning model which self-organizes object features. The model is composed of a dynamics learning module, namely Recurrent Neural Network with Parametric Bias (RNNPB), and a hierarchical neural network as a feature extraction module. The model inputs raw object images and robot motions. Through bi-directional training of the two models, object features which describe the object motion are self-organized in the output of the hierarchical neural network, which is linked to the input of RNNPB. After training, the model searches for the robot motion with high reliable predictability of object motion. Experiments were performed with the robot's pushing motion with a variety of objects to generate sliding, falling over, bouncing, and rolling motions. For objects with single motion possibility, the robot tended to generate motions that induce the object motion. For objects with two motion possibilities, the robot evenly generated motions that induce the two object motions.","PeriodicalId":420658,"journal":{"name":"2010 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121815251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
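The selection-by-predictability idea can be illustrated without reproducing RNNPB: below, an ensemble of predictors stands in for the learned dynamics model, and the candidate motion whose predictions agree most is chosen. This is only a crude analogue of the paper's criterion, with an invented interface.

```python
import numpy as np

def most_reliably_predictable(candidate_motions, predictors):
    """Return the candidate motion whose predicted object outcome is most
    consistent across an ensemble of predictors (lowest spread).

    candidate_motions: list of motion parameter vectors
    predictors: list of callables, each mapping a motion to a predicted
                object-feature vector
    """
    def spread(motion):
        preds = np.stack([p(motion) for p in predictors])   # ensemble predictions
        return float(np.mean(np.var(preds, axis=0)))         # disagreement measure
    return min(candidate_motions, key=spread)
```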
Floor sensing system using laser range finder and mirror for localizing daily life commodities
2010 IEEE/RSJ International Conference on Intelligent Robots and Systems Pub Date: 2010-12-03 DOI: 10.1109/IROS.2010.5649372
Yasunobu Nohara, T. Hasegawa, K. Murakami
{"title":"Floor sensing system using laser range finder and mirror for localizing daily life commodities","authors":"Yasunobu Nohara, T. Hasegawa, K. Murakami","doi":"10.1109/IROS.2010.5649372","DOIUrl":"https://doi.org/10.1109/IROS.2010.5649372","url":null,"abstract":"This paper proposes a new method of measuring position of daily commodities placed on a floor. Picking up an object on a floor will be a typical task for a robot working in our daily life environment. However, it is difficult for a robotic vision to find a small daily life object left on a large floor. The floor surface may have various texture and shadow, while other furniture may obstruct the vision. Various objects may also exist on the floor. Moreover, the surface of the object has various optical characteristics: color, metallic reflection, transparent, black etc. Our method uses a laser range finder (LRF) together with a mirror installed on the wall very close to floor. The LRF scans the laser beam horizontally just above the floor and measure the distance to the object. Some beams are reflected by the mirror and measure the distance of the object from virtually different origin. Even if the LRF fails two measurements, the method calculates the position of the object by utilizing information that the two measurements are unavailable. Thus, the method achieves two major advantages: 1) robust against occlusion and 2) applicable to variety of daily life commodities. In the experiment, success rate of observation of our method achieves 100% for any daily commodity, while that of the existing method for a cell-phone is 69.4%.","PeriodicalId":420658,"journal":{"name":"2010 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116705744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
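The mirror trick reduces to plane-reflection geometry. The sketch below (2D, with a hypothetical interface: sensor position, beam bearing, total range, and the mirror segment endpoints) recovers the world position of an object seen via the mirror by reflecting the apparent hit point back across the mirror line.

```python
import numpy as np

def reflect_point(p, m0, m1):
    """Reflect point p across the infinite line through m0 and m1."""
    p, m0, m1 = map(np.asarray, (p, m0, m1))
    d = (m1 - m0) / np.linalg.norm(m1 - m0)    # unit direction of the mirror line
    v = p - m0
    return m0 + 2 * (v @ d) * d - v            # mirror image of p

def mirrored_hit(sensor, bearing, rng, m0, m1):
    """World position of an object seen via the wall mirror.  The beam leaves
    the LRF at `bearing`, travels `rng` in total, and bounces off the mirror
    through (m0, m1); equivalently, the object is seen directly from a virtual
    sensor reflected across the mirror line."""
    direction = np.array([np.cos(bearing), np.sin(bearing)])
    # endpoint the beam would reach if the mirror were absent (behind the wall)...
    apparent = np.asarray(sensor) + rng * direction
    # ...is the mirror image of the true hit point, so reflect it back
    return reflect_point(apparent, m0, m1)
```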
Wide-baseline image matching based on coplanar line intersections
2010 IEEE/RSJ International Conference on Intelligent Robots and Systems Pub Date: 2010-12-03 DOI: 10.1109/IROS.2010.5650309
Hyunwoo J. Kim, Sukhan Lee
{"title":"Wide-baseline image matching based on coplanar line intersections","authors":"Hyunwoo J. Kim, Sukhan Lee","doi":"10.1109/IROS.2010.5650309","DOIUrl":"https://doi.org/10.1109/IROS.2010.5650309","url":null,"abstract":"This paper presents a novel method of wide-baseline image matching based on the intersection context of coplanar line pairs especially designed for dealing with poorly textured and/or non-planar structured scenes. The line matching in widely separated views is challenging because of large perspective distortion and the violation of the planarity assumption in local regions. To overcome the large perspective distortion, the local regions are normalized into the canonical frames by rectifying coplanar line pairs to be orthogonal. Also, the 3D interpretation of the intersection context of the coplanar line pairs helps to match the non-planar local regions by adjusting the region of interest of the canonical frame according to the different types of 3D non-planar structures. Compared to previous approaches, the proposed method offers efficient yet robust wide-baseline line matching performance under unreliable detection of end-points of line segments and poor line topologies or junction structures. Comparison studies and experimental results demonstrate the accuracy of the proposed method for various real world scenes.","PeriodicalId":420658,"journal":{"name":"2010 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116715613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
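The anchor features here are intersections of coplanar line pairs; in the image these are simply intersections of 2D lines, which are cheap to compute in homogeneous coordinates, as the sketch below shows (the rectification to an orthogonal canonical frame is a further step not covered here).

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points given in pixel coordinates."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersection(seg_a, seg_b):
    """Intersection of the infinite lines supporting two segments, each given
    as a pair of endpoints.  Returns None for (near-)parallel lines."""
    la = line_through(*seg_a)
    lb = line_through(*seg_b)
    x = np.cross(la, lb)          # homogeneous intersection point
    if abs(x[2]) < 1e-9:
        return None               # lines are parallel in the image
    return x[:2] / x[2]           # back to pixel coordinates
```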