Latest publications from the 2012 IEEE International Conference on Robotics and Automation

Sparse Spatial Coding: A novel approach for efficient and accurate object recognition
2012 IEEE International Conference on Robotics and Automation Pub Date : 2012-05-14 DOI: 10.1109/ICRA.2012.6224785
Gabriel L. Oliveira, Erickson R. Nascimento, A. W. Vieira, M. Campos
{"title":"Sparse Spatial Coding: A novel approach for efficient and accurate object recognition","authors":"Gabriel L. Oliveira, Erickson R. Nascimento, A. W. Vieira, M. Campos","doi":"10.1109/ICRA.2012.6224785","DOIUrl":"https://doi.org/10.1109/ICRA.2012.6224785","url":null,"abstract":"Successful state-of-the-art object recognition techniques from images have been based on powerful methods, such as sparse representation, in order to replace the also popular vector quantization (VQ) approach. Recently, sparse coding, which is characterized by representing a signal in a sparse space, has raised the bar on several object recognition benchmarks. However, one serious drawback of sparse space based methods is that similar local features can be quantized into different visual words. We present in this paper a new method, called Sparse Spatial Coding (SSC), which combines a sparse coding dictionary learning, a spatial constraint coding stage and an online classification method to improve object recognition. An efficient new off-line classification algorithm is also presented. We overcome the problem of techniques which make use of sparse representation alone by generating the final representation with SSC and max pooling, presented for an online learning classifier. Experimental results obtained on the Caltech 101, Caltech 256, Corel 5000 and Corel 10000 databases, show that, to the best of our knowledge, our approach supersedes in accuracy the best published results to date on the same databases. As an extension, we also show high performance results on the MIT-67 indoor scene recognition dataset.","PeriodicalId":246173,"journal":{"name":"2012 IEEE International Conference on Robotics and Automation","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115095701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 47
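A minimal sketch of the "sparse-code local descriptors, then max-pool" idea from the SSC abstract above, using plain L1 sparse coding from scikit-learn. The SSC spatial-constraint term and the online classifier are not reproduced here; the dictionary size, alpha, and random data are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

rng = np.random.default_rng(0)
descriptors = rng.standard_normal((500, 128))     # e.g. 500 SIFT-like local descriptors

# Learn an overcomplete dictionary from the local descriptors.
dict_learner = MiniBatchDictionaryLearning(n_components=256, alpha=1.0, random_state=0)
dictionary = dict_learner.fit(descriptors).components_

# Sparse-code each descriptor, then max-pool over the image to get one global vector.
coder = SparseCoder(dictionary=dictionary, transform_algorithm="lasso_lars",
                    transform_alpha=1.0)
codes = coder.transform(descriptors)              # shape (500, 256), mostly zeros
image_representation = np.abs(codes).max(axis=0)  # max pooling -> (256,) feature vector
```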
The Jacobs Robotics approach to object recognition and localization in the context of the ICRA'11 Solutions in Perception Challenge
2012 IEEE International Conference on Robotics and Automation Pub Date : 2012-05-14 DOI: 10.1109/ICRA.2012.6225335
N. Vaskevicius, K. Pathak, A. Ichim, A. Birk
{"title":"The jacobs robotics approach to object recognition and localization in the context of the ICRA'11 Solutions in Perception Challenge","authors":"N. Vaskevicius, K. Pathak, A. Ichim, A. Birk","doi":"10.1109/ICRA.2012.6225335","DOIUrl":"https://doi.org/10.1109/ICRA.2012.6225335","url":null,"abstract":"In this paper, we give an overview of the Jacobs Robotics entry to the ICRA'11 Solutions in Perception Challenge. We present our multi-pronged strategy for object recognition and localization based on the integrated geometric and visual information available from the Kinect sensor. Firstly, the range image is over-segmented using an edge-detection algorithm and regions of interest are extracted based on a simple shape-analysis per segment. Then, these selected regions of the scene are matched with known objects using visual features and their distribution in 3D space. Finally, generated hypotheses about the positions of the objects are tested by back-projecting learned 3D models to the scene using estimated transformations and sensor model. Our method won the second place among eight competing algorithms, only marginally losing to the winner.","PeriodicalId":246173,"journal":{"name":"2012 IEEE International Conference on Robotics and Automation","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117304550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 26
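A minimal sketch of the hypothesis-verification step described in the abstract above: back-project a learned 3D model into the observed depth image under a candidate pose and score how well predicted and measured depths agree. The pinhole intrinsics, tolerance, and scoring rule are illustrative assumptions, not the authors' exact sensor model.

```python
import numpy as np

def verify_hypothesis(model_points, T, depth_image, fx=525.0, fy=525.0,
                      cx=319.5, cy=239.5, tol=0.02):
    """model_points: (N, 3) model in its own frame; T: (4, 4) candidate pose."""
    pts = (T[:3, :3] @ model_points.T).T + T[:3, 3]      # model -> camera frame
    z = pts[:, 2]
    valid = z > 0
    u = np.round(fx * pts[valid, 0] / z[valid] + cx).astype(int)
    v = np.round(fy * pts[valid, 1] / z[valid] + cy).astype(int)
    h, w = depth_image.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    predicted = z[valid][inside]
    measured = depth_image[v[inside], u[inside]]
    # Fraction of visible model points whose predicted depth matches the measurement.
    return np.mean(np.abs(predicted - measured) < tol) if predicted.size else 0.0
```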
k-IOS: Intersection of spheres for efficient proximity query
2012 IEEE International Conference on Robotics and Automation Pub Date : 2012-05-14 DOI: 10.1109/ICRA.2012.6224889
Xinyu Zhang, Young J. Kim
{"title":"k-IOS: Intersection of spheres for efficient proximity query","authors":"Xinyu Zhang, Young J. Kim","doi":"10.1109/ICRA.2012.6224889","DOIUrl":"https://doi.org/10.1109/ICRA.2012.6224889","url":null,"abstract":"We present a new bounding volume structure, k-IOS that is an intersection of k spheres, for accelerating proximity query including collision detection and Euclidean distance computation between arbitrary polygon-soup models that undergo rigid motion. Our new bounding volume is easy to implement and highly efficient both for its construction and runtime query. In our experiments, we have observed up to 4.0 times performance improvement of proximity query compared to an existing well-known algorithm based on swept sphere volume (SSV) [1]. Moreover, k-IOS is strictly convex that can guarantee a continuous gradient of distance function with respect to object's configuration parameter.","PeriodicalId":246173,"journal":{"name":"2012 IEEE International Conference on Robotics and Automation","volume":"265 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116042685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
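A minimal sketch of why an intersection-of-spheres bound enables cheap proximity queries: each k-IOS is contained in every one of its spheres, so the largest pairwise sphere-to-sphere distance is a lower bound on the true distance between the volumes, and a positive bound proves they are disjoint. This is an illustrative rejection test, not the authors' full query algorithm.

```python
import numpy as np

def kios_distance_lower_bound(centers_a, radii_a, centers_b, radii_b):
    """centers_*: (k, 3) sphere centers; radii_*: (k,) radii of the two k-IOS."""
    diff = centers_a[:, None, :] - centers_b[None, :, :]        # (ka, kb, 3)
    center_dist = np.linalg.norm(diff, axis=2)
    pair_dist = center_dist - radii_a[:, None] - radii_b[None, :]
    return max(0.0, pair_dist.max())                            # 0 means "may collide"

# Example: two 2-sphere volumes that a single sphere pair proves to be separated.
A_c = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0]]); A_r = np.array([1.0, 1.0])
B_c = np.array([[5.0, 0.0, 0.0], [5.2, 0.0, 0.0]]); B_r = np.array([1.0, 1.0])
print(kios_distance_lower_bound(A_c, A_r, B_c, B_r))            # > 0 -> no collision
```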
Development and control of a three DOF planar induction motor
2012 IEEE International Conference on Robotics and Automation Pub Date : 2012-05-14 DOI: 10.1109/ICRA.2012.6224612
M. Kumagai, R. Hollis
{"title":"Development and control of a three DOF planar induction motor","authors":"M. Kumagai, R. Hollis","doi":"10.1109/ICRA.2012.6224612","DOIUrl":"https://doi.org/10.1109/ICRA.2012.6224612","url":null,"abstract":"This paper reports a planar induction motor that can output 70 N translational thrust and 9 Nm torque with a response time of 10 ms. The motor consists of three linear induction armatures with vector control drivers and three optical mouse sensors. First, an idea to combine multiple linear induction elements is proposed. The power distribution to each element is derived from the position and orientation of that element. A discussion of the developed system and its measured characteristics follow. The experimental results highlight the potential of its direct drive features.","PeriodicalId":246173,"journal":{"name":"2012 IEEE International Conference on Robotics and Automation","volume":"460 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116183690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
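A minimal sketch of distributing a desired planar wrench (Fx, Fy, tau) over three linear-induction elements from their positions and thrust directions, in the spirit of the power-distribution idea in the abstract above. The element layout and the least-squares allocation are illustrative assumptions, not the authors' controller.

```python
import numpy as np

# Assumed element positions (m) and unit thrust directions in the stator plane.
positions = np.array([[0.1, 0.0], [-0.05, 0.0866], [-0.05, -0.0866]])
directions = np.array([[0.0, 1.0], [-0.866, -0.5], [0.866, -0.5]])

# Each column maps one element's scalar thrust to the wrench (Fx, Fy, tau).
A = np.zeros((3, 3))
for i, (r, d) in enumerate(zip(positions, directions)):
    A[0, i], A[1, i] = d
    A[2, i] = r[0] * d[1] - r[1] * d[0]        # 2D cross product r x d

wrench = np.array([10.0, 0.0, 1.5])            # desired Fx (N), Fy (N), tau (Nm)
thrusts, *_ = np.linalg.lstsq(A, wrench, rcond=None)
print(thrusts)                                 # per-element thrust commands
```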
Active learning from demonstration for robust autonomous navigation
2012 IEEE International Conference on Robotics and Automation Pub Date : 2012-05-14 DOI: 10.1109/ICRA.2012.6224757
David Silver, J. Bagnell, A. Stentz
{"title":"Active learning from demonstration for robust autonomous navigation","authors":"David Silver, J. Bagnell, A. Stentz","doi":"10.1109/ICRA.2012.6224757","DOIUrl":"https://doi.org/10.1109/ICRA.2012.6224757","url":null,"abstract":"Building robust and reliable autonomous navigation systems that generalize across environments and operating scenarios remains a core challenge in robotics. Machine learning has proven a significant aid in this task; in recent years learning from demonstration has become especially popular, leading to improved systems while requiring less expert tuning and interaction. However, these approaches still place a burden on the expert, specifically to choose the best demonstrations to provide. This work proposes two approaches for active learning from demonstration, in which the learning system requests specific demonstrations from the expert. The approaches identify examples for which expert demonstration is predicted to provide useful information on concepts which are either novel or uncertain to the current system. Experimental results demonstrate both improved generalization performance and reduced expert interaction when using these approaches.","PeriodicalId":246173,"journal":{"name":"2012 IEEE International Conference on Robotics and Automation","volume":"280 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123457791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 56
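A minimal sketch of the "ask the expert where the learner is uncertain" idea from the abstract above, using disagreement across a bootstrap ensemble of cost regressors as the uncertainty signal. The ensemble, features, and query budget are illustrative assumptions, not the paper's novelty and uncertainty criteria.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_demo = rng.standard_normal((200, 10))          # features of demonstrated states
y_demo = X_demo @ rng.standard_normal(10)        # expert costs for those states

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_demo, y_demo)

X_candidates = rng.standard_normal((1000, 10))   # states the robot may encounter
per_tree = np.stack([t.predict(X_candidates) for t in model.estimators_])
uncertainty = per_tree.std(axis=0)               # disagreement among the trees

# Request expert demonstrations for the most uncertain candidate states.
query_indices = np.argsort(uncertainty)[-10:]
print(query_indices)
```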
Versatile distributed pose estimation and sensor self-calibration for an autonomous MAV
2012 IEEE International Conference on Robotics and Automation Pub Date : 2012-05-14 DOI: 10.1109/ICRA.2012.6225002
S. Weiss, Markus Achtelik, M. Chli, R. Siegwart
{"title":"Versatile distributed pose estimation and sensor self-calibration for an autonomous MAV","authors":"S. Weiss, Markus Achtelik, M. Chli, R. Siegwart","doi":"10.1109/ICRA.2012.6225002","DOIUrl":"https://doi.org/10.1109/ICRA.2012.6225002","url":null,"abstract":"In this paper, we present a versatile framework to enable autonomous flights of a Micro Aerial Vehicle (MAV) which has only slow, noisy, delayed and possibly arbitrarily scaled measurements available. Using such measurements directly for position control would be practically impossible as MAVs exhibit great agility in motion. In addition, these measurements often come from a selection of different onboard sensors, hence accurate calibration is crucial to the robustness of the estimation processes. Here, we address these problems using an EKF formulation which fuses these measurements with inertial sensors. We do not only estimate pose and velocity of the MAV, but also estimate sensor biases, scale of the position measurement and self (inter-sensor) calibration in real-time. Furthermore, we show that it is possible to obtain a yaw estimate from position measurements only. We demonstrate that the proposed framework is capable of running entirely onboard a MAV performing state prediction at the rate of 1 kHz. Our results illustrate that this approach is able to handle measurement delays (up to 500ms), noise (std. deviation up to 20 cm) and slow update rates (as low as 1 Hz) while dynamic maneuvers are still possible. We present a detailed quantitative performance evaluation of the real system under the influence of different disturbance parameters and different sensor setups to highlight the versatility of our approach.","PeriodicalId":246173,"journal":{"name":"2012 IEEE International Conference on Robotics and Automation","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123522691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 140
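A minimal sketch of fusing a fast IMU prediction with a slow, arbitrarily scaled position measurement while estimating the unknown scale as an EKF state, echoing the abstract above in one dimension. The noise values, rates, and simple dynamics are illustrative assumptions; the paper's actual state vector (biases, inter-sensor calibration, full 3D pose) is much larger.

```python
import numpy as np

dt = 0.001                                  # 1 kHz prediction
x = np.array([0.0, 0.0, 0.5])               # state: position p, velocity v, scale s
P = np.diag([1.0, 1.0, 1.0])
Q = np.diag([1e-6, 1e-4, 1e-8])             # process noise (p, v, s)
R = np.array([[0.04]])                      # measurement noise, (0.2 m)^2

def predict(x, P, accel):
    F = np.array([[1, dt, 0], [0, 1, 0], [0, 0, 1]])
    x = np.array([x[0] + x[1] * dt, x[1] + accel * dt, x[2]])
    return x, F @ P @ F.T + Q

def update(x, P, z):
    # Measurement model z = s * p  ->  Jacobian H = [s, 0, p].
    H = np.array([[x[2], 0.0, x[0]]])
    y = z - x[2] * x[0]
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ np.array([y])).ravel()
    return x, (np.eye(3) - K @ H) @ P

for k in range(1000):                       # predict at every IMU sample ...
    x, P = predict(x, P, accel=0.1)
x, P = update(x, P, z=0.05)                 # ... update whenever a measurement arrives
print(x)
```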
Underwater electro-navigation in the dark
2012 IEEE International Conference on Robotics and Automation Pub Date : 2012-05-14 DOI: 10.1109/ICRA.2012.6224836
V. Lebastard, F. Boyer, C. Chevallereau, N. Servagent
{"title":"Underwater electro-navigation in the dark","authors":"V. Lebastard, F. Boyer, C. Chevallereau, N. Servagent","doi":"10.1109/ICRA.2012.6224836","DOIUrl":"https://doi.org/10.1109/ICRA.2012.6224836","url":null,"abstract":"This article proposes a solution to the problem of the navigation of underwater robots in confined unstructured environments wetted by turbid waters. The solution is based on a new sensor bio-inspired from electric fish. Exploiting the morphology of the sensor as well as taking inspiration from passive electro-location in real fish, the solution turns out to be a sensory-motor loop encoding a simple behavior relevant to exploration missions. This behavior consists in seeking conductive objects while avoiding insulating ones. The solution is illustrated on experiments. It is robust and works even in very unstructured scenes. It does not require any model and is quite cheap to implement.","PeriodicalId":246173,"journal":{"name":"2012 IEEE International Conference on Robotics and Automation","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123555113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
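A minimal Braitenberg-style sketch of the sensory-motor loop described in the abstract above: conductive objects raise the perceived current on the nearer electrode, insulating ones lower it, so steering toward the stronger lateral signal seeks conductors and avoids insulators. The gains, baseline current, and sign conventions are illustrative assumptions, not the authors' control law.

```python
def electro_steering(i_left, i_right, i_baseline, k_turn=2.0, v_forward=0.2):
    """Return (forward speed, yaw rate) from left/right electrode currents."""
    # A positive perturbation (conductor) pulls the robot toward that side;
    # a negative perturbation (insulator) pushes it away.
    yaw_rate = k_turn * ((i_left - i_baseline) - (i_right - i_baseline))
    return v_forward, yaw_rate

# Example: a conductive object on the left -> positive yaw rate (turn left).
print(electro_steering(i_left=1.08, i_right=1.00, i_baseline=1.00))
```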
Compact covariance descriptors in 3D point clouds for object recognition
2012 IEEE International Conference on Robotics and Automation Pub Date : 2012-05-14 DOI: 10.1109/ICRA.2012.6224740
D. Fehr, A. Cherian, Ravishankar Sivalingam, S. Nickolay, V. Morellas, N. Papanikolopoulos
{"title":"Compact covariance descriptors in 3D point clouds for object recognition","authors":"D. Fehr, A. Cherian, Ravishankar Sivalingam, S. Nickolay, V. Morellas, N. Papanikolopoulos","doi":"10.1109/ICRA.2012.6224740","DOIUrl":"https://doi.org/10.1109/ICRA.2012.6224740","url":null,"abstract":"One of the most important tasks for mobile robots is to sense their environment. Further tasks might include the recognition of objects in the surrounding environment. Three dimensional range finders have become the sensors of choice for mapping the environment of a robot. Recognizing objects in point clouds provided by such sensors is a difficult task. The main contribution of this paper is the introduction of a new covariance based point cloud descriptor for such object recognition. Covariance based descriptors have been very successful in image processing. One of the main advantages of these descriptors is their relatively small size. The comparisons between different covariance matrices can also be made very efficient. Experiments with real world and synthetic data will show the superior performance of the covariance descriptors on point clouds compared to state-of-the-art methods.","PeriodicalId":246173,"journal":{"name":"2012 IEEE International Conference on Robotics and Automation","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123733897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 42
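A minimal sketch of a covariance descriptor for a point-cloud region and a log-Euclidean comparison between two such descriptors, in the spirit of the abstract above. The choice of per-point features (coordinates plus normals) and the small regularizer are illustrative assumptions, not the paper's exact feature set or metric.

```python
import numpy as np
from scipy.linalg import logm

def covariance_descriptor(features, eps=1e-6):
    """features: (N, d) per-point features for one region -> (d, d) SPD matrix."""
    cov = np.cov(features, rowvar=False)
    return cov + eps * np.eye(cov.shape[0])          # keep it positive definite

def log_euclidean_distance(C1, C2):
    return np.linalg.norm(logm(C1) - logm(C2), ord="fro")

rng = np.random.default_rng(0)
region_a = rng.standard_normal((300, 6))             # e.g. (x, y, z, nx, ny, nz)
region_b = rng.standard_normal((300, 6)) * 1.5
d = log_euclidean_distance(covariance_descriptor(region_a),
                           covariance_descriptor(region_b))
print(d)   # small distance -> similar local geometry
```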
A compact two DOF magneto-elastomeric force sensor for a running quadruped
2012 IEEE International Conference on Robotics and Automation Pub Date : 2012-05-14 DOI: 10.1109/ICRA.2012.6225201
A. Ananthanarayanan, S. Foong, Sangbae Kim
{"title":"A compact two DOF magneto-elastomeric force sensor for a running quadruped","authors":"A. Ananthanarayanan, S. Foong, Sangbae Kim","doi":"10.1109/ICRA.2012.6225201","DOIUrl":"https://doi.org/10.1109/ICRA.2012.6225201","url":null,"abstract":"This paper presents a novel design approach for a two-DOF foot force sensor for a high speed running quadruped. The adopted approach harnesses the deformation property of an elastomeric material to relate applied force to measurable deformation. A lightweight, robust and compact magnetic-field based sensing system, consisting of an assembly of miniature hall-effect sensors, is employed to infer the positional information of a magnet embedded in the elastomeric material. Instead of solving two non-linear models (magnetic field and elastomeric) sequentially, a direct approach of using artificial neural networks (ANN) is utilized to relate magnetic flux density (MFD) measurements to applied forces. The force sensor, which weighs only 24.5 gms, provides a measurement range of 0 - 1000 N normal to the ground and up to ± 125N parallel to the ground. The mean force measurement accuracy was found to be within 7% of the applied forces. The sensor designed as part of this work finds direct applications in ground reaction force sensing for a running quadrupedal robot.","PeriodicalId":246173,"journal":{"name":"2012 IEEE International Conference on Robotics and Automation","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121891180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 25
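A minimal sketch of the ANN calibration idea from the abstract above: learn a direct map from hall-effect flux-density readings to the two-DOF foot force instead of chaining magnetic and elastomer models. The synthetic data, network size, and number of hall sensors are illustrative assumptions, not the paper's hardware or training setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
forces = np.column_stack([rng.uniform(0, 1000, 2000),       # normal force (N)
                          rng.uniform(-125, 125, 2000)])    # shear force (N)
# Pretend four hall sensors respond nonlinearly to the applied force, plus noise.
mfd = (np.tanh(forces @ rng.standard_normal((2, 4)) * 1e-3)
       + 0.01 * rng.standard_normal((2000, 4)))

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(mfd[:1500], forces[:1500])                         # calibrate on bench data
pred = model.predict(mfd[1500:])                             # infer forces at runtime
print(np.mean(np.abs(pred - forces[1500:]), axis=0))         # mean abs error per axis
```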
Non-Gaussian belief space planning: Correctness and complexity
2012 IEEE International Conference on Robotics and Automation Pub Date : 2012-05-14 DOI: 10.1109/ICRA.2012.6225223
Robert Platt, L. Kaelbling, Tomas Lozano-Perez, Russ Tedrake
{"title":"Non-Gaussian belief space planning: Correctness and complexity","authors":"Robert Platt, L. Kaelbling, Tomas Lozano-Perez, Russ Tedrake","doi":"10.1109/ICRA.2012.6225223","DOIUrl":"https://doi.org/10.1109/ICRA.2012.6225223","url":null,"abstract":"We consider the partially observable control problem where it is potentially necessary to perform complex information-gathering operations in order to localize state. One approach to solving these problems is to create plans in belief-space, the space of probability distributions over the underlying state of the system. The belief-space plan encodes a strategy for performing a task while gaining information as necessary. Unlike most approaches in the literature which rely upon representing belief state as a Gaussian distribution, we have recently proposed an approach to non-Gaussian belief space planning based on solving a non-linear optimization problem defined in terms of a set of state samples [1]. In this paper, we show that even though our approach makes optimistic assumptions about the content of future observations for planning purposes, all low-cost plans are guaranteed to gain information in a specific way under certain conditions. We show that eventually, the algorithm is guaranteed to localize the true state of the system and to reach a goal region with high probability. Although the computational complexity of the algorithm is dominated by the number of samples used to define the optimization problem, our convergence guarantee holds with as few as two samples. Moreover, we show empirically that it is unnecessary to use large numbers of samples in order to obtain good performance.","PeriodicalId":246173,"journal":{"name":"2012 IEEE International Conference on Robotics and Automation","volume":"233 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121891954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 31
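A minimal sketch of the sample-based belief representation referenced in the abstract above: the belief is a set of weighted state samples, and an observation re-weights them by its likelihood (here a Gaussian sensor model). The planner's trajectory optimization is not reproduced; the one-dimensional dynamics and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.uniform(-5.0, 5.0, size=200)          # hypotheses about a 1-D state
weights = np.full(200, 1.0 / 200)

def belief_update(samples, weights, u, z, motion_std=0.05, obs_std=0.3):
    """Propagate samples through the dynamics, then re-weight by the observation z."""
    samples = samples + u + rng.normal(0.0, motion_std, samples.shape)
    likelihood = np.exp(-0.5 * ((z - samples) / obs_std) ** 2)
    weights = weights * likelihood
    return samples, weights / weights.sum()

samples, weights = belief_update(samples, weights, u=1.0, z=2.3)
print(np.sum(weights * samples))                    # posterior mean estimate
```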