{"title":"Sparse Spatial Coding: A novel approach for efficient and accurate object recognition","authors":"Gabriel L. Oliveira, Erickson R. Nascimento, A. W. Vieira, M. Campos","doi":"10.1109/ICRA.2012.6224785","DOIUrl":"https://doi.org/10.1109/ICRA.2012.6224785","url":null,"abstract":"Successful state-of-the-art object recognition techniques from images have been based on powerful methods, such as sparse representation, to replace the also popular vector quantization (VQ) approach. Recently, sparse coding, which represents a signal in a sparse space, has raised the bar on several object recognition benchmarks. However, one serious drawback of sparse-space-based methods is that similar local features can be quantized into different visual words. We present in this paper a new method, called Sparse Spatial Coding (SSC), which combines sparse coding dictionary learning, a spatially constrained coding stage, and an online classification method to improve object recognition. An efficient new off-line classification algorithm is also presented. We overcome the shortcoming of techniques that rely on sparse representation alone by generating the final representation with SSC and max pooling, which is then fed to an online learning classifier. Experimental results obtained on the Caltech 101, Caltech 256, Corel 5000 and Corel 10000 databases show that, to the best of our knowledge, our approach surpasses in accuracy the best results published to date on the same databases. As an extension, we also show high-performance results on the MIT-67 indoor scene recognition dataset.","PeriodicalId":246173,"journal":{"name":"2012 IEEE International Conference on Robotics and Automation","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115095701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
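The SSC record above describes coding local descriptors against a learned dictionary and then max pooling the codes into one image signature. The following is a minimal illustrative sketch of that coding-plus-pooling pipeline; the simple soft-threshold coder and all names here are assumptions for illustration, not the paper's spatially constrained algorithm:

```python
import numpy as np

def code_descriptors(X, D, lam=0.1):
    """Stand-in for a sparse coder: project local descriptors X (n x d)
    onto dictionary D (k x d) and soft-threshold, yielding sparse codes
    (n x k). SSC adds a spatial constraint; this is only a rough proxy."""
    A = X @ D.T                          # response of each descriptor to each atom
    return np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)

def max_pool(codes):
    """Max pooling: a single k-dim signature, the per-atom maximum
    response over all local descriptors of the image."""
    return codes.max(axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))             # 50 local descriptors, 8-dim
D = rng.normal(size=(16, 8))             # 16 dictionary atoms
sig = max_pool(code_descriptors(X, D))
print(sig.shape)                         # (16,)
```

The pooled signature `sig` is what would be handed to the online classifier; its size depends only on the dictionary, not on how many local descriptors the image produced.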
{"title":"The Jacobs Robotics approach to object recognition and localization in the context of the ICRA'11 Solutions in Perception Challenge","authors":"N. Vaskevicius, K. Pathak, A. Ichim, A. Birk","doi":"10.1109/ICRA.2012.6225335","DOIUrl":"https://doi.org/10.1109/ICRA.2012.6225335","url":null,"abstract":"In this paper, we give an overview of the Jacobs Robotics entry to the ICRA'11 Solutions in Perception Challenge. We present our multi-pronged strategy for object recognition and localization based on the integrated geometric and visual information available from the Kinect sensor. First, the range image is over-segmented using an edge-detection algorithm and regions of interest are extracted based on a simple per-segment shape analysis. Then, these selected regions of the scene are matched with known objects using visual features and their distribution in 3D space. Finally, the generated hypotheses about the positions of the objects are tested by back-projecting learned 3D models into the scene using the estimated transformations and a sensor model. Our method took second place among eight competing algorithms, losing only marginally to the winner.","PeriodicalId":246173,"journal":{"name":"2012 IEEE International Conference on Robotics and Automation","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117304550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"k-IOS: Intersection of spheres for efficient proximity query","authors":"Xinyu Zhang, Young J. Kim","doi":"10.1109/ICRA.2012.6224889","DOIUrl":"https://doi.org/10.1109/ICRA.2012.6224889","url":null,"abstract":"We present a new bounding volume structure, k-IOS, the intersection of k spheres, for accelerating proximity queries, including collision detection and Euclidean distance computation, between arbitrary polygon-soup models undergoing rigid motion. Our new bounding volume is easy to implement and highly efficient both to construct and to query at runtime. In our experiments, we have observed up to a 4.0 times performance improvement in proximity queries compared to a well-known existing algorithm based on swept sphere volumes (SSV) [1]. Moreover, k-IOS is strictly convex, which guarantees a continuous gradient of the distance function with respect to the object's configuration parameters.","PeriodicalId":246173,"journal":{"name":"2012 IEEE International Conference on Robotics and Automation","volume":"265 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116042685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
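The key property behind a bounding volume built as an intersection of spheres is easy to state: if object A lies inside every sphere S_i and object B inside every sphere T_j, then d(A, B) >= max over (i, j) of (|c_i - c_j| - r_i - r_j). A minimal sketch of that conservative distance bound (illustrative only, not the paper's k-IOS implementation):

```python
import numpy as np

def ios_distance_lower_bound(spheres_a, spheres_b):
    """Conservative lower bound on the distance between two objects, each
    bounded by an intersection of spheres (center, radius pairs). Because
    each object is contained in every one of its spheres, any pair of
    spheres gives a valid lower bound; we take the tightest one."""
    best = -np.inf
    for ca, ra in spheres_a:
        for cb, rb in spheres_b:
            gap = np.linalg.norm(np.asarray(ca) - np.asarray(cb)) - ra - rb
            best = max(best, gap)
    return max(best, 0.0)  # a negative bound just means the volumes may overlap

# Two unit spheres with centers 5 apart: bound = 5 - 1 - 1 = 3
print(ios_distance_lower_bound([(np.zeros(3), 1.0)],
                               [(np.array([5.0, 0.0, 0.0]), 1.0)]))
```

Taking the maximum over sphere pairs is what makes more spheres (larger k) give a tighter fit, at the cost of more pairwise tests per query.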
{"title":"Development and control of a three DOF planar induction motor","authors":"M. Kumagai, R. Hollis","doi":"10.1109/ICRA.2012.6224612","DOIUrl":"https://doi.org/10.1109/ICRA.2012.6224612","url":null,"abstract":"This paper reports a planar induction motor that can output 70 N of translational thrust and 9 Nm of torque with a response time of 10 ms. The motor consists of three linear induction armatures with vector-control drivers and three optical mouse sensors. First, an approach for combining multiple linear induction elements is proposed, in which the power distributed to each element is derived from that element's position and orientation. A discussion of the developed system and its measured characteristics follows. The experimental results highlight the potential of its direct-drive features.","PeriodicalId":246173,"journal":{"name":"2012 IEEE International Conference on Robotics and Automation","volume":"460 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116183690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Active learning from demonstration for robust autonomous navigation","authors":"David Silver, J. Bagnell, A. Stentz","doi":"10.1109/ICRA.2012.6224757","DOIUrl":"https://doi.org/10.1109/ICRA.2012.6224757","url":null,"abstract":"Building robust and reliable autonomous navigation systems that generalize across environments and operating scenarios remains a core challenge in robotics. Machine learning has proven a significant aid in this task; in recent years learning from demonstration has become especially popular, leading to improved systems while requiring less expert tuning and interaction. However, these approaches still place a burden on the expert, specifically to choose the best demonstrations to provide. This work proposes two approaches for active learning from demonstration, in which the learning system requests specific demonstrations from the expert. The approaches identify examples for which expert demonstration is predicted to provide useful information on concepts that are either novel to the current system or about which it is uncertain. Experimental results demonstrate both improved generalization performance and reduced expert interaction when using these approaches.","PeriodicalId":246173,"journal":{"name":"2012 IEEE International Conference on Robotics and Automation","volume":"280 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123457791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Versatile distributed pose estimation and sensor self-calibration for an autonomous MAV","authors":"S. Weiss, Markus Achtelik, M. Chli, R. Siegwart","doi":"10.1109/ICRA.2012.6225002","DOIUrl":"https://doi.org/10.1109/ICRA.2012.6225002","url":null,"abstract":"In this paper, we present a versatile framework to enable autonomous flights of a Micro Aerial Vehicle (MAV) that has only slow, noisy, delayed and possibly arbitrarily scaled measurements available. Using such measurements directly for position control would be practically impossible, as MAVs exhibit great agility in motion. In addition, these measurements often come from a selection of different onboard sensors, so accurate calibration is crucial to the robustness of the estimation processes. Here, we address these problems using an EKF formulation that fuses these measurements with inertial sensors. We estimate not only the pose and velocity of the MAV, but also the sensor biases, the scale of the position measurement, and the self (inter-sensor) calibration in real time. Furthermore, we show that it is possible to obtain a yaw estimate from position measurements only. We demonstrate that the proposed framework is capable of running entirely onboard an MAV, performing state prediction at a rate of 1 kHz. Our results illustrate that this approach is able to handle measurement delays (up to 500 ms), noise (std. deviation up to 20 cm) and slow update rates (as low as 1 Hz) while dynamic maneuvers are still possible. We present a detailed quantitative performance evaluation of the real system under the influence of different disturbance parameters and different sensor setups to highlight the versatility of our approach.","PeriodicalId":246173,"journal":{"name":"2012 IEEE International Conference on Robotics and Automation","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123522691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Underwater electro-navigation in the dark","authors":"V. Lebastard, F. Boyer, C. Chevallereau, N. Servagent","doi":"10.1109/ICRA.2012.6224836","DOIUrl":"https://doi.org/10.1109/ICRA.2012.6224836","url":null,"abstract":"This article proposes a solution to the problem of navigating underwater robots in confined, unstructured environments filled with turbid water. The solution is based on a new sensor bio-inspired by electric fish. Exploiting the morphology of the sensor and taking inspiration from passive electro-location in real fish, the solution takes the form of a sensory-motor loop encoding a simple behavior relevant to exploration missions: seeking conductive objects while avoiding insulating ones. The solution is illustrated in experiments; it is robust and works even in highly unstructured scenes. It does not require any model and is inexpensive to implement.","PeriodicalId":246173,"journal":{"name":"2012 IEEE International Conference on Robotics and Automation","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123555113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Compact covariance descriptors in 3D point clouds for object recognition","authors":"D. Fehr, A. Cherian, Ravishankar Sivalingam, S. Nickolay, V. Morellas, N. Papanikolopoulos","doi":"10.1109/ICRA.2012.6224740","DOIUrl":"https://doi.org/10.1109/ICRA.2012.6224740","url":null,"abstract":"One of the most important tasks for mobile robots is to sense their environment; further tasks might include recognizing objects in it. Three-dimensional range finders have become the sensors of choice for mapping a robot's environment, yet recognizing objects in the point clouds such sensors provide remains a difficult task. The main contribution of this paper is the introduction of a new covariance-based point cloud descriptor for such object recognition. Covariance-based descriptors have been very successful in image processing. One of their main advantages is their relatively small size, and comparisons between covariance matrices can also be made very efficiently. Experiments with real-world and synthetic data show the superior performance of covariance descriptors on point clouds compared to state-of-the-art methods.","PeriodicalId":246173,"journal":{"name":"2012 IEEE International Conference on Robotics and Automation","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123733897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
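A covariance descriptor summarizes an arbitrarily large point cloud as the d x d covariance of d per-point features, which is what makes it compact. A minimal sketch below; the feature choice and the log-Euclidean metric for comparing the resulting SPD matrices are common choices in the covariance-descriptor literature, used here for illustration rather than as the paper's exact formulation:

```python
import numpy as np

def covariance_descriptor(features):
    """features: (n x d) array of per-point features, e.g. (x, y, z)
    plus local geometric attributes. The descriptor is the d x d
    covariance matrix, whose size is independent of the cloud size n."""
    return np.cov(features, rowvar=False)

def log_euclidean_distance(C1, C2, eps=1e-8):
    """Compare two covariance (SPD) descriptors via the Frobenius
    distance between their matrix logarithms, computed through an
    eigendecomposition. eps regularizes near-singular matrices."""
    def logm_spd(C):
        w, V = np.linalg.eigh(C + eps * np.eye(C.shape[0]))
        return V @ np.diag(np.log(w)) @ V.T
    return np.linalg.norm(logm_spd(C1) - logm_spd(C2))

rng = np.random.default_rng(1)
cloud = rng.normal(size=(200, 4))       # 200 points, 4 features each
C = covariance_descriptor(cloud)
print(C.shape)                          # (4, 4)
print(log_euclidean_distance(C, C))     # 0.0 for identical descriptors
```

Because covariance matrices live on the SPD manifold rather than in a vector space, a manifold-aware metric such as the log-Euclidean one is typically preferred over plain elementwise differences.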
{"title":"A compact two DOF magneto-elastomeric force sensor for a running quadruped","authors":"A. Ananthanarayanan, S. Foong, Sangbae Kim","doi":"10.1109/ICRA.2012.6225201","DOIUrl":"https://doi.org/10.1109/ICRA.2012.6225201","url":null,"abstract":"This paper presents a novel design approach for a two-DOF foot force sensor for a high-speed running quadruped. The adopted approach harnesses the deformation of an elastomeric material to relate applied force to measurable deformation. A lightweight, robust and compact magnetic-field-based sensing system, consisting of an assembly of miniature Hall-effect sensors, is employed to infer the position of a magnet embedded in the elastomeric material. Instead of solving two non-linear models (magnetic field and elastomeric) sequentially, a direct approach using artificial neural networks (ANN) is utilized to relate magnetic flux density (MFD) measurements to applied forces. The force sensor, which weighs only 24.5 g, provides a measurement range of 0 to 1000 N normal to the ground and up to ±125 N parallel to the ground. The mean force measurement accuracy was found to be within 7% of the applied forces. The sensor designed as part of this work finds direct application in ground reaction force sensing for a running quadrupedal robot.","PeriodicalId":246173,"journal":{"name":"2012 IEEE International Conference on Robotics and Automation","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121891180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Non-Gaussian belief space planning: Correctness and complexity","authors":"Robert Platt, L. Kaelbling, Tomas Lozano-Perez, Russ Tedrake","doi":"10.1109/ICRA.2012.6225223","DOIUrl":"https://doi.org/10.1109/ICRA.2012.6225223","url":null,"abstract":"We consider the partially observable control problem where it is potentially necessary to perform complex information-gathering operations in order to localize state. One approach to solving these problems is to create plans in belief space, the space of probability distributions over the underlying state of the system. The belief-space plan encodes a strategy for performing a task while gaining information as necessary. Unlike most approaches in the literature, which rely on representing the belief state as a Gaussian distribution, we have recently proposed an approach to non-Gaussian belief space planning based on solving a non-linear optimization problem defined in terms of a set of state samples [1]. In this paper, we show that even though our approach makes optimistic assumptions about the content of future observations for planning purposes, all low-cost plans are guaranteed to gain information in a specific way under certain conditions. We show that, eventually, the algorithm is guaranteed to localize the true state of the system and to reach a goal region with high probability. Although the computational complexity of the algorithm is dominated by the number of samples used to define the optimization problem, our convergence guarantee holds with as few as two samples. Moreover, we show empirically that large numbers of samples are unnecessary for good performance.","PeriodicalId":246173,"journal":{"name":"2012 IEEE International Conference on Robotics and Automation","volume":"233 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121891954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}