Simulator based study of robot alignment and localization
Sinziana Indreica, A. Stancovici, M. Micea, V. Cretu, V. Groza
2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE), October 2013. DOI: 10.1109/ROSE.2013.6698429
Abstract: Localization techniques are of key interest for mobile robot groups. A node is usable by the system when its position is known and it can communicate (one-way or two-way). Ideally, localization should be quick, precise, and low in resource consumption. To this end, the different ways of keeping track of a node should be treated as parameters and varied to obtain the best results. To study the localization of a group of robots, we developed a simulation environment based on the hardware configuration from our previous work (mobile robots with wireless communication and an ultrasound-based location system). This paper shows the impact that certain parameters and situations have on the localization problem.
{"title":"An introduction of the biomimetic hand testbed: Skeletal structure and actuation","authors":"Norbert Sarkany, G. Cserey, P. Szolgay","doi":"10.1109/ROSE.2013.6698416","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698416","url":null,"abstract":"This paper presents a design of an anthropomorphic biomimetic hand testbed, wich focuses on the design of the fingers and its bio-inspired flexor-extensor like control. The kinematic description, the detailed explanation and presentation of the 3D CAD design are included. The description of the applied 3D touch and magnetic sensors are also detailed in the article. Functional simulation results and also the first experiments of the hardware prototype gave promising results and show that the approach can be an effective solution for the need of a hand testbed.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128338861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Catadioptric omnidirectional image inpainting via a multi-scale approach and image unwrapping","authors":"Daniel Paredes, P. Rodríguez, N. Ragot","doi":"10.1109/ROSE.2013.6698420","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698420","url":null,"abstract":"Omnidirectional catadioptric sensors are widely used in different applications as they allow the observation of a 360 field of view instantaneously. Their specific architecture necessarily induces carrying devices which obstruct the field of view of the sensor. This drawback can be a major obstacle to their use. This paper outlines a methodoly to inpaint catadioptric omnidirectional images, that means to remove and repair a part of the image. This method combines a multi-scale image inpainting (MII) algorithm with unwrapping techniques as the MII can not be applied in the omnidirectional image directly.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128380556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of a sensor-based approach for local minima recovery in unknown environments","authors":"D. Nakhaeinia, S. Tang, P. Payeur","doi":"10.1109/ROSE.2013.6698437","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698437","url":null,"abstract":"This paper introduces a new methodology for escaping from local minima using an actual-virtual target switching strategy. In particular, this approach proposes suitable steps to detect trap situations and guide the robot away from local minima even when the environment is completely unknown. In this work the navigation system consists of two layers. In the low-level layer, a Nearest Virtual Target (NVT) approach is adapted as a reactive collision avoidance method for mobile robot navigation to achieve collision free motion in cluttered, dense and troublesome scenarios. Where the robot is surrounded by obstacles and a trap situation is likely to occur, the high-level layer becomes responsible to plan a path to pull the robot out of the trap. Finally, the performance of the proposed approach is validated by simulation results.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134286810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Resume navigation and re-localization of an autonomous mobile robot after being kidnapped","authors":"R. Luo, K. C. Yeh, Kuan-Ho Huang","doi":"10.1109/ROSE.2013.6698410","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698410","url":null,"abstract":"The kidnapped robot problem is one of the essential issues in Human Robot Interaction research fields. This work addresses the problem of the position and orientation (pose) recovery after the robot being kidnapped, based on Laser Range Finder (LRF) sensor. By now the Monte Carlo Localization (MCL) has been introduced as a useful localization method. However the computational load of MCL is extremely large and not efficient at the initial few steps, which causes the localization process to take long computation time after the robot has been kidnapped and resets the particles. This paper provides a methodology to solve it by fusing MCL with Fast Library for Approximate Nearest Neighbors (FLANN) machine learning technique. We design a feature for LRF data called Geometric Structure Feature Histogram (GSFH).The feature GSFH encodes the LRF data to use it as the descriptor in FLANN. By building the database previously and FLANN searching technique, we filter out the most impossible area and reduce the computation load of MCL. Both in simulation and real autonomous mobile robot experiments show the effectiveness of our method.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"261 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122844706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measurement science for 6DOF object pose ground truth","authors":"R. Eastman, J. Marvel, J. Falco, T. Hong","doi":"10.1109/ROSE.2013.6698442","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698442","url":null,"abstract":"Users of perception systems in industrial manufacturing applications need standardized, third party ground truth procedures to validate system performance before deployment. Many manufacturing robotic applications require parts and assemblies to be perceived, inspected or grasped. These applications need accurate perception of object pose to six degrees of freedom (6DOF) in X, Y, Z position with roll, pitch and yaw. A standardized 6DOF ground truth system should include test procedures, algorithms, artifacts, fixtures, and measurement equipment. Each of them must be openly documented so manufacturers, vendors, and researchers can recreate and apply the procedures. This article reports on efforts to develop an industrial standard for 6DOF pose measurement. It includes the design of test methods using a laser-tracker, an aluminum fixture pose fixture, and a modular, medium density fiberboard (MDF) pose fixture.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"201 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123012985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}