{"title":"Modeling and control of cylindrical mobile robot","authors":"T. Hirano, M. Ishikawa, K. Osuka","doi":"10.1109/IROS.2012.6386124","DOIUrl":"https://doi.org/10.1109/IROS.2012.6386124","url":null,"abstract":"Cylinders exhibit characteristic dynamic behaviors, such as rolling on their lateral side or on their edges. In this paper, we propose a new type of rolling mobile robot with a cylindrical shape, which performs two contrasting modes of motion owing to its geometry. In the first mode, called lateral-side rolling, the robot is statically stable except for a few degrees of freedom, whereas in the other mode (called edge rolling) the robot is only dynamically stable and has the potential for high mobility. In this work, we attempt to control this robot using an eccentric rotor without a gyroscope, applying linear and nonlinear control theory.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"11 1","pages":"5321-5326"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82003139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparative study of two 3D reconstruction methods for underwater archaeology","authors":"A. Meline, J. Triboulet, B. Jouvencel","doi":"10.1109/IROS.2012.6385711","DOIUrl":"https://doi.org/10.1109/IROS.2012.6385711","url":null,"abstract":"Underwater 3D reconstruction and cartography have made great progress in the last decade. The work presented in this paper concerns the analysis and 3D reconstruction of archaeological objects. Using a calibrated single camera and an uncalibrated system, we describe a method to perform the Euclidean 3D reconstruction of unknown objects. The two methods are compared and tested on synthetic and real underwater pictures. Filters are proposed to simulate the underwater environment and its inherent problems. Finally, robust and stable features are extracted from underwater pictures and used to build the 3D model.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"28 1","pages":"740-745"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83640979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Orienting deformable polygonal parts without sensors","authors":"Shawn M. Kristek, Dylan A. Shell","doi":"10.1109/IROS.2012.6386165","DOIUrl":"https://doi.org/10.1109/IROS.2012.6386165","url":null,"abstract":"Sensorless part orienting has proven useful in manufacturing and automation, while the manipulation of deformable objects is an area of growing interest. Existing sensorless orienting techniques may produce forces that have the potential to damage deformable parts. We present an algorithm that, when provided a geometric description of the part and a deformation model, generates a plan to orient the part up to symmetry from any initial orientation. The solution exploits deformation of the object under certain configurations to help resolve ambiguity. The approach has several attractive features: (1) the resulting plan is a short sequence of such actions guaranteed to succeed for all initial configurations; (2) the algorithm operates even with a very simple model of deformation, but is extensible when specialized knowledge is available; (3) failure to find a feasible solution has precise semantics (e.g., inadequate manipulator precision). We validate the algorithm experimentally with a pair of low-precision robot manipulators, orienting 6 parts made of 4 types of materials, with the correct orientation reached in 80% of the 192 trials. Careful analysis of the failures emphasizes the importance of low-friction conditions, shows that increased manipulator precision would be beneficial but is not necessary, and confirms that a simple deformation model can suffice. In addition to illustrating the feasibility of sensorless manipulation of deformable parts, we note that the algorithm also applies to the manipulation of non-deformable parts without the pressure-switch sensor employed in existing sensorless orienting strategies.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"19 1","pages":"973-979"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89863546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Kinematic calibration of manipulator using single laser pointer","authors":"Jwusheng Hu, Jyun-Ji Wang, Yung-Jung Chang","doi":"10.1109/IROS.2012.6385531","DOIUrl":"https://doi.org/10.1109/IROS.2012.6385531","url":null,"abstract":"This paper proposes a robot kinematic calibration system consisting of a laser pointer installed on the manipulator, a stationary camera, and a planar surface. The laser pointer beams onto the surface, and the camera observes the projected laser spot. The position of the laser spot is computed from the geometric relationship of the line-plane intersection. Because of the length of the laser beam, the laser spot position is sensitive to slight differences in the end-effector pose. Inaccurate kinematic parameters cause inaccurate calculation of the end-effector pose, so the laser spot position obtained by forward estimation deviates from the one obtained by camera observation. To calibrate the robot kinematics, the optimal kinematic parameters are obtained by minimizing the difference between the forward-estimated and camera-measured laser spot positions via nonlinear optimization. The proposed kinematic calibration system is cost-efficient and applicable to any manipulator. The proposed method is validated by simulation and experiment.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"58 1","pages":"426-430"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78161607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust descriptors for 3D point clouds using Geometric and Photometric Local Feature","authors":"Hyoseok Hwang, S. Hyung, Sukjune Yoon, K. Roh","doi":"10.1109/IROS.2012.6385920","DOIUrl":"https://doi.org/10.1109/IROS.2012.6385920","url":null,"abstract":"Robust perception is strongly needed for robots to handle various objects skillfully. In this paper, we propose a novel approach to recognize objects and estimate their 6-DOF pose using 3D feature descriptors, called Geometric and Photometric Local Feature (GPLF). The proposed descriptors use both the geometric and photometric information of 3D point clouds from an RGB-D camera and integrate that information into efficient descriptors. GPLF shows robust discriminative performance regardless of object characteristics such as shape or appearance in cluttered scenes. The experimental results show how well the proposed approach classifies and identifies objects. The pose estimation is robust and stable enough for the robot to manipulate objects. We also compare the proposed approach with previous approaches that use partial information of objects, on a representative large-scale RGB-D object dataset.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"59 1","pages":"4027-4033"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77931364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Collision avoidance of industrial robot arms using an invisible sensitive skin","authors":"Tin Lun Lam, H. Yip, Huihuan Qian, Yangsheng Xu","doi":"10.1109/IROS.2012.6386294","DOIUrl":"https://doi.org/10.1109/IROS.2012.6386294","url":null,"abstract":"Collision avoidance for industrial robot arms in changing environments has been a challenging problem for decades. It often requires a large number of sensors and high computational power. Moreover, since the sensors are often mounted on the surface of the robot arms, they may affect the arms' appearance and may be vulnerable to damage. This video presents a cost-effective invisible sensitive skin that can cover a large area without requiring a large number of sensors, because it is built inside the robot arm. Using only five contactless capacitive sensors and specially designed antennas, collision avoidance for a 6-DOF industrial robot arm is attained.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"115 1","pages":"4542-4543"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74898164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parallel stiffness in a bounding quadruped with flexible spine","authors":"G. A. Folkertsma, Sangbae Kim, S. Stramigioli","doi":"10.1109/IROS.2012.6385870","DOIUrl":"https://doi.org/10.1109/IROS.2012.6385870","url":null,"abstract":"Legged locomotion involves periodic negative and positive work, which usually results in high power consumption. Energy efficiency can be improved by using energy-storage elements to reversibly store the negative work performed during a walking or running cycle. While series elastic elements with high-impedance (high gear ratio) actuators are widely used, we investigate the application of parallel stiffness with highly backdrivable actuators. We specifically show that the use of parallel springs in a bounding quadruped with a flexible spine can lower power consumption by over 50%.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"25 1","pages":"2210-2215"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76312513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Seamless aiding of inertial-SLAM using Visual Directional Constraints from a monocular vision","authors":"U. Qayyum, Jonghyuk Kim","doi":"10.1109/IROS.2012.6385830","DOIUrl":"https://doi.org/10.1109/IROS.2012.6385830","url":null,"abstract":"Inertial-SLAM has been actively studied, as it can provide autonomous robots with all-terrain navigational capability and full six-degrees-of-freedom information. With the recent availability of low-cost inertial and vision sensors, a lightweight and accurate mapping system becomes feasible for many robotic tasks such as land and aerial exploration. The key challenge is the availability of reliable and constant aiding information to correct the inertial system, which is intrinsically unstable. Existing approaches rely on feature-based maps, which require an accurate depth-resolution process to correct the inertial units properly, and their aiding rate is highly dependent on map density. In this work, we propose to integrate visual odometry directly into the inertial system by fusing the scale-ambiguous translation vectors as Visual Directional Constraints (VDC) on vehicle motion at high update rates, while the 3D map is still used to constrain longitudinal drift, but in a relaxed way. In this way, the visual odometry information can be seamlessly fused into the inertial system by resolving the scale ambiguity between the inertial unit and the monocular camera, thus achieving reliable and constant aiding. The proposed approach is evaluated on a SLAM benchmark dataset and in a simulated environment, showing more stable and consistent performance of monocular inertial-SLAM.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"9 1","pages":"4205-4210"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75594804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A rate-position haptic controller for large telemanipulation workspaces","authors":"Jorge Barrio, Fráncisco Suarez-Ruiz, M. Ferre, R. Aracil","doi":"10.1109/IROS.2012.6385633","DOIUrl":"https://doi.org/10.1109/IROS.2012.6385633","url":null,"abstract":"This paper presents a new haptic rate-position controller, which allows manipulating a slave robot in a large workspace. Haptic information is displayed to inform the user when a change in the operation mode occurs. This controller allows performing tasks in a large remote workspace using a haptic device with a reduced workspace, such as the Phantom. Experiments have been carried out using a virtual slave robot simulated with the Open Dynamics Engine (ODE). A real IFMIF (International Fusion Materials Irradiation Facility) remote handling task has been simulated; its goal is to carry out remote manipulation of irradiated test materials in a nuclear environment. The proposed algorithm has been compared with the classic position controller in a pick-and-place manipulation and has shown much better effectiveness.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"18 1","pages":"58-63"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75736360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new Kinect-based guidance mode for upper limb robot-aided neurorehabilitation","authors":"C. Loconsole, F. Banno, A. Frisoli, M. Bergamasco","doi":"10.1109/IROS.2012.6386097","DOIUrl":"https://doi.org/10.1109/IROS.2012.6386097","url":null,"abstract":"During typical robot-assisted training sessions, patients are required to execute tasks with the assistance of a robot while receiving feedback on a 2D display. Three-dimensional tasks of this sort require the adoption of stereoscopy to achieve correct visuo-motor-proprioceptive alignment. Stereoscopy often causes side effects such as sickness and tiredness, and it may affect the processes of recovery and cortical reorganization in the patient's brain in unclear ways. It follows that it is preferable for robot-assisted neurorehabilitation therapy to work in a real 3D setup containing real objects rather than in virtual reality. In this paper, we propose a new system for robot-assisted neurorehabilitation scenarios that allows patients to execute therapy by manipulating real, generic 3D objects. The proposed system is based on a new algorithm for identification and tracking of generic objects that makes efficient use of a Microsoft Kinect sensor. We discuss the results of several experiments conducted to test the robustness, accuracy, and speed of the tracking algorithm and the feasibility of the integrated system.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"6 1","pages":"1037-1042"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74279714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}