{"title":"A disposable plastic compact wrist for smart minimally invasive surgical tools","authors":"F. V. Meer, A. Giraud, D. Estève, X. Dollat","doi":"10.1109/IROS.2005.1545440","DOIUrl":"https://doi.org/10.1109/IROS.2005.1545440","url":null,"abstract":"This paper describes a new compact bending and disposable (to avoid nosocomial contaminations) plastic wrist for minimally invasive surgery with a large free space for several connections such as electrical wires, fiberoptics and fluidic tubes, etc. It uses small partially locked ball joints to increase the dexterity of surgical tools in all directions contrary to other wrists using several successive orthogonal joints. This compact wrist is a generic concept comprises at least two vertebrae composed of non-attached contacts: plastic plates and balls. Six metal wires drive the position of each vertebra and several other free wires allow the locking of wrist axial rotations. Analytic and finite element simulations allow an evaluation of the mechanical rigidity of the wrist by several parameters: the wire number, diameter, position, mechanical properties and the general geometry of the wrist. The wrist is fabricated with 6 mm biocompatible plastic vertebrae micromachined by low cost water jet cutting. It uses 0.3 mm NiTi super-elastic wires for its mechanical structure which enable two degrees of freedom (DOF) in any directions between -85 degrees and 85 degrees. The two DOFs of the wrist and the DOF of the forceps are driven by a handled basic system using pulleys, 0.5mm Topline/spl reg/ ropes connected to NiTi wires and four RC-servomotors. In the first prototype 6 electrical wires, 2 micro-light emitters and 4 fiberoptics were successfully integrated. We are convinced of the effectiveness of this compact disposable plastic wrist, to be used with a usual or a motorised handled surgical instrument and integrating new functionalities such as electrical/optical/fluidics connections for smart surgical embedded micro-systems like micro-sensors and micro-actuators.","PeriodicalId":189219,"journal":{"name":"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123782904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Motion planning for the large space manipulators with complicated dynamics","authors":"I. Belousov, Claudia Esteves, J. Laumond, Etienne Ferre","doi":"10.1109/IROS.2005.1545547","DOIUrl":"https://doi.org/10.1109/IROS.2005.1545547","url":null,"abstract":"This paper deals with motion planning algorithms for the large space robot manipulators with complicated dynamic behavior. We propose two \"two-stage\" iterative algorithms, which provide collision-free robot motion taking into account robot's dynamics. The approach is based on new efficient methods for robot manipulator dynamics simulation and probabilistic methods for motion planning in highly cluttered environments. The algorithms are applicable for the robot manipulators of general class with arbitrary kinematics and dynamics parameters. We have demonstrated the approach for a particular task of servicing the satellite by a large space manipulator. This task is one of the most challenging since large space manipulators have extremely complicated dynamic behavior caused by elasticity of their structure, huge payloads they work with and zero-gravity conditions. Experiments involving a 15.5 meters long manipulator carrying a satellite inside a space shuttle with clearance less than 3 cm are presented. Several movies demonstrate the results.","PeriodicalId":189219,"journal":{"name":"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125507913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time image-based topological localization in large outdoor environments","authors":"David M. Bradley, R. Patel, N. Vandapel, S. Thayer","doi":"10.1109/IROS.2005.1545442","DOIUrl":"https://doi.org/10.1109/IROS.2005.1545442","url":null,"abstract":"This paper presents a real-time implementation of a topological localization method based on matching image features. This work is supported by a unique sensor pod design that provides stand-alone sensing and computing for localizing a vehicle on a previously traveled road. We report extensive field test results from outdoor environments, with the sensor pod mounted on both a small and a large all-terrain vehicle. Off-line analysis of the approach is also presented to evaluate the robustness of the various image features tested against different weather and lighting conditions.","PeriodicalId":189219,"journal":{"name":"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122496110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A model-based approach for visual guided grasping with uncalibrated system components","authors":"Oliver Hornung, B. Heimann","doi":"10.1109/IROS.2005.1545598","DOIUrl":"https://doi.org/10.1109/IROS.2005.1545598","url":null,"abstract":"In this paper, a complete framework for the task of grasping a naturally textured object is being presented. The lack of calibration in the components of the eye-in-hand system is overcome by an image trajectory based visual servoing scheme (ITBVS). Using an object model reconstructed off-line, consecutive rough pose estimation and refinement of the estimate by scale optimized 3D model to object registration solves the feature correspondence problem in the absence of artificial landmarks. Experimental results using a zoom camera and a 7-DOF serial manipulator prove the effectiveness of the approach.","PeriodicalId":189219,"journal":{"name":"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114286586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrating vision and speech for conversations with multiple persons","authors":"Maren Bennewitz, F. Faber, D. Joho, M. Schreiber, Sven Behnke","doi":"10.1109/IROS.2005.1545158","DOIUrl":"https://doi.org/10.1109/IROS.2005.1545158","url":null,"abstract":"An essential capability for a robot designed to interact with humans is to show attention to the people in its surroundings. To enable a robot to involve multiple persons into interaction requires the maintenance of an accurate belief about the people in the environment. In this paper, we use a probabilistic technique to update the knowledge of the robot based on sensory input. In this way, the robot is able to reason about the uncertainty in its belief about people in the vicinity and is able to shift its attention between different persons. Even people who are not the primary conversational partners are included into the interaction. In practical experiments with a humanoid robot, we demonstrate the effectiveness of our approach.","PeriodicalId":189219,"journal":{"name":"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122083624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Working postures for humanoid robots to generate large manipulation force","authors":"A. Konno, Yoonkwon Hwang, S. Tamada, M. Uchiyama","doi":"10.1109/IROS.2005.1545237","DOIUrl":"https://doi.org/10.1109/IROS.2005.1545237","url":null,"abstract":"When a human needs to apply a large force to an environment, the human takes an appropriate posture to produce a large manipulation force. This paper discusses appropriate working postures for a humanoid robot to apply a force effectively to an environment. An objective function is defined and the sequential quadratic programming (SQP) is used to find a solution. The pushing a wall and turning a valve are taken as examples of tasks for a humanoid robot, and simulations and experimentations are performed. The results show that the consideration of working postures clearly contributes to make the manipulation force larger.","PeriodicalId":189219,"journal":{"name":"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122233674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient mapping through exploitation of spatial dependencies","authors":"Y. Rachlin, J. Dolan, P. Khosla","doi":"10.1109/IROS.2005.1545118","DOIUrl":"https://doi.org/10.1109/IROS.2005.1545118","url":null,"abstract":"Occupancy grid mapping algorithms assume that grid block values are independently distributed. However, most environments of interest contain spatial patterns that are better characterized by models that capture dependencies among grid blocks. To account for such dependencies, we model the environment as a pairwise Markov random field. We specify a belief propagation-based mapping algorithm that takes these dependencies into account when estimating a map. To demonstrate the potential benefits of this approach, we simulate a simple multi-robot minefield mapping scenario. Minefields contain spatial dependencies since some landmine configurations are more likely than others, and since clutter, which causes false alarms, can be concentrated in certain regions and completely absent in others. Our belief propagation-based approach outperforms conventional occupancy grid mapping algorithms in the sense that better maps can be obtained with significantly fewer robot measurements. The belief propagation algorithm requires a modest amount of increased computation, but we contend that in applications where significant energy and time expenditure is associated with robot movement and active sensing, the reduction in the required number of samples justified the increased computation.","PeriodicalId":189219,"journal":{"name":"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116857921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Serpentine locomotion of a snake-like robot controlled by cyclic inhibitory CPG model","authors":"Zhenli Lu, Shugen Ma, Bin Li, Yuechao Wang","doi":"10.1109/IROS.2005.1545435","DOIUrl":"https://doi.org/10.1109/IROS.2005.1545435","url":null,"abstract":"Based on the structure of both biological snakes and snake-like robots and their rhythm locomotion, the theory of the cyclic inhibitory CPG is adopted as a control method to construct a neuron network model of the snake-like robot. The relation between the CPG parameters and the serpentine locomotion of the snake-like robot is defined in this paper. The validity of the serpentine locomotion controlled by the CPG model is verified through a snake-like robot model. The modulating methods of the CPG parameters are brought forward and simulated to realize the required turn motion and the reconfiguration. Moreover, we present that real snake-like robot can successfully exhibit serpentine locomotion by using controller output of the proposed architecture. Finally, the aspects of future researches are discussed.","PeriodicalId":189219,"journal":{"name":"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117089737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extracting multi-modal dynamics of objects using RNNPB","authors":"T. Ogata, H. Ohba, J. Tani, Kazunori Komatani, HIroshi G. Okuno","doi":"10.20965/jrm.2005.p0681","DOIUrl":"https://doi.org/10.20965/jrm.2005.p0681","url":null,"abstract":"Dynamic features play an important role in recognizing objects that have similar static features in colors and or shapes. This paper focuses on active sensing that exploits dynamic feature of an object. An extended version of the robot, Robovie-IIs, moves an object by its arm to obtain its dynamic features. Its issue is how to extract symbols from various kinds of temporal states of the object. We use the recurrent neural network with parametric bias (RNNPB) that generates self-organized nodes in the parametric bias space. The RNNPB with 42 neurons was trained with the data of sounds, trajectories, and tactile sensors generated while the robot was moving/hitting an object with its own arm. The clusters of 20 kinds of objects were successfully self-organized. The experiments with unknown (not trained) objects demonstrated that our method configured them in the PB space appropriately, which proves its generalization capability.","PeriodicalId":189219,"journal":{"name":"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129770539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Active multi-camera object recognition in presence of occlusion","authors":"Forough Farshidi, S. Sirouspour, T. Kirubarajan","doi":"10.1109/IROS.2005.1545591","DOIUrl":"https://doi.org/10.1109/IROS.2005.1545591","url":null,"abstract":"This paper is concerned with the problem of appearance-based active multi-sensor object recognition/pose estimation in the presence of structured noise. It is assumed that multiple cameras acquire images from an object belonging to a set of known objects. An algorithm is proposed for optimal sequential positioning of the cameras in order to estimate the class and pose of the object from sensory observations. The principle component analysis is used to produce the observation vector from the acquired images. Object occlusion and sensor noise have been explicitly incorporated into the recognition process using a probabilistic approach. A recursive Bayesian state estimation problem is formulated that employs the mutual information in order to determine the best next camera positions based on the available information. Experiments with a two-camera system demonstrate that the proposed method is highly effective in object recognition/pose estimation in the presence of occlusion.","PeriodicalId":189219,"journal":{"name":"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129321084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}