{"title":"A flexible and stretchable tactile sensor utilizing static electricity","authors":"Yasunori Tada, M. Inoue, T. Kawasaki, Yasushi Kawahito, H. Ishiguro, K. Suganuma","doi":"10.1109/IROS.2007.4399523","DOIUrl":"https://doi.org/10.1109/IROS.2007.4399523","url":null,"abstract":"The tactile sensor is required for various robots. In humanoid robots, flexibility of the sensor is an important feature for preventing physical damage and for interacting with the human. Moreover, stretchability of the sensor has advantages that the sensor is nonbreakable and that the sensor can be easily mounted on curved surfaces or deformable parts such as joints. This paper proposes a novel tactile sensor made of flexible and stretchable silicone rubber. A structure of the sensor is similar to the capacitive tactile sensors. However, the proposed sensor utilizes a different principle from existing sensors. The sensor utilizes static electricity and electrostatic induction phenomenon, and can detect some touch conditions. This paper reports the principle and characteristics of the proposed sensor. Experiments show that the sensor output depends on touch area, touch velocity, and material of touch objects. However, the sensor does not depend on touch weight. Moreover, the experiment shows that even if the proposed sensor is stretched, it performs as the tactile sensor.","PeriodicalId":227148,"journal":{"name":"2007 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126161315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wheeled inverted pendulum type assistant robot: inverted mobile, standing, and sitting motions","authors":"Seonghee Jeong, Takayuki Takahashi","doi":"10.1109/IROS.2007.4398961","DOIUrl":"https://doi.org/10.1109/IROS.2007.4398961","url":null,"abstract":"This paper describes the mobile control and the standing and sitting motions of an Inverted PENdulum type assistant robot-(I-PENTAR) aiming at the coexistence of safety and work capability. I-PENTAR consists of a body with a high powered waist joint, arms designed for safety, and a wheeled inverted pendulum mobile platform. It is modeled as a three-dimensional robot with controls for inclination angle, linear position, and steering angle, and is controlled by state feedback control based on the LQR method. The motion planning of standing and sitting-important motions for an inverted pendulum type robot in practical use-is proposed. It was experimentally confirmed that I-PENTAR could realize a series of fundamental motions required in practical use, which are standing, running, turning, and sitting stably.","PeriodicalId":227148,"journal":{"name":"2007 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126185593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Metrics for quantifying system performance in intelligent, fault-tolerant multi-robot teams","authors":"Balajee Kannan, L. Parker","doi":"10.1109/IROS.2007.4399530","DOIUrl":"https://doi.org/10.1109/IROS.2007.4399530","url":null,"abstract":"Any system that has the capability to diagnose and recover from faults is considered to be a fault-tolerant system. Additionally, the quality of the incorporated fault-tolerance has a direct impact on the overall performance of the system. Hence, being able to measure the extent and usefulness of fault- tolerance exhibited by the system would provide the designer with a useful analysis tool for better understanding the system as a whole. Unfortunately, it is difficult to quantify system fault-tolerance on its own for intelligent systems. A more useful metric for evaluation is the \"effectiveness\" measure of fault- tolerance. The influence of fault-tolerance towards improving overall performance determines the overall effectiveness or quality of the system. In this paper, we outline application- independent metrics to measure fault-tolerance within the context of system performance. In addition, we also outline potential methods to better interpret the obtained measures towards understanding the capabilities of the implemented system. Furthermore, a main focus of our approach is to capture the effect of intelligence, reasoning, or learning on the effective fault-tolerance of the system, rather than relying purely on traditional redundancy based measures. We show the utility of the designed metrics by applying them to different fault-tolerance architectures implemented for multiple complex heterogeneous multi-robot team applications and comparing system performance. Finally, we contrast the developed metrics with the only other existing method (HWB method) for evaluating (that we are aware of) effective fault-tolerance for multi-robot teams and rate them in terms of their capability to best interpret the workings of the implemented systems. To the best of our knowledge, this is the first metric that attempts to evaluate the quality of learning towards understanding system level fault-tolerance.","PeriodicalId":227148,"journal":{"name":"2007 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123246395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An environmental adaptive control system of a wheel type mobile robot for the rough terrain movement","authors":"Masanori Sato, A. Kanda, K. Ishii","doi":"10.1109/IROS.2007.4399604","DOIUrl":"https://doi.org/10.1109/IROS.2007.4399604","url":null,"abstract":"The transportation using wheels is one of the most popular transportation mechanisms for mobile robots because a wheel type mobile system has high energy efficiency, simple mechanism and well investigated control system. However, the wheel type mobile robots have the difficulty in the rough terrain movement. In this research, we propose an environmental adaptive control system for a wheel type mobile robot for the rough terrain movement. This proposal system recognized the traveling environment and switched the adaptable controllers, and showed the better performance than only one controller.","PeriodicalId":227148,"journal":{"name":"2007 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114901191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling and motion planning for handling furniture by a mobile manipulator","authors":"Kimitoshi Yamazaki, T. Tsubouchi, M. Tomono","doi":"10.1109/IROS.2007.4399399","DOIUrl":"https://doi.org/10.1109/IROS.2007.4399399","url":null,"abstract":"This paper introduces a planning method for handling furniture which exists in real world. We propose a method which is easily expandable its handle able furniture such as closet, shelf and so on. If the robot can handle such furniture autonomously, it is expected that multiple daily tasks, for example, storing a small object in a drawer, can be achieved by the robot. Because perplexing processes is needed to give the knowledge of furniture handling to the robot manually, we propose direct teaching based approach which can easily give not only how to handle the furniture but also an appearance and 3D shape of it. Combining general knowledge given in advance and manipulation procedure instructed by human directly, the robot acquires how to manipulate the storing places. The performance of the proposed method is illustrated by experiments.","PeriodicalId":227148,"journal":{"name":"2007 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115523601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Natural task decomposition with intrinsic potential fields","authors":"Stephen Hart, R. Grupen","doi":"10.1109/IROS.2007.4399481","DOIUrl":"https://doi.org/10.1109/IROS.2007.4399481","url":null,"abstract":"Any given task can be solved in a number of ways, whether through path-planning, modeling, or control techniques. In this paper, we present a methodology for natural task decomposition through the use of intrinsically meaningful potential fields. Specifically, we demonstrate that using classical conditioning measures in a concurrent control framework provides a domain-general means for solving tasks. Among the conditioning measures we use are manipulability [T. Yoshikawam, 1985], localizability [J. Uppala et al., 2002], and range of motion. To illustrate the value of our approach we demonstrate its applicability to an industrially relevant inspection task.","PeriodicalId":227148,"journal":{"name":"2007 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"232 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115592119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Angle control of a loosely coupled mechanism in 3D space using length sensors","authors":"M. Shibata, T. Yoshimura, S. Hirai","doi":"10.1109/IROS.2007.4399093","DOIUrl":"https://doi.org/10.1109/IROS.2007.4399093","url":null,"abstract":"We describe here a mechanism for controlling angles of a human-like joint using length sensors in three- dimensional (3D) space. This joint mechanism, which is called a loosely coupled mechanism, includes a viscoelastic object and soft actuators in place of the cartilage and muscles of a human arm. To confirm motion of the link using one length sensor, we constructed a prototype of the mechanism in two-dimensional (2D) space. Based on this prototype, we constructed a 3D loosely coupled mechanism with length sensors, and we were able to control two projecting angles of the 3D prototype. In addition, we propose an appropriate method of measurement to reduce errors in measurement due to the length sensors. Using this method, we found that, for each projecting plane, the errors were less than 1.0 deg in our 3D prototype.","PeriodicalId":227148,"journal":{"name":"2007 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116018380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Humanoid robot noise suppression by particle filters for improved automatic speech recognition accuracy","authors":"Florian Kraft, Matthias Wölfel","doi":"10.1109/IROS.2007.4399114","DOIUrl":"https://doi.org/10.1109/IROS.2007.4399114","url":null,"abstract":"Automatic speech recognition on a humanoid robot is exposed to numerous known noises produced by the robot's own motion system and background noises such as fans. Those noises interfere with target speech by an unknown transfer function at high distortion levels, since some noise sources might be closer to the robot's microphones than the target speech sources. In this paper we show how to remedy those distortions by a speech feature enhancement technique based on the recently proposed particle filters. A significant increase of recognition accuracy could be reached at different distances for both engine and background noises.","PeriodicalId":227148,"journal":{"name":"2007 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122539449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Collective construction of environmentally-adaptive structures","authors":"Justin Werfel, D. Ingber, R. Nagpal","doi":"10.1109/IROS.2007.4399462","DOIUrl":"https://doi.org/10.1109/IROS.2007.4399462","url":null,"abstract":"We describe decentralized algorithms by which a swarm of simple, independent, autonomous robots can build two-dimensional structures using square building blocks. These structures can (1) exactly match arbitrary user-specified designs, (2) adapt their shape to immovable obstacles, or (3) form a wall of given minimum width around an environmental feature. These three possibilities span the range from entirely prespecified structures to those whose shape is entirely determined by the environment. Robots require no explicit communication, instead using information storage capabilities of environmental elements (a form of \"extended stigmergy\") to coordinate their activities. We provide theoretical proof of the correctness of the algorithms for the first two types of structures, and experimental support for algorithms for the third.","PeriodicalId":227148,"journal":{"name":"2007 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122834639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development and evaluation of the second version of scrub nurse robot (SNR) for endoscopic and laparoscopic surgery","authors":"K. Yoshimitsu, F. Miyawaki, T. Sadahiro, Kentaro Ohnuma, Y. Fukui, D. Hashimoto, K. Masamune","doi":"10.1109/IROS.2007.4399359","DOIUrl":"https://doi.org/10.1109/IROS.2007.4399359","url":null,"abstract":"The shortage of nurses in large hospitals of developed countries has become a major problem. Especially, the shortage of scrub nurses, who assist operating surgeons exchange surgical instruments, has been chronically severe. To compensate for this shortage, we have been proposing the scrub nurse robot (SNR) system that is capable of functioning as a skilled human scrub nurse in endoscopic and laparoscopic surgery. We developed the 2nd version of SNR, and achieved smooth and wide movement of its arms each with 4 DOF. The 2nd SNR is able to speak several sentences and recognize some words as well as the names of surgical instruments, and is also capable of recognizing a surgeon's intraoperative actions by its real-time visual recognition system (RTVRS). The RTVRS is basically composed of both a commercially-available 3D position tracking system and the algorithm that we developed to recognize surgeons' actions during exchange of instruments from the above-mentioned positional data. In this paper, we evaluated how quickly and timely the RTVRS-driven SNR helped surgeon's stand-ins exchange instruments in a laboratory, in comparison with human scrub nurses in real surgical operations. We found two problems about the current RTVRS-driven SNR: one is its response time and the other is chiefly related to the design and mechanism of the part storing the surgical instruments. 1) Concerning the first problem, the RTVRS-driven SNR took 2.11 sec until it finished holding out its hand with an instrument after it had detected a surgeon's stand-in's motions observed during extraction of a surgical instrument. However, a skilled real surgeon took 1.90 sec until he got the requested instrument in the clinical cases although he had to wait for as long as 1.24 sec until receiving it. Therefore, we must speed up the SNR's performance at least by 0.2 sec to assist the real surgeon as human scrub nurses did. Especially, since 0.68 sec out of the 2.11 sec was spent in data processing within the current RTVRS, we conclude that the performance of the RTVRS must be improved rather than speed-up of its arm movement. 2) The other problem was highlighted by measurement of the time during which the stand-ins and the real surgeon had to take their eyes from the monitor displaying the operative field within the abdomen. We termed this period of time 'eyes-off time. The existence of 'eyes-off' time observed during his actions of returning an instrument after use and of waiting for the next instrument was regarded as unfavorable. The 'eyes-off' time was 2.34 sec in the laboratory whereas it was 0.19 sec in the clinical cases. The much longer 'eyes-off' time in the laboratory was partly due to inexperienced stand-ins' performances, but mainly because of the design and mechanism of the part storing the instruments (a tool changer). 
To overcome these two problems, we are now developing the next version of SNR.","PeriodicalId":227148,"journal":{"name":"2007 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114245327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}