{"title":"Adaptable EMG Prosthetic Hand using On-line Learning Method -Investigation of Mutual Adaptation between Human and Adaptable Machine","authors":"R. Kato, T. Fujita, H. Yokoi, T. Arai","doi":"10.1109/ROMAN.2006.314455","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314455","url":null,"abstract":"We developed a new adaptable EMG prosthetic hand, which executes recognition process and learning process in parallel and can keep up with change in the mapping between an electromyographic signals (EMG) to the desired motion, for amputee. EMG-to-motion classifier which used in proposed prosthetic hand is done under the assumptions that the input motions are continuous, and the teaching motions are ambiguous in nature, therefore, automatic addition, elimination and selection of learning data are possible. Using our proposed prosthetic hand system, we conducted experiments to discriminate eight forearm motions, with the results, a stable and highly effective discrimination rate was achieved and maintained even when changes occurred in the mapping. Moreover, we analyzed mutual adaptation between human and adaptable prosthetic hand using ability test and f-MRI, and clarified each adaptation process","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"278 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124209494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of Autonomous Assistive Devices -Analysis of change of human motion patterns-","authors":"K. Kita, R. Kato, H. Yokoi, T. Arai","doi":"10.1109/ROMAN.2006.314454","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314454","url":null,"abstract":"The purpose of our research is to build a system for mutual adaptation between a user and assistive devices for restoration of motor function. To build such system, it is necessary to know human's motion patterns. In this paper, as the first step, we investigated human motion characteristic on human-machine system like EMG (electromyogram) prosthetic hand and EMG to motion classifier system. In the experiment, we measured the EMG signals and investigated a difference between motion patterns of teaching motion, i.e. user's intended motion, and that of actual motion using the proposed criteria. As results, it is clear that these criteria are useful to analyze changes of human motion patterns","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128976080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual Extensible Tool Interface for Three-Dimensional Interaction with Remote Objects","authors":"Takabumi Watanabe, S. Wesugi, Y. Miwa","doi":"10.1109/ROMAN.2006.314358","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314358","url":null,"abstract":"Many shared-space communication systems have been proposed which support bodily actions, such as eye gaze and instruction, among remote people. However, the method to share a three-dimensional bodily action in physical space with remote people has not been explored sufficiently. In this paper, we propose a novel method of a shared-space communication to support to bridge over remote two tabletops visually and to share bodily action among remote people. To achieve this aim, a video image of remote tabletop is required to be shared with each other. A display and an interface are also needed for reflecting a remote bodily action in a local space consistently. Consequently, the front screen, on which a video image of a remote space including a tabletop is projected to connect tableside visually, is installed aslope on a table. We developed also a virtual extensible tool interface which supports visual interactions including pointing to and touching a physical object in remote space by representing virtual extensive tool to a remote space. Experiments on bodily interactions between local and remote people demonstrated that the extensible tool interface can support bodily interactions with a remote partner including instructing to a remote physical object in three dimensions. Moreover, the results indicate clearly that the participants felt as if they were touching the remote table. Thus, our system possesses a potential to support a remote co-creative communication including a physical interaction through tools, and it is promising","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124808987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rackham: An Interactive Robot-Guide","authors":"A. Clodic, S. Fleury, R. Alami, R. Chatila, G. Bailly, L. Brethes, M. Cottret, P. Danès, X. Dollat, F. Elisei, I. Ferrané, M. Herrb, G. Infantes, Christian Lemaire, F. Lerasle, Jérôme Manhes, Patrick Marcoul, P. Menezes, V. Montreuil","doi":"10.1109/ROMAN.2006.314378","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314378","url":null,"abstract":"Rackham is an interactive robot-guide that has been used in several places and exhibitions. This paper presents its design and reports on results that have been obtained after its deployment in a permanent exhibition. The project is conducted so as to incrementally enhance the robot functional and decisional capabilities based on the observation of the interaction between the public and the robot. Besides robustness and efficiency in the robot navigation abilities in a dynamic environment, our focus was to develop and test a methodology to integrate human-robot interaction abilities in a systematic way. We first present the robot and some of its key design issues. Then, we discuss a number of lessons that we have drawn from its use in interaction with the public and how that will serve to refine our design choices and to enhance robot efficiency and acceptability","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"516 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116227305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A model for inferring the intention in imitation tasks","authors":"B. Jansen, Tony Belpaeme","doi":"10.1109/ROMAN.2006.314424","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314424","url":null,"abstract":"Robot imitation comprises a number of hard problems, one of the most difficult problems is the extraction of intention from a demonstration. A demonstration can almost always be interpreted in different ways, making it impossible for the imitating robot to find out what exactly it is that was demonstrated. We first attempt to set out the problem of intention reading. Next, we offer a computational model which implements a solution to intention reading. Our model needs repeated interactions between the demonstrator and the imitator. Through keeping a score about which interactions where successful, the imitating robot gradually builds a model which \"understands\" what the intent is of the demonstrator","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115136419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Context Orientated Motion Generation: A New Scheme for Humanoid Robot Control","authors":"I. Boesnach, J. Moldenhauer, A. Fischer, T. Stein","doi":"10.1109/ROMAN.2006.314434","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314434","url":null,"abstract":"There are two quite different kinds of approaches to the generation of motions for a humanoid robot. The first one optimizes for the robot's pose, i.e. the movement should look similar to human motions (Zordan and Hodgins, 1999), (Dasgupta and Nakamura, 1999), (Riley et al., 2003), (Kuffner et al., 2003), (Erol et al., 2003), and (Ijspeert et al., 2002). The second one attempts to make the robot perform a given task and is thus focused on the accurate movements of the robot's end effectors (Asfour et al., 2000) and (Yigit et al., 2003). In this work, we present a completely new scheme for the motion generation of a humanoid robot called context oriented motion generation. This scheme incorporates the pose oriented approach and the task oriented approach. It is based on a classical trajectory generator and a new context specific motion classifier developed by our group. The trajectory generator creates a set of trajectories and thereby ensures that the given motion task is accomplished by all trajectories. Afterwards, the motion classifier evaluates this set of motions with respect to the given context and selects the optimal trajectory","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"142 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123039698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human Clustering for A Partner Robot Based on Particle Swarm Optimization","authors":"I. A. Sulistijono, N. Kubota","doi":"10.1109/ROMAN.2006.314480","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314480","url":null,"abstract":"This paper proposes swarm intelligence for a perceptual system of a partner robot. The robot requires the capability of visual perception to interact with a human. Basically, a robot should perform moving object extraction and clustering for visual perception used in the interaction with a human. In this paper, we propose a total system for human classification for a partner robot by using particle swarm optimization, k-means, self organizing maps and back propagation. The experimental results show that the partner robot can perform the human clustering and classification","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"158 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123096550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Behavior Fusion Estimation from Demonstration","authors":"M. Nicolescu, O. Jenkins, A. Olenderski","doi":"10.1109/ROMAN.2006.314457","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314457","url":null,"abstract":"A critical challenge in robot learning from demonstration is the ability to map the behavior of the trainer onto the robot's existing repertoire of basic/primitive capabilities. Following a behavior-based approach, we aim to express a teacher's demonstration as a linear combination (or fusion) of the robot's primitives. We treat this problem as a state estimation problem over the space of possible linear fusion weights. We consider this fusion state to be a model of the teacher's control policy expressed with respect to the robot's capabilities. Once estimated under various sensory preconditions, fusion state estimates are used as a coordination policy for online robot control to imitate the teacher's decision making. A particle filter is used to infer fusion state from control commands demonstrated by the teacher and predicted by each primitive. The particle filter allows for inference under the ambiguity over a large space of likely fusion combinations and dynamic changes to the teacher's policy over time. We present results of our approach in a simulated and real world environments with a Pioneer 3DX mobile robot","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125638012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mobile Manipulation Based on Generic Object Knowledge","authors":"F. Bley, Volker Schmirgel, K. Kraiss","doi":"10.1109/ROMAN.2006.314363","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314363","url":null,"abstract":"The control of vision-based mobile manipulators relies on specific information representing the object which has to be manipulated. Conventional methods like position-based or image-based visual servoing use a precise 3D model of the object or image features to move the end effector to the desired position. This a-priori knowledge has to be generated by an expert before the manipulation can be performed and it is limited to a specific object. Similar items of the same category varying in size or color can not be handled with these approaches. We are proposing a new methodology, which uses a generic category description instead of specific object features in order to allow the interaction with a broader range of items. This description includes appearance properties represented through geometrical primitives as well as additional category knowledge like functional properties, mechanical attributes and directions for object handling. Since the online determination of possible grasps for multi-fingered grippers is time-consuming, suitable grasp modes are stored for every object category, including preferred areas of contact on the objects surface","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"496 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115886028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"There You Go! - Estimating Pointing Gestures In Monocular Images For Mobile Robot Instruction","authors":"J. Richarz, Christian Martin, Andrea Scheidig, H. Groß","doi":"10.1109/ROMAN.2006.314446","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314446","url":null,"abstract":"In this paper, we present a neural architecture that is capable of estimating a target point from a pointing gesture, thus enabling a user to command a mobile robot to a specific position in his local surroundings by means of pointing. In this context, we were especially interested to determine whether it is possible to implement a target point estimator using only monocular images of low-cost Webcams. The feature extraction is also quite straightforward: We use a gabor jet to extract the feature vector from the normalized camera images; and a cascade of multi layer perceptron (MLP) classifiers as estimator. The system was implemented and tested on our mobile robotic assistant HOROS. The results indicate that it is in fact possible to realize a pointing estimator using monocular image data, but further efforts are necessary to improve the accuracy and robustness of our approach","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130132573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}