Dae-Hyung Park, Heiko Hoffmann, P. Pastor, S. Schaal, "Movement reproduction and obstacle avoidance with dynamic movement primitives and potential fields," Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots, Dec. 2008. DOI: 10.1109/ICHR.2008.4755937
Abstract: Robots in a human environment need to be compliant. This compliance requires that a preplanned movement can be adapted to an obstacle that may be moving or appearing unexpectedly. Here, we present a general framework for movement generation and mid-flight adaptation to obstacles. For robust motion generation, Ijspeert et al. developed the framework of dynamic movement primitives, which represent a demonstrated movement with a set of differential equations. These equations allow adding a perturbing force without sacrificing stability of the desired movement. We extend this framework such that arbitrary movements in end-effector space can be represented, which was not possible before. Furthermore, we include obstacle avoidance by adding to the equations of motion a repellent force: the gradient of a potential field centered around the obstacle. In addition, this article compares different potential fields and shows how to avoid obstacle-link collisions within this framework. We demonstrate the abilities of our approach in simulations and with an anthropomorphic robot arm.
V. Bonnet, P. Fraisse, N. Ramdani, J. Lagarde, S. Ramdani, B. Bardy, "Modeling postural coordination dynamics using a closed-loop controller," Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots, Dec. 2008. DOI: 10.1109/ICHR.2008.4755932
Abstract: This paper models recent data in the field of postural coordination showing the existence of self-organized postural states, and transitions between them, underlying supra-postural tracking movements. The proposed closed-loop controller captures the complex postural behaviors observed in humans and can be used to implement efficient and simple balance control principles in humanoids.
{"title":"A VR navigation of a 6-DOF gait rehabilitation robot with upper and lower limbs connections","authors":"B. Novandy, Jungwon Yoon, Christiand","doi":"10.1109/ICHR.2008.4756010","DOIUrl":"https://doi.org/10.1109/ICHR.2008.4756010","url":null,"abstract":"This paper explains a 6-DOF gait rehabilitation robot, which may allow a patient to navigate in virtual reality (VR) by upper and lower limbs interactions. The suggested robot is composed of an upper limb device, a sliding device, two footpad devices, and a partial body support system. The footpad device on the sliding device will generate 3-DOF spatial motions on sagittal plane for each foot. The upper limb device will allow arm of a user to swing naturally with a simple pendulum link with a passive prismatic joint. It is possible to use this robot inside normal home for the purpose of telerehabilitation since its size is compact and its power consumption is small. Synchronized gait patterns for this robot are designed to represent natural gait with the upper and lower limbs connections. In gait rehabilitation robots, one of the important concerns is not only to follow the robot motions passively, but also to allow the patient to walk by his/her intention. Thus, this robot allows automatic walking velocity update by estimating interaction torques between the human and the upper limb device, and synchronizing the upper limb device to the lower limb device. In addition, the upper limb device acts as user-friendly input device for navigating in virtual reality. By pushing the switches located at the right and left handles of the upper limb device, a patient is able to do turning motions during navigation in virtual reality. Through experimental results of a healthy subject, we showed that rehabilitation training can be more effectively combined to virtual environments with upper and lower limb connections, which will allow various rehabilitation training modes.","PeriodicalId":402020,"journal":{"name":"Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132940101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Woosung Yang, N. Chong, Syungkwon Ra, Changhwan Kim, Bum-Jae You, "Self-stabilizing bipedal locomotion employing neural oscillators," Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots, Dec. 2008. DOI: 10.1109/ICHR.2008.4755924
Abstract: For attaining stable and robust dynamic bipedal locomotion, we present an efficient and powerful alternative based on a biologically inspired control framework employing neural oscillators. Neural oscillators generate sustained rhythmic signals and show superior features for stabilizing bipedal locomotion, particularly when coupled with virtual impedance components. By building a network of neural oscillators, we enable humanoid robots to walk stably and to exhibit robustness against unexpected disturbances. Specifically, to maintain stability, the neural oscillator controls the trajectory of the COM in phase with the ZMP input. The effectiveness of the proposed control scheme is verified through simulations.
M. Xie, Z. Zhong, L. Zhang, H. J. Yang, C. Song, J. Li, L. Xian, L. Wang, "Self learning of gravity compensation by LOCH humanoid robot," Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots, Dec. 2008. DOI: 10.1109/ICHR.2008.4755999
Abstract: A humanoid robot is a complex machine with many degrees of freedom, and control at the joint level is a crucial step for a humanoid robot to achieve fast and accurate movements. In this paper, we address the issue of gravity compensation and propose a learning approach inspired by the human-like scheme of compensating gravity through learning. First, we describe the importance of gravity compensation. Then, for comparison, we outline the theoretical way of computing the torques that compensate the gravity acting on a limb's links and payload. Subsequently, we present a human-like learning scheme that accurately determines the torques necessary to compensate gravity at the various joints when the humanoid robot is in any posture of interest. Finally, real experiments with a humanoid robot are presented and discussed.
D. Kulić, Dongheui Lee, C. Ott, Yoshihiko Nakamura, "Incremental learning of full body motion primitives for humanoid robots," Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots, Dec. 2008. DOI: 10.1109/ICHR.2008.4756000
Abstract: This paper describes an approach for on-line, incremental learning of full body motion primitives from observation of human motion. The continuous observation sequence is first partitioned into motion segments, using stochastic segmentation. Motion segments are next incrementally clustered and organized into a hierarchical tree structure representing the known motion primitives. Motion primitives are encoded using hidden Markov models, so that the same model can be used for both motion recognition and motion generation. At the same time, the relationship between motion primitives is learned via the construction of a motion primitive graph. The motion primitive graph can then be used to construct motions, consisting of sequences of motion primitives. The approach is implemented and tested on the IRT humanoid robot.
{"title":"Robust real-time stereo-based markerless human motion capture","authors":"P. Azad, T. Asfour, R. Dillmann","doi":"10.1109/ICHR.2008.4755975","DOIUrl":"https://doi.org/10.1109/ICHR.2008.4755975","url":null,"abstract":"The main problem of markerless human motion capture is the high-dimensional search space. Tracking approaches therefore utilize temporal information and rely on the pose differences between consecutive frames being small. Typically, systems using a pure tracking approach are sensitive to fast movements or require high frame rates, respectively. However, on the other hand, the complexity of the problem does not allow real-time processing at such high frame rates. Furthermore, pure tracking approaches often only recover by chance once tracking has got lost. In this paper, we present a novel approach building on top of a particle filtering framework that combines an edge cue and 3D hand/head tracking in a distance cue for human upper body tracking, as proposed in our earlier work. To overcome the mentioned deficiencies, the solutions of an inverse kinematics problem for a - in the context of the problem - redundant arm model are incorporated into the sampling of particles in a simplified annealed particle filter. Furthermore, a prioritized fusion method and adaptive shoulder positions are introduced in order to allow proper model alignment and therefore smooth tracking. Results of real-world experiments show that the proposed system is capable of robust online tracking of 3D human motion at a frame rate of 15 Hz. Initialization is accomplished automatically.","PeriodicalId":402020,"journal":{"name":"Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128422960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic display of facial expressions on the face robot made by using a life mask","authors":"T. Hashimoto, Sachio Hiramatsu, Hiroshi Kobayashi","doi":"10.1109/ICHR.2008.4756017","DOIUrl":"https://doi.org/10.1109/ICHR.2008.4756017","url":null,"abstract":"In face-to-face communication, we use not only verbal medium but also non-verbal medium for communication. In particular, facial expressions are very important for emotional communication because they show emotions and feelings effectively. Therefore it is considered that facial expressions are necessary in order to make human-robot communication more naturally. In this paper, ldquoFace Robotrdquo that has human-like appearance and can display facial expressions similar to human being is developed. In order to improve the humanity, the skin of the face robot is taken from the cast of existing femalepsilas face (i.e. life-mask). We then add and improve control points of the face robot according to features of her face. Moreover we analyze her facial expressions in order to mimic her dynamic facial expressions with the face robot. We then confirm that mimicking of her dynamic facial expressions and face features with the face robot are reproduced successfully.","PeriodicalId":402020,"journal":{"name":"Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130680599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Task dependent human-like grasping","authors":"Baris Ozyer, Erhan Öztop","doi":"10.1109/ICHR.2008.4755949","DOIUrl":"https://doi.org/10.1109/ICHR.2008.4755949","url":null,"abstract":"In this study we report our preliminary work on investigating task dependent grasping in humans and realization of this skill on a robotic platform. For human data acquisition, subjects were asked to reach and grasp two objects (a claw-hammer and a pen) positioned in four different orientations. The subjects were primed by specification of the task to be performed with the objects after the grasp. For the hammer, the task could be one of hammering or prying a nail, or transporting the hammer; for the pen the task could be transporting or using the pen for drawing a circle. The human movements were recorded using a motion capture system and analyzed off-line. The results indicate that humans deploy different grasps for objects depending on the task specification, even though the orientation and location of the objects are kept the same. The human data was also used to derive robot trajectories that were used to implement task depended grasping on a 7-DOF robotic arm (Mitsubishi, PA-10) and a 16-DOF robotic hand (Gifu Hand, Dainichi Co. Ltd.).","PeriodicalId":402020,"journal":{"name":"Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122526956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chyi-Yeu Lin, C. Tseng, H. Gu, Kuo-Liang Chung, C. Fahn, Kai-Jay Lu, Chih-Cheng Chang, "An autonomous singing and news broadcasting face robot," Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots, Dec. 2008. DOI: 10.1109/ICHR.2008.4755994
Abstract: This research aims to devise a robotic head with a human-like face and skin that can read randomly composed musical notation, sing the corresponding song, and broadcast news. The face robot has an artificial facial skin that can display a number of facial expressions via motions driven by internal servo motors. Two cameras, one installed inside each eyeball, provide the vision capability for reading the musical notation. Computer vision techniques are used to interpret the musical notation and the lyrics of the song, and voice synthesis techniques are then employed to enable the robot to sing the song. The mouth patterns of the face robot change automatically to match the emotion corresponding to the lyrics of the song and the broadcast news. Experiments show that the face robot can successfully read and then sing a song a high percentage of the time.