{"title":"Angular momentum primitives for human turning: Control implications for biped robots","authors":"M. Farrell, H. Herr","doi":"10.1109/ICHR.2008.4755962","DOIUrl":"https://doi.org/10.1109/ICHR.2008.4755962","url":null,"abstract":"Human locomotion involves a number of different complex activities. Humanoid robots, if they are expected to work in a human environment, should be expected to navigate obstacles and transients as well as, or better than a human being. Turning is one aspect of human walking that is poorly understood from the perspective of biomechanics and robotics. It is an important task comprising a large percentage of daily activities through most human environments. During turning the body is subjected to torques that the leave the body unstable. By understanding the contributions of the spin angular momentum about the center of mass we can gain insight on how to design better controllers for bipedal robots. There are several different types of turning; using alternate legs as the stance leg to accomplish the turn and then recover and also, turning can be a steady-state phenomena as well as a more transient behavior depending on speed. The contributions of spin angular momentum to the center of mass is considered in the case of a spin-turn where the inside foot pivots and the opposite foot the direction of the turn returns the body to level ground walking. Motivations for control of human walking bipeds are discussed. Further, a theory is developed that turning is dominated by contributions from the swing leg producing angular momentum about the body during the turn.","PeriodicalId":402020,"journal":{"name":"Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116976953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Karlsruhe Humanoid Head","authors":"T. Asfour, K. Welke, P. Azad, A. Ude, R. Dillmann","doi":"10.1109/ICHR.2008.4755993","DOIUrl":"https://doi.org/10.1109/ICHR.2008.4755993","url":null,"abstract":"The design and construction of truly humanoid robots that can perceive and interact with the environment depends significantly on their perception capabilities. In this paper we present the Karlsruhe Humanoid Head, which has been designed to be used both as part of our humanoid robots ARMAR-IIIa and ARMAR-IIIb and as a stand-alone robot head for studying various visual perception tasks in the context of object recognition and human-robot interaction. The head has seven degrees of freedom (DoF). The eyes have a common tilt and can pan independently. Each eye is equipped with two digital color cameras, one with a wide-angle lens for peripheral vision and one with a narrow-angle lens for foveal vision to allow simple visuo-motor behaviors. Among these are tracking and saccadic motions towards salient regions, as well as more complex visual tasks such as hand-eye coordination. We present the mechatronic design concept, the motor control system, the sensor system and the computational system. To demonstrate the capabilities of the head, we present accuracy test results, and the implementation of both open-loop and closed-loop control on the head.","PeriodicalId":402020,"journal":{"name":"Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114407307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Designing communication activation system in group communication","authors":"Yoichi Matsuyama, Hikaru Taniyama, S. Fujie, Tetsunori Kobayashi","doi":"10.1109/ICHR.2008.4756016","DOIUrl":"https://doi.org/10.1109/ICHR.2008.4756016","url":null,"abstract":"Our community is facing serious problem: aging society. The population of elderly people is now increasing in Japan. Especially over 75 years old people is estimated to be up to 20 million in 2030. We have investigated in one day care centers which are facilities for elderly care. And we realized that communication is needed for its own sake in these facilities and active communication can cure even depression and dementia. Therefore we propose to cope with these problems using a robot as a communication activator in order to improve the effectiveness of group communication. We define group communication as one of the type of communication which is formed by several persons. This time, we focus on a recreation game named ldquoNandoku.rdquo Nandoku is a type of quize which can be described as group communication with a master of ceremony (MC). In this paper, we describe requirement for this system and system design. The system always selects its behavior and target (a participant in the game) to maximize ldquocommunication activeness.rdquo Communication activeness is defined as amount of several subjectspsila(ordinary three: A, B, C) participation, which are calculated with participantspsila face direction using camera information. For instance, if participant A is not fully participating by not making eye contact, the system is expected to select one of the behaviors such as ldquoCan you answer, Mr.A?rdquo to encourage A to participate in the game. We experimented with the system in a daycare center. Our results show subjectspsila participation is totally increased. That offers evidence that the robot can serve a practical role in improving the group communication as a communication activator.","PeriodicalId":402020,"journal":{"name":"Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121579483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An optimal control model unifying holonomic and nonholonomic walking","authors":"K. Mombaur, J. Laumond, E. Yoshida","doi":"10.1109/ICHR.2008.4756020","DOIUrl":"https://doi.org/10.1109/ICHR.2008.4756020","url":null,"abstract":"In this paper we explore the underlying principles of natural locomotion path generation of human beings. The knowledge of these principles is useful to implement biologically inspired path planning algorithms on a humanoid robot. The key is to formulate the path planning problem as optimal control problem. We propose a single dynamic model valid for all situations, unifying nonholonomic and holonomic parts of the motion, as well as a carefully designed unified objective function. The choice between holonomic and nonholonomic behavior appears, along with the optimal path, as result of the optimization by powerful numerical techniques. The proposed model and objective function are successfully tested in six different locomotion scenarios. The resulting paths are implemented on the HRP2 robot in the simulation environment OpenHRP as well as in the experiment on the real robot.","PeriodicalId":402020,"journal":{"name":"Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121680573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Motion capture based human motion recognition and imitation by direct marker control","authors":"C. Ott, Dongheui Lee, Yoshihiko Nakamura","doi":"10.1109/ICHR.2008.4755984","DOIUrl":"https://doi.org/10.1109/ICHR.2008.4755984","url":null,"abstract":"This paper deals with the imitation of human motions by a humanoid robot based on marker point measurements from a 3D motion capture system. For imitating the humanpsilas motion, we propose a Cartesian control approach in which a set of control points on the humanoid is selected and the robot is virtually connected to the measured marker points via translational springs. The forces according to these springs drive a simplified simulation of the robot dynamics, such that the real robot motion can finally be generated based on joint position controllers effectively managing joint friction and other uncertain dynamics. This procedure allows to make the robot follow the marker points without the need of explicitly computing inverse kinematics. For the implementation of the marker control on a humanoid robot, we combine it with a center of gravity based balancing controller for the lower body joints. We integrate the marker control based motion imitation with the mimesis model, which is a mathematical model for motion learning, recognition, and generation based on hidden Markov models (HMMs). Learning, recognition, and generation of motion primitives are all performed in marker coordinates paving the way for extending these concepts to task space problems and object manipulation. Finally, an experimental evaluation of the presented concepts using a 38 degrees of freedom humanoid robot is discussed.","PeriodicalId":402020,"journal":{"name":"Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots","volume":"76 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129352109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recognizing complex, parameterized gestures from monocular image sequences","authors":"Tobias Axenbeck, Maren Bennewitz, Sven Behnke, Wolfram Burgard","doi":"10.1109/ICHR.2008.4755973","DOIUrl":"https://doi.org/10.1109/ICHR.2008.4755973","url":null,"abstract":"Robotic assistants designed to coexist and communicate with humans in the real world should be able to interact with them in an intuitive way. This requires that the robots are able to recognize typical gestures performed by humans such as head shaking/nodding, hand waving, or pointing. In this paper, we present a system that is able to spot and recognize complex, parameterized gestures from monocular image sequences. To represent people, we locate their faces and hands using trained classifiers and track them over time. We use few, expressive features extracted out of this compact representation as input to hidden Markov models (HMMs). First, we segment gestures into distinct phases and train HMMs for each phase separately. Then, we construct composed HMMs, which consist of the individual phase-HMMs. Once a specific phase is recognized, we estimate the parameter of the current gesture, e.g., the target of a pointing gesture. As we demonstrate in the experiments, our method is able to robustly locate and track hands, despite of the fact that they can take a large number of substantially different shapes. Based on this, our system is able to reliably spot and recognize a variety of complex, parameterized gestures.","PeriodicalId":402020,"journal":{"name":"Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots","volume":"370 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132413970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RobotCub implementation of real-time least-square fitting of ellipses","authors":"N. Greggio, L. Manfredi, C. Laschi, P. Dario, M. Carrozza","doi":"10.1109/ICHR.2008.4755964","DOIUrl":"https://doi.org/10.1109/ICHR.2008.4755964","url":null,"abstract":"This paper presents the implementation of a new algorithm for pattern recognition in machine vision developed in our laboratory applied to the RobotCub humanoid robotics platform simulator. The algorithm is a robust and direct method for the least-square fitting of ellipses to scattered data. RobotCub is an open source platform, born to study the development of neuro-scientific and cognitive skills in human beings, especially in children. By the estimation of the surrounding objects properties (such as dimensions, distances, etc...) a subject can create a topographic map of the environment, in order to navigate through it without colliding with obstacles. In this work we implemented the method of the least-square fitting of ellipses of Maini (EDFE), previously developed in our laboratory, in a robotics context. Moreover, we compared its performance with the hough transform, and others least-square ellipse fittings techniques. We used our system to detect spherical objects, and we applied it to the simulated RobotCub platform. We performed several tests to prove the robustness of the algorithm within the overall system, and finally we present our results.","PeriodicalId":402020,"journal":{"name":"Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133883278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards socially adaptive robots: A novel method for real time recognition of human-robot interaction styles","authors":"D. François, D. Polani, K. Dautenhahn","doi":"10.1109/ICHR.2008.4756004","DOIUrl":"https://doi.org/10.1109/ICHR.2008.4756004","url":null,"abstract":"Automatically detecting different styles of play in human-robot interaction is a key challenge towards adaptive robots, i.e. robots that are able to regulate the interactions and adapt to different interaction styles of the robot users. In this paper we present a novel algorithm for pattern recognition in human-robot interaction, the cascaded information bottleneck method. We apply it to real-time autonomous recognition of human-robot interaction styles. This method uses an information theoretic approach and enables to progressively extract relevant information from time series. It relies on a cascade of bottlenecks, the bottlenecks being trained one after the other according to the existing agglomerative information bottleneck algorithm. We show that a structure for the bottleneck states along the cascade emerges and we introduce a measure to extrapolate unseen data. We apply this method to real-time recognition of human-robot interaction styles by a robot in a detailed case study. The algorithm has been implemented for real interactions between humans and a real robot. We demonstrate that the algorithm, which is designed to operate real time, is capable of classifying interaction styles, with a good accuracy and a very acceptable delay. Our future work will evaluate this method in scenarios on robot-assisted therapy for children with autism.","PeriodicalId":402020,"journal":{"name":"Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127612317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time selection and generation of fall damage reduction actions for humanoid robots","authors":"Kunihiro Ogata, K. Terada, Y. Kuniyoshi","doi":"10.1109/ICHR.2008.4755950","DOIUrl":"https://doi.org/10.1109/ICHR.2008.4755950","url":null,"abstract":"Falling motion control is necessary because humanoid robots are vulnerable to falling. This issue has been the subject of several previous studies; the contributions of this paper are a motion selection method and a method for generating fall-avoidance motions and active shock reducing motions.","PeriodicalId":402020,"journal":{"name":"Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123202406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Compliant interaction in household environments by the Armar-III humanoid robot","authors":"M. Prats, Steven Wieland, T. Asfour, A. P. Pobil, R. Dillmann","doi":"10.1109/ICHR.2008.4755997","DOIUrl":"https://doi.org/10.1109/ICHR.2008.4755997","url":null,"abstract":"In this work, we present a humanoid robot able to perform compliant physical interaction tasks with furniture commonly found in household environments. A general framework for task description and sensor-based execution, based on previous work, has been adopted for this purpose, providing versatility to the robot, which is able to adapt its task knowledge to several different cases, without being specifically programmed for a particular task. Robustness to uncertainties during task execution is guaranteed by a force-torque sensor placed in the robotpsilas wrist, which is in charge of adapting the robot motion to the particular task. A total of 8 degrees of freedom are controlled, making the task execution highly redundant, thus allowing the use of auxiliary secondary tasks by means of task and joint redundancy management. Several experiments, performed in a real kitchen environment, are shown.","PeriodicalId":402020,"journal":{"name":"Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123589797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}