{"title":"Eye gaze tracking for a humanoid robot","authors":"Oskar Palinko, F. Rea, G. Sandini, A. Sciutti","doi":"10.1109/HUMANOIDS.2015.7363561","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2015.7363561","url":null,"abstract":"Humans use eye gaze in their daily interaction with other humans. Humanoid robots, on the other hand, have not yet taken full advantage of this form of implicit communication. In this paper we present a passive monocular gaze tracking system implemented on the iCub humanoid robot. The validation of the system proved that it is a viable low-cost, calibration-free gaze tracking solution for humanoid platforms, with a mean absolute error of about 5 degrees on horizontal angle estimates. We also demonstrated the applicability of our system to human-robot collaborative tasks, showing that the eye gaze reading ability can enable successful implicit communication between humans and the robot. Finally, in the conclusion we give generic guidelines on how to improve our system and discuss some potential applications of gaze estimation for humanoid robots.","PeriodicalId":417686,"journal":{"name":"2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids)","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121113390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ARCHR — Apparatus for Remote Control of Humanoid Robots","authors":"D. Lofaro, Martyna Bula, Patrick Early, E. Eide, Mannan Javid","doi":"10.1109/HUMANOIDS.2015.7363540","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2015.7363540","url":null,"abstract":"The Apparatus for Remote Control of Humanoid Robots (ARCHR) is an intuitive teleoperation system for high degree of freedom robots with haptic feedback. The purpose of this system is to allow minimally trained individuals to control a complex robot to complete a dexterous task. For example if it is a biohazard handling task then we want a biohazard handling expert operating the robot. However it should not be required that the biohazard handling expert also be a trained robot handler. The robot should be controlled intuitively and give good situational awareness. This is especially important in situations requiring first responders. The ARCHR system uses scaled kinematics of the target robot to incorporate the humans inherent intuitive understanding of joint motion. This setup also allows for hardware self collision detection. A scaled stereoscopic video feed in conjunction with a binocular virtual reality system immures the user into the body of the robot. This immersion is scale independent. Finally haptic feedback is used to extend the immersion to the tactile space. Three ARCHR controllers were created for three distinct humanoid robots, the DRC-Hubo, Baxter, and MiniBot. Each of these robots range from full-size to infant-size. Public testing was done using 30 untrained users on each platform. The results showed that the system does achieve the goal of creating an intuitive and immersive teleoperation system for high degree of freedom robots.","PeriodicalId":417686,"journal":{"name":"2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids)","volume":"202 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127320631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic demonstration and feature selection for robot learning","authors":"S. Morante, J. Victores, C. Balaguer","doi":"10.1109/HUMANOIDS.2015.7363569","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2015.7363569","url":null,"abstract":"Robot learning frameworks, such as Programming by Demonstration, are based on learning tasks from sets of user demonstrations. These frameworks, in their naive implementation, assume that all the data from the user demonstrations has been correctly sensed and can be relevant to the task. Analogous to feature selection, which is the process of selecting a subset of relevant features for use in model construction, this paper presents a demonstration selection process, which is additionally applied for feature selection for further data filtering. The demonstration and feature selection process presented is called Dissimilarity Mapping Filtering (DMF). DMF involves three steps: obtaining a measurement of dissimilarity (e.g. Dynamic Time Warping, etc.), reducing dimensions through a mapping algorithm (e.g. sum of dissimilarities, Multidimensional Scaling, etc.) and a filtering method (z-score based, DBSCAN, etc.). As a demonstration selector, DMF discards outlying demonstrations in terms of all the features considered simultaneously. As a feature selector, DMF discards features that present high inconsistency among demonstrations. We apply DMF to our Continuous Goal-Directed Actions (CGDA) robot learning framework presented in previous works.","PeriodicalId":417686,"journal":{"name":"2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids)","volume":"226 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127498372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pole vaulting robot with dual articulated arms that can change reaching position using active bending motion","authors":"Satoshi Nishikawa, Tomohiro Kobayashi, T. Fukushima, Y. Kuniyoshi","doi":"10.1109/HUMANOIDS.2015.7363564","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2015.7363564","url":null,"abstract":"Elasticity is an important factor in enhancing the physical capabilities of robots. Pole vaulting is an interesting task because a large elastic pole changes the trajectory of the robot drastically. Moreover, the robot can change its behavior by manipulating the flexible pole during a long pole-support phase. In this study, we investigated how the reaching point of pole vaulting was changed by an active bending motion because it is important for robots to move to the desired place. To examine the effect of motion, we used a multiple pendulum model and a robot having dual articulated arms with grippers. Simulation results showed that reaching positions were changed by the switching time of the active bending motion. A relatively late switching time allowed robots to vault to a farther position. However, a very late switching time had the opposite effect. Then, we developed a pole vaulting robot having dual articulated arms with grippers for the application of humanoid robots. Pole vaulting experiments using this robot showed the same tendency in the simulation. These results indicated that the reaching position of pole vaulting could be controlled by the switching time of active bending. Further, the robot reached a height of 1.67 m by releasing the pole.","PeriodicalId":417686,"journal":{"name":"2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124912327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Validation of whole-body loco-manipulation affordances for pushability and liftability","authors":"P. Kaiser, Markus Grotz, E. Aksoy, Martin Do, N. Vahrenkamp, T. Asfour","doi":"10.1109/HUMANOIDS.2015.7363471","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2015.7363471","url":null,"abstract":"Autonomous robots that are intended to work in disaster scenarios like collapsed or contaminated buildings need to be able to efficiently identify action possibilities in unknown environments. This includes the detection of environmental elements that allow interaction, such as doors or debris, as well as the utilization of fixed environmental structures for stable whole-body loco-manipulation. Affordances that refer to whole-body actions are especially valuable for humanoid robots as the necessity of stabilization is an integral part of their control strategies. Based on our previous work we propose to apply the concept of affordances to actions of stable whole-body loco-manipulation, in particular to pushing and lifting of large objects. We extend our perceptual pipeline in order to build large-scale representations of the robot's environment in terms of environmental primitives like planes, cylinders and spheres. A rule-based system is employed to derive whole-body affordance hypotheses from these primitives, which are then subject to validation by the robot. An experimental evaluation demonstrates our progress in detection, validation and utilization of whole-body affordances.","PeriodicalId":417686,"journal":{"name":"2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids)","volume":"194 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122355798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient self-collision avoidance based on focus of interest for humanoid robots","authors":"Cheng Fang, A. Rocchi, E. Hoffman, N. Tsagarakis, D. Caldwell","doi":"10.1109/HUMANOIDS.2015.7363500","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2015.7363500","url":null,"abstract":"This paper deals with the self-collision avoidance problem for humanoid robots in an efficient way. Self-collision avoidance is introduced as a constraint for each task in a hierarchical Inverse Kinematic (IK) problem. Since the number of link pairs which needs to be updated and checked for self-collision, in every control loop, is large, the novel concept of Self-Collision Avoidance Focus of Interest (SCAFoI) is proposed. SCAFoIs permits to predict and dynamically select the necessary link pairs to be checked online to improve the computation efficiency. For each of the several SCAFoIs, which corresponds to the related pairs of kinematic chains of the whole body, the status of the relative positional relationship is predicted. The prediction is done using a Support Vector Machine (SVM) which is a widely used classifier from the machine learning field. Moreover, techniques are proposed to guarantee and improve the prediction performance of the trained classifier. The effectiveness of the framework is verified using the whole-body motion control library OpenSoT by simulation on the model of the recently developed humanoid robot WALK-MAN.","PeriodicalId":417686,"journal":{"name":"2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116838626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impact of iris size and eyelids coupling on the estimation of the gaze direction of a robotic talking head by human viewers","authors":"Francois Foerster, G. Bailly, F. Elisei","doi":"10.1109/HUMANOIDS.2015.7363528","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2015.7363528","url":null,"abstract":"Primates - and in particular humans - are very sensitive to the eye direction of congeners. Estimation of gaze of others is one of the basic skills for estimating goals, intentions and desires of social agents, whether they are humans or avatars. When building robots, one should not only supply them with gaze trackers but also check for the readability of their own gaze by human partners. We conducted experiments that demonstrate the strong impact of the iris size and the position of the eyelids of an iCub humanoid robot on gaze reading performance by human observers. We comment on the importance of assessing the robot's ability of displaying its intentions via clearly legible and readable gestures.","PeriodicalId":417686,"journal":{"name":"2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128977061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Planning desired center of Mass and zero moment point trajectories for bipedal locomotion","authors":"L. Lanari, S. Hutchinson","doi":"10.1109/HUMANOIDS.2015.7363438","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2015.7363438","url":null,"abstract":"In this paper, we consider the problem of simultaneously synthesizing desired trajectories for both the center of mass (CoM) and zero moment point (ZMP) of a bipedal robot. The approach extends our past work, in which we derived an explicit constraint that ensures boundedness of CoM trajectories for a given reference ZMP trajectory. We parameterize the desired ZMP trajectory by a sequence of basis functions; these design parameters are optimized by an algorithm, in which the CoM boundedness constraint as well as ZMP and other CoM specifications are simultaneously satisfied. When these specifications do not fully constrain the solution, we apply a null-space projection approach that allows the incorporation of additional design criteria. We present simulation results for both the linear inverted pendulum model, and for a model that includes swing leg dynamics.","PeriodicalId":417686,"journal":{"name":"2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130409396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Iterative learning control for accurate task-space tracking with humanoid robots","authors":"Pranav A. Bhounsule, K. Yamane","doi":"10.1109/HUMANOIDS.2015.7363594","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2015.7363594","url":null,"abstract":"Precise task-space tracking with manipulator-type systems requires accurate kinematics models. In contrast to traditional manipulators, it is difficult to obtain an accurate kinematic model of humanoid robots due to complex structure and link flexibility. Also, prolonged use of the robot will lead to some parts wearing out or being replaced with a slightly different alignment, thus throwing off the initial calibration. Therefore, there is a need to develop a control algorithm that can compensate for the modeling errors and quickly retune itself, if needed, taking into account the controller bandwidth limitations and high dimensionality of the system. In this paper, we develop an iterative learning control algorithm that can work with existing inverse kinematics solver to refine the joint-level control commands to enable precise tracking in the task space. We demonstrate the efficacy of the algorithm on a theme-park type humanoid that learns to track the figure eight in 18 trials and to serve a drink without spilling in 9 trials.","PeriodicalId":417686,"journal":{"name":"2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130644885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Natural Gradient as a control signal for a humanoid robot","authors":"Marijn F. Stollenga, Alan J. Lockett, J. Schmidhuber","doi":"10.1109/HUMANOIDS.2015.7363550","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2015.7363550","url":null,"abstract":"This paper presents Natural Gradient Control (NGC), a control algorithm that efficiently estimates and applies the natural gradient for high-degree of freedom robotic control. In contrast to the standard task Jacobian, the natural gradient follows the direction of steepest descent with respect to a parameterized model with extra degrees of freedom injected. This procedure enables NGC to maneuver smoothly in regions where the task Jacobian is ill-conditioned or singular. NGC efficiently estimates the natural gradient using only forward kinematics evaluations. This sampling-based algorithm prevents the need for gradient calculations and therefore allows great flexibility in the cost functions. Experiments show NGC can even use statistics of rendered images as part of the cost function, which would be impossible with traditional inverse kinematics approaches. The advantages of NGC are shown on the full 41-degree upper body of an iCub humanoid, in simulation and on a real robot, and compared to a Jacobian-based controller. Experiments show that the natural gradient is robust and avoids common pitfalls such as local minima and slow convergence, which often affects the application of Jacobian-based methods. Demonstrations on the iCub show that NGC is a practical method that can be used for complex movements.","PeriodicalId":417686,"journal":{"name":"2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127684645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}