Title: Comparing Human and Robot Performance in the Execution of Kitchen Tasks: Evaluating Grasping and Dexterous Manipulation Skills
Authors: Nathan Elangovan, Che-Ming Chang, Ricardo V. Godoy, Felipe Sanches, Kevin Wang, Patrick Jarvis, Minas Liarokapis
Published in: 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), 28 November 2022
DOI: 10.1109/Humanoids53995.2022.10000248
Abstract: Over the last few decades, a great deal of research effort has focused on the development of household robots. Such robots need to execute a plethora of complex tasks that require significant dexterity and must operate in dynamic and unstructured environments (e.g., a kitchen). In this work, we focus on comparing human and robot performance in the execution of complex kitchen tasks, assessing the grasping and dexterous manipulation skills required. In particular, the study is based on a comprehensive collection of grasping and manipulation strategies employed by humans and by humans directly operating robots. A dataset is created containing more than 2000 activities typically executed in a kitchen environment, totaling more than two hours of data. Based on the analysis of this dataset, we propose a taxonomy that classifies the attributes of kitchen-specific grasping and manipulation strategies, as well as appropriate benchmarks for comparing the performance of robotic grippers against their human counterparts using what we call a dexterity/capability map. These color-coded maps enable us to visualize the current capabilities and limitations of robotic grippers in the execution of specific tasks. The insights can be used for the development of new classes of grippers and hands capable of performing on par with human hands.

Title: HumanoidBot: Full-Body Humanoid Chitchat System
Authors: Gabriel D. C. Seppelfelt, Tomoki Asaka, T. Nagai, Soh Yukizaki
Published in: 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), 28 November 2022
DOI: 10.1109/Humanoids53995.2022.10000209
Abstract: State-of-the-art chatbot models have been refined over the past few years, especially in the category of open-domain chatbots, thanks to new model architectures capable of storing conversational context more reliably and to larger datasets for training such models. With these improvements, text-based chatbot applications can simulate human replies. However, when such an application is implemented in a humanoid robot that communicates mainly through sound, the result may not be as humanlike as the text by itself. In this paper, we develop a full-body humanoid chitchat system, the HumanoidBot. Its objective is to investigate the influence of gestures performed by full-body humanoid robots simultaneously with speech utterances, aiming to improve humanlikeness in face-to-face, open-domain human-robot dialogues.

Title: Self-collision avoidance in bimanual teleoperation using CollisionIK: algorithm revision and usability experiment
Authors: Luciano Angelini, Manuela Uliano, Angela Mazzeo, Mattia Penzotti, M. Controzzi
Published in: 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), 28 November 2022
DOI: 10.1109/Humanoids53995.2022.10000179
Abstract: One of the challenges in teleoperation is avoiding self-collisions, which is particularly critical in bimanual systems. Available solutions are usually developed for redundant robots or introduce significant delays during teleoperation. We propose a revised version of the CollisionIK algorithm, dubbed revised_CollisionIK, to address this issue. The algorithm was tested in a bimanual system teleoperated by naïve users and compared with the original CollisionIK monitored by a standard emergency-brake strategy. Based on objective and subjective metrics, the results show that revised_CollisionIK can be successfully used for teleoperating bimanual pick-handover-place tasks. Participants found the manipulation of small objects easier with this strategy and did not perceive any difference in terms of accuracy and delay, despite these being significantly worse than with CollisionIK combined with a standard emergency-brake strategy.

Title: Whole-Body Control and Estimation of Humanoid Robots with Link Flexibility
Authors: Giulio Romualdi, N. Villa, Stefano Dafarra, D. Pucci, O. Stasse
Published in: 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), 28 November 2022
DOI: 10.1109/Humanoids53995.2022.10000157
Abstract: This article presents a whole-body controller for humanoid robots affected by concentrated link flexibility. We characterize the link flexibility by introducing passive joints at the points where deflections concentrate, which separate the flexible links into two or more rigid bodies. In this way, we extend the robot model to treat link deflections as underactuated extra degrees of freedom, allowing us to design a whole-body controller capable of anticipating deformations. Since in a real scenario the deflection is not directly measurable, we present an observer that estimates the flexible joint state, namely position, velocity, and torque, considering only the measured contact force and the state of the actuated joints. We validate the overall approach in simulations with the humanoid robot TALOS, whose hip is mechanically flexible due to a localized mechanical weakness. Furthermore, the paper compares the proposed whole-body control strategy with state-of-the-art approaches. Finally, we analyze the performance of the estimator for different values of hip elasticity.

Title: Improving Sample Efficiency of Example-Guided Deep Reinforcement Learning for Bipedal Walking
Authors: R. Galljamov, Guoping Zhao, B. Belousov, A. Seyfarth, Jan Peters
Published in: 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), 28 November 2022
DOI: 10.1109/Humanoids53995.2022.10000068
Abstract: Reinforcement learning holds great promise for enabling bipedal walking in humanoid robots. However, despite encouraging recent results, training still requires significant amounts of time and resources, precluding fast iteration cycles in control development. Faster training methods are therefore needed. In this paper, we investigate a number of techniques for improving the sample efficiency of on-policy actor-critic algorithms and show that a significant reduction in training time is achievable with a few straightforward modifications of common algorithms, such as PPO and DeepMimic, tailored specifically to the problem of bipedal walking. Action space representation, symmetry prior induction, and cliprange scheduling proved effective at reducing sample complexity by a factor of 4.5. These results indicate that domain-specific knowledge can be readily utilized to reduce training times and thereby enable faster development cycles in challenging robotic applications.

Title: Preliminary Design and Development of a Selectable Stiffness Joint for Elbow Orthosis
Authors: Robinson Guachi, Flavio Napoleoni, Francesco Pipitone, M. Controzzi
Published in: 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), 28 November 2022
DOI: 10.1109/Humanoids53995.2022.10000188
Abstract: The rehabilitation of a patient with a brachial plexus injury comprises motor activities aimed at preventing muscular atrophy and fibrosis. Patients are frequently prescribed passive orthoses for shoulder and forearm support and muscle exercises using elastic bands as part of their functional treatment. Despite the clear need for assistive technology to help patients with upper-extremity disability, research and development of upper-extremity assistive technologies remains limited. In this scenario, developing an elbow orthosis that is compact and lightweight and that combines different functionalities customizable to the patient's needs can maximize the benefit of the rehabilitation process. Here we present a new wearable passive elbow orthosis designed so that the output impedance of the joint can be modulated between three discrete modes depending on the user's needs and the motor task performed: (1) free swing mode, allowing free flexion/extension of the elbow; (2) compliant mode, in which the joint provides a customizable elastic torque opposing flexion/extension movements; and (3) stiff mode, which enables the patient to lock and hold the arm in a specific position. The alpha prototype has been developed, integrated, and kinematically verified against the technical specifications.

{"title":"Force Feedback Control For Dexterous Robotic Hands Using Conditional Postural Synergies","authors":"D. Dimou, J. Santos-Victor, Plinio Moreno","doi":"10.1109/Humanoids53995.2022.10000162","DOIUrl":"https://doi.org/10.1109/Humanoids53995.2022.10000162","url":null,"abstract":"We present a force feedback controller for a dexterous robotic hand equipped with force sensors on its fingertips. Our controller uses the conditional postural synergies framework to generate the grasp postures, i.e. the finger configuration of the robot, at each time step based on forces measured on the robot's fingertips. Using this framework we are able to control the hand during different grasp types using only one variable, the grasp size, which we define as the distance between the tip of the thumb and the index finger. Instead of controlling the finger limbs independently, our controller generates control signals for all the hand joints in a (low-dimensional) shared space (i.e. synergy space). In addition, our approach is modular, which allows to execute various types of precision grips, by changing the synergy space according to the type of grasp. We show that our controller is able to lift objects of various weights and materials, adjust the grasp configuration during changes in the object's weight, and perform object placements and object handovers.","PeriodicalId":180816,"journal":{"name":"2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128946301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: NaivPhys4RP - Towards Human-like Robot Perception "Physical Reasoning based on Embodied Probabilistic Simulation"
Authors: Franklin Kenghagho, M. Neumann, Patrick Mania, Toni Tan, F. Siddiky, René Weller, G. Zachmann, M. Beetz
Published in: 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), 28 November 2022
DOI: 10.1109/Humanoids53995.2022.10000153
Abstract: Perception in complex environments, especially dynamic and human-centered ones, goes beyond classical tasks such as classification, usually framed as the what- and where-object questions answered from sensor data, and poses at least three challenges that most current robot perception systems miss and few properly address. Sensors are extrinsically (e.g., clutter, embodiment-induced noise, delayed processing) and intrinsically (e.g., depth of transparent objects) very limited, resulting in missing or high-entropy data that is difficult to compress during learning, difficult to explain, and expensive to process during interpretation. (a) Therefore, the perception system should rather reason about the causes that produce such effects (how/why-happen questions). (b) It should reason about the consequences (effects) of agent-object and object-object interactions in order to anticipate (what-happen questions) the (e.g., undesired) world state and thus enable successful action in time. (c) Finally, it should explain its outputs for safety (meta why/how-happen questions). This paper introduces a novel white-box and causal generative model of robot perception (NaivPhys4RP) that emulates human perception by capturing the recently established Big Five aspects of human commonsense (FPCIU: Functionality, Physics, Causality, Intention, Utility), which invisibly (dark) drive our observational data and allow us to overcome the above problems. NaivPhys4RP particularly focuses on the aspect of physics, which ultimately and constructively determines the world state.

Title: Multimodal reinforcement learning for partner specific adaptation in robot-multi-robot interaction
Authors: M. Kirtay, V. Hafner, M. Asada, A. Kuhlen, Erhan Öztop
Published in: 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), 28 November 2022
DOI: 10.1109/Humanoids53995.2022.10000205
Abstract: Successful and efficient teamwork requires knowledge of the individual team members' expertise. Such knowledge is typically acquired through social interaction and forms the basis for socially intelligent, partner-adapted behavior. This study aims to implement this ability in teams of multiple humanoid robots. To this end, a humanoid robot, Nao, interacted with three Pepper robots to perform a sequential audio-visual pattern recall task that required integrating multimodal information. Nao outsourced its decisions (i.e., action selections) to its robot partners, applying reinforcement learning to perform the task efficiently in terms of neural computational cost. During the interaction, Nao learned its partners' specific expertise, which allowed it to turn for guidance to the partner whose expertise corresponds to the current task state. Nao's cognitive processing included a multimodal auto-associative memory that allowed the cost of perceptual processing (i.e., cognitive load) to be determined when processing audio-visual stimuli. In turn, this processing cost is converted into a reward signal by an internal reward-generation module. In this setting, the learner robot Nao aims to minimize cognitive load by turning to the partner whose expertise corresponds to a given task state. Overall, the results indicate that the learner robot discovers the expertise of its partners and exploits this information to execute its task with low neural computational cost, or cognitive load.

Title: A Guideline for Humanoid Leg Design with Oblique Axes for Bipedal Locomotion
Authors: Konrad Fründ, Anton Leonhard Shu, F. Loeffl, C. Ott
Published in: 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), 28 November 2022
DOI: 10.1109/Humanoids53995.2022.10000126
Abstract: The kinematics of humanoid robots are strongly inspired by the human archetype. A close analysis of the kinematics of the human musculoskeletal system reveals that the human joint axes are oriented at certain inclinations. This is in contrast to the most popular humanoid designs, whose configurations are based on perpendicular joint axes. This paper reviews the oblique joint axes of the joints of the human musculoskeletal system mainly involved in locomotion. We elaborate on how the oblique axes affect the performance of walking and running, and put the underlying mechanisms into perspective for both locomotion types. In particular, walking robots can benefit greatly from using oblique joint axes. For running, the primary goal is to align the axes of motion with the mainly active sagittal plane. The results of this analysis can serve as a guideline for the kinematic design of a humanoid robot and as a prior for optimization-based approaches.