{"title":"Omni-Directional Fall Avoidance of Bipedal Robots with Variable Stride Length and Step Duration","authors":"Gwanwoo Kim, Hiroki Kuribayashi, Y. Tazaki, Y. Yokokohji","doi":"10.1109/HUMANOIDS.2018.8625058","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2018.8625058","url":null,"abstract":"This paper proposes a capturability analysis method for fall avoidance of bipedal robots under arbitrary disturbances. Based on a dynamical model of the planar movement of the center of mass, the capture region is computed numerically by discretizing the state space and the set of control inputs. The proposed method handles a number of practically important elements of fall avoidance, such as the relation between stride length and step duration and the kinematic limitations of foot placement, which conventional studies have neglected for simplification. The developed fall-avoidance controller uses precomputed capture regions to filter the reference foot placements produced by a footstep planner, ensuring fall avoidance with little online computation. Capture regions computed by the proposed method are compared with conventional ones in case studies, and the performance of the fall-avoidance controller is evaluated in simulations.","PeriodicalId":433345,"journal":{"name":"2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123320877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
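Capturability analysis of this kind builds on the instantaneous capture point of the linear inverted pendulum. As a minimal sketch of that underlying idea (the classic one-step capture point formula, not the authors' discretized numerical computation; all parameter values here are assumptions for illustration):

```python
import math

def capture_point(x, xdot, z0, g=9.81):
    """Instantaneous capture point of the linear inverted pendulum:
    the ground point where the robot must step to come to rest.
    x: CoM horizontal position [m], xdot: CoM horizontal velocity [m/s],
    z0: constant CoM height [m]."""
    omega = math.sqrt(g / z0)  # natural frequency of the pendulum
    return x + xdot / omega

# Example: CoM above the origin, moving at 0.5 m/s, CoM height 0.9 m.
cp = capture_point(0.0, 0.5, 0.9)
```

A stationary CoM yields a capture point directly beneath it; forward velocity shifts it forward, and kinematic limits on foot placement (as studied in the paper) determine whether that point is actually reachable.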
{"title":"Measuring Bending Angle and Hallucinating Shape of Elongated Deformable Objects","authors":"Piotr Kicki, Michał Bednarek, K. Walas","doi":"10.1109/HUMANOIDS.2018.8624980","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2018.8624980","url":null,"abstract":"Many objects in a human-made environment have elongated shapes for easy manipulation and grasping. Since humanoid robots work in this environment, they require proper sensing and perception of such objects. Current approaches mainly provide perception of rigid objects, but many everyday items are non-rigid and more challenging to track due to their substantial shape variability. We want robots to be able to grasp and manipulate thin, elongated, deformable objects. We propose a system based on a deep neural network that predicts the bend angle of such objects from a single RGB image. In this paper, we present the proposed neural network architecture used for predicting the bending angle and finding the elongated shape in images with a cluttered background, together with the dataset used for training. We observed that the proposed system, even though it was trained on synthetic data, performed well on real data. The proposed architecture also provides the ability to hallucinate how a deformable pipe with any initial bend would look when subjected to an arbitrary bend angle. Our findings have more profound consequences than those mentioned above: we show that the proposed encoder-decoder neural network architecture has an interpretable latent vector element describing a measurable physical bend angle. Moreover, we allow bending arrows to be situated out of the image plane. In future work, we plan to extend the current approach with prediction of the full 3D shape of the elongated object from a single RGB image.","PeriodicalId":433345,"journal":{"name":"2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124049578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Constrained DMPs for Feasible Skill Learning on Humanoid Robots","authors":"Anqing Duan, R. Camoriano, Diego Ferigo, Daniele Calandriello, L. Rosasco, D. Pucci","doi":"10.1109/HUMANOIDS.2018.8624934","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2018.8624934","url":null,"abstract":"In the context of humanoid skill learning, movement primitives have gained much attention because of their compact representation and convenient combination with a myriad of optimization approaches. Among them, a well-known scheme is to use Dynamic Movement Primitives (DMPs) with reinforcement learning (RL) algorithms. While various remarkable results have been reported, skill learning with physical constraints has not been sufficiently investigated. For example, when RL is employed to optimize the robot joint trajectories, the exploration noise can drive the resulting trajectory out of the joint limits. In this paper, we focus on robot skill learning with joint-limit avoidance, introducing novel Constrained Dynamic Movement Primitives (CDMPs). By controlling a set of transformed states (called exogenous states) instead of the original DMP states, CDMPs are capable of maintaining the joint trajectories within the safety limits. We validate CDMPs on the humanoid robot iCub, showing the applicability of our approach.","PeriodicalId":433345,"journal":{"name":"2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126216430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Energy-Efficient Bipedal Gait Pattern Generation via CoM Acceleration Optimization","authors":"Jiatao Ding, Chengxu Zhou, Xiaohui Xiao","doi":"10.1109/HUMANOIDS.2018.8625042","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2018.8625042","url":null,"abstract":"Energy consumption in bipedal walking plays a central role for humanoid robots with limited battery capacity. Studies have revealed that exploiting the allowable Zero Moment Point region (AZR) and Center of Mass (CoM) height variation (CoMHV) can improve energy performance. In general, energetic cost is evaluated by integrating the electric power of all joints. However, this joint-power-based index requires computing joint torques and velocities in advance, which usually involves time-consuming iterative procedures, especially for multi-joint robots. In this work, we propose a CoM-Acceleration-based Optimal Index (CAOI) to synthesize an energetically efficient CoM trajectory. The proposed method is based on the Linear Inverted Pendulum Model, whose energetic cost can be easily measured by the input energy required to drive the point mass along a reference trajectory. We characterize the CoM motion over a single walking cycle and define its energetic cost as Unit Energy Consumption. Based on the CAOI, an analytic solution for CoM trajectory generation is provided. Hardware experiments demonstrate the computational efficiency of the proposed approach and the energetic benefits of exploiting the AZR and CoMHV strategies.","PeriodicalId":433345,"journal":{"name":"2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133930454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
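The Linear Inverted Pendulum Model underlying this work has a simple forward dynamics, and a CoM-acceleration-based cost can be accumulated directly during a rollout. The sketch below is illustrative only: it uses an acceleration-squared integral as a generic stand-in for a CoM-acceleration-based index (not the paper's CAOI definition), and all parameter values are assumptions:

```python
def lip_rollout(x0, xdot0, zmp, z0=0.75, g=9.81, dt=0.005, T=0.5):
    """Forward-integrate the LIP dynamics xddot = (g/z0) * (x - p)
    for a constant ZMP p over horizon T, accumulating an
    acceleration-squared cost as a simple energy proxy.
    Returns final position, final velocity, and accumulated cost."""
    w2 = g / z0              # squared natural frequency of the LIP
    x, xd, cost = x0, xdot0, 0.0
    for _ in range(int(T / dt)):
        xdd = w2 * (x - zmp)         # LIP acceleration
        cost += xdd * xdd * dt       # integrate acceleration-squared
        xd += xdd * dt               # explicit Euler step
        x += xd * dt
    return x, xd, cost

# With the ZMP directly under a stationary CoM, no acceleration is
# needed and the cost is zero; any CoM offset makes it positive.
_, _, c0 = lip_rollout(0.0, 0.0, zmp=0.0)
```

Minimizing such a cost over candidate CoM trajectories is the general flavor of acceleration-based gait optimization; the paper derives an analytic rather than a sampled solution.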
{"title":"Cylindrical Inverted Pendulum Model for Three Dimensional Bipedal Walking","authors":"Runming Zhang, Huaxin Liu, Fei Meng, A. Ming, Qiang Huang","doi":"10.1109/HUMANOIDS.2018.8624984","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2018.8624984","url":null,"abstract":"Energy efficiency of bipedal walking is a crucial topic in humanoid robot research. Rapid computation is also important for online planning and model transplantation. Many dynamic models for characterizing humanoid walking have been developed, such as the conventional three-dimensional inverted pendulum model (IPM) and the linear inverted pendulum model (LIPM). This paper proposes an improved inverted pendulum model constrained to a cylindrical surface (CIPM), combining the computational and energy-efficiency advantages for humanoid walking planning. Walking patterns at different speeds can be generated by the CIPM. The cylindrical-surface constraint results in low coupling between the displacement variables of the tested robot, and the energy consumption is lower than that of patterns generated with the LIPM. The advantages of the CIPM over the IPM and LIPM are demonstrated by mathematical analysis and by simulations of bipedal walking at different speeds.","PeriodicalId":433345,"journal":{"name":"2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130704902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bipedal Locomotion Up Sandy Slopes: Systematic Experiments Using Zero Moment Point Methods","authors":"Jonathan R. Gosyne, Christian M. Hubicki, Xiaobin Xiong, A. Ames, D. Goldman","doi":"10.1109/HUMANOIDS.2018.8624959","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2018.8624959","url":null,"abstract":"Bipedal robotic locomotion in granular media presents a unique set of challenges at the intersection of granular physics and robotic locomotion. In this paper, we perform a systematic experimental study in which bipedal robotic gaits for traversing a sandy slope are empirically designed using Zero Moment Point (ZMP) methods. We are able to implement gaits that allow our 7 degree-of-freedom planar walking robot to ascend slopes with inclines up to 10°. First, we identify a set of kinematic parameters that meets the ZMP stability criterion for uphill walking at a given angle. We then find that further relating the step lengths and center-of-mass heights to specific slope angles through an interpolated fit significantly improves success rates when ascending a sandy slope. Our results provide increased insight into the design, sensitivity, and robustness of gaits on granular material, and into the kinematic changes necessary for stable locomotion on complex media.","PeriodicalId":433345,"journal":{"name":"2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131213120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
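The ZMP stability criterion used for gait design requires the zero moment point to stay inside the support polygon. A minimal one-dimensional sketch of that check, using the standard point-mass relation between CoM acceleration and ZMP (foot dimensions and parameter values are assumptions for illustration):

```python
def zmp_from_lip(x_com, xddot_com, z0=0.8, g=9.81):
    """ZMP of a point mass at constant height z0:
    p = x - (z0 / g) * xddot  (sagittal direction only)."""
    return x_com - (z0 / g) * xddot_com

def zmp_stable(x_com, xddot_com, foot_min, foot_max):
    """ZMP criterion: the zero moment point must lie inside the
    support interval [foot_min, foot_max] (1-D support polygon)."""
    p = zmp_from_lip(x_com, xddot_com)
    return foot_min <= p <= foot_max

# A stationary CoM over the foot is stable; a large CoM acceleration
# pushes the ZMP far outside a 20 cm foot and violates the criterion.
ok = zmp_stable(0.0, 0.0, -0.1, 0.1)
```

On granular media the criterion is necessary but not sufficient (the sand itself can yield), which is why the paper resorts to systematic experiments on top of ZMP-designed gaits.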
{"title":"Redundant Strain Measurement of Link Structures for Improved Stability of Light Weight Torque Controlled Robots","authors":"H. Kaminaga, F. Kanehiro","doi":"10.1109/HUMANOIDS.2018.8624994","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2018.8624994","url":null,"abstract":"Robots that perform useful heavy-duty tasks are gaining attention in the fields of construction, mining, and disaster recovery. For robust accomplishment of such tasks, control of interaction force is an important fundamental functionality. The use of joint torque sensors is the most common method for robots that realize physical interaction. However, torque sensors add weight and reduce joint stiffness, resulting in a loss of mobility performance. In this paper, joint torque sensing based on link-structure strain measurement is presented. Redundant strain gauges, placed in an unstructured manner, are used to measure link deformation, from which all six components of the wrench acting on the link structure are estimated. Joint torque is then extracted from this wrench, which minimizes the cross-talk of the force measurement. Redundancy enhances measurement accuracy and enables fault-tolerant force measurement. Simulation and experimental results of the measurement concept, together with the fault-recovery method, are presented.","PeriodicalId":433345,"journal":{"name":"2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133480836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
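Estimating a wrench from redundant strain readings is, at its core, a linear least-squares problem: readings s relate to the wrench w through a calibration matrix C, and redundancy (more gauges than unknowns) lets noise average out. A sketch reduced to two wrench components for brevity (the real system estimates all six; the calibration matrix here is invented):

```python
def estimate_wrench(C, s):
    """Least-squares estimate of two wrench components w from
    redundant strain readings s = C w, via the normal equations
    (C^T C) w = C^T s, with the 2x2 system solved in closed form."""
    a = b = d = e = f = 0.0
    for (c1, c2), si in zip(C, s):
        a += c1 * c1; b += c1 * c2; d += c2 * c2   # C^T C entries
        e += c1 * si; f += c2 * si                 # C^T s entries
    det = a * d - b * b
    return ((d * e - b * f) / det, (a * f - b * e) / det)

# Three gauges, two wrench components: an overdetermined system.
C = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
true_w = (2.0, -1.0)
s = [c1 * true_w[0] + c2 * true_w[1] for c1, c2 in C]
w_hat = estimate_wrench(C, s)
```

The same least-squares structure supports fault tolerance: if one gauge is detected as faulty, its row can simply be dropped and the remaining (still overdetermined) system re-solved.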
{"title":"Data Dreaming for Object Detection: Learning Object-Centric State Representations for Visual Imitation","authors":"Maximilian Sieb, Katerina Fragkiadaki","doi":"10.1109/HUMANOIDS.2018.8625007","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2018.8625007","url":null,"abstract":"We present a visual imitation learning method that enables robots to imitate demonstrated skills by learning a perceptual reward function based on object-centric feature representations. Our method uses the background configuration of the scene to compute object masks for the objects present. The robotic agent then trains a detector for the relevant objects in the scene via a process we call data dreaming, generating a synthetic dataset of images of various object occlusion configurations using only a small number of background-subtracted ground-truth images. We use the output of the object detector to learn an object-centric visual feature representation. We show that the resulting factorized feature representation, composed of per-object appearance features and cross-object relative locations, enables efficient real-world reinforcement learning that can teach a robot a policy based on a single demonstration after a few minutes of training.","PeriodicalId":433345,"journal":{"name":"2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115227170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Efficient Omni-Directional Capture Stepping for Humanoid Robots from Human Motion and Simulation Data","authors":"Johannes Pankert, Lukas Kaul, T. Asfour","doi":"10.1109/HUMANOIDS.2018.8625039","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2018.8625039","url":null,"abstract":"Two key questions in the context of stepping for push recovery are where to step and how to step there. In this paper we present a fast and computationally lightweight approach for capture stepping of full-sized humanoid robots. To this end, we developed an efficient parametric step motion generator based on dynamic movement primitives (DMPs) learnt from human demonstrations. Simulation-based reinforcement learning (RL) is used to find a mapping from estimated push parameters (push direction and intensity) to step parameters (step location and step execution time) that are fed to the motion generator. Successful omni-directional capture stepping for 89% of the test cases, with pushes from various directions and intensities, is achieved with minimal computational effort after 500 training iterations. We evaluate our method in a dynamic simulation of the ARMAR-4 humanoid robot.","PeriodicalId":433345,"journal":{"name":"2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123587220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Control Scheme and Uncertainty Considerations for Dynamic Balancing of Passive-Ankled Bipeds and Full Humanoids","authors":"Donghyun Kim, Steven Jens Jorgensen, Hochul Hwang, L. Sentis","doi":"10.1109/HUMANOIDS.2018.8624915","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2018.8624915","url":null,"abstract":"We propose a methodology for dynamically balancing passive-ankled bipeds and full humanoids. As dynamic locomotion without ankle-actuation is more difficult than with actuated feet, our control scheme adopts an efficient whole-body controller that combines inverse kinematics, contact-consistent feed-forward torques, and low-level motor position controllers. To understand real-world sensing and controller requirements, we perform an uncertainty analysis on the linear-inverted-pendulum (LIP)-based footstep planner. This enables us to identify necessary hardware and control refinements to demonstrate that our controller can achieve long-term unsupported dynamic balancing on our series-elastic biped, Mercury. Through simulations, we also demonstrate that our control scheme for dynamic balancing with passive-ankles is applicable to full humanoid robots.","PeriodicalId":433345,"journal":{"name":"2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125770519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}