{"title":"Ant Colony Optimization based Multi-Robot Planner for Combined Task Allocation and Path Finding","authors":"Agha Ali Haider Qizilbash, Christian Henkel, Sanaz Mostaghim","doi":"10.1109/UR49135.2020.9144944","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144944","url":null,"abstract":"Nature has inspired many solutions to problems in computer science and, more recently, in robotics. Ant-based algorithms have been successful in solving NP-hard problems such as the traveling salesman problem, and in multi-robot systems they have been used to solve path finding and task allocation problems. In industrial warehouse applications, these problems are often combined, for example when multiple robots need to pick up objects at one location and drop them off at another. The robots must perform these tasks optimally and simultaneously while sharing the floor without collisions. In this paper, we address this combined problem with the objective of obtaining collision-free paths for all robots in a map, with all given pick-up and drop-off tasks assigned among them and the total distance traveled by the robots minimized. We propose a multi-robot planner inspired by ant colony optimization. The planner finds collision-free paths to all tasks using a spread of ants from each robot. Discarding candidates whose paths collide with those of other ants, it rates the tasks according to the total distance traveled. Using this rating over multiple iterations, the planner eventually selects the best task allocation with paths for all robots. This planner, which we call the Ant Colony Optimization based Multi-Robot Planner for Combined Task Allocation and Path Finding (ACTF), is presented for pick-up and drop-off tasks and has been tested against other similar planners, producing promising results.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"213 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122519071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
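The iterate-rate-select loop described in the ACTF abstract can be illustrated with a toy ant-colony task-allocation sketch. This is an assumption-laden stand-in, not the authors' planner: it omits path finding and collision checking, uses Manhattan distance as the travel cost, and all names (`aco_task_allocation`, the pheromone floor) are illustrative.

```python
import random

def aco_task_allocation(robots, tasks, iters=200, evap=0.1, tau_min=0.05, seed=0):
    """Toy ACO-style task allocation (illustrative only): each robot gets one
    task, candidate allocations are sampled via pheromone weights and rated by
    total Manhattan travel distance; the best allocation seen is returned.
    Assumes len(tasks) >= len(robots)."""
    rng = random.Random(seed)
    n, m = len(robots), len(tasks)
    tau = [[1.0] * m for _ in range(n)]  # pheromone: desirability of task j for robot i

    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    best, best_cost = None, float("inf")
    for _ in range(iters):
        remaining = list(range(m))
        assign, cost = [], 0.0
        for i in range(n):
            # sample a task for robot i, weighted by pheromone
            j = rng.choices(remaining, weights=[tau[i][k] for k in remaining])[0]
            remaining.remove(j)
            assign.append(j)
            cost += dist(robots[i], tasks[j])
        if cost < best_cost:
            best, best_cost = assign, cost
        # evaporate with a floor (Max-Min style), then deposit on the sampled solution
        for i in range(n):
            for jj in range(m):
                tau[i][jj] = max(tau[i][jj] * (1.0 - evap), tau_min)
            tau[i][assign[i]] += 1.0 / (1.0 + cost)
    return best, best_cost
```

For two robots at (0, 0) and (5, 5) and tasks at (1, 0) and (5, 4), the loop converges on assigning each robot its nearby task. The real planner additionally discards allocations whose paths collide.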
{"title":"Peer-to-Peer Localization via On-board Sensing for Aerial Flocking","authors":"Fatima Rajab, Samet Güler, J. Shamma","doi":"10.1109/UR49135.2020.9144914","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144914","url":null,"abstract":"The performance of mobile multi-robot systems depends strongly on the mutual awareness of the individual robots, particularly of the positions of the other robots. GPS and motion-capture cameras are commonly used to acquire and communicate robot positions. Such sensing schemes depend on infrastructure and restrict the capabilities of a multi-robot system; e.g., the robots cannot operate in both indoor and outdoor environments. Conversely, peer-to-peer localization algorithms free the robots from such infrastructure: each robot uses on-board sensing to infer the positions of nearby robots. In this approach, it is essential to have a model of the motion of the other robots. We introduce a flocking localization scheme that takes into account the motion behavior exhibited by the other robots. The proposed scheme depends only on the robots’ on-board sensors and computational capabilities and yields a more accurate localization solution than peer-to-peer localization algorithms that do not take the flocking behavior into account. We verify the performance of our scheme in simulations and demonstrate it in experiments on two unmanned aerial vehicles.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128601484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
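The key idea in the flocking-localization abstract — predicting a flockmate's motion from a behavioral model and fusing that prediction with on-board measurements — can be sketched minimally. The cohesion-plus-alignment rule, the gains, and the fixed-gain fusion below are my assumptions for illustration, not the paper's model.

```python
def flocking_predict(pos, vel, neighbors, dt=0.1, k_coh=0.5, k_align=0.5):
    """Predict a flockmate's next position assuming it obeys a simple
    cohesion + alignment flocking rule (a stand-in for the paper's model).
    `neighbors` is a list of (position, velocity) pairs."""
    n = len(neighbors)
    cx = sum(p[0] for p, _ in neighbors) / n
    cy = sum(p[1] for p, _ in neighbors) / n
    vx = sum(v[0] for _, v in neighbors) / n
    vy = sum(v[1] for _, v in neighbors) / n
    ax = k_coh * (cx - pos[0]) + k_align * (vx - vel[0])  # steer toward centroid
    ay = k_coh * (cy - pos[1]) + k_align * (vy - vel[1])  # and toward mean velocity
    new_vel = (vel[0] + ax * dt, vel[1] + ay * dt)
    new_pos = (pos[0] + new_vel[0] * dt, pos[1] + new_vel[1] * dt)
    return new_pos, new_vel

def fuse(pred, meas, alpha=0.7):
    """Blend the model prediction with a noisy on-board measurement;
    alpha weights the prediction (a fixed-gain stand-in for a proper filter)."""
    return tuple(alpha * p + (1 - alpha) * m for p, m in zip(pred, meas))
```

A localization scheme without the motion model would have to rely on the raw measurement alone; the blend is what the abstract credits for the accuracy gain.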
{"title":"Posture Stabilization of Quadruped Walking Robot, PongBot with Disturbances","authors":"No-Sik Kwak, Jung-Yup Kim","doi":"10.1109/UR49135.2020.9144844","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144844","url":null,"abstract":"This paper describes the posture stabilization of the quadruped walking robot PongBot in the presence of disturbances. The proposed posture stabilization is composed of two sub-controllers: a body balance controller and a body moving controller. The body balance controller operates at all times in the background, while the activation of the body moving controller is determined by the magnitude of the disturbances. The body balance controller, which is mainly effective against small disturbances, keeps the body level through optimal distribution of the vertical ground reaction forces on the feet. The body moving controller, which handles large disturbances, performs posture stabilization through horizontal body movement and is activated together with the body balance controller. Finally, the performance of the proposed method was experimentally verified using the quadruped walking robot PongBot under changing ground slope and external impacts.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127770694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Arc-length based Two-step Robot Motion Teaching Method for Dynamic Tasks","authors":"Donghyeon Lee, Ho-Sik Choi, W. Chung, Keehoon Kim","doi":"10.1109/UR49135.2020.9144982","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144982","url":null,"abstract":"In this study, a new robot motion teaching method is proposed for dynamic robotic tasks. In the proposed teaching framework, the path geometry definition and the time-parametrization process are separated. For the path geometry definition, waypoint-based teaching is used to secure motion accuracy and safety, and teaching-by-teleoperation is used to easily extract intuitive human motion. A direct motion dynamics transfer algorithm is developed to project the human motion dynamics onto the pre-defined motion path according to the normalized arc lengths of the two paths. The overall teaching procedure is empirically validated with a 6-DoF collaborative robot, and the results show that a dynamic motion such as pepper sprinkling can easily be taught by setting waypoints and swinging a handheld motion controller. This study suggests an intuitive and practical robot motion teaching method that is especially powerful for dynamic robotic tasks.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121755018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
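The core of the two-step scheme above — re-timing a pre-defined waypoint path so it matches the pace of a teleoperated demonstration at equal normalized arc length — can be sketched on 2-D polylines. The function names and the linear time interpolation are my assumptions, not the paper's exact algorithm.

```python
import bisect
import math

def arclength_param(path):
    """Cumulative normalized arc length s in [0, 1] along a 2-D polyline."""
    d = [0.0]
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        d.append(d[-1] + math.hypot(x1 - x0, y1 - y0))
    return [v / d[-1] for v in d]

def transfer_timing(src_path, src_times, dst_path):
    """Assign timestamps to dst_path waypoints so that equal normalized arc
    length is reached at equal times, mimicking the demonstrated pace."""
    s_src = arclength_param(src_path)
    s_dst = arclength_param(dst_path)
    times = []
    for s in s_dst:
        # locate the bracketing source samples, clamped to valid segments
        i = min(max(bisect.bisect_left(s_src, s), 1), len(s_src) - 1)
        # linearly interpolate time within the bracketing segment
        w = (s - s_src[i - 1]) / (s_src[i] - s_src[i - 1])
        times.append(src_times[i - 1] + w * (src_times[i] - src_times[i - 1]))
    return times
```

Because only normalized arc length is matched, the demonstration and the taught path may have entirely different geometry and total length, which is what lets a hand swing re-time a pepper-sprinkling path.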
{"title":"Kinesthetic sensing of hole position by 3-finger gripper","authors":"Myoung-Su Choi, Dong-Hyuk Lee, Jae-Han Park, J. Bae","doi":"10.1109/UR49135.2020.9144928","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144928","url":null,"abstract":"In this study, we present a novel method for ‘peg-in-hole’ assembly using a multi-joint three-fingered gripper. In previous studies, we conducted peg-in-hole assembly based on a spiral search method, without knowing the exact hole position. However, since the search time varies greatly with the error between the peg and the hole, it was difficult to estimate the tact time of the assembly task. In this paper, in order to reduce the deviation of the hole search time, we propose a method that estimates the direction of the hole before conducting the search motion. The direction of the hole is estimated by ‘kinesthetic sensing’ using the fingers grasping the peg. The feasibility of the proposed method has been verified through peg-in-hole assembly experiments using the three-fingered robot gripper.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122718048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accuracy Enhancement in Face-pose Estimation Network Using Incrementally Updated Face-shape Parameters","authors":"Makoto Sei, A. Utsumi, H. Yamazoe, Joo-Ho Lee","doi":"10.1109/UR49135.2020.9144866","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144866","url":null,"abstract":"In this paper, we pursue the refinement of a face-pose estimation method using incrementally updated face-shape parameters. Network-based algorithms generally rely on an off-line training process that uses a large dataset, and a trained network usually works in a one-shot manner, i.e., each input image is processed one by one with a static network. On the other hand, in many practical applications we can expect a great advantage from having sequential observations rather than single-image observations. In such cases, the dynamic use of multiple observations can improve system performance. In our previous study, we therefore introduced an incremental personalization mechanism using sequential observations into a network-based face-pose estimation method, in which the averaged parameters from iterative face-shape estimations are used for face-pose estimation. The head-pose estimation accuracy of that method was about 2 degrees. In this paper, we conduct an experiment to examine the error distribution of the face-shape estimation and discuss an effective incremental personalization mechanism that updates the face-shape parameters based on this error distribution.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127443033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Food-arrangement Policies from Raw Images with Generative Adversarial Imitation Learning","authors":"Junki Matsuoka, Yoshihisa Tsurumine, Yuhwan Kwon, Takamitsu Matsubara, T. Shimmura, S. Kawamura","doi":"10.1109/UR49135.2020.9144988","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144988","url":null,"abstract":"In this paper, we tackle the problem of food-arrangement planning with an imitation learning approach from expert demonstrations. Specifically, we utilize a Generative Adversarial Imitation Learning framework, which allows an agent to learn near-optimal behaviors from a few expert demonstrations and self-exploration, without an explicit reward function. To evaluate our method, a food-arrangement simulator for the Japanese cuisine \"Tempura\" was developed with 3D-scanned tempura ingredients, and experiments were conducted for performance evaluation. The experimental results demonstrate that our method can learn expert-like arrangement policies from bird’s-eye-view raw images of plates without manually designing a reward function or requiring a massive amount of expert demonstration data. Moreover, we confirmed that the learned policies are more robust against arrangement errors and environmental changes than a baseline policy trained with supervised learning.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130665275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic, Robust Locomotion for a Non-Anthropomorphic Biped","authors":"M. Ahn, D. Hong","doi":"10.1109/UR49135.2020.9144959","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144959","url":null,"abstract":"This work presents a dynamic walking controller for a high-bandwidth torque-controlled non-anthropomorphic bipedal robot. A simplified model with passive stability characteristics is leveraged along with feedback linearization techniques, actively adjusted footstep positions, and swing-leg trajectories that minimize the creation of additional moments, to make the robot follow a desired velocity under different environmental circumstances. The approach’s robustness to external forces and imperfect terrain (e.g., stairs, obstacles, ramps) is demonstrated in simulation. The dynamic stability of the approach is analyzed through a limit-cycle analysis.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131238381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Object Mobility classification based Visual SLAM in Dynamic Environments","authors":"Huayan Zhang, Tianwei Zhang, Yang Li, Lei Zhang, Wanpeng Wang","doi":"10.1109/UR49135.2020.9144979","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144979","url":null,"abstract":"Most existing visual odometry methods cannot work in dynamic environments, since dynamic objects lead to wrong or uncertain feature associations. In this paper, we introduce a learning-based object classification front end to recognize and remove dynamic objects, and thereby ensure the robustness of our ego-motion estimator in highly dynamic environments. Moreover, we newly classify environmental objects into three classes: static, movable, and dynamic. This processing not only enables ego-motion estimation in dynamic environments but also leads to clean and complete mapping results. The experimental results indicate that the proposed method outperformed other state-of-the-art SLAM solutions in both dynamic and static indoor environments.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"321 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133363833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
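The three-way mobility split in the SLAM abstract amounts to routing features by the class of the object they lie on. A minimal sketch, where the label set and routing policy are my assumptions (the paper's classifier is learning-based, not a lookup table):

```python
# Illustrative mobility classes; the actual paper learns these from data.
MOBILITY = {"wall": "static", "floor": "static", "chair": "movable",
            "book": "movable", "person": "dynamic", "car": "dynamic"}

def split_features(features):
    """Split (object_label, feature) pairs by mobility class: static features
    feed ego-motion estimation, movable ones only the map, and dynamic ones
    are discarded. Unknown labels are treated as dynamic (conservative)."""
    buckets = {"static": [], "movable": [], "dynamic": []}
    for label, feat in features:
        buckets[MOBILITY.get(label, "dynamic")].append(feat)
    return buckets
```

Keeping movable objects out of the pose estimate while still mapping them is what yields the "clean and complete" maps the abstract claims.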
{"title":"Cooperative Multi-Robot Control for Monitoring an Expanding Flood Area","authors":"Yang Bai, Koki Asami, M. Svinin, E. Magid","doi":"10.1109/UR49135.2020.9144931","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144931","url":null,"abstract":"In this paper, a control strategy is developed for tracking the propagation of an expanding flood zone using a group of unmanned aerial vehicles (UAVs). The strategy consists of two stages: a caging stage and a covering stage. In the caging stage, one group of UAVs, referred to as boundary drones, is evenly distributed along the boundary of the flood zone, tracking its propagation. In the covering stage, another group of UAVs, referred to as inner drones, is allocated within the interior of the flood zone, covering the region as much as possible with minimal overlap of the UAVs’ fields of view. Corresponding control algorithms are proposed for both types of UAVs to implement the control strategy. The feasibility of the control strategy is verified in simulations.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116330717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
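For a circular flood boundary, the caging stage's even distribution of boundary drones reduces to equally spaced angles. This is a toy sketch under a circle assumption; the paper's boundary is a general expanding region and the names here are illustrative.

```python
import math

def cage_positions(center, radius, n_drones, phase=0.0):
    """Place n_drones evenly on a circular flood boundary (toy stand-in for
    the caging stage); re-call with a larger radius as the flood expands."""
    return [
        (center[0] + radius * math.cos(phase + 2.0 * math.pi * k / n_drones),
         center[1] + radius * math.sin(phase + 2.0 * math.pi * k / n_drones))
        for k in range(n_drones)
    ]
```

Recomputing the positions each control step with the current boundary radius gives the tracking behavior; the covering stage would then fill the interior with the inner drones.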