{"title":"Trajectory Optimization of Continuum arm robots","authors":"Ritesh Yadav, B. K. Rout","doi":"10.1109/RO-MAN46459.2019.8956407","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956407","url":null,"abstract":"Rigid manipulators are applicable for a very structured environment and standard applications. For real world applications, continuum manipulators are used which has required high degrees of freedom, and compliance. The current work focus on the trajectory optimization of continuum robot for a specified application to minimize energy usage. To achieve this task Lagrangian mechanics is used to develop the mathematical model of the continuum robot with the payload. In this case the trajectory optimization has been carried out by treating the problem as a nested optimization problem. The outer optimization task is to optimize the trajectory using minimization of input force as primary goal where initial and final configurations of the arm are already available. Here, Genetic Algorithm is used as the optimizer for the selected tasks. The purpose of inner optimization loop is to find the feasible inverse solution for the manipulator that is required to calculate input forces which is further required to optimize the trajectory of the arm. A constrained non-linear optimization algorithm is used for the task. The optimization results show 30-80 % decrease in the input force required for the specified trajectories of the arm. The current paper shows that various tasks can be optimized using the formulated strategy to save the energy required by the arm to execute specified task.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122051781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Your instruction may be crisp, but not clear to me!","authors":"Pradip Pramanick, Chayan Sarkar, Indrajit Bhattacharya","doi":"10.1109/RO-MAN46459.2019.8956431","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956431","url":null,"abstract":"The number of robots deployed in our daily surroundings is ever-increasing. Even in the industrial setup, the use of coworker robots is increasing rapidly. These cohabitant robots perform various tasks as instructed by co-located human beings. Thus, a natural interaction mechanism plays a big role in the usability and acceptability of the robot, especially by a non-expert user. The recent development in natural language processing (NLP) has paved the way for chatbots to generate an automatic response for users’ query. A robot can be equipped with such a dialogue system. However, the goal of human-robot interaction is not focused on generating a response to queries, but it often involves performing some tasks in the physical world. Thus, a system is required that can detect user intended task from the natural instruction along with the set of pre- and post-conditions. In this work, we develop a dialogue engine for a robot that can classify and map a task instruction to the robot’s capability. If there is some ambiguity in the instructions or some required information is missing, which is often the case in natural conversation, it asks an appropriate question(s) to resolve it. The goal is to generate minimal and pin-pointed queries for the user to resolve an ambiguity. We evaluate our system for a telepresence scenario where a remote user instructs the robot for various tasks. Our study based on 12 individuals shows that the proposed dialogue strategy can help a novice user to effectively interact with a robot, leading to satisfactory user experience.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127077364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Linear Parameter-Varying Identification of the EMG–Force Relationship of the Human Arm","authors":"Mattia Pesenti, Z. Alkhoury, Maciej Bednarczyk, Hassan Omran, B. Bayle","doi":"10.1109/RO-MAN46459.2019.8956388","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956388","url":null,"abstract":"In this paper, we present a novel identification approach to model the EMG–Force relationship of the human arm, reduced to a single degree of freedom (1-DoF) for simplicity. Specifically, we exploit the Linear Parameter Varying (LPV) framework. The inputs of the model are the electromyographic (EMG) signals acquired on two muscles of the upper arm, biceps brachii and triceps brachii, and two muscles of the forearm, brachioradialis and flexor carpi radialis. The output of the model is the force produced at the hand actuating the elbow. Because of the position-dependency of the system, the elbow angle is used as scheduling signal for the LPV model. Accurate modeling of the human arm with this approach opens new possibilities in terms of robot control for physical Human-Robot Interaction and rehabilitation robotics.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130539103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effective Human-Robot Collaboration in near symmetry collision scenarios","authors":"Grimaldo Silva, A. Olivier, A. Crétual, J. Pettré, Thierry Fraichard","doi":"10.1109/RO-MAN46459.2019.8956360","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956360","url":null,"abstract":"Recent works in the domain of Human-Robot Motion (HRM) attempted to plan collision avoidance behavior that accounts for cooperation between agents. Cooperative collision avoidance between humans and robots should be conducted under several factors such as speed, heading and also human attention and intention. Based on some of these factors, people decide their crossing order during collision avoidance. However, whenever situations arise in which the choice crossing order is not consistent for people, the robot is forced to account for the possibility that both agents will assume the same role i.e. a decision detrimental to collision avoidance. In our work we evaluate the boundary that separates the decision to avoid collision as first or last crosser. Approximating the uncertainty around this boundary allows our collision avoidance strategy to address this problem based on the insight that the robot should plan its collision avoidance motion in such a way that, even if agents, at first, incorrectly choose the same crossing order, they would be able to unambiguously perceive their crossing order on their following collision avoidance action.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"203 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123044069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human Interaction and Improving Knowledge through Collaborative Tour Guide Robots","authors":"Anna-Maria Velentza, D. Heinke, J. Wyatt","doi":"10.1109/RO-MAN46459.2019.8956372","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956372","url":null,"abstract":"In the coming years tour guide robots will be widely used in museums and exhibitions. Therefore, it is important to identify how these new museum guides can optimally interact with visitors. In this paper, we introduce the idea of two collaborative tour guide robots. We have been inspired by evidence from cognitive studies stating that people remember more when they receive information from two different human speakers. Our collaborative tour guides were benchmarked against single robot guides. Our study initially proved, through real-world experiments, previous proposals stating that the personality of the robot affects the human learning process; our results demonstrate that people remember significantly more information when they are guided by a cheerful robot than when their guide is a serious one. Moreover, another important outcome of our study is that our visitors tend to like more our collaborative robots, than any referenced single robot, as demonstrated by the higher scores in the aesthetic-related questions. Hence our results suggest that a cheerful robot is more suitable for learning purposes while two robots are more suitable for entertainment purposes.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116083113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quantitative Evaluation of Clothing Assistance using Whole-Body Robotic Simulator of the Elderly","authors":"R. P. Joshi, T. Shibata, Kunihiro Ogata, Y. Matsumoto","doi":"10.1109/RO-MAN46459.2019.8956308","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956308","url":null,"abstract":"The recent demographic trend across developed nations shows a dramatic increase in the aging population, fallen fertility rates and a shortage of caregivers. Robotic solutions to clothing assistance can significantly improve the Activity of Daily Living (ADL) for the elderly and disabled. We have developed a clothing assistance robot using dual arms and conducted many successful demonstrations with healthy people. It was, however, impossible to systematically evaluate its performance because human arms are not visible due to occlusion from a shirt and robot during dressing. To address this problem, we propose to use another robot, Whole-Body Robotic Simulator of the Elderly that can mimic the posture and movement of the elderly persons during the dressing task. The dressing task is accomplished by utilizing Dynamic Movement Primitives (DMP) wherein the control points of DMP are determined by applying forward kinematics on the robotic simulator. The experimental results show the plausibility of our approach.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122480932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented Reality as a Medium for Human-Robot Collaborative Tasks","authors":"S. M. Chacko, V. Kapila","doi":"10.1109/RO-MAN46459.2019.8956466","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956466","url":null,"abstract":"This paper presents a novel augmented reality (AR) interaction method that allows a robot to perform manipulation of unknown physical objects in a human-robot collaborative working environment. A mobile AR application is developed to determine and communicate, in real-time, the position, orientation, and dimension of any random object in a robot manipulator’s workspace to perform pick-and-place operations. The proposed method is based on estimating the pose and size of the object by means of an AR virtual element superimposed on the live view of the real object. In particular, a semi-transparent AR element is created and manipulated through touch screen interactions to match with the pose and scale of the physical object to provide the information about that object. The resulting data is communicated to the robot manipulator to perform pick-and-place tasks. In this way, the AR virtual element acts as a medium of communication between a human and a robot. The performance of the proposed AR interface is assessed by conducting multiple trials with random objects, and it is observed that the robot successfully accomplishes tasks communicated through the AR virtual elements. The proposed interface is also tested with 20 users to determine the quality of user experience, followed by a poststudy survey. The participants reported that the AR interface is intuitive and easy to operate for manipulating physical objects of various sizes and shapes.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131097100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Does a friendly robot make you feel better?","authors":"Peter A. M. Ruijten, R. Cuijpers","doi":"10.1109/RO-MAN46459.2019.8956368","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956368","url":null,"abstract":"As robots are taking a more prominent role in our daily lives, it becomes increasingly important to consider how their presence influences us. Several studies have investigated effects of robot behavior on the extent to which that robot is positively evaluated. Likewise, studies have shown that the emotions a robot shows tend to be contagious: a happy robot makes us feel happy as well. It is unknown, however, whether the affect that people experience while interacting with a robot also influences their evaluation of the robot. This study aims to discover whether people’s affective and evaluative responses to a social robot are related. Results show that affective responses and evaluations are related, and that these effects are strongest when a robot shows meaningful motions. These results are consistent with earlier findings in terms of how people evaluate social robots.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127739844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combining Interactive Spatial Augmented Reality with Head-Mounted Display for End-User Collaborative Robot Programming","authors":"Daniel Bambusek, Zdenek Materna, Michal Kapinus, V. Beran, P. Smrz","doi":"10.1109/RO-MAN46459.2019.8956315","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956315","url":null,"abstract":"This paper proposes an intuitive approach for collaborative robot end-user programming using a combination of interactive spatial augmented reality (ISAR) and headmounted display (HMD). It aims to reduce user’s workload and to let the user program the robot faster than in classical approaches (e.g. kinesthetic teaching). The proposed approach, where user is using a mixed-reality HMD – Microsoft HoloLens – and touch-enabled table with SAR projected interface as input devices, is compared to a baseline approach, where robot’s arms and a touch-enabled table are used as input devices. Main advantages of the proposed approach are the possibility to program the collaborative workspace without the presence of the robot, its speed in comparison to the kinesthetic teaching and an ability to quickly visualize learned program instructions, in form of virtual objects, to enhance the users’ orientation within those programs. The approach was evaluated on a set of 20 users using the within-subject experiment design. Evaluation consisted of two pick and place tasks, where users had to start from the scratch as well as to update the existing program. Based on the experiment results, the proposed approach is better in qualitative measures by 33.84% and by 28.46% in quantitative measures over the baseline approach for both tasks.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115344322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trajectory based Deep Policy Search for Quadrupedal Walking","authors":"Shishir N. Y. Kolathaya, A. Ghosal, B. Amrutur, Ashish Joglekar, Suhan Shetty, Dhaivat Dholakiya, Abhimanyu, Aditya Sagi, Shounak Bhattacharya, Abhik Singla, S. Bhatnagar","doi":"10.1109/RO-MAN46459.2019.8956369","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956369","url":null,"abstract":"In this paper, we explore a specific form of deep reinforcement learning (D-RL) technique for quadrupedal walking—trajectory based policy search via deep policy networks. Existing approaches determine optimal policies for each time step, whereas we propose to determine an optimal policy for each walking step. We justify our approach based on the fact that animals including humans use “low” dimensional trajectories at the joint level to realize walking. We will construct these trajectories by using Bézier polynomials, with the coefficients being determined by a parameterized policy. In order to maintain smoothness of the trajectories during step transitions, hybrid invariance conditions are also applied. The action is computed at the beginning of every step, and a linear PD control law is applied to track at the individual joints. After each step, reward is computed, which is then used to update the new policy parameters for the next step. After learning an optimal policy, i.e., an optimal walking gait for each step, we then successfully play them in a custom built quadruped robot, Stoch 2, thereby validating our approach.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115316593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}