{"title":"A Participatory Design Process of a Robotic Tutor of Assistive Sign Language for Children with Autism","authors":"Minja Axelsson, M. Racca, Daryl Weir, V. Kyrki","doi":"10.1109/RO-MAN46459.2019.8956309","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956309","url":null,"abstract":"We present the participatory design process of a robotic tutor of assistive sign language for children with autism spectrum disorder (ASD). Robots have been used in autism therapy, and to teach sign language to neurotypical children. The application of teaching assistive sign language — the most common form of assistive and augmentative communication used by people with ASD — is novel. The robot’s function is to prompt children to imitate the assistive signs that it performs. The robot was therefore co-designed to appeal to children with ASD, taking into account the characteristics of ASD during the design process: impaired language and communication, impaired social behavior, and narrow flexibility in daily activities. To accommodate these characteristics, a multidisciplinary team defined design guidelines specific to robots for children with ASD, which were followed in the participatory design process. With a pilot study where the robot prompted children to imitate nine assistive signs, we found support for the effectiveness of the design. The children successfully imitated the robot and kept their focus on it, as measured by their eye gaze. Children and their companions reported positive experiences with the robot, and companions evaluated it as potentially useful, suggesting that robotic devices could be used to teach assistive sign language to children with ASD.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134624455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Power to Persuade: a study of Social Power in Human-Robot Interaction","authors":"Mojgan Hashemian, Ana Paiva, S. Mascarenhas, P. A. Santos, R. Prada","doi":"10.1109/RO-MAN46459.2019.8956298","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956298","url":null,"abstract":"Recent advances on Social Robotics raise the question whether a social robot can be used as a persuasive agent. To date, a body of literature has been performed using various approaches to answer this research question, ranging from the use of non-verbal behavior to the exploration of different embodiment characteristics. In this paper, we investigate the role of social power for making social robots more persuasive. Social power is defined as one’s ability to influence another to do something which s/he would not do without the presence of such power. Different theories classify alternative ways to achieve social power, such as providing a reward, using coercion, or acting as an expert. In this work, we explored two types of persuasive strategies that are based on social power (specifically Reward and Expertise) and created two social robots that would employ such strategies. To examine the effectiveness of these strategies we performed a user study with 51 participants using two social robots in an adversarial setting in which both robots try to persuade the user on a concrete choice. The results show that even though each of the strategies caused the robots to be perceived differently in terms of their competence and warmth, both were similarly persuasive.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131649182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PIVO: Probabilistic Inverse Velocity Obstacle for Navigation under Uncertainty","authors":"P. N. Jyotish, Yash Goel, A. V. S. S. B. Kumar, K. Krishna","doi":"10.1109/RO-MAN46459.2019.8956406","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956406","url":null,"abstract":"In this paper, we present an algorithmic framework which computes the collision-free velocities for the robot in a human shared dynamic and uncertain environment. We extend the concept of Inverse Velocity Obstacle (IVO) to a probabilistic variant to handle the state estimation and motion uncertainties that arise due to the other participants of the environment. These uncertainties are modeled as non-parametric probability distributions. In our PIVO: Probabilistic Inverse Velocity Obstacle, we propose the collision-free navigation as an optimization problem by reformulating the velocity conditions of IVO as chance constraints that takes the uncertainty into account. The space of collision-free velocities that result from the presented optimization scheme are associated to a confidence measure as a specified probability. We demonstrate the efficacy of our PIVO through numerical simulations and demonstrating its ability to generate safe trajectories under highly uncertain environments.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133410750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Path Planning through Tight Spaces for Payload Transportation using Multiple Mobile Manipulators","authors":"Rahul Tallamraju, V. Sripada, S. Shah","doi":"10.1109/RO-MAN46459.2019.8956426","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956426","url":null,"abstract":"In this paper, the problem of path planning through tight spaces, for the task of spatial payload transportation, using a formation of mobile manipulators is addressed. Due to the high dimensional configuration space of the system, efficient and geometrically stable path planning through tight spaces is challenging. We resolve this by planning the path for the system in two phases. First, an obstacle-free trajectory in $mathbb{R}^{3}$ for the payload being transported is determined using RRT. Next, near-energy optimal and quasi-statically stable paths are planned for the formation of robots along this trajectory using non-linear multi-objective optimization. We validate the proposed approach in simulation experiments and compare different multi-objective optimization algorithms to find energy optimal and geometrically stable robot path plans.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129498138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Teaching a Robot how to Spatially Arrange Objects: Representation and Recognition Issues","authors":"Luca Buoncompagni, F. Mastrogiovanni","doi":"10.1109/RO-MAN46459.2019.8956457","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956457","url":null,"abstract":"This paper introduces a technique to teach robots how to represent and qualitatively interpret perceived scenes in tabletop scenarios. To this aim, we envisage a 3-step human-robot interaction process, in which $(i)$ a human shows a scene to a robot, $(ii)$ the robot memorises a symbolic scene representation (in terms of objects and their spatial arrangement), and (iii) the human can revise such a representation, if necessary, by further interacting with the robot; here, we focus on steps i and ii. Scene classification occurs at a symbolic level, using ontology-based instance checking and subsumption algorithms. Experiments showcase the main properties of the approach, i.e., detecting whether a new scene belongs to a scene class already represented by the robot, or otherwise creating a new representation with a one shot learning approach, and correlating scenes from a qualitative standpoint to detect similarities and differences in order to build a scene hierarchy.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131100343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Probabilistic obstacle avoidance and object following: An overlap of Gaussians approach","authors":"Dhaivat Bhatt, Akash Garg, Bharath Gopalakrishnan, K. Krishna","doi":"10.1109/RO-MAN46459.2019.8956314","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956314","url":null,"abstract":"Autonomous navigation and obstacle avoidance are core capabilities that enable robots to execute tasks in the real world. We propose a new approach to collision avoidance that accounts for uncertainty in the states of the agent and the obstacles. We first demonstrate that measures of entropy— used in current approaches for uncertainty-aware obstacle avoidance—are an inappropriate design choice. We then propose an algorithm that solves an optimal control sequence with a guaranteed risk bound, using a measure of overlap between the two distributions that represent the state of the robot and the obstacle, respectively. Furthermore, we provide closed form expressions that can characterize the overlap as a function of the control input. The proposed approach enables model-predictive control framework to generate bounded-confidence control commands. An extensive set of simulations have been conducted in various constrained environments in order to demonstrate the efficacy of the proposed approach over the prior art. We demonstrate the usefulness of the proposed scheme under tight spaces where computing risk-sensitive control maneuvers is vital. We also show how this framework generalizes to other problems, such as object-following.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123977294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"End-User Programming of Low-and High-Level Actions for Robotic Task Planning","authors":"Y. Liang, D. Pellier, H. Fiorino, S. Pesty","doi":"10.1109/RO-MAN46459.2019.8956327","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956327","url":null,"abstract":"Programming robots for general purpose applications is extremely challenging due to the great diversity of end-user tasks ranging from manufacturing environments to personal homes. Recent work has focused on enabling end-users to program robots using Programming by Demonstration. However, teaching robots new actions from scratch that can be reused for unseen tasks remains a difficult challenge and is generally left up to robotic experts. We propose iRoPro, an interactive Robot Programming framework that allows end-users to teach robots new actions from scratch and reuse them with a task planner. In this work we provide a system implementation on a two-armed Baxter robot that (i) allows simultaneous teaching of low-and high-level actions by demonstration, (ii) includes a user interface for action creation with condition inference and modification, and (iii) allows creating and solving previously unseen problems using a task planner for the robot to execute in real-time. We evaluate the generalisation power of the system on six benchmark tasks and show how taught actions can be easily reused for complex tasks. We further demonstrate its usability with a user study (N=21), where users completed eight tasks to teach the robot new actions that are reused with a task planner. The study demonstrates that users with any programming level and educational background can easily learn and use the system.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121312395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"“You Are Doing so Great!” – The Effect of a Robot’s Interaction Style on Self-Efficacy in HRI","authors":"Setareh Zafari, Isabel Schwaninger, Matthias Hirschmanner, Christina Schmidbauer, A. Weiss, S. Koeszegi","doi":"10.1109/RO-MAN46459.2019.8956437","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956437","url":null,"abstract":"People form mental models about robots’ behavior and intention as they interact with them. The aim of this paper is to evaluate the effect of different interaction styles on self-efficacy in human-robot interaction (HRI), people’s perception of the robot, and task engagement. We conducted a user study in which a social robot assists people verbally while building a house of cards. Data from our experimental study revealed that people engaged longer in the task while interacting with a robot that provides person related feedback than with a robot that gives no person or task related feedback. Moreover, people interacting with a robot with a person-oriented interaction style reported a higher self-efficacy in HRI, perceived higher agreeableness of the robot and found the interaction less frustrating, as compared to a robot with a task-oriented interaction style. This suggests that a robot’s interaction style can be considered as a key factor for increasing people’s perceived self-efficacy in HRI, which is essential for establishing trust and enabling Human-robot collaboration.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128369029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic Calibration between a Mobile Robot and SLAM Device for Navigation","authors":"Ryoichi Ishikawa, Takeshi Oishi, K. Ikeuchi","doi":"10.1109/RO-MAN46459.2019.8956356","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956356","url":null,"abstract":"In this paper, we propose a dynamic calibration between a mobile robot and a device using simultaneous localization and mapping (SLAM) technology, which we termed as the SLAM device, for a robot navigation system. The navigation framework assumes loose mounting of SLAM device for easy use and requires an online adjustment to remove localization errors. The online adjustment method dynamically corrects not only the calibration errors between the SLAM device and the part of the robot to which the device is attached but also the robot encoder errors by calibrating the whole body of the robot. The online adjustment assumes that the information of the external environment and shape information of the robot are consistent. In addition to the online adjustment, we also present an offline calibration between a robot and device. The offline calibration is motion-based and we clarify the most efficient method based on the number of degrees-of-freedom of the robot movement. Our method can be easily used for various types of robots with sufficiently precise localization for navigation. In the experiments, we confirm the parameters obtained via two types of offline calibration based on the degree of freedom of robot movement. We also validate the effectiveness of the online adjustment method by plotting localized position errors during a robots intense movement. Finally, we demonstrate the navigation using a SLAM device.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128634942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generation of expressive motions for a tabletop robot interpolating from hand-made animations","authors":"Gonzalo Mier, F. Caballero, Keisuke Nakamura, L. Merino, R. Gomez","doi":"10.1109/RO-MAN46459.2019.8956246","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956246","url":null,"abstract":"Motion is an important modality for human-robot interaction. Besides a fundamental component to carry out tasks, through motion a robot can express intentions and expressions as well. In this paper, we focus on a tabletop robot in which motion, among other modalities, is used to convey expressions. The robot incorporates a set of pre-programmed motion animations that show different expressions with various intensities. These have been created by designers with expertise in animation. The objective in the paper is to analyze if these examples can be used as demonstrations, and combined by the robot to generate additional richer expressions. Challenges are the representation space used, and the scarce number of examples. The paper compares three different learning from demonstration approaches for the task at hand. A user study is presented to evaluate the resultant new expressive motions automatically generated by combining previous demonstrations.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125228243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}