Yue (Sophie) Guo, Boshi Wang, Dana Hughes, M. Lewis, K. Sycara
"Designing Context-Sensitive Norm Inverse Reinforcement Learning Framework for Norm-Compliant Autonomous Agents"
2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 2020. DOI: 10.1109/RO-MAN47096.2020.9223344
Abstract: Human behaviors are often prohibited or permitted by social norms. Autonomous agents that interact with humans therefore need to reason about legal rules and social and ethical norms in order to be trusted and accepted by humans. Inverse Reinforcement Learning (IRL) can be used by autonomous agents to learn norm-compliant behavior from expert demonstrations. However, norms are context-sensitive: different norms are activated in different contexts. For example, the privacy norm is activated when a domestic robot enters a bathroom where a person may be present, but not when it enters the kitchen. Representing all contexts in the robot's state space, and obtaining expert demonstrations under every possible task and context, is extremely challenging. Inspired by recent work on Modularized Normative MDPs (MNMDPs) and earlier work on context-sensitive RL, we propose a new IRL framework, Context-Sensitive Norm IRL (CNIRL). CNIRL treats states and contexts separately and assumes that the expert determines the priority of every possible norm in the environment, where each norm is associated with a distinct reward function. The agent chooses actions to maximize its cumulative reward. We present the CNIRL model and show that its computational complexity scales in the number of norms. We also show, in two experimental scenarios, that CNIRL can handle problems with changing context spaces.

{"title":"Multiple-Robot Mediated Discussion System to support group discussion *","authors":"Shogo Ikari, Y. Yoshikawa, H. Ishiguro","doi":"10.1109/RO-MAN47096.2020.9223444","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223444","url":null,"abstract":"Deep discussions on topics without definite answers are important for society, but they are also challenging to facilitate. Recently, advances in the technology of using robots to facilitate discussions have been made. In this study, we developed a multiple-robot mediated discussion system (m-RMDS) to support discussions by having multiple robots assert their own points and lead a dialogue in a group of human participants. The robots involved the participants in a discussion through asking them for advice. We implemented the m-RMDS in discussions on difficult topics with no clear answers. A within-subject experiment with 16 groups (N=64) was conducted to evaluate the contribution of the m-RMDS. The participants completed a questionnaire about their discussion skills and their self-confidence. Then, they participated in two discussions, one facilitated by the m-RMDS and one that was unfacilitated. They evaluated and compared both experiences across multiple aspects. The participants with low confidence in conducting a discussion evaluated the discussion with m-RMDS as easier to move forward than the discussion without m-RMDS. Furthermore, they reported that they heard more of others' frank opinions during the facilitated discussion than during the unfacilitated one. In addition, regardless of their confidence level, the participants tended to respond that they would like to use the system again. We also review necessary improvements to the system and suggest future applications.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133415793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. A. Arboleda, Max Pascher, Younes Lakhnati, J. Gerken
"Understanding Human-Robot Collaboration for People with Mobility Impairments at the Workplace, a Thematic Analysis"
2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 2020. DOI: 10.1109/RO-MAN47096.2020.9223489
Abstract: Assistive technologies such as human-robot collaboration have the potential to ease the lives of people with physical mobility impairments in social and economic activities. This group currently has lower rates of economic participation, owing to the lack of environments adapted to their capabilities. We take a closer look at the needs and preferences of people with physical mobility impairments in a cooperative human-robot environment at the workplace. Specifically, we aim to design how people with physical mobility impairments can control a robotic arm in manufacturing tasks. We present a case study of a sheltered workshop as a prototype for an institution that employs people with disabilities in manufacturing jobs. Here, we collected data from potential end-users with physical mobility impairments, social workers, and supervisors using a participatory design technique (Future-Workshop). These stakeholders were divided into two groups, primary users (end-users) and secondary users (social workers and supervisors), who took part in two separate sessions. The gathered information was analyzed using thematic analysis to reveal underlying themes across stakeholders. We identified concepts that highlight underlying concerns related to the robot fitting into the social and organizational structure, human-robot synergy, and human-robot problem management. In this paper, we present our findings and discuss the implications of each theme for shaping an inclusive human-robot cooperative workstation for people with physical mobility impairments.

Wenxuan Mou, Martina Ruocco, Debora Zanatto, A. Cangelosi
"When Would You Trust a Robot? A Study on Trust and Theory of Mind in Human-Robot Interactions"
2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 2020. DOI: 10.1109/RO-MAN47096.2020.9223551
Abstract: Trust is a critical issue in human-robot interaction (HRI), as it is central to humans' willingness to accept and use a non-human agent. Theory of Mind (ToM) has been defined as the ability to understand the beliefs and intentions of others that may differ from one's own. Evidence from psychology and HRI suggests that trust and ToM are interconnected and interdependent concepts, as the decision to trust another agent must depend on our own representation of that entity's actions, beliefs, and intentions. However, very few works take the robot's ToM into consideration when studying trust in HRI. In this paper, we investigated whether exposure to the ToM abilities of a robot affects humans' trust towards the robot. To this end, participants played a Price Game with a humanoid robot (Pepper) that was presented as having either low-level or high-level ToM. Specifically, the participants were asked to accept the robot's price evaluations of common objects. The participants' willingness to change their own price judgment of the objects (i.e., to accept the price the robot suggested) was used as the main measure of trust towards the robot. Our experimental results showed that robots presented with high-level ToM abilities were trusted more than robots presented with low-level ToM skills.

Y. Otsuka, Shohei Akita, Kohei Okuoka, Mitsuhiko Kimoto, M. Imai
"PredGaze: An Incongruity Prediction Model for User's Gaze Movement"
2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 2020. DOI: 10.1109/RO-MAN47096.2020.9223525
Abstract: With digital signage and communication robots, digital agents have gradually become common and will continue to spread. It is important that humans can notice the intentions of agents throughout an interaction. This paper focuses on the gaze behavior of an agent and on the phenomenon that, if an agent's gaze behavior differs from a human's expectations, the human instinctively senses an incongruity and feels that there is an intention behind the behavioral change. We propose PredGaze, a model that estimates the incongruity humans experience when gaze behavior shifts away from their expectations. In particular, PredGaze uses the variance in the agent behavior model to express how well humans have sensed the behavioral tendency of the agent. We expect this variance to improve the estimation of incongruity. PredGaze uses three variables to estimate the internal state of how strongly a human senses the agent's intention: error, confidence, and incongruity. To evaluate the effectiveness of PredGaze with these three variables, we conducted an experiment investigating the effect of the timing of gaze behavior change on perceived incongruity. The experimental results indicated significant differences in subjective ratings of the agents' naturalness and of incongruity with the agents, depending on the timing of the agent's change in its gaze behavior.

{"title":"On the Expressivity of a Parametric Humanoid Emotion Model","authors":"Pooja Prajod, K. Hindriks","doi":"10.1109/RO-MAN47096.2020.9223459","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223459","url":null,"abstract":"Emotion expression is an important part of human-robot interaction. Previous studies typically focused on a small set of emotions and a single channel to express them. We developed an emotion expression model that modulates motion, poses and LED features parametrically, using valence and arousal values. This model does not interrupt the task or gesture being performed and hence can be used in combination with functional behavioural expressions. Even though our model is relatively simple, it is just as capable of expressing emotions as other more complicated models that have been proposed in the literature. We systematically explored the expressivity of our model and found that a parametric model using 5 key motion and pose features can be used to effectively express emotions in the two quadrants where valence and arousal have the same sign. As paradigmatic examples, we tested for happy, excited, sad and tired. By adding a second channel (eye LEDs), the model is also able to express high arousal (anger) and low arousal (relaxed) emotions in the two other quadrants. Our work supports other findings that it remains hard to express moderate arousal emotions in these quadrants for both negative (fear) and positive (content) valence.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128616717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bukeikhan Omarali, Brice D. Denoun, K. Althoefer, L. Jamone, Maurizio Valle, I. Farkhatdinov
"Virtual Reality based Telerobotics Framework with Depth Cameras"
2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 2020. DOI: 10.1109/RO-MAN47096.2020.9223445
Abstract: This work describes a virtual reality (VR) based robot teleoperation framework that relies on scene visualization from depth cameras and implements human-robot and human-scene interaction gestures. We suggest that mounting a camera on the slave robot's end-effector (an in-hand camera) allows the operator to achieve better visualization of the remote scene and improve task performance. We experimentally compared the operator's ability to understand the remote environment in different visualization modes: a single external static camera, an in-hand camera, an in-hand plus an external static camera, and an in-hand camera with OctoMap occupancy mapping. The last option provided the operator with the best understanding of the remote environment whilst requiring relatively little communication bandwidth. Consequently, we propose suitable grasping methods compatible with VR-based teleoperation with the in-hand camera. Video demonstration: https://youtu.be/3vZaEykMS_E.

Riccardo De Benedictis, A. Umbrico, Francesca Fracasso, Gabriella Cortellessa, Andrea Orlandini, A. Cesta
"A Two-Layered Approach to Adaptive Dialogues for Robotic Assistance"
2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 2020. DOI: 10.1109/RO-MAN47096.2020.9223605
Abstract: Socially assistive robots should provide users with personalized assistance within a wide range of scenarios, such as hospitals, home and social settings, and private houses. Different people may have different needs, both at the level of cognitive/physical support and in their interaction preferences. Consequently, the type of tasks and the way assistance is delivered can change according to the person with whom the robot is interacting. The authors' long-term research goal is the realization of an advanced cognitive system able to support multiple assistive scenarios with adaptation over time. Here we show how the integration of model-based and model-free AI technologies can contextualize robot assistive behaviors and dynamically decide what to do (the assistive plan) and how to do it (the assistive plan execution), according to the different features and needs of assisted persons. Although the approach is general, the paper specifically focuses on the synthesis of personalized therapies for (cognitive) stimulation of users.

{"title":"Human-Robot Artistic Co-Creation: a Study in Improvised Robot Dance","authors":"Oscar Thörn, Peter Knudsen, A. Saffiotti","doi":"10.1109/RO-MAN47096.2020.9223446","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223446","url":null,"abstract":"Joint artistic performance, like music, dance or acting, provides an excellent domain to observe the mechanisms of human-human collaboration. In this paper, we use this domain to study human-robot collaboration and co-creation. We propose a general model in which an AI system mediates the interaction between a human performer and a robotic performer. We then instantiate this model in a case study, implemented using fuzzy logic techniques, in which a human pianist performs jazz improvisations, and a robot dancer performs classical dancing patterns in harmony with the artistic moods expressed by the human. The resulting system has been evaluated in an extensive user study, and successfully demonstrated in public live performances.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"384 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115974786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"WalkingBot: Modular Interactive Legged Robot with Automated Structure Sensing and Motion Planning","authors":"Meng Wang, Yao Su, Hangxin Liu, Ying-Qing Xu","doi":"10.1109/RO-MAN47096.2020.9223474","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223474","url":null,"abstract":"This paper presents WalkingBot, a modular robot system that allows non-expert users to build a multi-legged robot in various morphologies using a set of building blocks with sensors and actuators embedded. The kinematic model of the built robot is interpreted automatically and revealed in a customized GUI through an integrated hardware and software design, so that users can understand, control, and program the robot easily. A Model Predictive Control (MPC) scheme is introduced to generate a control policy for various motions (e.g. moving forward, turning left) corresponding to the sensed robot structure, affording rich robot motions right after assembling. Targeting different levels of programming skill, two programming methods, visual block programming and events programming, are also presented to enable users to create their own interactive legged robot.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116193195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}