Title: Personalised self-explanation by robots: The role of goals versus beliefs in robot-action explanation for children and adults
Authors: Frank Kaptein, J. Broekens, K. Hindriks, Mark Antonius Neerincx
DOI: https://doi.org/10.1109/ROMAN.2017.8172376
In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Published 2017-12-08.
Abstract: A good explanation takes the user who receives it into account. We aim to better understand user preferences and the differences between children and adults who receive explanations from a robot. We implemented a Nao robot as a belief-desire-intention (BDI)-based agent and explained its actions using two different explanation styles, both based on how humans explain and justify their actions to each other. One style communicates the beliefs that give context information on why the agent performed the action; the other communicates the goals that inform the user of the agent's desired state when performing the action. We conducted a user study (19 children, 19 adults) in which a Nao robot performed actions to support type 1 diabetes mellitus management, and investigated the preference of children and adults for goal- versus belief-based action explanations. From this, we learned that adults have a significantly higher tendency to prefer goal-based action explanations. This work is a necessary step in addressing the challenge of providing personalised explanations in human-robot and human-agent interaction.

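The two explanation styles can be illustrated with a minimal sketch. Everything here is our own illustration, not the paper's implementation: the action, beliefs, goal, and function names are hypothetical. A BDI-style agent stores, per action, the beliefs that contextualise it and the goal it serves, and renders either into a sentence.

```python
# Illustrative sketch of belief-based vs goal-based action explanations
# for a BDI-style agent. Action, beliefs, and goal below are invented
# examples, not the paper's actual system.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    beliefs: list  # context that triggered the action
    goal: str      # desired state the action serves

def explain(action: Action, style: str) -> str:
    if style == "belief":
        # Belief-based: explain via the context that triggered the action.
        context = " and ".join(action.beliefs)
        return f"I {action.name} because {context}."
    if style == "goal":
        # Goal-based: explain via the desired state the action serves.
        return f"I {action.name} because I want {action.goal}."
    raise ValueError(f"unknown style: {style}")

quiz = Action(
    name="suggested a quiz about carbohydrates",
    beliefs=["you said you were unsure about food choices"],
    goal="you to feel confident managing your diet",
)
print(explain(quiz, "belief"))
print(explain(quiz, "goal"))
```

The study's finding then amounts to a per-user choice of the `style` argument: goal-based for most adults, more mixed for children.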
Title: Pardon the rude robot: Social cues diminish reactance to high controlling language
Authors: A. S. Ghazali, Jaap Ham, E. Barakova, P. Markopoulos
DOI: https://doi.org/10.1109/ROMAN.2017.8172335
In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Published 2017-12-08.
Abstract: In many future social interactions between robots and humans, robots may need to convince people to change their behavior. People may dislike and resist such persuasive attempts, a phenomenon known as psychological reactance. This paper examines how reactance, measured in terms of negative cognitions and feelings of anger, is affected by the persuading agent's social agency cues and the level of controlling language used. Participants played a decision-making game in which a persuasive agent attempted to influence their choices while exhibiting high or low controlling language and three different levels of social agency. Results suggest that controlling language leads to increased reactance when the persuasive agent does not exhibit social cues. Surprisingly, reactance is not affected by controlling language in the same way when the persuading agent is a social robot exhibiting social cues.

Title: Security and guidance: Two roles for a humanoid robot in an interaction experiment
Authors: G. Trovato, Alexander López, Renato Paredes, F. Cuéllar
DOI: https://doi.org/10.1109/ROMAN.2017.8172307
In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Published 2017-12-08.
Abstract: Security is one of the fields in human society where robotics can be applied. Human guards perform a range of tasks in which a robot can help. A security company collaborated with us on the design and development of a robot intended to patrol large indoor areas, interact with humans, welcome visitors, provide information, and serve as a telepresence platform for human security guards. In this paper we present a preliminary experiment involving this new robot in two roles: security and guidance. The former is important especially at night; the latter is common in the daytime, when guards typically interact with people asking them for information. The results of the experiment with 55 participants show how the perception of the robot's appearance and its effectiveness are influenced by its behaviour and by the more authoritative or kinder traits it is consequently perceived to have. These results provide useful indications for employing robot guards in real-world situations.

Title: Investigating how people deal with silence in a human-robot conversation
Authors: Kiyona Oto, Jianmei Feng, M. Imai
DOI: https://doi.org/10.1109/ROMAN.2017.8172301
In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Published 2017-12-08.
Abstract: In this paper, we focus on "silence," which appears as a gap or delay in giving a response during a conversation and is one of the most important factors in achieving more natural conversation with robots. In human-robot conversation, silence can be divided into two kinds: silence that a human uses towards a robot, and silence that a robot takes towards a human. We therefore conducted a conversation test between a human and a robot to clarify two points: whether humans use silence towards a robot, and how silence used by a robot is interpreted by humans. The results of the experiment indicate that humans do use silence towards a robot, for various reasons. Participants were asked to label the silences with four types: Semantic Silence, Syntactical and Grammatical Silence, Interactive Silence, and Robotic Silence. This classification revealed cases where humans used Interactive Silence out of consideration for the robot, much as they would for a human conversation partner. It is now clear that, in conversation with a communication robot, humans use and interpret silence in a form closer to that with a human conversation partner than with a machine. In particular, we found that humans sometimes use silence in a social sense, such as Interactive Silence, which reflects awareness of the conversation partner.

Title: Stopping distance for a robot approaching two conversating persons
Authors: Peter A. M. Ruijten, R. Cuijpers
DOI: https://doi.org/10.1109/ROMAN.2017.8172306
In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Published 2017-12-08.
Abstract: In recent years, much attention has been given to developing robots with various social skills. An important social skill is navigation in the presence of people. Earlier research has indicated preferred approach angles and stopping distances for a robot approaching people who are interacting with each other. However, an experimental validation of user experiences with such a robot is largely missing. The current study investigates the shape and size of a shared interaction space and evaluations of a robot approaching from various angles. Results show the expected pattern of stopping distances, but only when the robot approaches the middle point between the two persons. Additionally, more positive evaluations were found when the robot approached on the participant's side rather than on the other person's side. These findings highlight the importance of smart path planning for robots joining an interaction between users.

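The geometry of such an approach can be sketched in a few lines. This is our own illustration under stated assumptions, not the study's method: the 0.9 m preferred distance and the function name are invented, and the real study measured preferences over angles and distances rather than prescribing this rule. The robot heads for the midpoint of the dyad and stops at a preferred distance from it along its own approach direction.

```python
import math

def stopping_point(robot, p1, p2, preferred_dist=0.9):
    """Return (x, y) where a robot approaching the midpoint between two
    people stops, preferred_dist metres short of the midpoint along its
    approach direction. Illustrative sketch; the 0.9 m default is an
    assumption, not a value from the study."""
    mx, my = (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2
    dx, dy = mx - robot[0], my - robot[1]
    d = math.hypot(dx, dy)
    if d <= preferred_dist:
        return robot  # already within the preferred distance: stay put
    scale = (d - preferred_dist) / d
    return (robot[0] + dx * scale, robot[1] + dy * scale)
```

A path planner of the kind the authors advocate would additionally bias the approach angle towards the side of the person being joined.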
Title: Ex-amp robot: Expressive robotic avatar with multimodal emotion detection to enhance communication of users with motor disabilities
Authors: Ai Kashii, K. Takashio, H. Tokuda
DOI: https://doi.org/10.1109/ROMAN.2017.8172404
In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Published 2017-12-08.
Abstract: In current society there are numerous robots made for various purposes, including manufacturing, cleaning, therapy, and customer service; others are used to enhance human-to-human (H2H) communication. In this research, we propose a robotic system that detects the user's emotions and enacts them on a humanoid robot. Using this robotic avatar, users with motor disabilities can extend their methods of communication, as a physical form of expression is added to the conversation.

Title: A framework for a robot's emotions engine
Authors: Benjamin Salem
DOI: https://doi.org/10.1109/ROMAN.2017.8172369
In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Published 2017-09-01.
Abstract: An Emotions Engine is a model and simplification of the brain circuitry that generates emotions. It should produce a variety of responses, including rapid reaction-like emotions as well as slower moods. We introduce such an engine and then propose a framework for its equivalent on a robot. We then define key issues that need addressing and, via the framework, provide guidelines for implementing an actual robot's Emotions Engine.

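One common way to realise the fast-reaction/slow-mood split described in the abstract is a pair of leaky integrators with different time constants. The sketch below is our own illustration of that general idea, with invented class and parameter names; it is not the framework the paper proposes.

```python
class EmotionEngine:
    """Toy two-timescale affect model: 'emotion' reacts to stimuli and
    decays quickly; 'mood' tracks the same signal slowly. Illustrative
    sketch only; decay constants are arbitrary assumptions."""

    def __init__(self, emotion_decay=0.5, mood_decay=0.95):
        self.emotion = 0.0          # fast, reaction-like response
        self.mood = 0.0             # slow background state
        self.emotion_decay = emotion_decay
        self.mood_decay = mood_decay

    def step(self, stimulus=0.0):
        # Fast channel: decays by half each tick, jumps with each stimulus.
        self.emotion = self.emotion * self.emotion_decay + stimulus
        # Slow channel: exponential moving average of the fast channel.
        self.mood = self.mood * self.mood_decay \
            + (1 - self.mood_decay) * self.emotion
        return self.emotion, self.mood
```

A burst of stimuli spikes `emotion` immediately while `mood` drifts up and lingers after the emotion has decayed, which is the qualitative behaviour the abstract asks of such an engine.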
Title: Improving robot transparency: Real-time visualisation of robot AI substantially improves understanding in naive observers
Authors: Robert H. Wortham, Andreas Theodorou, J. Bryson
DOI: https://doi.org/10.1109/ROMAN.2017.8172491
In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Published 2017-08-31.
Abstract: Deciphering the behaviour of intelligent others is a fundamental characteristic of our own intelligence. As we interact with complex intelligent artefacts, humans inevitably construct mental models to understand and predict their behaviour. If these models are incorrect or inadequate, we run the risk of self-deception or even harm. Here we demonstrate that providing even a simple, abstracted real-time visualisation of a robot's AI can radically improve the transparency of machine cognition. Findings from both an online experiment using a video recording of a robot, and from direct observation of a robot, show substantial improvements in observers' understanding of the robot's behaviour. Unexpectedly, this improved understanding was correlated in one condition with an increased perception that the robot was 'thinking', but in no condition was the robot's assessed intelligence affected. In addition to our results, we describe our approach, the tools used, implications, and potential future research directions.

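A minimal version of such a transparency feed can be sketched as a reactive action selector that, besides choosing a behaviour, emits a human-readable trace of what it considered. This is our own toy illustration, not the authors' tooling; the behaviour names and tuple format are invented.

```python
def select_action(behaviours):
    """Pick the highest-priority behaviour whose trigger fires, and also
    return a human-readable trace that could be streamed to a display for
    naive observers. `behaviours` is a list of (name, priority, triggered)
    tuples. Illustrative sketch only."""
    trace = []
    chosen = None
    # Consider behaviours from highest to lowest priority, logging each.
    for name, priority, triggered in sorted(behaviours, key=lambda b: -b[1]):
        if triggered and chosen is None:
            chosen = name
            status = "ACTIVE"
        elif triggered:
            status = "triggered"   # fired, but pre-empted by higher priority
        else:
            status = "idle"
        trace.append(f"{name} (priority {priority}): {status}")
    return chosen, trace
```

Streaming `trace` alongside the robot, rather than showing only its motion, is the kind of abstracted real-time visualisation whose effect on observer understanding the paper measures.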
Title: Self-reconfigurable modular robot interface using virtual reality: Arrangement of furniture made out of Roombots modules
Authors: Valentin Z. Nigolian, Mehmet Mutlu, Simon Hauser, A. Bernardino, A. Ijspeert
DOI: https://doi.org/10.1109/ROMAN.2017.8172390
In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Published 2017-08-28.
Abstract: Self-reconfigurable modular robots (SRMR) offer high flexibility in task space by adopting different morphologies for different tasks. Using the same simple module, complex and more capable morphologies can be built. However, increasing the number of modules increases the degrees of freedom (DOF) of the system, so controlling the system as a whole becomes harder: even a 10-DOF system is difficult to reason about and manipulate. Intuitive, easy-to-use interfaces are needed, particularly when modular robots need to interact with humans. In this study we present an interface for assembling desired structures and placing them, with a focus on the assembly process. Roombots modules, a particular SRMR design, are used to demonstrate the proposed interface. Two non-conventional input/output devices, a head-mounted display and a hand-tracking system, are added to enhance the user experience. Finally, a user study was conducted to evaluate the interface. The results show that most users enjoyed their experience; however, they were not necessarily convinced by the gesture control, most likely for technical reasons.

Title: H-RRT-C: Haptic motion planning with contact
Authors: N. Blin, M. Taïx, P. Fillatreau, J. Fourquet
DOI: https://doi.org/10.1109/ROMAN.2017.8172436
In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Published 2017-08-28.
Abstract: This paper focuses on interactive motion-planning processes intended to assist a human operator when simulating industrial tasks in virtual reality. Such applications need motion planning on surfaces. We propose an original haptic path-planning algorithm with contact, H-RRT-C, based on an RRT planner and a real-time interactive approach involving a haptic device for computer-operator authority sharing. Force feedback allows the human operator to keep contact consistently and provides the feel of the contact, while the force applied by the operator on the haptic device is used to control the roadmap extension. Our approach has been validated through two experimental examples, and brings significant improvement over state-of-the-art methods in both free and contact space for solving path-planning queries and contact operations such as insertion or sliding in highly constrained environments.
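H-RRT-C builds on the standard RRT extend step. The sketch below shows only that generic baseline (a plain 2D RRT extension with a fixed step size), not the paper's haptic contact-space extension, in which operator forces bias where the roadmap grows.

```python
import math

def rrt_extend(tree, sample, step=0.5, collides=lambda p: False):
    """One generic RRT extension: find the tree node nearest to `sample`,
    step towards the sample, and add the new node if it is collision-free.
    Baseline only; H-RRT-C additionally extends on contact surfaces,
    guided by the force the operator applies through a haptic device."""
    near = min(tree, key=lambda n: math.dist(n, sample))
    d = math.dist(near, sample)
    if d == 0:
        return None  # sample coincides with an existing node
    t = min(step / d, 1.0)  # clamp so we never overshoot the sample
    new = (near[0] + (sample[0] - near[0]) * t,
           near[1] + (sample[1] - near[1]) * t)
    if collides(new):
        return None  # reject extensions into obstacles
    tree.append(new)
    return new
```

Repeatedly calling `rrt_extend` with random samples grows the roadmap; the paper's contribution is sharing authority over that growth between the planner and the operator's hand.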