{"title":"Using pressure sensors to identify manipulation actions during human physical interaction","authors":"M. Javaid, M. Žefran, A. Yavolovsky","doi":"10.1109/ROMAN.2015.7333660","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333660","url":null,"abstract":"This paper presents an investigation of human physical interaction. In particular, we describe how data from pressure sensors mounted on a glove worn by a human can be mapped to manipulation actions; the actions can in turn be used to interpret physical interaction during elderly care. The work is part of the RoboHelper project, which aims to build a multimodal communication interface for assistive robots for the elderly. Human-human physical interaction during elderly care and in a realistic setting is studied in this work with the aim of using the learned insights to develop corresponding robot interfaces. The contribution of this work is the identification of various types of physical manipulation actions that take place when an elder is assisted in performing activities of daily living in a natural setting. As part of the RoboHelper project, it has been shown that the knowledge of actions involving physical manipulation of objects helps in understanding the spoken language. More specifically, it improves the resolution of third person pronouns/deictic words and the classification of dialogue acts. In this work we show that pressure sensor data can be used to automatically recognize such physical manipulation actions. The automatic recognition of physical manipulation actions may facilitate future studies of multimodal interaction by greatly reducing the time required for manual annotations. It is also useful for learning from demonstration, a popular approach in human-robot interaction research.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129083369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using contact-based inducement for efficient navigation in a congested environment","authors":"Moondeep C. Shrestha, Yosuke Nohisa, A. Schmitz, S. Hayakawa, Erika Uno, Yuta Yokoyama, Hayato Yanagawa, Keung Or, S. Sugano","doi":"10.1109/ROMAN.2015.7333673","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333673","url":null,"abstract":"As robots progressively continue to enter human lives, it becomes important for robots to navigate safely and efficiently in crowded environments. In fact, efficient navigation in crowded areas is an important prerequisite for successful coexistence between humans and robots. In this paper, we explore an unconventional idea wherein a robot tries to achieve a more efficient navigation by influencing an obstructing human to move away by means of contact. First, preliminary human reaction experiments were conducted wherein we established that we can successfully induce a human to move in a desired direction. Following this result, we have proposed a novel motion planning approach which considers inducement by contact. The system is then verified through simulation and real experiments. The results show us that the proposed method can be utilized for safer and more efficient navigation in a crowded, but relatively static environment.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129790940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of an inflatable robotic arm system controlled by a joystick","authors":"Hye-Jong Kim, Yuto Tanaka, A. Kawamura, S. Kawamura, Yasutaka Nishioka","doi":"10.1109/ROMAN.2015.7333622","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333622","url":null,"abstract":"This paper presents an inflatable robotic arm controlled by a joystick to be used for healthcare applications. The arm is constructed almost entirely of plastic elements: inflatable links, air bag actuators, and acrylonitrile butadiene styrene (ABS) joints. Therefore, it is softer and lighter than typical robotic arms that are made of metal and heavy elements. Because the softness and lightness of the inflatable robotic arm is intrinsically safer, it is suitable for healthcare applications. In this paper, a new control method is proposed which allows the inflatable system to be controlled with a joystick. To verify the usefulness of the proposed method, we used an inflatable robotic arm with four degrees of freedom (4 DOF) to obtain experimental results for the control performance of the inflatable robotic arm. Moreover, we conducted preliminary tests which simulated patients controlling the robotic arm with a joystick in order to assist with eating their meals.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126046081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perception of a humanoid robot: A cross-cultural comparison","authors":"Kerstin S Haring, David Silvera Tawil, Tomotaka Takahashi, Mari Velonaki, Katsumi Watanabe","doi":"10.1109/ROMAN.2015.7333613","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333613","url":null,"abstract":"This study focuses on differences and similarities of perception of a small humanoid robot between Japanese and Australian participants. Two conditions were investigated: participants actively interacting with the robot and bystanders observing the interaction. Experimental results suggested that, while the robot was perceived as highly likeable, Japanese participants rated the robot higher for animacy, intelligence and safety. Furthermore, passive observations of the interaction (rather than active interaction) resulted in higher ratings by Japanese participants for anthropomorphism, animacy, intelligence and safety. The findings are discussed in terms of cultural background and robot perception.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127170869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adapting an hybrid behavior-based architecture with episodic memory to different humanoid robots","authors":"François Ferland, Arturo Cruz-Maya, A. Tapus","doi":"10.1109/ROMAN.2015.7333586","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333586","url":null,"abstract":"A common goal of robot control architecture designers is to create systems that are sufficiently generic to be adapted to different robot hardware. Beyond code re-use from a software engineering standpoint, having a common architecture could lead to long-term experiments spanning multiple robots and research groups. This paper presents a first step toward this goal with HBBA, a Hybrid Behavior-Based Architecture first developed on the IRL-1 humanoid robot and integrating an Adaptive Resonance Theory-based episodic memory (EM-ART). This paper presents the first step of the adaptation of this architecture to two different robots, a Meka M-1 and a NAO from Aldebaran, with a simple scenario involving learning and sharing objects' information between both robots. The experiment shows that episodes recorded as sequences of people and objects presented to one robot can be recalled in the future on either robot, enabling event anticipation and sharing of past experiences.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126986799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impact of personality on the recognition of emotion expressed via human, virtual, and robotic embodiments","authors":"P. Chevalier, Jean-Claude Martin, B. Isableu, A. Tapus","doi":"10.1109/ROMAN.2015.7333686","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333686","url":null,"abstract":"In this paper, we describe the elaboration and the validation of a body and face database1, of 96 videos of 1 to 2 seconds of duration, expressing 4 emotions (i.e., anger, happiness, fear, and sadness) elicited through 4 platforms of increased visual complexity and level of embodiment. The final aim of this database is to develop an individualized training program designed for individuals suffering of autism in order to help them recognize various emotions on different test platforms: two robots, a virtual agent, and a human. Before assessing the recognition capabilities of individuals with ASD, we validated our video database on typically developed individuals (TD). Moreover, we also looked at the relationship between the recognition rate and their personality traits (extroverted (EX) vs. introverted (IN)). We found that the personality of our TD participants did not lead to a different recognition behavior. However, introverted individuals better recognized emotions from less visually complex characters than extroverted individuals.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"160 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132405155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The evaluation of different roles for domestic social robots","authors":"M. D. Graaf, S. B. Allouch","doi":"10.1109/ROMAN.2015.7333594","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333594","url":null,"abstract":"Robotics researchers foresee that robots will become ubiquitous in our natural environments, such as our homes. For a successful diffusion of social robots, it is important to study the user acceptance of such robots. In an online survey, we have investigated the acceptance of three different possible roles for domestic social robots and the preferred appearance. The results show that, although most people prefer a humanoid robot for domestic purposes, the role for which a social robot is build affects the choice for a robotic appearance made by potential future users. When comparing the acceptance of the three different roles, people evaluate the companion robot more negatively on the different acceptance variables. Implications of these results are discussed.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124556245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measuring K-degree facial interaction between robot and children with autism spectrum disorders","authors":"Yadong Pan, Masakazu Hirokawa, Kenji Suzuki","doi":"10.1109/ROMAN.2015.7333683","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333683","url":null,"abstract":"This paper presents the design, implementation, and application of a vision-based automatic system that measures facial interaction based on human's cognitive feature. We investigated the feature of people's facial interaction under natural gaze-movement via an experiment, and created criteria of k-degree facial interaction and face-to-face interaction depending only on facial orientation. These criteria were used to develop the automatic system. The system is vision-based. It could be easily embedded into applications. We focused on an application to understand the behavior of children with autism spectrum disorders (ASD), and tested the use of the automatic system in a robot-assisted activity for those children. The results suggested that the system could help to improve the efficiency of behavior analysis during the children's activities with the robot. The facial interaction measured between the children and the robot can be used by therapists to comprehend the children's psychological aspects and state of health.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117260336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interaction between Vleo and Pleo, a virtual social character and a social robot","authors":"A. Fernández-Baena, Roger Boldu, J. Albó-Canals, David Miralles","doi":"10.1109/ROMAN.2015.7333597","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333597","url":null,"abstract":"In this paper we explore the field of human robot interaction (HRI) by adding a new non-physical player in the interaction rather than only humans and robots. It is a proven fact that physical robots enhance the immersion perception compared to only virtual agents. Thus, we created a shared environment with a Pleo Robot and a Virtual Pleo Robot, which we called Vleo, connected through a server to explore this new paradigm in order to see if the engagement during the interaction is improved in intensity and duration. A straightforward set of interactions between Pleo and Vleo have been designed to create narratives and therefore, tested with a group of 8-12 year old children. Results from the test suggest that virtual social robots are a good way to enhance interaction with physical social robots.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122239139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can a child feel responsible for another in the presence of a robot in a collaborative learning activity?","authors":"S. Chandra, Patrícia Alves-Oliveira, Séverin Lemaignan, P. Sequeira, A. Paiva, P. Dillenbourg","doi":"10.1109/ROMAN.2015.7333678","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333678","url":null,"abstract":"In order to explore the impact of integrating a robot as a facilitator in a collaborative activity, we examined interpersonal distancing of children both with a human adult and a robot facilitator. Our scenario involves two children performing a collaborative learning activity, which included the writing of a word/letter on a tactile tablet. Based on the learning-by-teaching paradigm, one of the children acted as a teacher when the other acted as a learner. Our study involved 40 children between 6 and 8 years old, in two conditions (robot or human facilitator). The results suggest first that the child acting as a teacher feel more responsible when the facilitator is a robot, compared to a human; they show then that the interaction between a (teacher) child and a robot facilitator can be characterized as being a reciprocity-based interaction, whereas a human presence fosters a compensation-based interaction.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131513957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}