{"title":"Conveying emotion in robotic speech: Lessons learned","authors":"Joe Crumpton, Cindy L. Bethel","doi":"10.1109/ROMAN.2014.6926265","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926265","url":null,"abstract":"This research explored whether robots can use modern speech synthesizers to convey emotion with their speech. We investigated the use of MARY, an open source speech synthesizer, to convey a robot's emotional intent to novice robot users. The first experiment indicated that participants were able to distinguish the intended emotions of anger, calm, fear, and sadness with success rates of 65.9%, 68.9%, 33.3%, and 49.2%, respectively. An issue was the recognition rate of the intended happiness statements, 18.2%, which was below the 20% level determined for chance. The vocal prosody modifications for the expression of happiness were adjusted and the recognition rates for happiness improved to 30.3% in a second experiment. This is an important benchmarking step in a line of research that investigates the use of emotional speech by robots to improve human-robot interaction. Recommendations and lessons learned from this research are presented.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132746601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The personal mobility interface including human twisting motion","authors":"S. Yokota, D. Chugo, H. Hashimoto, K. Kawabata","doi":"10.1109/ROMAN.2014.6926221","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926221","url":null,"abstract":"This paper proposes the saddle type human body motion interface for personal mobility. The proposed interface uses not only conventional translational body motion but also twisting motion, and makes easy operation. The saddle is attached on the personal mobility by using the seat post with the universal joint at the floor of the personal mobility. The universal joint has three rotational joints where the potentiometers are attached on each. Hip motion makes these joints rotate. The potentiometers detect these rotations, and then this interface measures the hip translational and twisting motions. This saddle does not support whole user body weight, but it is sandwiched by legs and follows the hip motion. User keeps his/her standing position by strengths own legs. The saddle type interface doesn't need pre-setting for the operation, since this system doesn't require any sensors on the human body. By the basic experimental result, it turned out that proposed interface can operate personal mobility and has a potential for intuitive operation.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133773673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"People detection and distinction of their walking aids in 2D laser range data based on generic distance-invariant features","authors":"Christoph Weinrich, Tim Wengefeld, Christof Schröter, H. Groß","doi":"10.1109/ROMAN.2014.6926346","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926346","url":null,"abstract":"People detection in 2D laser range data is a popular cue for person tracking in mobile robotics. Many approaches are designed to detect pairs of legs. These approaches perform well in many public environments. However, we are working on an assistance robot for stroke patients in a rehabilitation center, where most of the people need walking aids. These tools occlude or touch the legs of the patients. Thereby, approaches based on pure leg detection fail. The essential contribution of this paper are generic distance-invariant range scan features for people detection in 2D laser range data and the distinction of their walking aids. With these features we trained classifiers for detecting people without walking aids (or with crutches), people with walkers, and people in wheelchairs. Using this approach for people detection, we achieve an F1 score of 0.99 for people with and without walking aids, and 86% of detections are classified correctly regarding their walking aid. For comparison, using state-of-the-art features of Arras et al. on the same data results in an F1 score of 0.86 and 57% correct discrimination of walking aids. The proposed detection algorithm takes around 2.5% of the resources of a 2.8 GHz CPU core to process 270° laser range data at an update rate of 10 Hz.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"207 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133356554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of an interaction-activated communication model based on a heat conduction equation in voice communication","authors":"Yoshihiro Sejima, Tomio Watanabe, M. Jindai","doi":"10.1109/ROMAN.2014.6926356","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926356","url":null,"abstract":"In a previous study, we developed an embodied virtual communication system for human interaction analysis by synthesis in avatar-mediated communication and confirmed the close relationship between speech overlap and the period for activating embodied interaction and communication through avatars. In this paper, we propose an interaction-activated communication model based on the heat conduction equation in heat-transfer engineering for enhancing empathy between a human and a robot during embodied interaction in avatar-mediated communication. Further, we perform an evaluation experiment to demonstrate the effectiveness of the proposed model in estimating the period of interaction-activated communication in avatar-mediated communication. Results suggest that the proposed model is effective in estimating interaction-activated communication.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121584977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Active or passive?: Investigating the impact of robot role in meetings","authors":"Omar Mubin, Thomas D'Arcy, Ghulam Murtaza, S. Simoff, C. Stanton, C. Stevens","doi":"10.1109/ROMAN.2014.6926315","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926315","url":null,"abstract":"Meetings are an integral part of the work place and society in general. Research in Computer Supported Cooperative Work attempts to facilitate and make the process of meetings more effective. Our vision is that the incorporation of social robots in such human-human collaborative settings can assist and improve the effectiveness of a meeting. In this paper we present an empirical study in which pairs of participants collaborate in a meeting scenario with a Nao humanoid robot. Using a within-subjects design, we manipulated the robot's role within the meeting as being either “active” versus “passive”/“service-oriented”. Our results show that the more active robot was deemed as more more alive and social, had the participants more emotionally involved and caused more verbal engagement from the participants as compared to a passive service robot. In conclusion, we speculate on the inclusion of a collaborative robot as a meeting partner.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"07 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122635821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning English words with the aid of an autonomous care-receiving robot in a children's group activity","authors":"Shizuko Matsuzoe, H. Kuzuoka, F. Tanaka","doi":"10.1109/ROMAN.2014.6926351","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926351","url":null,"abstract":"We studied educational support delivered through a care-receiving robot (CRR) in a children's group activity intended to promote the learning of English words by teaching the robot. Our prior study investigated the feasibility of the CRR for providing educational support in a situation where a child played with the robot by him/herself. Our research uncovered several impactful effects of the CRR for enhancing childhood education. However, the results were not sufficient to confirm more practical contributions of the CRR toward learning. In this paper, we report on a field experiment we conducted with a group of children at a Japanese kindergarten aged 5-6 years. Our goal was to verify the feasibility of an autonomous CRR for facilitating the learning of English words in an educational setting that closely resembled reality. The experiment was conducted on a group of roughly seven children who participated in an animal gesture game with the robot for four days to learn six English words/names for animals. There were a total of 15 participants, and we held two experimental sessions. In order to compare the educational effects between learning with the aid of the CRR and with an expert robot, both robots were introduced concurrently into classrooms. The experimental results showed that the autonomous CRR was more effective in promoting English vocabulary acquisition among preschoolers compared to the expert robot.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122864717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How to train your robot - teaching service robots to reproduce human social behavior","authors":"Phoebe Liu, Dylan F. Glas, T. Kanda, H. Ishiguro, N. Hagita","doi":"10.1109/ROMAN.2014.6926377","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926377","url":null,"abstract":"Developing interactive behaviors for social robots presents a number of challenges. It is difficult to interpret the meaning of the details of people's behavior, particularly non-verbal behavior like body positioning, but yet a social robot needs to be contingent to such subtle behaviors. It needs to generate utterances and non-verbal behavior with good timing and coordination. The rules for such behavior are often based on implicit knowledge and thus difficult for a designer to describe or program explicitly. We propose to teach such behaviors to a robot with a learning-by-demonstration approach, using recorded human-human interaction data to identify both the behaviors the robot should perform and the social cues it should respond to. In this study, we present a fully unsupervised approach that uses abstraction and clustering to identify behavior elements and joint interaction states, which are used in a variable-length Markov model predictor to generate socially-appropriate behavior commands for a robot. The proposed technique provides encouraging results despite high amounts of sensor noise, especially in speech recognition. We demonstrate our system with a robot in a shopping scenario.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124012118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design of affective robot-assisted activity for children with autism spectrum disorders","authors":"Masakazu Hirokawa, A. Funahashi, Yasushi Itoh, Kenji Suzuki","doi":"10.1109/ROMAN.2014.6926280","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926280","url":null,"abstract":"Recent studies on autism spectrum disorders (ASD) have reported that positive emotions can be a good incentive for children with ASD to perform spontaneous positive social behaviors. Based on this findings, we propose an affective robot-assisted activity (ARAA) for fostering social interaction and communication skills among children with ASD by promoting their positive emotional responses through interaction with a robot. As it has been termed spectrum, every child has different social and affective characteristics that should be taken into account. However, due to difficulties in programming the robot's behavior, conventional RAA systems did not allow the therapist to customize the activity according to the characteristics of each individual. To tackle this problem, we developed a comprehensive framework of ARAA that consists of (i) a robot tele-operation method that allows a therapist to improvise a robot's behavior in real-time and (ii) a quantitative measurement method to describe social interaction within both behavioral and affective aspects.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124206372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive teaching and experience extraction for learning about objects and robot activities","authors":"G. H. Lim, Miguel Oliveira, V. Mokhtari, S. Kasaei, Aneesh Chauhan, L. Lopes, A. Tomé","doi":"10.1109/ROMAN.2014.6926246","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926246","url":null,"abstract":"Intelligent service robots should be able to improve their knowledge from accumulated experiences through continuous interaction with the environment, and in particular with humans. A human user may guide the process of experience acquisition, teaching new concepts, or correcting insufficient or erroneous concepts through interaction. This paper reports on work towards interactive learning of objects and robot activities in an incremental and open-ended way. In particular, this paper addresses human-robot interaction and experience gathering. The robot's ontology is extended with concepts for representing human-robot interactions as well as the experiences of the robot. The human-robot interaction ontology includes not only instructor teaching activities but also robot activities to support appropriate feedback from the robot. Two simplified interfaces are implemented for the different types of instructions including the teach instruction, which triggers the robot to extract experiences. These experiences, both in the robot activity domain and in the perceptual domain, are extracted and stored in memory, and they are used as input for learning methods. The functionalities described above are completely integrated in a robot architecture, and are demonstrated in a PR2 robot.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126984844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling of everyday objects for semantic grasp","authors":"Y. Shiraki, K. Nagata, N. Yamanobe, Akira Nakamura, K. Harada, D. Sato, D. Nenchev","doi":"10.1109/ROMAN.2014.6926343","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926343","url":null,"abstract":"This paper presents a knowledge model of everyday objects for semantic grasp. This model is intended for extracting the grasp areas of everyday objects and approach directions for grasping when the 3D point cloud data and the intended purpose are given. Parts that make up everyday objects have functions related to their manipulation. We therefore represent everyday objects in terms of connected parts of functional units. This knowledge model describes the structure of everyday objects and information on their manipulation. The structure of an everyday object describes component parts of the object in terms of simple shape primitives to provide geometrical information and describes connections between parts with kinematic attributes. The information on the structure is used to map the manipulation knowledge onto the 3D point cloud data. The manipulation knowledge of the object includes the grasp areas and approach directions for the intended purpose. Fine grasps suitable for the intended task can be generated by performing a grasp planning with consideration for stable grasp and the kinematics of the robot in the grasp areas and approach directions.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"171 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121370365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}