D. Limbu, Chern Yuen Anthony Wong, A. H. J. Tay, Tran Anh Dung, Y. Tan, T. H. Dat, A. Wong, Wen Zheng Terence Ng, Ridong Jiang, Li Jun
Title: Affective social interaction with CuDDler robot
DOI: 10.1109/RAM.2013.6758580
Published in: 2013 6th IEEE Conference on Robotics, Automation and Mechatronics (RAM), November 2013
Citation count: 19
Abstract: This paper introduces an implemented affective social robot called CuDDler. The goal of this research is to explore and demonstrate the utility of a robot capable of recognising and responding to a user's emotional acts (i.e., affective stimuli), thereby improving social interaction. CuDDler uses two main modalities to recognise the user's emotional acts: a) audio (i.e., linguistic and non-linguistic sounds) and b) visual (i.e., facial expressions). Similarly, CuDDler expresses its emotional responses through two modalities: a) gesture and b) sound. During the TechFest 2012 event, CuDDler successfully demonstrated its ability to recognise users' emotional acts and respond with appropriate expressions. Although CuDDler is still at an early prototyping stage, preliminary survey results indicate that it has the potential not only to aid human-robot interaction but also to contribute towards the long-term goal of multi-modal emotion recognition and socially interactive robots.
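The abstract describes a two-input, two-output architecture: audio and visual recognisers estimate the user's emotional act, and the robot answers with a gesture plus a sound. A minimal, hypothetical sketch of such a pipeline is shown below, assuming late fusion by weighted averaging and an illustrative emotion-to-response table; the emotion labels, function names, and fusion scheme are assumptions for illustration, not CuDDler's actual implementation.

```python
# Hypothetical sketch of the two-modality pipeline described in the abstract:
# audio and visual recognisers each produce per-emotion scores, the scores are
# fused, and the winning emotion is mapped to a gesture/sound response.
# All names and the fusion weights are illustrative, not from the paper.

EMOTIONS = ["happy", "sad", "angry", "neutral"]

# Assumed response mapping: each recognised emotion triggers one gesture and one sound.
RESPONSES = {
    "happy":   ("wave_arms", "giggle"),
    "sad":     ("hug_posture", "soothing_tone"),
    "angry":   ("back_away", "calming_tone"),
    "neutral": ("idle_sway", "soft_hum"),
}

def fuse_scores(audio_scores, visual_scores, audio_weight=0.5):
    """Late fusion: weighted average of per-emotion scores from both modalities."""
    return {
        e: audio_weight * audio_scores[e] + (1.0 - audio_weight) * visual_scores[e]
        for e in EMOTIONS
    }

def respond(audio_scores, visual_scores):
    """Pick the highest-scoring fused emotion and return its mapped response."""
    fused = fuse_scores(audio_scores, visual_scores)
    emotion = max(fused, key=fused.get)
    gesture, sound = RESPONSES[emotion]
    return emotion, gesture, sound

# Example: the visual channel strongly suggests happiness, audio is ambiguous.
audio = {"happy": 0.4, "sad": 0.2, "angry": 0.1, "neutral": 0.3}
visual = {"happy": 0.7, "sad": 0.1, "angry": 0.1, "neutral": 0.1}
print(respond(audio, visual))  # ('happy', 'wave_arms', 'giggle')
```

Late fusion (combining per-modality scores rather than raw features) keeps each recogniser independent, which matches the abstract's framing of audio and visual as separate recognition channels.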