Tracking and fusion for multiparty interaction with a virtual character and a social robot
Zerrin Yumak, Jianfeng Ren, N. Magnenat-Thalmann, Junsong Yuan
SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence, 24 November 2014. doi:10.1145/2668956.2668958
To give human-like capabilities to artificial characters, we should equip them with the ability to infer user states. These characters should understand users' behaviors through various sensors and respond using multimodal output. Beyond natural multimodal interaction, they should also be able to communicate with multiple users, and with each other, in multiparty interactions. Previous work on interactive virtual humans and social robots focuses mainly on one-to-one interactions. In this paper, we study the tracking and fusion aspects of multiparty interaction. We first give a general overview of our proposed multiparty interaction system and explain how it differs from previous work. We then detail the tracking and fusion component, including speaker identification, addressee detection, and a dynamic user entrance/leave mechanism based on user re-identification with a Kinect sensor. Finally, we present a case study with the system and discuss its current capabilities, limitations, and future work.
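The abstract does not describe the implementation, so the following is a minimal, hypothetical Python sketch of how a tracking-and-fusion component with these three functions might be organized: speaker identification by matching a sound direction-of-arrival (such as the beam angle a Kinect microphone array can report) to tracked body angles, angle-based addressee detection between the agent and the other users, and appearance-based re-identification driving the user entrance/leave mechanism. Every name, feature, and threshold here (MultipartyTracker, the toy appearance vectors, reid_threshold) is an illustrative assumption, not the authors' system.

```python
from __future__ import annotations

import math
from dataclasses import dataclass


@dataclass
class TrackedUser:
    user_id: int
    angle_deg: float       # horizontal angle of the user's body relative to the sensor
    appearance: tuple      # toy appearance feature for re-identification
    is_speaking: bool = False


class MultipartyTracker:
    """Toy fusion of per-frame observations: who is present, who is
    speaking, and who is being addressed. Thresholds are illustrative."""

    def __init__(self, reid_threshold: float = 0.8) -> None:
        self.users: dict[int, TrackedUser] = {}        # users currently in view
        self.known_appearances: dict[int, tuple] = {}  # cached for re-entry
        self.next_id = 0
        self.reid_threshold = reid_threshold

    def _similarity(self, a: tuple, b: tuple) -> float:
        # Cosine similarity between two toy appearance vectors.
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def observe_entrance(self, angle_deg: float, appearance: tuple) -> int:
        """Re-identify a returning user by appearance, else assign a new id."""
        for uid, feat in self.known_appearances.items():
            if uid not in self.users and self._similarity(feat, appearance) >= self.reid_threshold:
                self.users[uid] = TrackedUser(uid, angle_deg, appearance)
                return uid                       # returning user recognized
        uid, self.next_id = self.next_id, self.next_id + 1
        self.users[uid] = TrackedUser(uid, angle_deg, appearance)
        self.known_appearances[uid] = appearance
        return uid

    def observe_leave(self, user_id: int) -> None:
        # The appearance stays cached so the user can be re-identified later.
        self.users.pop(user_id, None)

    def identify_speaker(self, sound_angle_deg: float) -> TrackedUser | None:
        """Attribute speech to the tracked user nearest the audio direction."""
        for u in self.users.values():
            u.is_speaking = False
        if not self.users:
            return None
        speaker = min(self.users.values(),
                      key=lambda u: abs(u.angle_deg - sound_angle_deg))
        speaker.is_speaking = True
        return speaker

    def detect_addressee(self, speaker: TrackedUser, gaze_angle_deg: float,
                         agent_angle_deg: float = 0.0) -> str:
        """Crude addressee detection: whoever (agent or other user) lies
        closest to the speaker's gaze direction is taken as the addressee."""
        candidates = {"agent": agent_angle_deg}
        candidates.update({f"user {u.user_id}": u.angle_deg
                           for u in self.users.values() if u is not speaker})
        return min(candidates, key=lambda k: abs(candidates[k] - gaze_angle_deg))


if __name__ == "__main__":
    tracker = MultipartyTracker()
    a = tracker.observe_entrance(angle_deg=-20.0, appearance=(0.9, 0.1, 0.3))
    b = tracker.observe_entrance(angle_deg=25.0, appearance=(0.2, 0.8, 0.5))
    speaker = tracker.identify_speaker(sound_angle_deg=-18.0)  # nearest body: a
    print("speaker:", speaker.user_id)
    print("addressee:", tracker.detect_addressee(speaker, gaze_angle_deg=24.0))
    tracker.observe_leave(a)
    # Re-entry: the cached appearance re-assigns the original id.
    print("re-entry id:", tracker.observe_entrance(-5.0, (0.9, 0.1, 0.3)))
```

Running the module prints the attributed speaker, the inferred addressee, and shows that a user who leaves and returns keeps the same id; a real system would replace the toy appearance vectors with features extracted from the Kinect's RGB and depth streams, and the angle matching with a probabilistic fusion over noisy sensor estimates.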