{"title":"基于交换单元的多模态对话用户兴趣评估","authors":"Sayaka Tomimasu, Masahiro Araki","doi":"10.1145/3011263.3011269","DOIUrl":null,"url":null,"abstract":"A person is more likely to enjoy long-term conversations with a robot if it has the capability to infer the topics that interest the person. In this paper, we propose a method of deducing the specific topics that interest a user by sequentially assessing each exchange in a chat-oriented dialog session. We use multimodal information such as facial expressions and prosodic information obtained from the user's utterances for assessing interest as these parameters are independent of linguistic information that varies widely in chat-oriented dialogs. The results show that the accuracy of the assessment of the user's interest is better when we use both features.","PeriodicalId":272696,"journal":{"name":"Proceedings of the Workshop on Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction","volume":"3 3-4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Assessment of users' interests in multimodal dialog based on exchange unit\",\"authors\":\"Sayaka Tomimasu, Masahiro Araki\",\"doi\":\"10.1145/3011263.3011269\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A person is more likely to enjoy long-term conversations with a robot if it has the capability to infer the topics that interest the person. In this paper, we propose a method of deducing the specific topics that interest a user by sequentially assessing each exchange in a chat-oriented dialog session. We use multimodal information such as facial expressions and prosodic information obtained from the user's utterances for assessing interest as these parameters are independent of linguistic information that varies widely in chat-oriented dialogs. The results show that the accuracy of the assessment of the user's interest is better when we use both features.\",\"PeriodicalId\":272696,\"journal\":{\"name\":\"Proceedings of the Workshop on Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction\",\"volume\":\"3 3-4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-11-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Workshop on Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3011263.3011269\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Workshop on Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3011263.3011269","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Assessment of users' interests in multimodal dialog based on exchange unit
A person is more likely to enjoy long-term conversations with a robot if the robot can infer the topics that interest that person. In this paper, we propose a method for identifying the specific topics that interest a user by sequentially assessing each exchange in a chat-oriented dialog session. For assessing interest, we use multimodal information such as facial expressions and prosodic features extracted from the user's utterances, since these cues are independent of the linguistic content, which varies widely in chat-oriented dialogs. The results show that the accuracy of the interest assessment improves when both types of features are used together.
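As a rough illustration of the per-exchange assessment described above, the sketch below concatenates facial-expression and prosodic feature vectors for each exchange unit and compares single-modality classifiers against an early-fusion classifier. This is a minimal sketch under stated assumptions: the paper does not publish code, and the feature names, dimensions, random stand-in data, and the choice of an SVM classifier here are all illustrative, not the authors' actual setup.

```python
# Minimal sketch of per-exchange interest assessment.
# Assumptions: feature dimensions, stand-in data, and the RBF-SVM
# classifier are illustrative choices, not taken from the paper.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_exchanges = 200                   # number of exchange units (made up)
facial_dim, prosody_dim = 17, 12    # e.g. expression and F0/energy stats (made up)

# Stand-in features: one row per exchange unit.
facial = rng.normal(size=(n_exchanges, facial_dim))     # facial-expression features
prosody = rng.normal(size=(n_exchanges, prosody_dim))   # prosodic features
labels = rng.integers(0, 2, size=n_exchanges)           # 1 = interested, 0 = not

def evaluate(features: np.ndarray, name: str) -> None:
    """Report cross-validated accuracy of a binary interest classifier."""
    clf = SVC(kernel="rbf")
    acc = cross_val_score(clf, features, labels, cv=5).mean()
    print(f"{name}: {acc:.3f}")

# Single-modality baselines versus early fusion (feature concatenation),
# mirroring the abstract's claim that combining both feature types helps.
evaluate(facial, "facial only")
evaluate(prosody, "prosody only")
evaluate(np.concatenate([facial, prosody], axis=1), "facial + prosody")
```

With real per-exchange labels in place of the random stand-ins, comparing the three scores is one simple way to check whether fusing the two modalities improves accuracy, as the abstract reports.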