{"title":"会话式电影推荐系统的社会解释模型","authors":"Florian Pecune, Shruti Murali, Vivian Tsai, Yoichi Matsuyama, Justine Cassell","doi":"10.1145/3349537.3351899","DOIUrl":null,"url":null,"abstract":"A critical aspect of any recommendation process is explaining the reasoning behind each recommendation. These explanations can not only improve users' experiences, but also change their perception of the recommendation quality. This work describes our human-centered design for our conversational movie recommendation agent, which explains its decisions as humans would. After exploring and analyzing a corpus of dyadic interactions, we developed a computational model of explanations. We then incorporated this model in the architecture of a conversational agent and evaluated the resulting system via a user experiment. Our results show that social explanations can improve the perceived quality of both the system and the interaction, regardless of the intrinsic quality of the recommendations.","PeriodicalId":188834,"journal":{"name":"Proceedings of the 7th International Conference on Human-Agent Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"36","resultStr":"{\"title\":\"A Model of Social Explanations for a Conversational Movie Recommendation System\",\"authors\":\"Florian Pecune, Shruti Murali, Vivian Tsai, Yoichi Matsuyama, Justine Cassell\",\"doi\":\"10.1145/3349537.3351899\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A critical aspect of any recommendation process is explaining the reasoning behind each recommendation. These explanations can not only improve users' experiences, but also change their perception of the recommendation quality. This work describes our human-centered design for our conversational movie recommendation agent, which explains its decisions as humans would. After exploring and analyzing a corpus of dyadic interactions, we developed a computational model of explanations. We then incorporated this model in the architecture of a conversational agent and evaluated the resulting system via a user experiment. 
Our results show that social explanations can improve the perceived quality of both the system and the interaction, regardless of the intrinsic quality of the recommendations.\",\"PeriodicalId\":188834,\"journal\":{\"name\":\"Proceedings of the 7th International Conference on Human-Agent Interaction\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-09-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"36\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 7th International Conference on Human-Agent Interaction\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3349537.3351899\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 7th International Conference on Human-Agent Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3349537.3351899","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Model of Social Explanations for a Conversational Movie Recommendation System
A critical aspect of any recommendation process is explaining the reasoning behind each recommendation. Such explanations can not only improve users' experience but also change their perception of recommendation quality. This work describes the human-centered design of our conversational movie recommendation agent, which explains its decisions the way humans do. After exploring and analyzing a corpus of dyadic interactions, we developed a computational model of explanations. We then incorporated this model into the architecture of a conversational agent and evaluated the resulting system via a user experiment. Our results show that social explanations can improve the perceived quality of both the system and the interaction, regardless of the intrinsic quality of the recommendations.
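To make the idea of a social explanation concrete, here is a minimal, hypothetical sketch of how a conversational agent might attach such an explanation to a movie recommendation by referencing shared context from the dialogue. This is not the authors' implementation; all names (Movie, social_explanation, liked_genres) are illustrative assumptions.

```python
# Hypothetical sketch: pairing a movie recommendation with a social
# explanation grounded in preferences the user stated in conversation.
# NOT the paper's model; all names and logic are illustrative.

from dataclasses import dataclass

@dataclass
class Movie:
    title: str
    genres: set[str]

def social_explanation(movie: Movie, liked_genres: set[str]) -> str:
    """Build a human-like explanation that refers back to what the
    user said they enjoy (here: genres), falling back to a generic,
    non-social justification when there is no overlap."""
    overlap = movie.genres & liked_genres
    if overlap:
        shared = ", ".join(sorted(overlap))
        return (f"Since you told me you enjoy {shared} movies, "
                f"I think you'd really like {movie.title}.")
    return f"{movie.title} is highly rated, so it might be worth a try."

if __name__ == "__main__":
    user_likes = {"sci-fi", "thriller"}
    pick = Movie("Arrival", {"sci-fi", "drama"})
    print(social_explanation(pick, user_likes))
    # -> Since you told me you enjoy sci-fi movies, I think you'd
    #    really like Arrival.
```

The key design choice this sketch illustrates is that the explanation is tied to the dialogue history rather than to the recommender's internal scores, which is what distinguishes a social explanation from a purely algorithmic one.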