Adapting conversational strategies to co-optimize agent's task performance and user's engagement

L. Galland, C. Pelachaud, Florian Pecune

Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents, September 6, 2022. DOI: 10.1145/3514197.3549674. Citations: 2.
In this work, we present a socially interactive agent able to adapt its conversational strategies to maximize the user's engagement during the interaction. For this purpose, we train our agent with simulated users using deep reinforcement learning. First, the agent estimates the simulated user's engagement from the latter's nonverbal behaviors and turn-taking status. This estimated engagement is then used as a reward to balance the agent's task goal (giving information) against its social goal (keeping the user highly engaged). The agent's dialogue acts may have different impacts on the user's engagement, depending on the latter's conversational preferences.
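The reward design the abstract describes — an engagement estimate derived from nonverbal and turn-taking cues, then traded off against a task reward — might be sketched as below. The specific weights, the linear estimator, and the mixing parameter `alpha` are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the reward design described in the abstract.
# The engagement estimator and all numeric weights are assumptions
# for illustration; the paper does not specify them here.

def estimate_engagement(nonverbal_score: float, turn_taking_score: float) -> float:
    """Combine nonverbal-behavior and turn-taking cues into a [0, 1] engagement estimate."""
    raw = 0.6 * nonverbal_score + 0.4 * turn_taking_score  # illustrative weights
    return max(0.0, min(1.0, raw))  # clamp to [0, 1]

def combined_reward(task_reward: float, engagement: float, alpha: float = 0.5) -> float:
    """Trade off the task goal (giving information) against the social goal
    (keeping the user engaged); alpha = 1 is purely task-driven."""
    return alpha * task_reward + (1.0 - alpha) * engagement
```

An RL agent trained on `combined_reward` would then be pushed toward dialogue acts that both convey information and keep the simulated user's estimated engagement high.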