{"title":"通过凝视、手势和语言促进多方对话","authors":"D. Bohus, E. Horvitz","doi":"10.1145/1891903.1891910","DOIUrl":null,"url":null,"abstract":"We study how synchronized gaze, gesture and speech rendered by an embodied conversational agent can influence the flow of conversations in multiparty settings. We begin by reviewing a computational framework for turn-taking that provides the foundation for tracking and communicating intentions to hold, release, or take control of the conversational floor. We then present implementation aspects of this model in an embodied conversational agent. Empirical results with this model in a shared task setting indicate that the various verbal and non-verbal cues used by the avatar can effectively shape the multiparty conversational dynamics. In addition, we identify and discuss several context variables which impact the turn allocation process.","PeriodicalId":181145,"journal":{"name":"ICMI-MLMI '10","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"165","resultStr":"{\"title\":\"Facilitating multiparty dialog with gaze, gesture, and speech\",\"authors\":\"D. Bohus, E. Horvitz\",\"doi\":\"10.1145/1891903.1891910\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We study how synchronized gaze, gesture and speech rendered by an embodied conversational agent can influence the flow of conversations in multiparty settings. We begin by reviewing a computational framework for turn-taking that provides the foundation for tracking and communicating intentions to hold, release, or take control of the conversational floor. We then present implementation aspects of this model in an embodied conversational agent. Empirical results with this model in a shared task setting indicate that the various verbal and non-verbal cues used by the avatar can effectively shape the multiparty conversational dynamics. In addition, we identify and discuss several context variables which impact the turn allocation process.\",\"PeriodicalId\":181145,\"journal\":{\"name\":\"ICMI-MLMI '10\",\"volume\":\"26 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2010-11-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"165\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ICMI-MLMI '10\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/1891903.1891910\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ICMI-MLMI '10","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/1891903.1891910","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Facilitating multiparty dialog with gaze, gesture, and speech
We study how synchronized gaze, gesture, and speech rendered by an embodied conversational agent can influence the flow of conversations in multiparty settings. We begin by reviewing a computational framework for turn-taking that provides the foundation for tracking and communicating intentions to hold, release, or take control of the conversational floor. We then present implementation aspects of this model in an embodied conversational agent. Empirical results with this model in a shared-task setting indicate that the various verbal and non-verbal cues used by the avatar can effectively shape multiparty conversational dynamics. In addition, we identify and discuss several context variables that impact the turn-allocation process.
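To make the floor-control idea concrete, the sketch below is a minimal, hypothetical Python rendering of the behavior the abstract describes: a small set of floor intentions (hold, release, take) mapped to coordinated gaze, gesture, and speech cues. The names (FloorIntention, CuePlan, plan_cues) and the specific cue mappings are illustrative assumptions, not the paper's actual framework or implementation.

```python
# Illustrative sketch only: the paper does not publish its implementation.
# All names and cue mappings here are hypothetical, chosen to mirror the
# abstract's description of hold/release/take floor intentions rendered as
# synchronized gaze, gesture, and speech.

from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class FloorIntention(Enum):
    """The agent's intention with respect to the conversational floor."""
    HOLD = auto()     # keep the floor
    RELEASE = auto()  # yield the floor to a participant
    TAKE = auto()     # claim the floor from the current speaker


@dataclass
class CuePlan:
    """A bundle of verbal and non-verbal cues to render in synchrony."""
    gaze_target: str           # whom (or where) the embodied agent looks
    gesture: str               # accompanying gesture
    utterance: Optional[str]   # speech, if any


def plan_cues(intention: FloorIntention, addressee: str) -> CuePlan:
    """Map a floor-control intention to a coordinated multimodal cue plan."""
    if intention is FloorIntention.HOLD:
        # Gaze aversion plus a filled pause signals the agent is not done.
        return CuePlan(gaze_target="away", gesture="beat", utterance="um...")
    if intention is FloorIntention.RELEASE:
        # Direct gaze at the intended next speaker helps allocate the turn.
        return CuePlan(gaze_target=addressee, gesture="open_palm",
                       utterance=None)
    # TAKE: direct gaze, a forward lean, and an explicit verbal bid.
    return CuePlan(gaze_target=addressee, gesture="lean_forward",
                   utterance="If I may...")


if __name__ == "__main__":
    plan = plan_cues(FloorIntention.RELEASE, addressee="participant_2")
    print(plan)
```

A usage note on the design: keeping the intention set separate from the cue renderer mirrors the abstract's split between tracking intentions and communicating them, so the same intention could be rendered differently depending on context variables such as who holds the floor.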