{"title":"Building the dream team: children's reactions to virtual agents that model collaborative talk","authors":"Joseph B. Wiggins, Toni V. Earle-Randell, Dolly Bounajim, Yingbo Ma, Julián Ruiz, Ruohan Liu, M. Celepkolu, Maya Israel, E. Wiebe, Collin Lynch, K. Boyer","doi":"10.1145/3514197.3549683","DOIUrl":"https://doi.org/10.1145/3514197.3549683","url":null,"abstract":"Intelligent virtual agents have tremendous potential for facilitating collaborative learning by modeling and reinforcing desirable collaborative practices. Despite recent work in this area, the extent to which intelligent virtual agents can facilitate improvements in the collaborative behavior of children is largely unknown. This study employed a wizard-of-oz study design and investigated elementary children's collaborative behavior after interacting with virtual agents. These agents model exploratory talk for upper elementary school dyads, such as asking higher-order questions and listening to their partners. The findings uncover associations between elementary learner dyads' positive changes in collaboration after agent interventions, the dyads' affective reactions to interventions, and their attentiveness to the agents. Our results also reveal associations between positive changes in collaboration and the timing of interventions: for example, earlier interventions had a higher occurrence of positive changes, and positive changes in collaboration typically happened within five seconds of interventions. 
The results suggest ways in which intelligent virtual agents may be used to promote effective collaborative learning practices for children.","PeriodicalId":149593,"journal":{"name":"Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129484634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Determining most suitable listener backchannel type for speaker's utterance","authors":"Akira Morikawa, Ryo Ishii, H. Noto, A. Fukayama, Takao Nakamura","doi":"10.1145/3514197.3549619","DOIUrl":"https://doi.org/10.1145/3514197.3549619","url":null,"abstract":"A major hurdle in achieving a dialogue system that enables smooth dialogue is determining how to generate an appropriate response to a user's utterance. Previous research has focused mainly on estimating whether to produce a backchannel in response to the user's utterance. We go one step further by examining, for the first time, the relationship between the type of backchannel to be used and the intent and type of the speaker's utterance, known as a dialogue act (DA). Specifically, we propose a new method for classifying backchannels into nine types. We also created a corpus consisting of the DAs of speaker utterances and the backchannel types of listener utterances, then used it to analyze the relationship between a speaker's and a listener's utterances. Our findings clarify that the occurrence frequencies of a listener's backchannel types significantly depend on the DAs of the speaker's utterances. 
Since the goal of our research is to construct a dialogue system that generates more natural backchannels, this classification method, which determines suitable backchannel types from the speaker's DA, will be beneficial to such a system.","PeriodicalId":149593,"journal":{"name":"Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132619663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can we talk about bruno?: exploring virtual human counselors' spoken accents and their impact on users' conversations","authors":"Pedro Guillermo Feijóo García, Mohan S Zalake, Heng Yao, A. G. D. Siqueira, Benjamin C. Lok","doi":"10.1145/3514197.3549694","DOIUrl":"https://doi.org/10.1145/3514197.3549694","url":null,"abstract":"Counseling requires intimacy between a counselor and a patient to reach healing and growth. However, building rapport between virtual human counselors and computing college students is a complex problem. It requires understanding students' experiences and goals, as well as the effects that characteristics of a virtual human counselor, such as spoken accent, have on the interaction with a patient with regard to messenger credibility and self-disclosure. This paper reports findings on how virtual human counselors' spoken accents impact computing undergraduate students' mental wellness conversations with respect to students' self-reported language background: monolingual or multilingual. We developed two English-speaking rapport-building virtual humans, each with a different spoken English accent (American or German), to interview 62 undergraduate computing students from a North American campus. Our findings suggest that virtual humans' spoken accents impacted students' perceptions of the virtual humans' speaking skills. 
Additionally, we found a similarity-attraction effect between monolingual English speakers and the American-English-accented virtual human counselor concerning participants' engagement and perceptions of the virtual human's speaking skills.","PeriodicalId":149593,"journal":{"name":"Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131131736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Examining the impact of emotion and agency on negotiator behavior","authors":"Eugene Lee, Zachary McNulty, Alex Gentle, Prerak Tusharkumar Pradhan, J. Gratch","doi":"10.1145/3514197.3549673","DOIUrl":"https://doi.org/10.1145/3514197.3549673","url":null,"abstract":"Virtual human expressions can shape user behavior [1, 2, 3], yet in negotiation, findings have been underwhelming. For example, human negotiators can use anger to claim value (i.e., extract concessions) [4], but anger has no effect when exhibited by a virtual human [5]. Other psychological work suggests that emotions can create value (e.g., happy negotiators can better discover tradeoffs across issues that \"grow the pie\"), but little research has examined how virtual human expressions shape value creation. Here we present an agent architecture and pilot study that examines differences between how the emotional expressions of human and virtual-human opponents shape value claiming and value creation. We replicate the finding that virtual human anger fails to influence value claiming but discover counter-intuitive findings on value creation. We argue these findings highlight the potential for intelligent virtual humans to yield insight into human psychology.","PeriodicalId":149593,"journal":{"name":"Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114722324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Motivating health behavior change with a storytelling virtual agent","authors":"Hye Sun Yun, Matias Volonte, T. Bickmore","doi":"10.1145/3514197.3549684","DOIUrl":"https://doi.org/10.1145/3514197.3549684","url":null,"abstract":"We developed a virtual agent that motivates church-going users to change their health behavior by telling existing cultural narratives that have high relevance with the counseling topic in an engaging way. We evaluated this agent in a between-subjects experiment where participants interacted with an agent that counseled them on nutrition either without a story, with a story but told in a neutral speech style, or with a story using dramatic delivery inspired by church sermons. We found that interaction with either one of the storytelling agents leads to a significantly greater change in confidence to engage in the target behavior of healthy eating than interacting with a non-storytelling agent, demonstrating the efficacy of stories in health counseling by virtual agents.","PeriodicalId":149593,"journal":{"name":"Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122604434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A cautionary tale of side-by-side evaluations while developing emotional expression for intelligent virtual agents","authors":"R. Rodrigues, Ricardo Silva, Ricardo Pereira, C. Martinho","doi":"10.1145/3514197.3549672","DOIUrl":"https://doi.org/10.1145/3514197.3549672","url":null,"abstract":"When designing interactive scenarios that depend on emotion expression, it is imperative to consider the levels of recognition associated with said expressions, to ascertain whether or not an acceptable degree of emotional communication has been achieved. In this work, two experiments were conducted with that aim, one asking participants to compare two different versions of an application side-by-side when conveying a specific emotion, and another asking the participants to recognize the emotion being expressed in each version. We found that, for some emotions, the approach rated higher in terms of emotion expression during the side-by-side comparison would not translate to the approach with a higher emotion recognition in the second experiment. Although this discrepancy is generally consistent with what happens with emotion recognition in humans, it is noteworthy that some higher-rated choices ended up not being as effective in the expression of emotion. 
We discuss how these discrepancies might have originated from forced-choice and feature dominance, and why context should be taken into account when designing experiments.","PeriodicalId":149593,"journal":{"name":"Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125588747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extending the menge crowd simulation framework: visual authoring in unity","authors":"Michelangelo Diamanti, H. Vilhjálmsson","doi":"10.1145/3514197.3549698","DOIUrl":"https://doi.org/10.1145/3514197.3549698","url":null,"abstract":"Crowd simulation research strives to advance the realism of large groups of intelligent virtual agents. There have been several efforts to create common frameworks that can expedite collaboration among researchers. In this paper, we propose three extensions to one such framework, Menge, making it easier to use with Unity 3D.","PeriodicalId":149593,"journal":{"name":"Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents","volume":"141 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116942610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual backlash: nonverbal expression of dominance leads to less liking of dominant female versus male agents","authors":"Janet Wessler, T. Schneeberger, Leon Christidis, Patrick Gebhard","doi":"10.1145/3514197.3549682","DOIUrl":"https://doi.org/10.1145/3514197.3549682","url":null,"abstract":"Backlash is a form of social penalty occurring when people behave counter-stereotypically. When promoting themselves, dominant females are typically liked less and paid worse than dominant males, because dominance is associated with males and proscribed for females. Such backlash effects have been shown in human-human interactions, but attempts to replicate them in human-agent interactions have not been successful so far [40]. Here, the goal was to show backlash effects for virtual agents with a nonverbal manipulation of dominance. In an online experiment, N = 223 participants watched the video of a female or male virtual agent presenting themselves as a career coach while using either large or small gestures. They rated the agent on dominance, liking, and competence, and made a monetary offer of how much they would pay for the coaching. Agents using large gestures were perceived as more dominant than those using small gestures. Moreover, a backlash effect emerged: dominant female agents were liked less than dominant male agents. Participants did not, however, penalize the dominant female agent in their monetary offers. Overall, participants rated the female agents as less competent than male ones. 
The results underline the importance of considering effects of the agent's gender in research on human-agent interaction.","PeriodicalId":149593,"journal":{"name":"Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134365705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How does a virtual human earn your trust?: guidelines to improve willingness to self-disclose to intelligent virtual agents","authors":"Christopher You, Rashi Ghosh, Andrew Maxim, J. Stuart, Eric J. Cooks, Benjamin C. Lok","doi":"10.1145/3514197.3549686","DOIUrl":"https://doi.org/10.1145/3514197.3549686","url":null,"abstract":"Virtual humans demonstrate the ability to act as non-judgmental conversational partners, eliciting greater self-disclosure. However, it is unclear which virtual human and conversational characteristics matter when self-disclosing. To address this gap, we conducted a set of qualitative, semi-formal interviews (n = 17) among computer science students to investigate participants' mental models of willingness to disclose to virtual humans and the characteristics of virtual humans that affect their self-disclosure. Our findings indicate that participants' mental models of virtual humans are largely inconsistent with current literature. This inconsistency appears to elicit hesitancy and discomfort with virtual humans. Furthermore, trust and listening were identified as two primary characteristics of a virtual human interaction that contribute to willingness to disclose. These characteristics were also valued differently for virtual humans than for real humans. 
From the interviews, we derive guidelines for designing virtual human interactions and conversations that elicit greater willingness to disclose.","PeriodicalId":149593,"journal":{"name":"Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents","volume":"145 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134034307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comprehensive guidelines for emotion annotation","authors":"Md. Adnanul Islam, Md. Saddam Hossain Mukta, P. Olivier, Md. Mahbubur Rahman","doi":"10.1145/3514197.3549640","DOIUrl":"https://doi.org/10.1145/3514197.3549640","url":null,"abstract":"Emotions are psychological traits associated with an individual's thoughts, feelings, behavioral responses, and experiences of pleasure and displeasure. The ability to recognise a conversational partner's emotional state from their speech (and respond accordingly) is a longstanding requirement of a fully capable intelligent virtual agent. However, although current approaches to emotion recognition depend primarily on supervised machine learning models, there are no comprehensive guidelines for annotating the corpora used to train such models with emotion labels. We present comprehensive guidelines for consistent and effective annotation of text corpora with emotion labels. In particular, our proposal directly addresses the requirements of multi-label emotion recognition, and we demonstrate how an implementation of our proposed guidelines led to a substantially (30%) higher agreement score among human annotators.","PeriodicalId":149593,"journal":{"name":"Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132651144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}