{"title":"Can a Signing Virtual Human Engage a Baby's Attention?","authors":"Setareh Nasihati Gilani, D. Traum, R. Sortino, Grady Gallagher, Kailyn Aaron-Lozano, C. Padilla, Ari Shapiro, Jason Lamberton, L. Petitto","doi":"10.1145/3308532.3329463","DOIUrl":"https://doi.org/10.1145/3308532.3329463","url":null,"abstract":"The child developmental period of ages 6-12 months marks a widely understood \"critical period\" for healthy language learning, during which, failure to receive exposure to language can place babies at risk for language and reading problems spanning life. Deaf babies constitute one vulnerable population as they can experience dramatically reduced or no access to usable linguistic input during this period. Technology has been used to augment linguistic input (e.g., auditory devices; language videotapes) but research finds limitations in learning. We evaluated an AI system that uses an Avatar (provides language and socially contingent interactions) and a robot (aids attention to the Avatar) to facilitate infants' ability to learn aspects of American Sign Language (ASL), and asked three questions: (1) Can babies with little/no exposure to ASL distinguish among the Avatar's different conversational modes (Linguistic Nursery Rhymes; Social Gestures; Idle/nonlinguistic postures; 3rd person observer)? (2) Can an Avatar stimulate babies' production of socially contingent responses, and crucially, nascent language responses? (3) What is the impact of parents' presence/absence of conversational participation? Surprisingly, babies (i) spontaneously distinguished among Avatar conversational modes, (ii) produced varied socially contingent responses to Avatar's modes, and (iii) parents influenced an increase in babies' response tokens to some Avatar modes, but the overall categories and pattern of babies' behavioral responses remained proportionately similar irrespective of parental participation. Of note, babies produced the greatest percentage of linguistic responses to the Avatar's Linguistic Nursery Rhymes versus other Avatar conversational modes. This work demonstrates the potential for Avatars to facilitate language learning in young babies.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125039901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Individuals in a Romantic Relationship Express Guilt and Devaluate Attractive Alternatives after Flirting with a Virtual Bartender","authors":"Y. R. Chen, G. Birnbaum, J. Giron, D. Friedman","doi":"10.1145/3308532.3329420","DOIUrl":"https://doi.org/10.1145/3308532.3329420","url":null,"abstract":"Interactions with virtual agents may have psychological and behavioral implications, even if the participants know that they are interacting with a virtual entity. As virtual agents are gradually becoming part of human society, it is important to understand the extent to which virtual encounters can affect our daily lives, and whether engaging in a specific behavior with virtual humans affects the way that individuals perceive and asses real humans in their surroundings. We examined the effect that seductive interplays might have on individuals in committed relationships and their way of managing a virtual threat to their relationship. One hundred and thirty heterosexual participants conversed with an opposite-sex virtual human in a virtual reality (VR) setup in either a seductive or neutral way. Shortly after, participants were interviewed by an attractive opposite-sex confederate. Results revealed that participants in the seductive condition felt increased feelings of guilt, and that participants in the seductive condition were more prone to devaluate the sexual and intellectual attractiveness of the confederate than participants in the neutral condition. This study thus demonstrates, for the first time, that flirting with a virtual human may influence real-life attitudes towards real people.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115109026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Time to Go ONLINE! A Modular Framework for Building Internet-based Socially Interactive Agents","authors":"Mihai Polceanu, C. Lisetti","doi":"10.1145/3308532.3329452","DOIUrl":"https://doi.org/10.1145/3308532.3329452","url":null,"abstract":"Although socially interactive agents have emerged as a new metaphor for human-computer interaction, they are, to date, absent from the Internet. We describe the design choices, implementation, and challenges in building EEVA, the first fully integrated platform-independent framework for deploying realistic 3D web-based social agents: with real-time multimodal perception of, and response to, the user's verbal and non-verbal social cues, EEVA agents are capable of communicating rich customizable content to users in real time, while building and maintaining users' profiles for long-term interactions. The modularity of the EEVA framework enables it to be used as a testbed for agents' social communication model development of increasing performance and sophistication (e.g. building rapport, expressing empathy). We discuss the framework's feasibility by analyzing the response time of the system over the Internet, in the context of a health intervention built using EEVA authoring functionalities.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134523443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interdisciplinary Collaboration and Establishment of Requirements for a 3D Interactive Virtual Training for Teachers","authors":"A. Delamarre, Stephanie J. Lunn, Cédric Buche, E. Shernoff, S. Frazier, C. Lisetti","doi":"10.1145/3308532.3329439","DOIUrl":"https://doi.org/10.1145/3308532.3329439","url":null,"abstract":"Simulation-based training systems have proven effective in a variety of domains, both for facilitating the learning of skills as well for applying this knowledge to real life. Although difficulties managing students' disruptive behavior in classrooms has been identified as one of the main causes of teachers' turnover, only a handful of virtual training environments have focused on providing training to teachers, and still no clear methodologies exist for their design, their implementation, nor their evaluation. In this article we discuss the methodologies employed by an interdisciplinary team of computer science and education researchers involved the development of the first of four iterative, increasingly sophisticated, prototypes of a web-based 3D Interactive Virtual Training Environment for Teachers (IVT-T). IVT-T simulates students with disruptive behaviors that teachers can interact with in a 3D virtual classroom, which provides teachers practice in managing classrooms, as well as feedback and reflection opportunities about their classroom behavior management skills. We currently describe the processes we conducted to derive the main system requirements for IVT-T 1.0 (the system is still evolving), which led to our suggestions for general requirements, in addition to the next lifecycle steps we identified for the successful implementation of the final system.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134553988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards an Adaptive Regulation Scaffolding through Role-based Strategies","authors":"Sooraj Krishna, C. Pelachaud, Arvid Kappas","doi":"10.1145/3308532.3329412","DOIUrl":"https://doi.org/10.1145/3308532.3329412","url":null,"abstract":"Agents (virtual/physical) in a learning environment can be introduced in different roles, such as a tutor, mentor, motivator, expert, peer student etc. Each agent type brings an expertise, creating a unique social relationship with students. Depending on their role, agents have specific goals and beliefs, as well as attitudes towards the learners, thereby influencing different aspects of learning such as cognitive, affective and meta-cognitive processes in a learner. The proposed research will primarily investigate the meta-cognitive aspect of self-regulation in collaborative learning interactions and its variations with various scaffolding strategies based on agent roles. The learning interaction will be based on the socially shared regulation model of self regulation, which accommodates the social context of self regulated learning created by agents in multiples roles and behaviours. The objectives of this research will be to understand how various roles and behaviours of the agents would influence the self regulation skills of the learner and to design a role-based strategy selection model for regulation scaffolding, based on the behavioural, motivational and cognitive measures of the learning interaction.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131789591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NEMSI","authors":"David Suendermann-Oeft, Amanda Robinson, Andrew Cornish, Doug Habberstad, D. Pautler, Dirk Schnelle-Walka, Franz Haller, J. Liscombe, Michael Neumann, Mike Merrill, Oliver Roesler, Renko Geffarth","doi":"10.1145/3308532.3329415","DOIUrl":"https://doi.org/10.1145/3308532.3329415","url":null,"abstract":"We present NEMSI, a cloud-based multimodal dialog system designed to have naturalistic interactions with individuals for the purpose of screening neurological or mental conditions. The system has been used by thousands of people capturing audio and video responses to open-ended questions and structured health surveys.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122121570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Interdependent Model of Personality, Motivation, Emotion, and Mood for Intelligent Virtual Agents","authors":"Maayan Shvo, Jakob Buhmann, Mubbasir Kapadia","doi":"10.1145/3308532.3329474","DOIUrl":"https://doi.org/10.1145/3308532.3329474","url":null,"abstract":"Building intelligent agents that can believably interact with humans is a difficult yet important task in a host of applications, including therapy, education, and entertainment. We submit that in order to enhance believability, the agent's affective state should be accurately modeled and should realistically influence the agent's behavior. We propose a computational model of affect which incorporates an empirically-based interplay between its various affective components - personality, motivation, emotion, and mood. Further, our model captures a number of salient mechanisms that are observable in humans and that influence the agent's behavior. We are therefore hopeful that our model will facilitate more engaging and meaningful human-agent interactions. We evaluate our model and illustrate its efficacy, as well as the importance of the different components in the model and their interplay.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126592875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modelling an Embodied Conversational Agent for Remote and Isolated Caregivers on Leadership Styles","authors":"Aryana Collins Jackson, Elisabetta Bevacqua, P. D. Loor, R. Querrec","doi":"10.1145/3308532.3329411","DOIUrl":"https://doi.org/10.1145/3308532.3329411","url":null,"abstract":"In a medical environment, coordination between medical staff is imperative. In cases in which a human doctor or medical coordinator is not present, patient care, particularly from non-experts, becomes more difficult. The difficulty increases when care is completed at a remote site, for example, on a manned mission to Mars. Communication capability from medical experts on Mars is limited. To address this problem, a medical assistant remote system is proposed to act as a coordinator between the humans present and the remote medical experts. A virtual agent assuming such a role will accept feedback from both, running the situation without errors and additional stress. Leadership styles will be employed by the agent to develop trust and perception of competence among its followers. Additionally, prediction of behaviour and situational changes by both medical professionals and by the agent are necessary in order to combat a 10-minute latency affecting communication between Earth and Mars.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126191223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Text-driven Visual Prosody Generation for Embodied Conversational Agents","authors":"Jiali Chen, Yong Liu, Zhimeng Zhang, Changjie Fan, Yu Ding","doi":"10.1145/3308532.3329445","DOIUrl":"https://doi.org/10.1145/3308532.3329445","url":null,"abstract":"In face-to-face conversations, head motions play a crucial role in encoding information, and humans are very skilled at decoding multiple messages from interlocutors' head motions. It is of great importance to endow embodied conversational agents (ECAs) with the capability of conveying communicative intention through head movements. Our work is aimed at automatically synthesizing head motions for an ECA speaking Chinese. We propose to take only transcripts as input to compute head movements, based on a statistical framework. Subjective experiments are conducted to validate the proposed statistical framework. The results show that the generated head animation is able to improve human perception in terms of naturalness and demonstrate that the head animation is synchronized with the input of synthetic speech.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133306131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multimodal Cues of the Sense of Presence and Co-presence in Human-Virtual Agent Interaction","authors":"M. Ochs, Jeremie Bousquet, P. Blache","doi":"10.1145/3308532.3329438","DOIUrl":"https://doi.org/10.1145/3308532.3329438","url":null,"abstract":"A key challenge when studying human-agent interaction is the evaluation of user's experience. In virtual reality, this question is addressed by studying the sense of \"presence'' and\"co-presence'', generally assessed thanks to well-grounded subjective post-experience questionnaires. In this article, we aim at exploring behavioral measures of presence and co-presence by analyzing multimodal cues produced during an interaction both by the user and the virtual agent. In our study, we started from a corpus of human-agent interaction collected in a task-oriented context: a virtual environment aiming at training doctors to break bad news to a patient (played by a virtual agent). Based on this corpus, we have used machine learning algorithms to explore the possibility of predicting user's sense of presence and co-presence. In particular, we have applied and compared two techniques, Random forest and SVM, both showing very good results in predicting the level of presence and co-presence.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133073012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}