Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents: Latest Publications

Multimodal Cues of the Sense of Presence and Co-presence in Human-Virtual Agent Interaction
M. Ochs, Jeremie Bousquet, P. Blache
DOI: 10.1145/3308532.3329438 (https://doi.org/10.1145/3308532.3329438) · Published: 2019-07-01
Abstract: A key challenge when studying human-agent interaction is the evaluation of the user's experience. In virtual reality, this question is addressed by studying the sense of "presence" and "co-presence", generally assessed with well-grounded subjective post-experience questionnaires. In this article, we aim at exploring behavioral measures of presence and co-presence by analyzing multimodal cues produced during an interaction both by the user and the virtual agent. In our study, we started from a corpus of human-agent interaction collected in a task-oriented context: a virtual environment aiming at training doctors to break bad news to a patient (played by a virtual agent). Based on this corpus, we have used machine learning algorithms to explore the possibility of predicting the user's sense of presence and co-presence. In particular, we have applied and compared two techniques, Random Forest and SVM, both showing very good results in predicting the level of presence and co-presence.
Citations: 1
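The abstract above compares Random Forest and SVM classifiers for predicting presence from multimodal cues. The corpus and feature set are not public, so the sketch below uses synthetic stand-in features (the feature names are assumptions, not the paper's) to illustrate the comparison with scikit-learn:

```python
# Illustrative sketch only: synthetic "multimodal cue" features and synthetic
# presence labels stand in for the paper's (non-public) doctor-training corpus.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical per-interaction features: e.g. mean gaze duration, speech rate,
# gesture frequency, speech-overlap ratio.
X = rng.normal(size=(n, 4))
# Synthetic binary "level of presence" label, loosely tied to the features.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

results = {}
for name, clf in [("Random Forest", RandomForestClassifier(n_estimators=100, random_state=0)),
                  ("SVM", SVC(kernel="rbf"))]:
    # 5-fold cross-validated accuracy for each classifier.
    results[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean accuracy {results[name]:.2f}")
```

Both models recover the synthetic signal well above chance, mirroring the paper's finding that either technique can predict the presence level.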
An End-to-End Conversational Style Matching Agent
Rens Hoegen, Deepali Aneja, Daniel J. McDuff, M. Czerwinski
DOI: 10.1145/3308532.3329473 (https://doi.org/10.1145/3308532.3329473) · Published: 2019-04-04
Abstract: We present an end-to-end voice-based conversational agent that is able to engage in naturalistic multi-turn dialogue and align with the interlocutor's conversational style. The system uses a series of deep neural network components for speech recognition, dialogue generation, prosodic analysis, and speech synthesis to generate language and prosodic expression with qualities that match those of the user. We conducted a user study (N=30) in which participants talked with the agent for 15 to 20 minutes, resulting in over 8 hours of natural interaction data. Users with high-consideration conversational styles reported the agent to be more trustworthy when it matched their conversational style, whereas users with high-involvement conversational styles were indifferent. Finally, we provide design guidelines for multi-turn dialogue interactions using conversational style adaptation.
Citations: 46
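The system described above matches the user's prosodic expression across turns. The paper's pipeline uses deep networks for recognition, generation, and synthesis; the sketch below only illustrates the adaptation idea with two hypothetical prosodic parameters (speaking rate and pitch, both assumed for illustration) nudged toward the user's observed values each turn:

```python
# Minimal sketch of prosodic style matching: each turn, move every agent
# prosody parameter a fraction `alpha` of the way toward the user's value.
# Parameters and values are illustrative assumptions, not the paper's.

def match_style(agent, user_obs, alpha=0.3):
    """Interpolate agent prosody toward the user's observed prosody."""
    return {k: agent[k] + alpha * (user_obs[k] - agent[k]) for k in agent}

agent = {"rate_wpm": 160.0, "pitch_hz": 120.0}
# Hypothetical per-turn measurements of the user's prosody.
user_turns = [{"rate_wpm": 190.0, "pitch_hz": 210.0}] * 5
for obs in user_turns:
    agent = match_style(agent, obs)
print(agent)  # agent prosody has drifted toward the user's over five turns
```

Gradual interpolation (rather than copying the user outright) keeps the agent's voice stable from turn to turn while still converging on the user's style.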
Analyzing Input and Output Representations for Speech-Driven Gesture Generation
Taras Kucherenko, Dai Hasegawa, G. Henter, Naoshi Kaneko, H. Kjellström
DOI: 10.1145/3308532.3329472 (https://doi.org/10.1145/3308532.3329472) · Published: 2019-03-08
Abstract: This paper presents a novel framework for automatic speech-driven gesture generation, applicable to human-agent interaction including both virtual agents and robots. Specifically, we extend recent deep-learning-based, data-driven methods for speech-driven gesture generation by incorporating representation learning. Our model takes speech as input and produces gestures as output, in the form of a sequence of 3D coordinates. Our approach consists of two steps. First, we learn a lower-dimensional representation of human motion using a denoising autoencoder neural network, consisting of a motion encoder MotionE and a motion decoder MotionD. The learned representation preserves the most important aspects of the human pose variation while removing less relevant variation. Second, we train a novel encoder network SpeechE to map from speech to a corresponding motion representation with reduced dimensionality. At test time, the speech encoder and the motion decoder networks are combined: SpeechE predicts motion representations based on a given speech signal and MotionD then decodes these representations to produce motion sequences. We evaluate different representation sizes in order to find the most effective dimensionality for the representation. We also evaluate the effects of using different speech features as input to the model. We find that mel-frequency cepstral coefficients (MFCCs), alone or combined with prosodic features, perform the best. The results of a subsequent user study confirm the benefits of the representation learning.
Citations: 112
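The two-step scheme above (learn a motion representation, then map speech into it) can be sketched with linear stand-ins: a truncated SVD plays the role of the MotionE/MotionD autoencoder, and a least-squares fit plays the role of SpeechE. This is a deliberate simplification of the paper's deep networks, on fully synthetic data:

```python
# Linear sketch of the two-step pipeline. Truncated SVD substitutes for the
# denoising autoencoder (MotionE/MotionD); least squares substitutes for the
# deep SpeechE encoder. All data is synthetic.
import numpy as np

rng = np.random.default_rng(1)
T, pose_dim, mfcc_dim, latent_dim = 500, 45, 26, 8

# Synthetic paired data: motion driven by speech through a low-rank map.
speech = rng.normal(size=(T, mfcc_dim))                 # stand-in MFCC frames
W_true = rng.normal(size=(mfcc_dim, 5)) @ rng.normal(size=(5, pose_dim)) / np.sqrt(5)
motion = speech @ W_true + 0.1 * rng.normal(size=(T, pose_dim))

# Step 1: learn a low-dimensional motion representation (MotionE / MotionD).
U, S, Vt = np.linalg.svd(motion, full_matrices=False)
def motion_encode(m):   # MotionE: project poses onto the top components
    return m @ Vt[:latent_dim].T
def motion_decode(z):   # MotionD: reconstruct poses from the representation
    return z @ Vt[:latent_dim]

# Step 2: train SpeechE to map speech features to the motion representation.
Z = motion_encode(motion)
W_speech, *_ = np.linalg.lstsq(speech, Z, rcond=None)
def speech_encode(s):   # SpeechE: speech features -> motion representation
    return s @ W_speech

# Test time: SpeechE predicts representations, MotionD decodes them to poses.
pred = motion_decode(speech_encode(speech))
err = np.mean((pred - motion) ** 2) / np.mean(motion ** 2)
print(f"relative reconstruction error: {err:.4f}")
```

Because the synthetic motion is intrinsically low-rank, an 8-dimensional representation recovers it almost exactly; the paper's evaluation of different representation sizes corresponds to varying `latent_dim` here.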