Proceedings of the 2nd ACM Multimedia Workshop on Multimodal Conversational AI: Latest Publications

Towards Enriching Responses with Crowd-sourced Knowledge for Task-oriented Dialogue
Proceedings of the 2nd ACM Multimedia Workshop on Multimodal Conversational AI · Pub Date: 2021-11-17 · DOI: 10.1145/3475959.3485392
Ying He, Lizi Liao, Zheng Zhang, Tat-Seng Chua
Abstract: Task-oriented dialogue agents are built to assist users in completing various tasks, and generating appropriate responses for satisfactory task completion is the ultimate goal. Hence, convenient and straightforward metrics such as success rate and inform rate have been widely used to evaluate generated responses. However, beyond task completion, several other factors that largely affect user satisfaction remain under-explored. In this work, we focus on analyzing the agent behavior patterns that lead to higher user satisfaction scores. Based on these findings, we design a neural response generation model, EnRG. It combines the power of pre-trained GPT-2 in response semantic modeling with the merit of dual attention in making use of external crowd-sourced knowledge. Equipped with two gates driven by explicit dialogue act modeling, it effectively controls the usage of external knowledge sources in the form of both text and images. We conduct extensive experiments. Both automatic and human evaluation results demonstrate that, beyond comparable task completion, our proposed method manages to generate responses gaining higher user satisfaction.
Citations: 6
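The two-gate knowledge control described in the EnRG abstract can be illustrated with a minimal sketch: a hypothetical dialogue-act vector produces one scalar gate per knowledge source, and each gate scales that source's attended representation before it is mixed into the decoder state. The projection matrices and shapes below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_knowledge_fusion(hidden, text_know, image_know, act_vec,
                           rng=np.random.default_rng(0)):
    """Hypothetical sketch of two-gate knowledge control: the
    dialogue-act vector yields one sigmoid gate per knowledge
    source (text, image); each gate scales that source's attended
    representation before it is added to the decoder hidden state."""
    # Random projections stand in for learned gate parameters.
    w_text = rng.standard_normal((act_vec.shape[-1], 1))
    w_img = rng.standard_normal((act_vec.shape[-1], 1))
    g_text = sigmoid(act_vec @ w_text)  # scalar gate for textual knowledge
    g_img = sigmoid(act_vec @ w_img)    # scalar gate for visual knowledge
    return hidden + g_text * text_know + g_img * image_know
```

With a zero dialogue-act vector both gates sit at 0.5, so each knowledge source contributes half of its representation; a trained act vector would push the gates toward 0 or 1 depending on whether the current dialogue act calls for that modality.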
The Design of a Trust-based Game as a Conversational Component of Interactive Environment for a Human-agent Negotiation
Proceedings of the 2nd ACM Multimedia Workshop on Multimodal Conversational AI · Pub Date: 2021-11-17 · DOI: 10.1145/3475959.3485393
Andrey V. Vlasov, O. Zinchenko, Zhenjie Zhao, Mansur Bakaev, Arsenjy Karavaev
Abstract: This research sheds light on how humans interact with virtual partners (Figure 1; 3D view: https://p3d.in/bVJpq) in an interactive environment based on economic games, and how this environment can be applied to the training process with immersive technologies. The designed system could be integrated as a tool and component of an e-learning platform with conversational AI and human-agent interactions that allows human users to play and learn. Scientifically, we consider the trust problem from a different point of view, learning by doing (i.e., gaming), and propose that individuals can wear "trust care" lenses on trained "golden eyes" while communicating with others. We explore how contextual trust can be used to promote human-agent collaboration even in the domain of a competitive negotiation scenario. We present small-scale online testing via instant messaging in Telegram [@trudicbot] and prepare VR testing to demonstrate the potential of the trust-based game approach.
Citations: 1
Towards a Real-time Measure of the Perception of Anthropomorphism in Human-robot Interaction
Proceedings of the 2nd ACM Multimedia Workshop on Multimodal Conversational AI · Pub Date: 2021-11-17 · DOI: 10.1145/3475959.3485394
Maria Tsfasman, Avinash Saravanan, Dekel Viner, Daan Goslinga, Sarah de Wolf, Chirag Raman, C. Jonker, Catharine Oertel
Abstract: How human-like do conversational robots need to look to enable long-term human-robot conversation? One essential aspect of long-term interaction is a human's ability to adapt to the varying degrees of a conversational partner's engagement and emotions. Prosodically, this can be achieved through (dis)entrainment. While speech synthesis has been a limiting factor for many years, restrictions in this regard are increasingly mitigated. These advancements now emphasise the importance of studying the effect of robot embodiment on human entrainment. In this study, we conducted a between-subjects online human-robot interaction experiment in an educational use-case scenario where a tutor was embodied through either a human or a robot face. Forty-three English-speaking participants took part in the study, for whom we analysed the degree of acoustic-prosodic entrainment to the human or robot face, respectively. We found that the degree of subjective and objective perception of anthropomorphism positively correlates with acoustic-prosodic entrainment.
Citations: 1
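A common way to quantify the acoustic-prosodic entrainment this abstract analyses is a proximity-style correlation between a partner's pitch in each turn and the speaker's pitch in the immediately following turn. The sketch below assumes that formulation for illustration; it is not necessarily the paper's exact measure.

```python
import numpy as np

def local_entrainment(partner_f0, speaker_f0):
    """Sketch of a local proximity-style entrainment measure
    (an assumed formulation, not necessarily the paper's):
    Pearson correlation between the partner's mean F0 in each
    turn and the speaker's mean F0 in the following turn.
    Values near +1 indicate entrainment, near -1 disentrainment."""
    partner = np.asarray(partner_f0, dtype=float)
    speaker = np.asarray(speaker_f0, dtype=float)
    # Element i pairs (partner turn i, speaker response i).
    return np.corrcoef(partner, speaker)[0, 1]
```

In practice the per-turn values would come from an F0 tracker, and the same correlation can be computed over other prosodic features such as intensity or speech rate.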
iFetch
Proceedings of the 2nd ACM Multimedia Workshop on Multimodal Conversational AI · Pub Date: 2021-11-17 · DOI: 10.1145/3475959.3485395
R. Sousa, Pedro Ferreira, P. Costa, Pedro Azevedo, João Costeira, Carlos Santiago, João Magalhães, David Semedo, Rafael Ferreira, Alexander I. Rudnicky, Alexander Hauptmann
Abstract: Most of the interaction between large organizations and their users will be mediated by AI agents in the near future. This perception is becoming undisputed as online shopping dominates entire market segments and the new "digitally-native" generations become consumers. iFetch is a new generation of task-oriented conversational agents that interact with users seamlessly using verbal and visual information. Through the conversation, iFetch provides targeted advice and a "physical store-like" experience while maintaining user engagement. This context entails the following vital components: 1) highly complex memory models that keep track of the conversation, 2) extraction of key semantic features from language and images that reveal user intent, 3) generation of multimodal responses that keep users engaged in the conversation, and 4) an interrelated knowledge base of products from which to extract relevant product lists.
Citations: 4
Conversational AI Efforts within Facebook AI Applied Research
Proceedings of the 2nd ACM Multimedia Workshop on Multimodal Conversational AI · Pub Date: 2021-11-17 · DOI: 10.1145/3475959.3478678
A. Geramifard
Abstract: The goal of the conversational AI team at Facebook AI Applied Research is to create AI-driven dialog capabilities with an augmented/virtual reality product focus. This talk provides an overview of our recent efforts on data collection, multimodal dialog, pipelined model-based policies, and end-to-end architectures.
Citations: 0
Proceedings of the 2nd ACM Multimedia Workshop on Multimodal Conversational AI
Pub Date: 2021-11-17 · DOI: 10.1145/3475959
Citations: 0