Understanding How People Rate Their Conversations

A. Papangelis, Nicole Chartier, Pankaj Rajan, J. Hirschberg, Dilek Z. Hakkani-Tür
DOI: 10.48550/arXiv.2206.00167
Published in: International Workshop on Spoken Dialogue Systems Technology, June 2022
Citations: 0

Abstract

User ratings play a significant role in spoken dialogue systems. Typically, such ratings are averaged across all users and then used as feedback to improve the system or personalize its behavior. While this method can be useful for understanding broad, general issues with the system and its behavior, it does not take into account differences between users that affect their ratings. In this work, we conduct a study to better understand how people rate their interactions with conversational agents. One macro-level characteristic that has been shown to correlate with how people perceive their interpersonal communication is personality [1, 2, 12]. We specifically focus on agreeableness and extraversion as variables that may explain variation in ratings and therefore provide a more meaningful signal for training or personalization. In order to elicit those personality traits during an interaction with a conversational agent, we designed and validated a fictional story, grounded in prior work in psychology. We then implemented the story in an experimental conversational agent that allowed users to opt in to hearing the story. Our results suggest that for human-conversational agent interactions, extraversion may play a role in user ratings, but more data is needed to determine whether the relationship is significant. Agreeableness, on the other hand, plays a statistically significant role in conversation ratings: users who are more agreeable are more likely to provide a higher rating for their interaction. In addition, we found that users who opted to hear the story were, in general, more likely to rate their conversational experience higher than those who did not.
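The kind of trait-versus-rating analysis described above can be illustrated with a small, self-contained sketch. The data below is hypothetical (the paper's actual dataset and test statistic are not given in this abstract); the sketch simply shows one standard way to check whether a personality score is significantly associated with conversation ratings, using a Pearson correlation and a permutation test implemented with only the Python standard library:

```python
import random
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def permutation_pvalue(xs, ys, n_perm=2000, seed=0):
    """Two-sided permutation test for the correlation: shuffle the ratings
    many times and count how often a correlation at least as extreme as the
    observed one occurs by chance."""
    rng = random.Random(seed)
    observed = abs(pearson_r(xs, ys))
    ys = list(ys)  # copy so the caller's data is untouched
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(ys)
        if abs(pearson_r(xs, ys)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Hypothetical per-user data: agreeableness score (1-5) and conversation rating (1-5).
agreeableness = [4.5, 3.0, 4.0, 2.5, 5.0, 3.5, 4.2, 2.0, 4.8, 3.2]
ratings       = [5,   3,   4,   2,   5,   4,   4,   2,   5,   3]

r = pearson_r(agreeableness, ratings)
p = permutation_pvalue(agreeableness, ratings)
print(f"r = {r:.2f}, p = {p:.3f}")
```

On this toy sample the correlation is strongly positive and the permutation p-value falls below the usual 0.05 threshold, mirroring the direction of the abstract's finding for agreeableness. A permutation test is used here only because it needs no distributional assumptions; the paper may well have used a different test.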