Assessing User Bias in Affect Detection within Context-Based Spoken Dialog Systems

S. Lutfi, F. Martínez, Andrés Casanova-García, Lorena Lopez-Lebon, J. Montero-Martínez
{"title":"在基于上下文的口语对话系统中评估影响检测中的用户偏见","authors":"S. Lutfi, F. Martínez, Andrés Casanova-García, Lorena Lopez-Lebon, J. Montero-Martínez","doi":"10.1109/SocialCom-PASSAT.2012.112","DOIUrl":null,"url":null,"abstract":"This paper presents an empirical evidence of user bias within a laboratory-oriented evaluation of a Spoken Dialog System. Specifically, we addressed user bias in their satisfaction judgements. We question the reliability of this data for modeling user emotion, focusing on contentment and frustration in a spoken dialog system. This bias is detected through machine learning experiments that were conducted on two datasets, users and annotators, which were then compared in order to assess the reliability of these datasets. The target used was the satisfaction rating and the predictors were conversational/dialog features. Our results indicated that standard classifiers were significantly more successful in discriminating frustration and contentment and the intensities of these emotions (reflected by user satisfaction ratings) from annotator data than from user data. Indirectly, the results showed that conversational features are reliable predictors of the two abovementioned emotions.","PeriodicalId":129526,"journal":{"name":"2012 International Conference on Privacy, Security, Risk and Trust and 2012 International Confernece on Social Computing","volume":"146 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Assessing User Bias in Affect Detection within Context-Based Spoken Dialog Systems\",\"authors\":\"S. Lutfi, F. Martínez, Andrés Casanova-García, Lorena Lopez-Lebon, J. Montero-Martínez\",\"doi\":\"10.1109/SocialCom-PASSAT.2012.112\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper presents an empirical evidence of user bias within a laboratory-oriented evaluation of a Spoken Dialog System. Specifically, we addressed user bias in their satisfaction judgements. We question the reliability of this data for modeling user emotion, focusing on contentment and frustration in a spoken dialog system. This bias is detected through machine learning experiments that were conducted on two datasets, users and annotators, which were then compared in order to assess the reliability of these datasets. The target used was the satisfaction rating and the predictors were conversational/dialog features. Our results indicated that standard classifiers were significantly more successful in discriminating frustration and contentment and the intensities of these emotions (reflected by user satisfaction ratings) from annotator data than from user data. 
Indirectly, the results showed that conversational features are reliable predictors of the two abovementioned emotions.\",\"PeriodicalId\":129526,\"journal\":{\"name\":\"2012 International Conference on Privacy, Security, Risk and Trust and 2012 International Confernece on Social Computing\",\"volume\":\"146 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-09-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2012 International Conference on Privacy, Security, Risk and Trust and 2012 International Confernece on Social Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SocialCom-PASSAT.2012.112\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 International Conference on Privacy, Security, Risk and Trust and 2012 International Confernece on Social Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SocialCom-PASSAT.2012.112","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

This paper presents empirical evidence of user bias in a laboratory-oriented evaluation of a Spoken Dialog System. Specifically, we address user bias in users' satisfaction judgements and question the reliability of these data for modeling user emotion, focusing on contentment and frustration in a spoken dialog system. The bias is detected through machine learning experiments conducted on two datasets, one labeled by users and one by annotators, which are then compared to assess their reliability. The target was the satisfaction rating and the predictors were conversational/dialog features. Our results indicate that standard classifiers were significantly more successful at discriminating frustration and contentment, and the intensities of these emotions (reflected by user satisfaction ratings), from annotator data than from user data. Indirectly, the results show that conversational features are reliable predictors of these two emotions.
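
As a rough illustration of the experimental setup the abstract describes, the sketch below trains standard classifiers on conversational/dialog features with the satisfaction rating as the target, once with user-provided labels and once with annotator labels, and compares cross-validated accuracy. The file names, feature columns, and the particular classifiers (an SVM and a decision tree via scikit-learn) are assumptions for illustration only; the paper does not describe its implementation at this level of detail.

```python
# Minimal sketch (not the authors' code): compare how well standard
# classifiers recover satisfaction ratings from conversational/dialog
# features when the labels come from users vs. from annotators.
# File names, feature columns, and classifier choices are hypothetical.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Hypothetical dialog-level features and target column.
FEATURES = ["num_turns", "num_reprompts", "asr_confidence", "task_success"]
TARGET = "satisfaction_rating"  # e.g., ordinal contentment/frustration rating

def score_label_source(csv_path: str) -> dict:
    """Mean 10-fold cross-validated accuracy for each classifier on one label source."""
    data = pd.read_csv(csv_path)
    X, y = data[FEATURES], data[TARGET]
    classifiers = {
        "svm": make_pipeline(StandardScaler(), SVC()),
        "decision_tree": DecisionTreeClassifier(random_state=0),
    }
    return {name: cross_val_score(clf, X, y, cv=10).mean()
            for name, clf in classifiers.items()}

if __name__ == "__main__":
    # Same conversational features; only the source of the labels differs.
    for source in ("user_ratings.csv", "annotator_ratings.csv"):
        print(source, score_label_source(source))
```

Under the paper's findings, one would expect the annotator-labeled run to score noticeably higher than the user-labeled run, which is the signature of the reported user bias.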