Moral Transparency as a Mitigator of Moral Bias in Conversational User Interfaces

Joel Wester, Minha Lee, N. V. Berkel
{"title":"会话用户界面中道德透明度对道德偏见的缓解作用","authors":"Joel Wester, Minha Lee, N. V. Berkel","doi":"10.1145/3571884.3603752","DOIUrl":null,"url":null,"abstract":"From straightforward interactions to full-fledged open-ended dialogues, Conversational User Interfaces (CUIs) are designed to support end-user goals and follow their requests. As CUIs become more capable, investigating how to restrict or limit their ability to carry out user requests becomes increasingly critical. Currently, such intentionally constrained user interactions are accompanied by a generic explanation (e.g., “I’m sorry, but as an AI language model, I cannot say...”). We describe the role of moral bias in such user restrictions as a potential source of conflict between CUI users’ autonomy and system characterisation as generated by CUI designers. Just as the users of CUIs have diverging moral viewpoints, so do CUI designers—which either intentionally or unintentionally affects how CUIs communicate. Mitigating user moral biases and making the moral viewpoints of CUI designers apparent is a critical path forward in CUI design. We describe how moral transparency in CUIs can support this goal, as exemplified through intelligent disobedience. Finally, we discuss the risks and rewards of moral transparency in CUIs and outline research opportunities to inform the design of future CUIs.","PeriodicalId":127379,"journal":{"name":"Proceedings of the 5th International Conference on Conversational User Interfaces","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Moral Transparency as a Mitigator of Moral Bias in Conversational User Interfaces\",\"authors\":\"Joel Wester, Minha Lee, N. V. Berkel\",\"doi\":\"10.1145/3571884.3603752\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"From straightforward interactions to full-fledged open-ended dialogues, Conversational User Interfaces (CUIs) are designed to support end-user goals and follow their requests. As CUIs become more capable, investigating how to restrict or limit their ability to carry out user requests becomes increasingly critical. Currently, such intentionally constrained user interactions are accompanied by a generic explanation (e.g., “I’m sorry, but as an AI language model, I cannot say...”). We describe the role of moral bias in such user restrictions as a potential source of conflict between CUI users’ autonomy and system characterisation as generated by CUI designers. Just as the users of CUIs have diverging moral viewpoints, so do CUI designers—which either intentionally or unintentionally affects how CUIs communicate. Mitigating user moral biases and making the moral viewpoints of CUI designers apparent is a critical path forward in CUI design. We describe how moral transparency in CUIs can support this goal, as exemplified through intelligent disobedience. 
Finally, we discuss the risks and rewards of moral transparency in CUIs and outline research opportunities to inform the design of future CUIs.\",\"PeriodicalId\":127379,\"journal\":{\"name\":\"Proceedings of the 5th International Conference on Conversational User Interfaces\",\"volume\":\"10 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 5th International Conference on Conversational User Interfaces\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3571884.3603752\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 5th International Conference on Conversational User Interfaces","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3571884.3603752","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

From straightforward interactions to full-fledged open-ended dialogues, Conversational User Interfaces (CUIs) are designed to support end-user goals and follow their requests. As CUIs become more capable, investigating how to restrict or limit their ability to carry out user requests becomes increasingly critical. Currently, such intentionally constrained user interactions are accompanied by a generic explanation (e.g., “I’m sorry, but as an AI language model, I cannot say...”). We describe the role of moral bias in such user restrictions as a potential source of conflict between CUI users’ autonomy and system characterisation as generated by CUI designers. Just as the users of CUIs have diverging moral viewpoints, so do CUI designers—which either intentionally or unintentionally affects how CUIs communicate. Mitigating user moral biases and making the moral viewpoints of CUI designers apparent is a critical path forward in CUI design. We describe how moral transparency in CUIs can support this goal, as exemplified through intelligent disobedience. Finally, we discuss the risks and rewards of moral transparency in CUIs and outline research opportunities to inform the design of future CUIs.
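To make the contrast concrete, the sketch below illustrates the difference between the generic refusal quoted in the abstract and a morally transparent refusal that discloses the designers' viewpoint. It is a minimal, hypothetical illustration: the `RefusalPolicy` structure, function names, and example wording are assumptions for exposition, not part of the paper's design or any particular CUI system.

```python
# Hypothetical sketch: a generic refusal vs. a morally transparent one.
# All names and message wording here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class RefusalPolicy:
    """A designer-authored restriction, including the moral rationale behind it."""
    topic: str              # category of request the CUI will not fulfil
    moral_rationale: str    # the designers' moral viewpoint motivating the restriction
    alternative: str        # a constructive redirection offered to the user


def generic_refusal() -> str:
    # The status quo described in the abstract: an opaque, one-size-fits-all reply.
    return "I'm sorry, but as an AI language model, I cannot say..."


def transparent_refusal(policy: RefusalPolicy) -> str:
    # A morally transparent variant: the refusal names whose judgement is being
    # applied and why, making the designers' moral viewpoint apparent to the user.
    return (
        f"I won't help with {policy.topic}. This restriction was set by the "
        f"system's designers because {policy.moral_rationale}. "
        f"{policy.alternative}"
    )


if __name__ == "__main__":
    policy = RefusalPolicy(
        topic="drafting a misleading medical claim",
        moral_rationale="they judged that spreading health misinformation can cause real harm",
        alternative="I can help you summarise the published evidence instead.",
    )
    print(generic_refusal())
    print(transparent_refusal(policy))
```

The point of the contrast is that the second message keeps the restriction in place (a form of intelligent disobedience) while surfacing the moral stance behind it, rather than hiding it behind a boilerplate apology.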