AI said, She said - How Users Perceive Consumer Scoring in Practice

Lena Recki, Margarita Esau-Held, Dennis Lawo, G. Stevens
{"title":"AI表示:“她说的是,用户在实践中如何看待消费者评分。","authors":"Lena Recki, Margarita Esau-Held, Dennis Lawo, G. Stevens","doi":"10.1145/3603555.3603562","DOIUrl":null,"url":null,"abstract":"As digitization continues, consumers are increasingly exposed to AI scoring decisions. However, currently lacking is a thorough understanding of how users’ misjudgments of an AI-supported system lead to it being rejected. Therefore, investigations are needed into the appropriation of such socio-technical systems in practice and how users describe their experience with algorithm-based scoring. To address this issue, we evaluated 1,003 user reviews of an app on car insurance that calculates premiums based on the consumers’ individual driving behavior. We find evidence that users develop their own folk theories to explain the algorithms with the help of situation-related experiences and that insufficient explanations lead to power asymmetries between consumers, the system, and the company. In particular, as a result of the different needs of the stakeholders, we uncover a fundamental conflict between computational risk assessment and the perceived agency to influence the score.","PeriodicalId":132553,"journal":{"name":"Proceedings of Mensch und Computer 2023","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AI said, She said - How Users Perceive Consumer Scoring in Practice\",\"authors\":\"Lena Recki, Margarita Esau-Held, Dennis Lawo, G. Stevens\",\"doi\":\"10.1145/3603555.3603562\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As digitization continues, consumers are increasingly exposed to AI scoring decisions. However, currently lacking is a thorough understanding of how users’ misjudgments of an AI-supported system lead to it being rejected. Therefore, investigations are needed into the appropriation of such socio-technical systems in practice and how users describe their experience with algorithm-based scoring. To address this issue, we evaluated 1,003 user reviews of an app on car insurance that calculates premiums based on the consumers’ individual driving behavior. We find evidence that users develop their own folk theories to explain the algorithms with the help of situation-related experiences and that insufficient explanations lead to power asymmetries between consumers, the system, and the company. 
In particular, as a result of the different needs of the stakeholders, we uncover a fundamental conflict between computational risk assessment and the perceived agency to influence the score.\",\"PeriodicalId\":132553,\"journal\":{\"name\":\"Proceedings of Mensch und Computer 2023\",\"volume\":\"11 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-09-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of Mensch und Computer 2023\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3603555.3603562\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of Mensch und Computer 2023","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3603555.3603562","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

As digitization continues, consumers are increasingly exposed to AI scoring decisions. However, a thorough understanding of how users’ misjudgments of an AI-supported system lead to its rejection is currently lacking. Investigations are therefore needed into how such socio-technical systems are appropriated in practice and how users describe their experience with algorithm-based scoring. To address this issue, we evaluated 1,003 user reviews of a car insurance app that calculates premiums based on consumers’ individual driving behavior. We find evidence that users develop their own folk theories to explain the algorithms with the help of situation-related experiences, and that insufficient explanations lead to power asymmetries between consumers, the system, and the company. In particular, as a result of the stakeholders’ different needs, we uncover a fundamental conflict between computational risk assessment and the perceived agency to influence the score.
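
The paper does not disclose how the studied app computes its scores; it analyzes how users perceive and explain such scoring. For readers unfamiliar with behavior-based premium calculation, the sketch below illustrates, in a purely hypothetical way, how a telematics-style score and premium adjustment might be derived from trip data. Every name, feature, weight, and bound here (Trip, driving_score, premium, the 30%/20% adjustment limits) is an invented assumption for illustration and does not reflect the app or insurer examined in the study.

```python
# Hypothetical illustration only: a minimal usage-based insurance (telematics)
# scoring sketch. All feature names, weights, and bounds are invented; they are
# NOT the scoring logic of the app studied in the paper.
from dataclasses import dataclass


@dataclass
class Trip:
    """Aggregated driving-behavior features for one trip (hypothetical)."""
    km_driven: float
    hard_brakes: int          # sudden decelerations detected during the trip
    speeding_seconds: float   # time spent above the posted speed limit
    night_fraction: float     # share of the trip driven at night


def driving_score(trips: list[Trip]) -> float:
    """Map driving behavior to a 0-100 score (100 = lowest assumed risk)."""
    total_km = sum(t.km_driven for t in trips) or 1.0
    # Per-kilometre event rates, weighted by assumed riskiness.
    brake_rate = sum(t.hard_brakes for t in trips) / total_km
    speeding_rate = sum(t.speeding_seconds for t in trips) / total_km
    night_share = sum(t.night_fraction * t.km_driven for t in trips) / total_km
    penalty = 40.0 * brake_rate + 0.5 * speeding_rate + 20.0 * night_share
    return max(0.0, 100.0 - penalty)


def premium(base_premium: float, score: float) -> float:
    """Scale a base premium by the score: good scores earn a discount,
    poor scores a surcharge (the bounds are arbitrary)."""
    if score >= 50.0:
        factor = 1.0 - 0.30 * (score - 50.0) / 50.0   # up to 30% discount
    else:
        factor = 1.0 + 0.20 * (50.0 - score) / 50.0   # up to 20% surcharge
    return round(base_premium * factor, 2)


if __name__ == "__main__":
    trips = [
        Trip(km_driven=12.4, hard_brakes=1, speeding_seconds=30.0, night_fraction=0.0),
        Trip(km_driven=48.0, hard_brakes=0, speeding_seconds=5.0, night_fraction=0.2),
    ]
    score = driving_score(trips)
    print(f"score={score:.1f}, monthly premium={premium(60.0, score)}")
```

A pipeline like this typically surfaces only the final score to the user, not the weighted features behind it, which is exactly the kind of explanatory gap the paper finds users filling with their own folk theories.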