Measuring User Experience Inclusivity in Human-AI Interaction via Five User Problem-Solving Styles

Andrew Anderson, Jimena Noa Guevara, Fatima Moussaoui, Tianyi Li, Mihaela Vorvoreanu, Margaret Burnett
{"title":"Measuring User Experience Inclusivity in Human-AI Interaction via Five User Problem-Solving Styles","authors":"Andrew Anderson, Jimena Noa Guevara, Fatima Moussaoui, Tianyi Li, Mihaela Vorvoreanu, Margaret Burnett","doi":"10.1145/3663740","DOIUrl":null,"url":null,"abstract":"<p><b>Motivations:</b> Recent research has emerged on generally how to improve AI products’ Human-AI Interaction (HAI) User Experience (UX), but relatively little is known about HAI-UX inclusivity. For example, what kinds of users are supported, and who are left out? What product changes would make it more inclusive?</p><p><b>Objectives:</b> To help fill this gap, we present an approach to measuring what kinds of diverse users an AI product leaves out and how to act upon that knowledge. To bring actionability to the results, the approach focuses on users’ problem-solving diversity. Thus, our specific objectives were: (1) to show how the measure can reveal which participants with diverse problem-solving styles were left behind in a set of AI products; and (2) to relate participants’ problem-solving diversity to their demographic diversity, specifically gender and age.</p><p><b>Methods:</b> We performed 18 experiments, discarding two that failed manipulation checks. Each experiment was a 2x2 factorial experiment with online participants, comparing two AI products: one deliberately violating one of 18 HAI guideline and the other applying the same guideline. For our first objective, we used our measure to analyze how much each AI product gained/lost HAI-UX inclusivity compared to its counterpart, where inclusivity meant supportiveness to participants with particular problem-solving styles. For our second objective, we analyzed how participants’ problem-solving styles aligned with their gender identities and ages.</p><p><b>Results &amp; Implications:</b> Participants’ diverse problem-solving styles revealed six types of inclusivity results: (1) the AI products that followed an HAI guideline were almost always more inclusive across diversity of problem-solving styles than the products that did not follow that guideline—but “who” got most of the inclusivity varied widely by guideline and by problem-solving style; (2) when an AI product had risk implications, four variables’ values varied in tandem: participants’ feelings of control, their (lack of) suspicion, their trust in the product, and their certainty while using the product; (3) the more control an AI product offered users, the more inclusive it was; (4) whether an AI product was learning from “my” data or other people’s affected how inclusive that product was; (5) participants’ problem-solving styles skewed differently by gender and age group; and (6) almost all of the results suggested actions that HAI practitioners could take to improve their products’ inclusivity further. Together, these results suggest that a key to improving the demographic inclusivity of an AI product (e.g., across a wide range of genders, ages, etc.) 
can often be obtained by improving the product’s support of diverse problem-solving styles.</p>","PeriodicalId":3,"journal":{"name":"ACS Applied Electronic Materials","volume":null,"pages":null},"PeriodicalIF":4.3000,"publicationDate":"2024-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Electronic Materials","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3663740","RegionNum":3,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
引用次数: 0

Abstract

Motivations: Recent research has emerged on how to improve AI products’ Human-AI Interaction (HAI) User Experience (UX) in general, but relatively little is known about HAI-UX inclusivity. For example, what kinds of users are supported, and who are left out? What product changes would make an AI product more inclusive?

Objectives: To help fill this gap, we present an approach to measuring what kinds of diverse users an AI product leaves out and how to act upon that knowledge. To bring actionability to the results, the approach focuses on users’ problem-solving diversity. Thus, our specific objectives were: (1) to show how the measure can reveal which participants with diverse problem-solving styles were left behind in a set of AI products; and (2) to relate participants’ problem-solving diversity to their demographic diversity, specifically gender and age.

Methods: We performed 18 experiments, discarding two that failed manipulation checks. Each experiment was a 2x2 factorial experiment with online participants, comparing two AI products: one deliberately violating one of the 18 HAI guidelines and the other applying that same guideline. For our first objective, we used our measure to analyze how much each AI product gained or lost in HAI-UX inclusivity compared to its counterpart, where inclusivity meant supportiveness to participants with particular problem-solving styles. For our second objective, we analyzed how participants’ problem-solving styles aligned with their gender identities and ages.
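
As a rough illustration of the kind of per-style comparison the Methods describe, the sketch below is a hypothetical Python example, not the authors’ actual measure or data format: it compares mean supportiveness ratings between the guideline-following and guideline-violating product variants, grouped by problem-solving style. The column names ("style_facet", "variant", "support_score") and the difference-of-means summary are assumptions made for illustration.

```python
# Hypothetical sketch of a per-style inclusivity comparison (illustrative only).
# Assumes a CSV of participant responses with columns:
#   participant_id, variant ("follows" / "violates"), style_facet, support_score (e.g., Likert 1-7)
import csv
from collections import defaultdict
from statistics import mean

def inclusivity_gaps(path: str) -> dict[str, float]:
    """Mean support for the guideline-following variant minus the violating one, per style."""
    scores: dict[tuple[str, str], list[float]] = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            scores[(row["style_facet"], row["variant"])].append(float(row["support_score"]))
    gaps: dict[str, float] = {}
    for style in {s for s, _ in scores}:
        follows = scores.get((style, "follows"), [])
        violates = scores.get((style, "violates"), [])
        if follows and violates:
            # Positive gap: the guideline-following variant was more supportive for this style.
            gaps[style] = mean(follows) - mean(violates)
    return gaps

if __name__ == "__main__":
    for style, gap in sorted(inclusivity_gaps("responses.csv").items()):
        print(f"{style}: {gap:+.2f}")
```

A real analysis would also need the statistical tests and the validated problem-solving-style instrument the study relies on; the point here is only the shape of the comparison: variant-versus-variant supportiveness, broken out by problem-solving style rather than by demographics alone.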

Results & Implications: Participants’ diverse problem-solving styles revealed six types of inclusivity results: (1) the AI products that followed an HAI guideline were almost always more inclusive across diversity of problem-solving styles than the products that did not follow that guideline—but “who” got most of the inclusivity varied widely by guideline and by problem-solving style; (2) when an AI product had risk implications, four variables’ values varied in tandem: participants’ feelings of control, their (lack of) suspicion, their trust in the product, and their certainty while using the product; (3) the more control an AI product offered users, the more inclusive it was; (4) whether an AI product was learning from “my” data or other people’s affected how inclusive that product was; (5) participants’ problem-solving styles skewed differently by gender and age group; and (6) almost all of the results suggested actions that HAI practitioners could take to improve their products’ inclusivity further. Together, these results suggest that a key to improving the demographic inclusivity of an AI product (e.g., across a wide range of genders, ages, etc.) can often be obtained by improving the product’s support of diverse problem-solving styles.
