A Brief Guide to Designing and Evaluating Human-Centered Interactive Machine Learning

Kory W. Mathewson, Patrick M. Pilarski
{"title":"A Brief Guide to Designing and Evaluating Human-Centered Interactive Machine Learning","authors":"Kory W. Mathewson, Patrick M. Pilarski","doi":"arxiv-2204.09622","DOIUrl":null,"url":null,"abstract":"Interactive machine learning (IML) is a field of research that explores how\nto leverage both human and computational abilities in decision making systems.\nIML represents a collaboration between multiple complementary human and machine\nintelligent systems working as a team, each with their own unique abilities and\nlimitations. This teamwork might mean that both systems take actions at the\nsame time, or in sequence. Two major open research questions in the field of\nIML are: \"How should we design systems that can learn to make better decisions\nover time with human interaction?\" and \"How should we evaluate the design and\ndeployment of such systems?\" A lack of appropriate consideration for the humans\ninvolved can lead to problematic system behaviour, and issues of fairness,\naccountability, and transparency. Thus, our goal with this work is to present a\nhuman-centred guide to designing and evaluating IML systems while mitigating\nrisks. This guide is intended to be used by machine learning practitioners who\nare responsible for the health, safety, and well-being of interacting humans.\nAn obligation of responsibility for public interaction means acting with\nintegrity, honesty, fairness, and abiding by applicable legal statutes. With\nthese values and principles in mind, we as a machine learning research\ncommunity can better achieve goals of augmenting human skills and abilities.\nThis practical guide therefore aims to support many of the responsible\ndecisions necessary throughout the iterative design, development, and\ndissemination of IML systems.","PeriodicalId":501533,"journal":{"name":"arXiv - CS - General Literature","volume":"25 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2022-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - General Literature","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2204.09622","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Interactive machine learning (IML) is a field of research that explores how to leverage both human and computational abilities in decision making systems. IML represents a collaboration between multiple complementary human and machine intelligent systems working as a team, each with their own unique abilities and limitations. This teamwork might mean that both systems take actions at the same time, or in sequence. Two major open research questions in the field of IML are: "How should we design systems that can learn to make better decisions over time with human interaction?" and "How should we evaluate the design and deployment of such systems?" A lack of appropriate consideration for the humans involved can lead to problematic system behaviour, and issues of fairness, accountability, and transparency. Thus, our goal with this work is to present a human-centred guide to designing and evaluating IML systems while mitigating risks. This guide is intended to be used by machine learning practitioners who are responsible for the health, safety, and well-being of interacting humans. An obligation of responsibility for public interaction means acting with integrity, honesty, fairness, and abiding by applicable legal statutes. With these values and principles in mind, we as a machine learning research community can better achieve goals of augmenting human skills and abilities. This practical guide therefore aims to support many of the responsible decisions necessary throughout the iterative design, development, and dissemination of IML systems.
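
The abstract frames IML as a turn-taking collaboration in which a learning system improves its decisions over time from human interaction. As a loose illustration only (none of this code appears in the paper), the sketch below shows one of the simplest possible versions of such a loop: an epsilon-greedy agent whose value estimates are updated from scalar feedback supplied by a human, here replaced by a simulated evaluator so the example runs on its own. The action names, the `simulated_human_feedback` stand-in, and the bandit formulation are all assumptions made for illustration, not the authors' method.

```python
# Minimal human-in-the-loop sketch (illustrative only, not from the paper):
# an agent proposes actions, a "human" returns scalar feedback, and the
# agent updates its running estimate of human approval for each action.
import random

ACTIONS = ["option_a", "option_b", "option_c"]  # hypothetical choices


def simulated_human_feedback(action: str) -> float:
    """Stand-in for a real human evaluator; this one prefers 'option_b'."""
    return 1.0 if action == "option_b" else 0.0


def run(n_rounds: int = 200, epsilon: float = 0.1, seed: int = 0) -> dict:
    rng = random.Random(seed)
    values = {a: 0.0 for a in ACTIONS}  # estimated human approval per action
    counts = {a: 0 for a in ACTIONS}
    for _ in range(n_rounds):
        # Epsilon-greedy: usually exploit the current best guess, sometimes explore.
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=values.get)
        feedback = simulated_human_feedback(action)  # the human's turn in the loop
        counts[action] += 1
        # Incremental mean update of the approval estimate.
        values[action] += (feedback - values[action]) / counts[action]
    return values


if __name__ == "__main__":
    print(run())  # e.g. {'option_a': ..., 'option_b': ~1.0, 'option_c': ...}
```

In a deployed IML system the simulated evaluator would be replaced by real human input, which is exactly where the guide's concerns about health, safety, fairness, and transparency of the interacting humans come into play.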