"The human body is a black box": supporting clinical decision-making with deep learning

M. Sendak, M. C. Elish, M. Gao, Joseph D. Futoma, W. Ratliff, M. Nichols, A. Bedoya, S. Balu, Cara O'Brien
{"title":"“人体是黑盒子”:用深度学习支持临床决策","authors":"M. Sendak, M. C. Elish, M. Gao, Joseph D. Futoma, W. Ratliff, M. Nichols, A. Bedoya, S. Balu, Cara O'Brien","doi":"10.1145/3351095.3372827","DOIUrl":null,"url":null,"abstract":"Machine learning technologies are increasingly developed for use in healthcare. While research communities have focused on creating state-of-the-art models, there has been less focus on real world implementation and the associated challenges to fairness, transparency, and accountability that come from actual, situated use. Serious questions remain underexamined regarding how to ethically build models, interpret and explain model output, recognize and account for biases, and minimize disruptions to professional expertise and work cultures. We address this gap in the literature and provide a detailed case study covering the development, implementation, and evaluation of Sepsis Watch, a machine learning-driven tool that assists hospital clinicians in the early diagnosis and treatment of sepsis. Sepsis is a severe infection that can lead to organ failure or death if not treated in time and is the leading cause of inpatient deaths in US hospitals. We, the team that developed and evaluated the tool, discuss our conceptualization of the tool not as a model deployed in the world but instead as a socio-technical system requiring integration into existing social and professional contexts. Rather than focusing solely on model interpretability to ensure fair and accountable machine learning, we point toward four key values and practices that should be considered when developing machine learning to support clinical decision-making: rigorously define the problem in context, build relationships with stakeholders, respect professional discretion, and create ongoing feedback loops with stakeholders. Our work has significant implications for future research regarding mechanisms of institutional accountability and considerations for responsibly designing machine learning systems. Our work underscores the limits of model interpretability as a solution to ensure transparency, accuracy, and accountability in practice. Instead, our work demonstrates other means and goals to achieve FATML values in design and in practice.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"121","resultStr":"{\"title\":\"\\\"The human body is a black box\\\": supporting clinical decision-making with deep learning\",\"authors\":\"M. Sendak, M. C. Elish, M. Gao, Joseph D. Futoma, W. Ratliff, M. Nichols, A. Bedoya, S. Balu, Cara O'Brien\",\"doi\":\"10.1145/3351095.3372827\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Machine learning technologies are increasingly developed for use in healthcare. While research communities have focused on creating state-of-the-art models, there has been less focus on real world implementation and the associated challenges to fairness, transparency, and accountability that come from actual, situated use. Serious questions remain underexamined regarding how to ethically build models, interpret and explain model output, recognize and account for biases, and minimize disruptions to professional expertise and work cultures. 
We address this gap in the literature and provide a detailed case study covering the development, implementation, and evaluation of Sepsis Watch, a machine learning-driven tool that assists hospital clinicians in the early diagnosis and treatment of sepsis. Sepsis is a severe infection that can lead to organ failure or death if not treated in time and is the leading cause of inpatient deaths in US hospitals. We, the team that developed and evaluated the tool, discuss our conceptualization of the tool not as a model deployed in the world but instead as a socio-technical system requiring integration into existing social and professional contexts. Rather than focusing solely on model interpretability to ensure fair and accountable machine learning, we point toward four key values and practices that should be considered when developing machine learning to support clinical decision-making: rigorously define the problem in context, build relationships with stakeholders, respect professional discretion, and create ongoing feedback loops with stakeholders. Our work has significant implications for future research regarding mechanisms of institutional accountability and considerations for responsibly designing machine learning systems. Our work underscores the limits of model interpretability as a solution to ensure transparency, accuracy, and accountability in practice. Instead, our work demonstrates other means and goals to achieve FATML values in design and in practice.\",\"PeriodicalId\":377829,\"journal\":{\"name\":\"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency\",\"volume\":\"25 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-11-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"121\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3351095.3372827\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3351095.3372827","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 121

Abstract

Machine learning technologies are increasingly developed for use in healthcare. While research communities have focused on creating state-of-the-art models, there has been less focus on real world implementation and the associated challenges to fairness, transparency, and accountability that come from actual, situated use. Serious questions remain underexamined regarding how to ethically build models, interpret and explain model output, recognize and account for biases, and minimize disruptions to professional expertise and work cultures. We address this gap in the literature and provide a detailed case study covering the development, implementation, and evaluation of Sepsis Watch, a machine learning-driven tool that assists hospital clinicians in the early diagnosis and treatment of sepsis. Sepsis is a severe infection that can lead to organ failure or death if not treated in time and is the leading cause of inpatient deaths in US hospitals. We, the team that developed and evaluated the tool, discuss our conceptualization of the tool not as a model deployed in the world but instead as a socio-technical system requiring integration into existing social and professional contexts. Rather than focusing solely on model interpretability to ensure fair and accountable machine learning, we point toward four key values and practices that should be considered when developing machine learning to support clinical decision-making: rigorously define the problem in context, build relationships with stakeholders, respect professional discretion, and create ongoing feedback loops with stakeholders. Our work has significant implications for future research regarding mechanisms of institutional accountability and considerations for responsibly designing machine learning systems. Our work underscores the limits of model interpretability as a solution to ensure transparency, accuracy, and accountability in practice. Instead, our work demonstrates other means and goals to achieve FATML values in design and in practice.
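The abstract describes Sepsis Watch only at a high level. For readers unfamiliar with this class of tool, the sketch below illustrates the general shape of a deep-learning-driven early-warning workflow: score patients on a schedule, and route high-risk cases to a human responder rather than to an automated order. Everything here is an illustrative assumption, not the paper's implementation: the feature set, the toy `risk_score` function (a placeholder for a trained model over longitudinal EHR data), and the `ALERT_THRESHOLD` operating point are all hypothetical.

```python
# Hypothetical sketch of an early-warning loop in the spirit of a tool like
# Sepsis Watch. Names, features, and thresholds are illustrative assumptions,
# not the authors' implementation.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VitalsSnapshot:
    heart_rate: float      # beats per minute
    temperature_c: float   # degrees Celsius
    resp_rate: float       # breaths per minute
    systolic_bp: float     # mmHg

def risk_score(vitals: VitalsSnapshot) -> float:
    """Placeholder scorer standing in for a trained deep learning model.

    A real system would feed longitudinal EHR data to a learned model; this
    toy deviation-based score exists only so the loop is runnable.
    """
    deviation = (
        max(0.0, vitals.heart_rate - 90) / 30
        + max(0.0, vitals.temperature_c - 38.0)
        + max(0.0, vitals.resp_rate - 20) / 10
        + max(0.0, 100 - vitals.systolic_bp) / 20
    )
    return min(1.0, deviation / 4)

def triage(patients: List[Tuple[str, VitalsSnapshot]]) -> None:
    """Score each patient and surface high-risk cases to the care team."""
    ALERT_THRESHOLD = 0.6  # assumed operating point, tuned with clinicians
    for patient_id, vitals in patients:
        score = risk_score(vitals)
        if score >= ALERT_THRESHOLD:
            # The alert routes to a human (e.g., a rapid response nurse),
            # preserving professional discretion over the next step.
            print(f"page RRT nurse: patient {patient_id} risk={score:.2f}")
        else:
            print(f"continue monitoring: patient {patient_id} risk={score:.2f}")

if __name__ == "__main__":
    triage([
        ("A", VitalsSnapshot(112, 38.9, 26, 92)),   # high risk -> alert
        ("B", VitalsSnapshot(74, 36.8, 14, 121)),   # low risk -> monitor
    ])
```

The design choice worth noting is in `triage`: the model's output is a prompt to a clinician, not an action, which reflects the paper's framing of the tool as a socio-technical system rather than a standalone model.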