Learning to safely approve updates to machine learning algorithms

Jean Feng
{"title":"学习安全地批准机器学习算法的更新","authors":"Jean Feng","doi":"10.1145/3450439.3451864","DOIUrl":null,"url":null,"abstract":"Machine learning algorithms in healthcare have the potential to continually learn from real-world data generated during healthcare delivery and adapt to dataset shifts. As such, regulatory bodies like the US FDA have begun discussions on how to autonomously approve modifications to algorithms. Current proposals evaluate algorithmic modifications via hypothesis testing and control a definition of online approval error that only applies if the data is stationary over time, which is unlikely in practice. To this end, we investigate designing approval policies for modifications to ML algorithms in the presence of distributional shifts. Our key observation is that the approval policy most efficient at identifying and approving beneficial modifications varies across problem settings. So, rather than selecting a fixed approval policy a priori, we propose learning the best approval policy by searching over a family of approval strategies. We define a family of strategies that range in their level of optimism when approving modifications. To protect against settings where no version of the ML algorithm performs well, this family includes a pessimistic strategy that rescinds approval. We use the exponentially weighted averaging forecaster (EWAF) to learn the most appropriate strategy and derive tighter regret bounds assuming the distributional shifts are bounded. In simulation studies and empirical analyses, we find that wrapping approval strategies within EWAF is a simple yet effective approach to protect against distributional shifts without significantly slowing down approval of beneficial modifications.","PeriodicalId":87342,"journal":{"name":"Proceedings of the ACM Conference on Health, Inference, and Learning","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2021-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Learning to safely approve updates to machine learning algorithms\",\"authors\":\"Jean Feng\",\"doi\":\"10.1145/3450439.3451864\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Machine learning algorithms in healthcare have the potential to continually learn from real-world data generated during healthcare delivery and adapt to dataset shifts. As such, regulatory bodies like the US FDA have begun discussions on how to autonomously approve modifications to algorithms. Current proposals evaluate algorithmic modifications via hypothesis testing and control a definition of online approval error that only applies if the data is stationary over time, which is unlikely in practice. To this end, we investigate designing approval policies for modifications to ML algorithms in the presence of distributional shifts. Our key observation is that the approval policy most efficient at identifying and approving beneficial modifications varies across problem settings. So, rather than selecting a fixed approval policy a priori, we propose learning the best approval policy by searching over a family of approval strategies. We define a family of strategies that range in their level of optimism when approving modifications. To protect against settings where no version of the ML algorithm performs well, this family includes a pessimistic strategy that rescinds approval. 
We use the exponentially weighted averaging forecaster (EWAF) to learn the most appropriate strategy and derive tighter regret bounds assuming the distributional shifts are bounded. In simulation studies and empirical analyses, we find that wrapping approval strategies within EWAF is a simple yet effective approach to protect against distributional shifts without significantly slowing down approval of beneficial modifications.\",\"PeriodicalId\":87342,\"journal\":{\"name\":\"Proceedings of the ACM Conference on Health, Inference, and Learning\",\"volume\":\"1 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-04-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the ACM Conference on Health, Inference, and Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3450439.3451864\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ACM Conference on Health, Inference, and Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3450439.3451864","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

Machine learning algorithms in healthcare have the potential to continually learn from real-world data generated during healthcare delivery and adapt to dataset shifts. As such, regulatory bodies like the US FDA have begun discussions on how to autonomously approve modifications to algorithms. Current proposals evaluate algorithmic modifications via hypothesis testing and control a definition of online approval error that only applies if the data is stationary over time, which is unlikely in practice. To this end, we investigate designing approval policies for modifications to ML algorithms in the presence of distributional shifts. Our key observation is that the approval policy most efficient at identifying and approving beneficial modifications varies across problem settings. So, rather than selecting a fixed approval policy a priori, we propose learning the best approval policy by searching over a family of approval strategies. We define a family of strategies that range in their level of optimism when approving modifications. To protect against settings where no version of the ML algorithm performs well, this family includes a pessimistic strategy that rescinds approval. We use the exponentially weighted averaging forecaster (EWAF) to learn the most appropriate strategy and derive tighter regret bounds assuming the distributional shifts are bounded. In simulation studies and empirical analyses, we find that wrapping approval strategies within EWAF is a simple yet effective approach to protect against distributional shifts without significantly slowing down approval of beneficial modifications.
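As a rough illustration of the core idea, the sketch below shows how an exponentially weighted average forecaster (EWAF) reweights a small family of candidate approval strategies according to the losses they have incurred so far. The function name ewaf_weights, the learning rate eta, and the simulated losses are illustrative assumptions for exposition, not the paper's implementation.

import numpy as np

def ewaf_weights(loss_history, eta=0.5):
    # loss_history: array of shape (T, K) giving the loss incurred by each of
    # K candidate approval strategies ("experts") in rounds 1..T.
    # Returns the normalized weight placed on each strategy for round T+1.
    cumulative_loss = loss_history.sum(axis=0)   # total loss per strategy so far
    w = np.exp(-eta * cumulative_loss)           # exponential weighting of experts
    return w / w.sum()                           # normalize to a distribution

# Hypothetical usage: three approval strategies ranging from optimistic to
# pessimistic (the last one rescinds approval), observed over 5 rounds.
rng = np.random.default_rng(0)
losses = rng.uniform(0.0, 1.0, size=(5, 3))
print(ewaf_weights(losses))

Strategies that accumulate less loss under the realized (and possibly shifting) data distribution receive exponentially more weight, which is what lets such a wrapper favor optimistic strategies while modifications keep proving beneficial and shift toward the pessimistic, approval-rescinding strategy when no version of the algorithm performs well.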