Predicting and preferring

IF 1.0 · CAS Region 2 (Philosophy) · JCR Q3 (Ethics)
Nathaniel Sharadin
{"title":"预测和偏好","authors":"Nathaniel Sharadin","doi":"10.1080/0020174x.2023.2261493","DOIUrl":null,"url":null,"abstract":"ABSTRACTThe use of machine learning, or ‘artificial intelligence’ (AI) in medicine is widespread and growing. In this paper, I focus on a specific proposed clinical application of AI: using models to predict incapacitated patients’ treatment preferences. Drawing on results from machine learning, I argue this proposal faces a special moral problem. Machine learning researchers owe us assurance on this front before experimental research can proceed. In my conclusion I connect this concern to broader issues in AI safety.KEYWORDS: PPPmedical ethicsAIpatient preference predictorspreference shaping Disclosure statementNo potential conflict of interest was reported by the author(s).Notes1 In what follows, I mostly refer to these systems as ML systems, rather than as ‘AI’, in order to avoid unfortunate and controversial implications about machine ‘intelligence’.2 For an overview, see (Emanuel et al. Citation1991; Buchanan and Brock Citation2019).3 For discussion, see (Salmond and David Citation2005; Shalowitz, Garrett-Mayer, and Wendler Citation2006; Jezewski et al. Citation2007).4 See (Rid and Wendler Citation2014a) for discussion; for relevant machine learning research, see (O. Evans et al. Citation2018).5 For a selection of moral criticism, see (N. Sharadin Citation2019; N. P. Sharadin Citation2018; Ditto and Clark Citation2014; Kim Citation2014; John Citation2014; Dresser Citation2014; Tretter and Samhammer Citation2023; Mainz Citation2022). For a recent reply to autonomy-based criticism, see (Jardas et al. Citation2022).6 Compare (N. P. Sharadin Citation2018).7 For a technical overview, see (Gneiting and Raftery Citation2007).8 Well, three. We could change our scoring rule, or our performance metric. I ignore this possibility in what follows.9 I follow the literature in saying that a learner is incentivized to do something just in case doing that thing increases performance (or reward). See (Krueger, Maharaj, and Leike Citation2020, 2).10 If this sounds familiar from the Forever War between consequentialists and Kantians, that’s not an accident.11 See (Good Citation2021).12 Philosophers call a related phenomenon self-fulfilling beliefs (Silva Citationforthcoming; Antill Citation2019).13 Following (Perdomo et al. Citation2020). Begrudgingly because it can make it sound as if the model itself is doing something. It isn’t: we are doing something with the model.14 Compare (Franklin et al. Citation2022).15 This follows from broader ideas about the importance of informed consent. For an overview, see (Faden and Beauchamp Citation1986); for critical discussion, see (Manson and O’Neill Citation2007).16 This is not controversial. See (Li and Chapman Citation2020) for discussion.17 For a recent philosophical discussion, see Parmer (Citation2023). The debate over the ethics of nudging is ongoing. For the classic source on ‘nudges’ see Thaler and Sunstein (Citation2008).18 For technical discussion of the broad phenomenon, see (Krueger, Maharaj, and Leike Citation2020; C. Evans and Kasirzadeh Citation2022; Farquhar, Carey, and Everitt Citation2022; Everitt et al. Citation2021).19 See the discussion in (Perdomo et al. Citation2020).20 Thanks to an anonymous referee for encouraging clarity on this point.21 The only research that I’m aware of that approaches the question of performative prediction in the context of medical AI is a review article (Chen et al. 
Citation2021); there, the authors simply note the possibility of distributional shift (aka performative prediction).22 This is also the conclusion of other AI safety researchers. Compare (Hendrycks et al. Citation2022; C. Evans and Kasirzadeh Citation2022; Ashton and Franklin Citation2022). This is not to say that there are no proposals about how to ensure that models have other interesting properties related to performative prediction, e.g., can achieve various strategic equilibria; for relevant discussion see (Mendler-Dünner et al. Citation2020; Brown, Hod, and Kalemaj Citation2022; Miller, Perdomo, and Zrnic Citation2021). Very recent work aims to identify and penalize induced preference shifts in recommender systems (e.g. Carroll et al. Citation2022); that work is clearly relevant to the present problem, though it doesn’t yet represent a solution.23 This is part of why no one agrees on a definition of the Alignment Problem.24 Thanks to an anonymous referee for this way of putting the contrast between the alignment problem and the problem I identify in the paper.25 Thanks to Simon Goldstein, Dan Hendrycks, Jacqueline Harding, Cameron Kirk-Giannini, David Krueger, Nick Laskowski, Robert Long, Elliot Thornley, and members of the Cottage Group for helpful discussions about these and related issues.","PeriodicalId":47504,"journal":{"name":"Inquiry-An Interdisciplinary Journal of Philosophy","volume":null,"pages":null},"PeriodicalIF":1.0000,"publicationDate":"2023-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Predicting and preferring\",\"authors\":\"Nathaniel Sharadin\",\"doi\":\"10.1080/0020174x.2023.2261493\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACTThe use of machine learning, or ‘artificial intelligence’ (AI) in medicine is widespread and growing. In this paper, I focus on a specific proposed clinical application of AI: using models to predict incapacitated patients’ treatment preferences. Drawing on results from machine learning, I argue this proposal faces a special moral problem. Machine learning researchers owe us assurance on this front before experimental research can proceed. In my conclusion I connect this concern to broader issues in AI safety.KEYWORDS: PPPmedical ethicsAIpatient preference predictorspreference shaping Disclosure statementNo potential conflict of interest was reported by the author(s).Notes1 In what follows, I mostly refer to these systems as ML systems, rather than as ‘AI’, in order to avoid unfortunate and controversial implications about machine ‘intelligence’.2 For an overview, see (Emanuel et al. Citation1991; Buchanan and Brock Citation2019).3 For discussion, see (Salmond and David Citation2005; Shalowitz, Garrett-Mayer, and Wendler Citation2006; Jezewski et al. Citation2007).4 See (Rid and Wendler Citation2014a) for discussion; for relevant machine learning research, see (O. Evans et al. Citation2018).5 For a selection of moral criticism, see (N. Sharadin Citation2019; N. P. Sharadin Citation2018; Ditto and Clark Citation2014; Kim Citation2014; John Citation2014; Dresser Citation2014; Tretter and Samhammer Citation2023; Mainz Citation2022). For a recent reply to autonomy-based criticism, see (Jardas et al. Citation2022).6 Compare (N. P. Sharadin Citation2018).7 For a technical overview, see (Gneiting and Raftery Citation2007).8 Well, three. We could change our scoring rule, or our performance metric. 
I ignore this possibility in what follows.9 I follow the literature in saying that a learner is incentivized to do something just in case doing that thing increases performance (or reward). See (Krueger, Maharaj, and Leike Citation2020, 2).10 If this sounds familiar from the Forever War between consequentialists and Kantians, that’s not an accident.11 See (Good Citation2021).12 Philosophers call a related phenomenon self-fulfilling beliefs (Silva Citationforthcoming; Antill Citation2019).13 Following (Perdomo et al. Citation2020). Begrudgingly because it can make it sound as if the model itself is doing something. It isn’t: we are doing something with the model.14 Compare (Franklin et al. Citation2022).15 This follows from broader ideas about the importance of informed consent. For an overview, see (Faden and Beauchamp Citation1986); for critical discussion, see (Manson and O’Neill Citation2007).16 This is not controversial. See (Li and Chapman Citation2020) for discussion.17 For a recent philosophical discussion, see Parmer (Citation2023). The debate over the ethics of nudging is ongoing. For the classic source on ‘nudges’ see Thaler and Sunstein (Citation2008).18 For technical discussion of the broad phenomenon, see (Krueger, Maharaj, and Leike Citation2020; C. Evans and Kasirzadeh Citation2022; Farquhar, Carey, and Everitt Citation2022; Everitt et al. Citation2021).19 See the discussion in (Perdomo et al. Citation2020).20 Thanks to an anonymous referee for encouraging clarity on this point.21 The only research that I’m aware of that approaches the question of performative prediction in the context of medical AI is a review article (Chen et al. Citation2021); there, the authors simply note the possibility of distributional shift (aka performative prediction).22 This is also the conclusion of other AI safety researchers. Compare (Hendrycks et al. Citation2022; C. Evans and Kasirzadeh Citation2022; Ashton and Franklin Citation2022). This is not to say that there are no proposals about how to ensure that models have other interesting properties related to performative prediction, e.g., can achieve various strategic equilibria; for relevant discussion see (Mendler-Dünner et al. Citation2020; Brown, Hod, and Kalemaj Citation2022; Miller, Perdomo, and Zrnic Citation2021). Very recent work aims to identify and penalize induced preference shifts in recommender systems (e.g. Carroll et al. 
Citation2022); that work is clearly relevant to the present problem, though it doesn’t yet represent a solution.23 This is part of why no one agrees on a definition of the Alignment Problem.24 Thanks to an anonymous referee for this way of putting the contrast between the alignment problem and the problem I identify in the paper.25 Thanks to Simon Goldstein, Dan Hendrycks, Jacqueline Harding, Cameron Kirk-Giannini, David Krueger, Nick Laskowski, Robert Long, Elliot Thornley, and members of the Cottage Group for helpful discussions about these and related issues.\",\"PeriodicalId\":47504,\"journal\":{\"name\":\"Inquiry-An Interdisciplinary Journal of Philosophy\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.0000,\"publicationDate\":\"2023-09-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Inquiry-An Interdisciplinary Journal of Philosophy\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/0020174x.2023.2261493\",\"RegionNum\":2,\"RegionCategory\":\"哲学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ETHICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Inquiry-An Interdisciplinary Journal of Philosophy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/0020174x.2023.2261493","RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ETHICS","Score":null,"Total":0}
Citations: 0

Abstract

The use of machine learning, or 'artificial intelligence' (AI), in medicine is widespread and growing. In this paper, I focus on a specific proposed clinical application of AI: using models to predict incapacitated patients' treatment preferences. Drawing on results from machine learning, I argue this proposal faces a special moral problem. Machine learning researchers owe us assurance on this front before experimental research can proceed. In my conclusion I connect this concern to broader issues in AI safety.

Keywords: PPP; medical ethics; AI; patient preference predictors; preference shaping

Disclosure statement
No potential conflict of interest was reported by the author(s).

Notes
1. In what follows, I mostly refer to these systems as ML systems, rather than as 'AI', in order to avoid unfortunate and controversial implications about machine 'intelligence'.
2. For an overview, see (Emanuel et al. 1991; Buchanan and Brock 2019).
3. For discussion, see (Salmond and David 2005; Shalowitz, Garrett-Mayer, and Wendler 2006; Jezewski et al. 2007).
4. See (Rid and Wendler 2014a) for discussion; for relevant machine learning research, see (O. Evans et al. 2018).
5. For a selection of moral criticism, see (N. Sharadin 2019; N. P. Sharadin 2018; Ditto and Clark 2014; Kim 2014; John 2014; Dresser 2014; Tretter and Samhammer 2023; Mainz 2022). For a recent reply to autonomy-based criticism, see (Jardas et al. 2022).
6. Compare (N. P. Sharadin 2018).
7. For a technical overview, see (Gneiting and Raftery 2007).
8. Well, three. We could change our scoring rule, or our performance metric. I ignore this possibility in what follows.
9. I follow the literature in saying that a learner is incentivized to do something just in case doing that thing increases performance (or reward). See (Krueger, Maharaj, and Leike 2020, 2).
10. If this sounds familiar from the Forever War between consequentialists and Kantians, that's not an accident.
11. See (Good 2021).
12. Philosophers call a related phenomenon self-fulfilling beliefs (Silva forthcoming; Antill 2019).
13. Following (Perdomo et al. 2020). Begrudgingly, because it can make it sound as if the model itself is doing something. It isn't: we are doing something with the model.
14. Compare (Franklin et al. 2022).
15. This follows from broader ideas about the importance of informed consent. For an overview, see (Faden and Beauchamp 1986); for critical discussion, see (Manson and O'Neill 2007).
16. This is not controversial. See (Li and Chapman 2020) for discussion.
17. For a recent philosophical discussion, see Parmer (2023). The debate over the ethics of nudging is ongoing. For the classic source on 'nudges', see Thaler and Sunstein (2008).
18. For technical discussion of the broad phenomenon, see (Krueger, Maharaj, and Leike 2020; C. Evans and Kasirzadeh 2022; Farquhar, Carey, and Everitt 2022; Everitt et al. 2021).
19. See the discussion in (Perdomo et al. 2020).
20. Thanks to an anonymous referee for encouraging clarity on this point.
21. The only research that I'm aware of that approaches the question of performative prediction in the context of medical AI is a review article (Chen et al. 2021); there, the authors simply note the possibility of distributional shift (aka performative prediction).
22. This is also the conclusion of other AI safety researchers. Compare (Hendrycks et al. 2022; C. Evans and Kasirzadeh 2022; Ashton and Franklin 2022). This is not to say that there are no proposals about how to ensure that models have other interesting properties related to performative prediction, e.g., can achieve various strategic equilibria; for relevant discussion see (Mendler-Dünner et al. 2020; Brown, Hod, and Kalemaj 2022; Miller, Perdomo, and Zrnic 2021). Very recent work aims to identify and penalize induced preference shifts in recommender systems (e.g. Carroll et al. 2022); that work is clearly relevant to the present problem, though it doesn't yet represent a solution.
23. This is part of why no one agrees on a definition of the Alignment Problem.
24. Thanks to an anonymous referee for this way of putting the contrast between the alignment problem and the problem I identify in the paper.
25. Thanks to Simon Goldstein, Dan Hendrycks, Jacqueline Harding, Cameron Kirk-Giannini, David Krueger, Nick Laskowski, Robert Long, Elliot Thornley, and members of the Cottage Group for helpful discussions about these and related issues.
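Notes 7-9 lean on proper scoring rules and on the idea that a learner is 'incentivized' to do whatever raises its score. A minimal sketch of that machinery, not taken from the paper: the Brier score and the toy probability TRUE_P below are standard and hypothetical choices, used only to show that a strictly proper rule rewards honest reporting, while saying nothing about a learner that raises its score by changing the underlying probability itself.

```python
# Illustrative sketch (not from the paper): the Brier score is a strictly
# proper scoring rule, so expected score is uniquely minimized by reporting
# the true outcome probability -- the sense in which, per note 9, honest
# prediction is 'incentivized'.

def brier(report: float, outcome: int) -> float:
    """Brier score for a probabilistic report on a binary outcome (0 or 1)."""
    return (report - outcome) ** 2

def expected_brier(report: float, true_p: float) -> float:
    """Expected Brier score when the outcome is Bernoulli(true_p)."""
    return true_p * brier(report, 1) + (1 - true_p) * brier(report, 0)

TRUE_P = 0.7  # hypothetical probability that a patient prefers treatment A
reports = [i / 100 for i in range(101)]
best = min(reports, key=lambda r: expected_brier(r, TRUE_P))
print(f"score-minimizing report: {best:.2f}")  # -> 0.70, the true probability
# Nothing in the rule, however, penalizes driving TRUE_P itself toward an
# easy-to-predict value -- the loophole the paper worries about.
```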
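Notes 12-13 and 18-21 turn on performative prediction: a deployed prediction shifts the very distribution it is scored against. The simulation below is my simplification of the framework in Perdomo et al. (2020), with assumed toy numbers (BASE_RATE, SHAPING, and the linear shift map are all hypothetical); it shows how repeated retraining converges to a prediction that is perfectly calibrated against the data the model sees, even though deployment has moved patients' preferences.

```python
# Toy simulation of performative prediction / preference shaping.
# Hypothetical numbers: absent any prediction, 60% of patients prefer
# treatment A; a deployed predicted rate nudges actual preferences
# toward the predicted option with assumed strength SHAPING.

BASE_RATE = 0.6
SHAPING = 0.5

def shifted_rate(theta: float) -> float:
    """Distribution map D(theta): the preference rate the model faces once
    its predicted rate theta is deployed (e.g. reported to surrogates)."""
    return BASE_RATE + SHAPING * (theta - 0.5)

# Retraining loop: predict, deploy, observe the induced distribution, refit.
# The fixed point is a 'performatively stable' prediction.
theta = BASE_RATE
for _ in range(50):
    theta = shifted_rate(theta)

print(f"unshaped preference rate:         {BASE_RATE:.2f}")  # 0.60
print(f"performatively stable prediction: {theta:.2f}")      # 0.70
# At the stable point the model looks perfectly accurate, yet part of that
# 'accuracy' comes from deployment reshaping preferences, not from tracking
# the preferences patients would have had anyway.
```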
Journal metrics
CiteScore: 2.60
Self-citation rate: 23.10%
Articles published: 144