The Ethics of Artificial Intelligence-based Screening for End-of-life and Palliative Care

Kathryn Huber MD, Matthew DeCamp MD PhD, Ahmed Alasmar, Mika Hamer PhD MPH
{"title":"The Ethics of Artificial Intelligence-based Screening for End-of-life and Palliative Care","authors":"Kathryn Huber MD,&nbsp;Matthew DeCamp MD PhD,&nbsp;Ahmed Alasmar,&nbsp;Mika Hamer PhD MPH","doi":"10.1016/j.jpainsymman.2025.02.031","DOIUrl":null,"url":null,"abstract":"<div><h3>Outcomes</h3><div>1. Participants will be able to comprehend the use of artificial intelligence-based prognostication as a form of “screening” for end-of-life.</div><div>2. Participants will be able to analyze the ethical challenges that could shape the implementation of artificial intelligence-based prognostication in palliative care and apply ethical principles that can help guide that implementation.</div></div><div><h3>Key Message</h3><div>Based on qualitative interviews at four U.S. medical centers, palliative care team members view artificial intelligence-based prognostication tools as a form of “screening” – so understood, the established ethics principles governing screening can yield concrete recommendations for the ethical use of these AI prognostic tools.</div></div><div><h3>Abstract</h3><div>Artificial Intelligence (AI) tools for healthcare applications are rapidly emerging, with some tools already being used and more on their way. One example is AI-based prognostication tools which can predict patient mortality automatically and with accuracy that outperforms clinicians and other available tools. In palliative care, prognostication may be particularly important; these tools may change practice in ways we do not fully understand and raise important ethical and implementation questions.</div></div><div><h3>Objective</h3><div>To identify the ethical challenges that could shape implementation of AI-based prognostication in palliative care.</div></div><div><h3>Methods</h3><div>We conducted semi-structured interviews with 45 palliative care physicians, nurses, and other team members from four academic medical centers. Interviews were transcribed and analyzed using grounded theory.</div></div><div><h3>Results</h3><div>A central theme emerged: implementation of AI-based prognostication was seen as a form of “screening” for end-of-life (EoL). While the idea of prognostication as screening for EoL is novel, the ethics of screening in other clinical contexts is well-established. For this reason, we drew on a model of screening ethics (1) as a framework for our analysis. Interpreting our interview data through this lens, we identified four principles to guide the implementation of AI-based prognostication as screening: (i) screening for EoL should be evidence based, (ii) screening for EoL should take opportunity cost into account, (iii) screening for EoL should distribute costs and benefits fairly, and (iv) screening for EoL should offer respect for persons and their dignity.</div></div><div><h3>Conclusion</h3><div>Our findings help us understand how palliative care team members view emerging AI-based prognostic tools and offer guiding principles for their implementation as screening for EoL. In the future, it will be important to define the role of screening in this context and to understand how the result of the screening affects decision-making for patients, families, and care teams.</div></div><div><h3>References</h3><div>1.Bailey MA, Murray TH. Ethics, evidence, and cost in newborn screening. 
Hastings Cent Rep 2008;38(3):23-31.</div></div>","PeriodicalId":16634,"journal":{"name":"Journal of pain and symptom management","volume":"69 5","pages":"Pages e425-e426"},"PeriodicalIF":3.2000,"publicationDate":"2025-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of pain and symptom management","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0885392425000910","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"CLINICAL NEUROLOGY","Score":null,"Total":0}
引用次数: 0

Outcomes

1. Participants will be able to comprehend the use of artificial intelligence-based prognostication as a form of “screening” for end-of-life.
2. Participants will be able to analyze the ethical challenges that could shape the implementation of artificial intelligence-based prognostication in palliative care and apply ethical principles that can help guide that implementation.

Key Message

Based on qualitative interviews at four U.S. medical centers, palliative care team members view artificial intelligence-based prognostication tools as a form of “screening.” Understood this way, the established ethical principles governing screening can yield concrete recommendations for the ethical use of these AI prognostic tools.

Abstract

Artificial Intelligence (AI) tools for healthcare applications are rapidly emerging, with some already in use and more on the way. One example is AI-based prognostication tools, which can predict patient mortality automatically and with accuracy that outperforms clinicians and other available tools. Prognostication may be particularly important in palliative care; these tools may change practice in ways we do not fully understand and raise important ethical and implementation questions.

Objective

To identify the ethical challenges that could shape implementation of AI-based prognostication in palliative care.

Methods

We conducted semi-structured interviews with 45 palliative care physicians, nurses, and other team members from four academic medical centers. Interviews were transcribed and analyzed using grounded theory.

Results

A central theme emerged: implementation of AI-based prognostication was seen as a form of “screening” for end-of-life (EoL). While the idea of prognostication as screening for EoL is novel, the ethics of screening in other clinical contexts is well established. For this reason, we drew on a model of screening ethics (1) as a framework for our analysis. Interpreting our interview data through this lens, we identified four principles to guide the implementation of AI-based prognostication as screening: (i) screening for EoL should be evidence-based, (ii) screening for EoL should take opportunity cost into account, (iii) screening for EoL should distribute costs and benefits fairly, and (iv) screening for EoL should respect persons and their dignity.

Conclusion

Our findings help us understand how palliative care team members view emerging AI-based prognostic tools and offer guiding principles for their implementation as screening for EoL. In the future, it will be important to define the role of screening in this context and to understand how screening results affect decision-making for patients, families, and care teams.

References

1. Bailey MA, Murray TH. Ethics, evidence, and cost in newborn screening. Hastings Cent Rep. 2008;38(3):23-31.