The Ethics of Artificial Intelligence-based Screening for End-of-life and Palliative Care

Kathryn Huber MD, Matthew DeCamp MD PhD, Ahmed Alasmar, Mika Hamer PhD MPH

Journal of Pain and Symptom Management 2025;69(5):e425-e426. DOI: 10.1016/j.jpainsymman.2025.02.031
Outcomes
1. Participants will be able to comprehend the use of artificial intelligence-based prognostication as a form of “screening” for end-of-life.
2. Participants will be able to analyze the ethical challenges that could shape the implementation of artificial intelligence-based prognostication in palliative care and apply ethical principles that can help guide that implementation.
Key Message
Based on qualitative interviews at four U.S. medical centers, palliative care team members view artificial intelligence-based prognostication tools as a form of “screening.” So understood, the established ethical principles governing screening can yield concrete recommendations for the ethical use of these AI prognostic tools.
Abstract
Artificial Intelligence (AI) tools for healthcare applications are rapidly emerging, with some tools already in use and more on their way. One example is AI-based prognostication tools, which can automatically predict patient mortality with accuracy that outperforms clinicians and other available tools. In palliative care, prognostication may be particularly important; these tools may change practice in ways we do not fully understand and raise important ethical and implementation questions.
Objective
To identify the ethical challenges that could shape implementation of AI-based prognostication in palliative care.
Methods
We conducted semi-structured interviews with 45 palliative care physicians, nurses, and other team members from four academic medical centers. Interviews were transcribed and analyzed using grounded theory.
Results
A central theme emerged: implementation of AI-based prognostication was seen as a form of “screening” for end-of-life (EoL). While the idea of prognostication as screening for EoL is novel, the ethics of screening in other clinical contexts is well established. For this reason, we drew on a model of screening ethics (1) as a framework for our analysis. Interpreting our interview data through this lens, we identified four principles to guide the implementation of AI-based prognostication as screening: (i) screening for EoL should be evidence-based, (ii) screening for EoL should take opportunity cost into account, (iii) screening for EoL should distribute costs and benefits fairly, and (iv) screening for EoL should respect persons and their dignity.
Conclusion
Our findings help us understand how palliative care team members view emerging AI-based prognostic tools and offer guiding principles for their implementation as screening for EoL. In the future, it will be important to define the role of screening in this context and to understand how the result of the screening affects decision-making for patients, families, and care teams.
References
1. Bailey MA, Murray TH. Ethics, evidence, and cost in newborn screening. Hastings Cent Rep 2008;38(3):23-31.
About the Journal
The Journal of Pain and Symptom Management is an internationally respected, peer-reviewed journal and serves an interdisciplinary audience of professionals by providing a forum for the publication of the latest clinical research and best practices related to the relief of illness burden among patients afflicted with serious or life-threatening illness.