{"title":"Xenotransplantation Clinical Trials and Equitable Patient Selection.","authors":"Christopher Bobier, Daniel Rodger","doi":"10.1017/S096318012300052X","DOIUrl":"10.1017/S096318012300052X","url":null,"abstract":"<p><p>Xenotransplant patient selection recommendations restrict clinical trial participation to seriously ill patients for whom alternative therapies are unavailable or who will likely die while waiting for an allotransplant. Despite a scholarly consensus that this is advisable, we propose to examine this restriction. We offer three lines of criticism: (1) The risk-benefit calculation may well be unfavorable for seriously ill patients and society; (2) the guidelines conflict with criteria for equitable patient selection; and (3) the selection of seriously ill patients may compromise informed consent. We conclude by highlighting how the current guidance reveals a tension between the societal values of justice and beneficence.</p>","PeriodicalId":55300,"journal":{"name":"Cambridge Quarterly of Healthcare Ethics","volume":" ","pages":"425-434"},"PeriodicalIF":1.5,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41140389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ethics Education in Health Sciences Should Engage Contentious Social Issues: Here Is Why and How.","authors":"Jon Tilburt, Fred Hafferty, Andrea Leep Hunderfund, Ellen Meltzer, Bjorg Thorsteinsdottir","doi":"10.1017/S0963180123000567","DOIUrl":"10.1017/S0963180123000567","url":null,"abstract":"<p><p>Teaching ethics is crucial to health sciences education. Doing it well requires a willingness to engage contentious social issues. Those issues introduce conflict and risk, but avoiding them ignores moral diversity and renders the work of ethics education irrelevant. Therefore, when (not if) contentious issues and moral differences arise, they must be acknowledged and can be addressed with humility, collegiality, and openness to support learning. Faculty must risk moments when not everyone will \"feel safe,\" so the candor implied in psychological safety can emerge. The deliberative and social work of ethics education involves generous listening, wading into difference, and wondering together if our beliefs and arguments are as sound as we once thought. By <i>forecasting</i> the need for candid engagement with contentious issues and moral difference, establishing <i>ground rules</i>, and bolstering <i>due process structures</i> for faculty and students, a riskier and more relevant ethics pedagogy can emerge. Doing so will prepare everyone for the moral diversity they can expect in our common life and in practice.</p>","PeriodicalId":55300,"journal":{"name":"Cambridge Quarterly of Healthcare Ethics","volume":" ","pages":"435-439"},"PeriodicalIF":1.5,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139089461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Virtues of Interpretable Medical AI.","authors":"Joshua Hatherley, Robert Sparrow, Mark Howard","doi":"10.1017/S0963180122000664","DOIUrl":"10.1017/S0963180122000664","url":null,"abstract":"<p><p>Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are \"black boxes.\" The initial response in the literature was a demand for \"explainable AI.\" However, recently, several authors have suggested that making AI more explainable or \"interpretable\" is likely to be at the cost of the accuracy of these systems and that prioritizing interpretability in medical AI may constitute a \"lethal prejudice.\" In this paper, we defend the value of interpretability in the context of the use of AI in medicine. Clinicians may prefer interpretable systems over more accurate black boxes, which in turn is sufficient to give designers of AI reason to prefer more interpretable systems in order to ensure that AI is adopted and its benefits realized. Moreover, clinicians may be justified in this preference. Achieving the downstream benefits from AI is critically dependent on how the outputs of these systems are interpreted by physicians and patients. A preference for the use of highly accurate black box AI systems, over less accurate but more interpretable systems, may itself constitute a form of lethal prejudice that may diminish the benefits of AI to-and perhaps even harm-patients.</p>","PeriodicalId":55300,"journal":{"name":"Cambridge Quarterly of Healthcare Ethics","volume":" ","pages":"323-332"},"PeriodicalIF":1.5,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10514452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning to Live with Strange Error: Beyond Trustworthiness in Artificial Intelligence Ethics.","authors":"Charles Rathkopf, Bert Heinrichs","doi":"10.1017/S0963180122000688","DOIUrl":"10.1017/S0963180122000688","url":null,"abstract":"<p><p>Position papers on artificial intelligence (AI) ethics are often framed as attempts to work out technical and regulatory strategies for attaining what is commonly called <i>trustworthy AI.</i> In such papers, the technical and regulatory strategies are frequently analyzed in detail, but the concept of trustworthy AI is not. As a result, it remains unclear. This paper lays out a variety of possible interpretations of the concept and concludes that none of them is appropriate. The central problem is that, by framing the ethics of AI in terms of trustworthiness, we reinforce unjustified anthropocentric assumptions that stand in the way of clear analysis. Furthermore, even if we insist on a purely epistemic interpretation of the concept, according to which trustworthiness just means measurable reliability, it turns out that the analysis will, nevertheless, suffer from a subtle form of anthropocentrism. The paper goes on to develop the concept of strange error, which serves both to sharpen the initial diagnosis of the inadequacy of trustworthy AI and to articulate the novel epistemological situation created by the use of AI. The paper concludes with a discussion of how strange error puts pressure on standard practices of assessing moral culpability, particularly in the context of medicine.</p>","PeriodicalId":55300,"journal":{"name":"Cambridge Quarterly of Healthcare Ethics","volume":" ","pages":"333-345"},"PeriodicalIF":1.5,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10553771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Editorial: The Ethical Implications of Using AI in Medicine.","authors":"Orsolya Friedrich, Sebastian Schleidgen","doi":"10.1017/S0963180123000671","DOIUrl":"10.1017/S0963180123000671","url":null,"abstract":"","PeriodicalId":55300,"journal":{"name":"Cambridge Quarterly of Healthcare Ethics","volume":" ","pages":"307-309"},"PeriodicalIF":1.5,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139426137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reflection Machines: Supporting Effective Human Oversight Over Medical Decision Support Systems.","authors":"Pim Haselager, Hanna Schraffenberger, Serge Thill, Simon Fischer, Pablo Lanillos, Sebastiaan van de Groes, Miranda van Hooff","doi":"10.1017/S0963180122000718","DOIUrl":"10.1017/S0963180122000718","url":null,"abstract":"<p><p>Human decisions are increasingly supported by decision support systems (DSS). Humans are required to remain \"on the loop,\" by monitoring and approving/rejecting machine recommendations. However, use of DSS can lead to overreliance on machines, reducing human oversight. This paper proposes \"reflection machines\" (RM) to increase meaningful human control. An RM provides a medical expert not with suggestions for a decision, but with questions that stimulate reflection about decisions. It can refer to data points or suggest counterarguments that are less compatible with the planned decision. RMs think against the proposed decision in order to increase human resistance against automation complacency. Building on preliminary research, this paper will (1) make a case for deriving a set of design requirements for RMs from EU regulations, (2) suggest a way how RMs could support decision-making, (3) describe the possibility of how a prototype of an RM could apply to the medical domain of chronic low back pain, and (4) highlight the importance of exploring an RM's functionality and the experiences of users working with it.</p>","PeriodicalId":55300,"journal":{"name":"Cambridge Quarterly of Healthcare Ethics","volume":" ","pages":"380-389"},"PeriodicalIF":1.5,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10514450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Machine Ethics in Care: Could a Moral Avatar Enhance the Autonomy of Care-Dependent Persons?","authors":"Catrin Misselhorn","doi":"10.1017/S0963180123000555","DOIUrl":"10.1017/S0963180123000555","url":null,"abstract":"<p><p>It is a common view that artificial systems could play an important role in dealing with the shortage of caregivers due to demographic change. One argument to show that this is also in the interest of care-dependent persons is that artificial systems might significantly enhance user autonomy since they might stay longer in their homes. This argument presupposes that the artificial systems in question do not require permanent supervision and control by human caregivers. For this reason, they need the capacity for some degree of moral decision-making and agency to cope with morally relevant situations (artificial morality). Machine ethics provides the theoretical and ethical framework for artificial morality. This article scrutinizes the question how artificial moral agents that enhance user autonomy could look like. It discusses, in particular, the suggestion that they should be designed as moral avatars of their users to enhance user autonomy in a substantial sense.</p>","PeriodicalId":55300,"journal":{"name":"Cambridge Quarterly of Healthcare Ethics","volume":" ","pages":"346-359"},"PeriodicalIF":1.5,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139426138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Healthy Mistrust: Medical Black Box Algorithms, Epistemic Authority, and Preemptionism.","authors":"Andreas Wolkenstein","doi":"10.1017/S0963180123000646","DOIUrl":"10.1017/S0963180123000646","url":null,"abstract":"<p><p>In the ethics of algorithms, a specifically <i>epistemological</i> analysis is rarely undertaken in order to gain a critique (or a defense) of the handling of or trust in medical black box algorithms (BBAs). This article aims to begin to fill this research gap. Specifically, the thesis is examined according to which such algorithms are regarded as epistemic authorities (EAs) and that the results of a medical algorithm must completely replace other convictions that patients have (<i>preemptionism</i>). If this were true, it would be a reason to distrust medical BBAs. First, the author describes what EAs are and why BBAs can be considered EAs. Then, preemptionism will be outlined and criticized as an answer to the question of how to deal with an EA. The discussion leads to some requirements for dealing with a BBA as an EA.</p>","PeriodicalId":55300,"journal":{"name":"Cambridge Quarterly of Healthcare Ethics","volume":" ","pages":"370-379"},"PeriodicalIF":1.5,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139467348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sam's Story: Reflections on Suicide and the Doctor/Patient Relationship.","authors":"William Andereck","doi":"10.1017/S0963180123000610","DOIUrl":"10.1017/S0963180123000610","url":null,"abstract":"","PeriodicalId":55300,"journal":{"name":"Cambridge Quarterly of Healthcare Ethics","volume":" ","pages":"440-442"},"PeriodicalIF":1.5,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139075938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Naming and Describing Disability in Law and Medicine.","authors":"Heloise Robinson, Jonathan Herring","doi":"10.1017/S0963180123000609","DOIUrl":"10.1017/S0963180123000609","url":null,"abstract":"<p><p>This article explores the effects of naming and describing disability in law and medicine. Instead of focusing on substantive issues like medical treatment or legal rights, it will address questions which arise in relation to the use of language itself. When a label which is attached to a disability is associated with a negative meaning, this can have a profound effect on the individual concerned and can create stigma. Overly negative descriptions of disabilities can be misleading, not only for the individual, but also more broadly in society, if there are inaccurate perceptions about disability in the social context. This article will examine some relevant examples of terminology, where these issues arise. It will also suggest that the role of medicine and the law in naming and describing disability is particularly important because in these areas there is, perhaps more than anywhere else, a recognized source of authority for the choice of terminology. Labels and descriptions used in the medical and legal contexts can not only perpetuate existing stigmatization of disabled people, but can also contribute to creating stigma at its source, given that the words used in these contexts can constitute an exercise of power.</p>","PeriodicalId":55300,"journal":{"name":"Cambridge Quarterly of Healthcare Ethics","volume":" ","pages":"401-412"},"PeriodicalIF":1.5,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139075937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}