{"title":"Meaningful Human Control over AI for Health? A Review.","authors":"Eva Maria Hille, Patrik Hummel, Matthias Braun","doi":"10.1136/jme-2023-109095","DOIUrl":"10.1136/jme-2023-109095","url":null,"abstract":"<p><p>Artificial intelligence is currently changing many areas of society. Especially in health, where critical decisions are made, questions of control must be renegotiated: who is in control when an automated system makes clinically relevant decisions? Increasingly, the concept of meaningful human control (MHC) is being invoked for this purpose. However, it is unclear exactly how this concept is to be understood in health. Through a systematic review, we present the current state of the concept of MHC in health. The results show that there is not yet a robust MHC concept for health. We propose a broader understanding of MHC along three strands of action: enabling, exercising and evaluating control. Taking into account these strands of action and the established rules and processes in the different health sectors, the MHC concept needs to be further developed to avoid falling into two gaps, which we have described as theoretical and labelling gaps.</p>","PeriodicalId":16317,"journal":{"name":"Journal of Medical Ethics","volume":" ","pages":"e50-e58"},"PeriodicalIF":3.4,"publicationDate":"2026-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13151516/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41148075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Medical AI, inductive risk and the communication of uncertainty: the case of disorders of consciousness.","authors":"Jonathan Birch","doi":"10.1136/jme-2023-109424","DOIUrl":"10.1136/jme-2023-109424","url":null,"abstract":"<p><p>Some patients, following brain injury, do not outwardly respond to spoken commands, yet show patterns of brain activity that indicate responsiveness. This is 'cognitive-motor dissociation' (CMD). Recent research has used machine learning to diagnose CMD from electroencephalogram recordings. These techniques have high false discovery rates, raising a serious problem of inductive risk. It is no solution to communicate the false discovery rates directly to the patient's family, because this information may confuse, alarm and mislead. Instead, we need a procedure for generating case-specific probabilistic assessments that can be communicated clearly. This article constructs a possible procedure with three key elements: (1) A shift from categorical 'responding or not' assessments to degrees of evidence; (2) The use of patient-centred priors to convert degrees of evidence to probabilistic assessments; and (3) The use of standardised probability yardsticks to convey those assessments as clearly as possible.</p>","PeriodicalId":16317,"journal":{"name":"Journal of Medical Ethics","volume":" ","pages":"e22-e29"},"PeriodicalIF":3.4,"publicationDate":"2026-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138047084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Which AI doctor would you like to see? Emulating healthcare provider-patient communication models with GPT-4: proof-of-concept and ethical exploration.","authors":"Hazem Zohny, Jemima Winifred Allen, Dominic Wilkinson, Julian Savulescu","doi":"10.1136/jme-2024-110256","DOIUrl":"10.1136/jme-2024-110256","url":null,"abstract":"<p><p>Large language models (LLMs) have demonstrated potential in enhancing various aspects of healthcare, including health provider-patient communication. However, some have raised the concern that such communication may adopt implicit communication norms that deviate from what patients want or need from talking with their healthcare provider. This paper explores the possibility of using LLMs to enable patients to choose their preferred communication style when discussing their medical cases. By providing a proof-of-concept demonstration using ChatGPT-4, we suggest LLMs can emulate different healthcare provider-patient communication approaches (building on Emanuel and Emanuel's four models: paternalistic, informative, interpretive and deliberative). This allows patients to engage in a communication style that aligns with their individual needs and preferences. We also highlight potential risks associated with using LLMs in healthcare communication, such as reinforcing patients' biases and the persuasive capabilities of LLMs that may lead to unintended manipulation.</p>","PeriodicalId":16317,"journal":{"name":"Journal of Medical Ethics","volume":" ","pages":"e36-e43"},"PeriodicalIF":3.4,"publicationDate":"2026-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7618332/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143542269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Algorithms advise, humans decide: the evidential role of the patient preference predictor.","authors":"Nicholas Makins","doi":"10.1136/jme-2024-110175","DOIUrl":"10.1136/jme-2024-110175","url":null,"abstract":"<p><p>An AI-based 'patient preference predictor' (PPP) is a proposed method for guiding healthcare decisions for patients who lack decision-making capacity. The proposal is to use correlations between sociodemographic data and known healthcare preferences to construct a model that predicts the unknown preferences of a particular patient. In this paper, I highlight a distinction that has been largely overlooked so far in debates about the PPP-that between algorithmic prediction and decision-making-and argue that much of the recent philosophical disagreement stems from this oversight. I show how three prominent objections to the PPP only challenge its use as the sole determinant of a choice, and actually support its use as a source of evidence about patient preferences to inform human decision-making. The upshot is that we should adopt the evidential conception of the PPP and shift our evaluation of this technology towards the ethics of algorithmic prediction, rather than decision-making.</p>","PeriodicalId":16317,"journal":{"name":"Journal of Medical Ethics","volume":" ","pages":"e30-e35"},"PeriodicalIF":3.4,"publicationDate":"2026-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142391123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reframing the responsibility gap in medical artificial intelligence: insights from causal selection and authorship attribution.","authors":"Kristian G Barman, Pawel Pawlowski, Jasper Debrabander","doi":"10.1136/jme-2024-110600","DOIUrl":"10.1136/jme-2024-110600","url":null,"abstract":"<p><p>The increasing use of AI in healthcare has sparked debates about responsibility and accountability for AI-related errors. The difficulty in attributing moral responsibility for undesirable outcomes caused by increasingly autonomous (often opaque) AI systems has become a new focal point in the debate on 'responsibility gaps'. We approach the problem of these gaps by offering a framework that combines causal selection principles from the philosophy of science with recent accounts of authorship attribution in AI contexts. We argue this framework offers a more comprehensive and context-sensitive approach to the responsibility gap in medical AI.</p>","PeriodicalId":16317,"journal":{"name":"Journal of Medical Ethics","volume":" ","pages":"e16-e21"},"PeriodicalIF":3.4,"publicationDate":"2026-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144181942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Artificial intelligence, invisible victims and the trolley problem.","authors":"Jacob M Appel","doi":"10.1136/jme-2024-110626","DOIUrl":"10.1136/jme-2024-110626","url":null,"abstract":"<p><p>The allocation of scarce healthcare resources inherently involves trade-offs between the interests of 'visible' and 'invisible' victims (ie, individuals who are aware that they are shortchanged by trade-offs and those who are not). At present, decisions regarding such trade-offs are often based on highly speculative predictions; the vast array of possible trade-offs simply cannot be enumerated, let alone the optimal outcomes calculated, by human beings. Artificial intelligence has the potential to change that reality by mining large data sets and other sources of information in order to produce far more precise and comprehensive predictions of likely outcomes and to delineate optimal allocation choices. Such technologies will inevitably render 'invisible' victims 'visible', generating a colossal, real-world trolley dilemma for anyone involved in medical or healthcare policy decision-making.</p>","PeriodicalId":16317,"journal":{"name":"Journal of Medical Ethics","volume":" ","pages":"e8-e11"},"PeriodicalIF":3.4,"publicationDate":"2026-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144062900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ethics of AI in medicine: how smarter systems lead to tougher judgments.","authors":"Shalom Chalson, Brian D Earp","doi":"10.1136/jme-2026-112003","DOIUrl":"https://doi.org/10.1136/jme-2026-112003","url":null,"abstract":"","PeriodicalId":16317,"journal":{"name":"Journal of Medical Ethics","volume":"52 e1","pages":"e1-e3"},"PeriodicalIF":3.4,"publicationDate":"2026-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147839048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Charting the ethical landscape of generative AI-augmented clinical documentation.","authors":"Qiwei Wilton Sun, Jennifer Miller, Sarah C Hull","doi":"10.1136/jme-2024-110656","DOIUrl":"10.1136/jme-2024-110656","url":null,"abstract":"<p><p>Generative artificial intelligence (AI) chatbots such as ChatGPT have several potential clinical applications, but their use for clinical documentation remains underexplored. AI-generated clinical documentation presents an appealing solution to administrative burden but raises new and old ethical concerns that may be overlooked. This article reviews the potential use of generative AI chatbots for purposes such as note-writing, handoffs, and prior authorisation letters, and the ethical considerations arising from their use in this context. AI-generated documentation may offer standardised and consistent documentation across encounters but may also embed biases that can spread across clinical teams relying on previous notes or handoffs, compromising clinical judgement, especially for vulnerable populations such as cognitively impaired or non-English-speaking patients. These tools may transform clinician-patient relationships by reducing administrative work and enhancing shared decision-making but may also compromise the emotional and moral elements of patient care. Moreover, the lack of algorithmic transparency raises concerns that may complicate the determination of responsibility when errors occur. To address these considerations, we propose notifying patients when the use of AI-generated clinical documentation meaningfully impacts their understanding of care, requiring clinician review of drafts, and clarifying areas of ambiguity to protect patient autonomy. Generative AI-specific legislation, error reporting databases and accountable measures for clinicians and AI developers can promote transparency. Equitable deployment requires careful procurement of training data representative of the populations served that incorporate social determinants while engaging stakeholders, ensuring cultural sensitivity in generated text, and enhancing medical education.</p>","PeriodicalId":16317,"journal":{"name":"Journal of Medical Ethics","volume":" ","pages":"e4-e7"},"PeriodicalIF":3.4,"publicationDate":"2026-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144173959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Artificial intelligence, pharmaceutical development and dual-use research of concern: a call to action.","authors":"Christopher Bobier, Daniel J Hurst, John Obeid","doi":"10.1136/jme-2025-110750","DOIUrl":"10.1136/jme-2025-110750","url":null,"abstract":"<p><p>Fervent attention was paid to what is coined dual-use research (DUR), or research that can both benefit and harm humanity, and dual-use research of concern (DURC), a particular subset of DUR that is reasonably anticipated to be a safety and security concern if misapplied. The aim of this paper is not to reiterate the challenges of DURC governance but to look at a new turn in DURC, namely the challenges posed by the use of artificial intelligence (AI) in pharmaceutical development. This is important, as AI is increasingly being used for pharmaceutical development in the industry. There is growing recognition that AI is DURC, and there is a dearth of industry and governmental guidance.</p>","PeriodicalId":16317,"journal":{"name":"Journal of Medical Ethics","volume":" ","pages":"e12-e15"},"PeriodicalIF":3.4,"publicationDate":"2026-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143730329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What does it mean for a clinical AI to be just: conflicts between local fairness and being fit-for-purpose?","authors":"Michal Pruski","doi":"10.1136/jme-2023-109675","DOIUrl":"10.1136/jme-2023-109675","url":null,"abstract":"<p><p>There have been repeated calls to ensure that clinical artificial intelligence (AI) is not discriminatory, that is, it provides its intended benefit to all members of society irrespective of the status of any protected characteristics of individuals in whose healthcare the AI might participate. There have also been repeated calls to ensure that any clinical AI is tailored to the local population in which it is being used to ensure that it is fit-for-purpose. Yet, there might be a clash between these two calls since tailoring an AI to a local population might reduce its effectiveness when the AI is used in the care of individuals who have characteristics which are not represented in the local population. Here, I explore the bioethical concept of local fairness as applied to clinical AI. I first introduce the discussion concerning fairness and inequalities in healthcare and how this problem has continued in attempts to develop AI-enhanced healthcare. I then discuss various technical aspects which might affect the implementation of local fairness. Next, I introduce some rule of law considerations into the discussion to contextualise the issue better by drawing key parallels. I then discuss some potential technical solutions which have been proposed to address the issue of local fairness. Finally, I outline which solutions I consider most likely to contribute to a fit-for-purpose and fair AI.</p>","PeriodicalId":16317,"journal":{"name":"Journal of Medical Ethics","volume":" ","pages":"e44-e49"},"PeriodicalIF":3.4,"publicationDate":"2026-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139996516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}