Journal of Medical Ethics — Latest Articles

Meaningful Human Control over AI for Health? A Review.
IF 3.4 · CAS Zone 2 (Philosophy)
Journal of Medical Ethics · Pub Date: 2026-05-05 · DOI: 10.1136/jme-2023-109095
Eva Maria Hille, Patrik Hummel, Matthias Braun
Abstract: Artificial intelligence is currently changing many areas of society. Especially in health, where critical decisions are made, questions of control must be renegotiated: who is in control when an automated system makes clinically relevant decisions? Increasingly, the concept of meaningful human control (MHC) is invoked for this purpose, but it is unclear exactly how this concept should be understood in health. Through a systematic review, we present the current state of the concept of MHC in health. The results show that there is not yet a robust MHC concept for health. We propose a broader understanding of MHC along three strands of action: enabling, exercising and evaluating control. Taking these strands of action and the established rules and processes of the different health sectors into account, the MHC concept needs further development to avoid falling into two gaps, which we describe as the theoretical gap and the labelling gap.
Pages: e50-e58. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13151516/pdf/
Citations: 0
Medical AI, inductive risk and the communication of uncertainty: the case of disorders of consciousness.
IF 3.4 · CAS Zone 2 (Philosophy)
Journal of Medical Ethics · Pub Date: 2026-05-05 · DOI: 10.1136/jme-2023-109424
Jonathan Birch
Abstract: Some patients, following brain injury, do not outwardly respond to spoken commands, yet show patterns of brain activity that indicate responsiveness. This is 'cognitive-motor dissociation' (CMD). Recent research has used machine learning to diagnose CMD from electroencephalogram recordings. These techniques have high false discovery rates, raising a serious problem of inductive risk. It is no solution to communicate the false discovery rates directly to the patient's family, because this information may confuse, alarm and mislead. Instead, we need a procedure for generating case-specific probabilistic assessments that can be communicated clearly. This article constructs a possible procedure with three key elements: (1) a shift from categorical 'responding or not' assessments to degrees of evidence; (2) the use of patient-centred priors to convert degrees of evidence to probabilistic assessments; and (3) the use of standardised probability yardsticks to convey those assessments as clearly as possible.
Pages: e22-e29
Citations: 0
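Element (2) of the procedure described above, converting a degree of evidence into a probabilistic assessment via a patient-centred prior, is in odds form a standard Bayesian update, and element (3) then maps the posterior onto a fixed verbal scale. A minimal sketch follows; the probability bands and labels are illustrative assumptions of mine, not taken from the article:

```python
def posterior_probability(prior: float, bayes_factor: float) -> float:
    """Bayesian update in odds form: posterior odds = prior odds * Bayes factor.

    `prior` is a patient-centred prior probability of responsiveness;
    `bayes_factor` quantifies the degree of evidence from the EEG analysis.
    """
    if not 0.0 < prior < 1.0:
        raise ValueError("prior must be strictly between 0 and 1")
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1.0 + posterior_odds)


# Hypothetical standardised yardstick: (upper bound, verbal label).
# The cut-points below are illustrative, not the article's.
YARDSTICK = [
    (0.10, "very unlikely"),
    (0.33, "unlikely"),
    (0.66, "about as likely as not"),
    (0.90, "likely"),
    (1.01, "very likely"),
]


def verbal_assessment(p: float) -> str:
    """Map a posterior probability to the first yardstick band containing it."""
    for upper, label in YARDSTICK:
        if p < upper:
            return label
    return YARDSTICK[-1][1]
```

For example, a prior of 0.2 combined with a Bayes factor of 4 gives posterior odds of 1, i.e. a probability of 0.5, which the yardstick reports as "about as likely as not" rather than as a raw false-discovery statistic.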
Which AI doctor would you like to see? Emulating healthcare provider-patient communication models with GPT-4: proof-of-concept and ethical exploration.
IF 3.4 · CAS Zone 2 (Philosophy)
Journal of Medical Ethics · Pub Date: 2026-05-05 · DOI: 10.1136/jme-2024-110256
Hazem Zohny, Jemima Winifred Allen, Dominic Wilkinson, Julian Savulescu
Abstract: Large language models (LLMs) have demonstrated potential in enhancing various aspects of healthcare, including health provider-patient communication. However, some have raised the concern that such communication may adopt implicit communication norms that deviate from what patients want or need from talking with their healthcare provider. This paper explores the possibility of using LLMs to enable patients to choose their preferred communication style when discussing their medical cases. By providing a proof-of-concept demonstration using ChatGPT-4, we suggest LLMs can emulate different healthcare provider-patient communication approaches (building on Emanuel and Emanuel's four models: paternalistic, informative, interpretive and deliberative). This allows patients to engage in a communication style that aligns with their individual needs and preferences. We also highlight potential risks associated with using LLMs in healthcare communication, such as reinforcing patients' biases, and the persuasive capabilities of LLMs that may lead to unintended manipulation.
Pages: e36-e43. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7618332/pdf/
Citations: 0
Algorithms advise, humans decide: the evidential role of the patient preference predictor.
IF 3.4 · CAS Zone 2 (Philosophy)
Journal of Medical Ethics · Pub Date: 2026-05-05 · DOI: 10.1136/jme-2024-110175
Nicholas Makins
Abstract: An AI-based 'patient preference predictor' (PPP) is a proposed method for guiding healthcare decisions for patients who lack decision-making capacity. The proposal is to use correlations between sociodemographic data and known healthcare preferences to construct a model that predicts the unknown preferences of a particular patient. In this paper, I highlight a distinction that has been largely overlooked so far in debates about the PPP, that between algorithmic prediction and decision-making, and argue that much of the recent philosophical disagreement stems from this oversight. I show how three prominent objections to the PPP only challenge its use as the sole determinant of a choice, and actually support its use as a source of evidence about patient preferences to inform human decision-making. The upshot is that we should adopt the evidential conception of the PPP and shift our evaluation of this technology towards the ethics of algorithmic prediction, rather than decision-making.
Pages: e30-e35
Citations: 0
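The evidential conception described above treats the PPP's output as a probability handed to human decision-makers, not as the decision itself. A toy sketch of that division of labour, using a simple group-frequency model (the class, field names and smoothing choice are my illustrative assumptions; the paper proposes no particular model):

```python
from collections import defaultdict


class PreferencePredictor:
    """Toy PPP: estimates P(prefers treatment | sociodemographic profile).

    It returns a probability as *evidence* for a human surrogate or clinician;
    it deliberately has no method that outputs a treatment decision.
    """

    def __init__(self):
        # profile tuple -> [count preferring treatment, total count]
        self.counts = defaultdict(lambda: [0, 0])

    def fit(self, records):
        """records: iterable of (profile_tuple, prefers_treatment: bool)."""
        for profile, prefers in records:
            yes, total = self.counts[profile]
            self.counts[profile] = [yes + int(prefers), total + 1]

    def predict_proba(self, profile) -> float:
        """Laplace-smoothed frequency; unseen profiles get an uninformative 0.5."""
        yes, total = self.counts[profile]
        return (yes + 1) / (total + 2)
```

On this conception, `predict_proba` output would be weighed alongside testimony from family and clinical context; the choice itself stays with the humans, which is exactly why the objections to the PPP as sole determinant do not touch this use.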
Reframing the responsibility gap in medical artificial intelligence: insights from causal selection and authorship attribution.
IF 3.4 · CAS Zone 2 (Philosophy)
Journal of Medical Ethics · Pub Date: 2026-05-05 · DOI: 10.1136/jme-2024-110600
Kristian G Barman, Pawel Pawlowski, Jasper Debrabander
Abstract: The increasing use of AI in healthcare has sparked debates about responsibility and accountability for AI-related errors. The difficulty of attributing moral responsibility for undesirable outcomes caused by increasingly autonomous (often opaque) AI systems has become a new focal point in the debate on 'responsibility gaps'. We approach the problem of these gaps by offering a framework that combines causal selection principles from the philosophy of science with recent accounts of authorship attribution in AI contexts. We argue this framework offers a more comprehensive and context-sensitive approach to the responsibility gap in medical AI.
Pages: e16-e21
Citations: 0
Artificial intelligence, invisible victims and the trolley problem.
IF 3.4 · CAS Zone 2 (Philosophy)
Journal of Medical Ethics · Pub Date: 2026-05-05 · DOI: 10.1136/jme-2024-110626
Jacob M Appel
Abstract: The allocation of scarce healthcare resources inherently involves trade-offs between the interests of 'visible' and 'invisible' victims (ie, individuals who are aware that they are shortchanged by trade-offs and those who are not). At present, decisions regarding such trade-offs are often based on highly speculative predictions; the vast array of possible trade-offs simply cannot be enumerated, let alone the optimal outcomes calculated, by human beings. Artificial intelligence has the potential to change that reality by mining large data sets and other sources of information to produce far more precise and comprehensive predictions of likely outcomes and to delineate optimal allocation choices. Such technologies will inevitably render 'invisible' victims 'visible', generating a colossal, real-world trolley dilemma for anyone involved in medical or healthcare policy decision-making.
Pages: e8-e11
Citations: 0
Ethics of AI in medicine: how smarter systems lead to tougher judgments.
IF 3.4 · CAS Zone 2 (Philosophy)
Journal of Medical Ethics · Pub Date: 2026-05-05 · DOI: 10.1136/jme-2026-112003
Shalom Chalson, Brian D Earp
Pages: e1-e3. (No abstract available.)
Citations: 0
Charting the ethical landscape of generative AI-augmented clinical documentation.
IF 3.4 · CAS Zone 2 (Philosophy)
Journal of Medical Ethics · Pub Date: 2026-05-05 · DOI: 10.1136/jme-2024-110656
Qiwei Wilton Sun, Jennifer Miller, Sarah C Hull
Abstract: Generative artificial intelligence (AI) chatbots such as ChatGPT have several potential clinical applications, but their use for clinical documentation remains underexplored. AI-generated clinical documentation presents an appealing solution to administrative burden but raises new and old ethical concerns that may be overlooked. This article reviews the potential use of generative AI chatbots for purposes such as note-writing, handoffs and prior authorisation letters, and the ethical considerations arising from their use in this context. AI-generated documentation may offer standardised and consistent documentation across encounters but may also embed biases that can spread across clinical teams relying on previous notes or handoffs, compromising clinical judgement, especially for vulnerable populations such as cognitively impaired or non-English-speaking patients. These tools may transform clinician-patient relationships by reducing administrative work and enhancing shared decision-making, but may also compromise the emotional and moral elements of patient care. Moreover, the lack of algorithmic transparency raises concerns that may complicate the determination of responsibility when errors occur. To address these considerations, we propose notifying patients when the use of AI-generated clinical documentation meaningfully impacts their understanding of care, requiring clinician review of drafts, and clarifying areas of ambiguity to protect patient autonomy. Generative AI-specific legislation, error-reporting databases and accountability measures for clinicians and AI developers can promote transparency. Equitable deployment requires careful procurement of training data representative of the populations served that incorporate social determinants, while engaging stakeholders, ensuring cultural sensitivity in generated text, and enhancing medical education.
Pages: e4-e7
Citations: 0
Artificial intelligence, pharmaceutical development and dual-use research of concern: a call to action.
IF 3.4 · CAS Zone 2 (Philosophy)
Journal of Medical Ethics · Pub Date: 2026-05-05 · DOI: 10.1136/jme-2025-110750
Christopher Bobier, Daniel J Hurst, John Obeid
Abstract: Considerable attention has been paid to dual-use research (DUR), that is, research that can both benefit and harm humanity, and to dual-use research of concern (DURC), a subset of DUR that is reasonably anticipated to pose safety and security concerns if misapplied. The aim of this paper is not to reiterate the challenges of DURC governance but to examine a new turn in DURC, namely the challenges posed by the use of artificial intelligence (AI) in pharmaceutical development. This is important, as AI is increasingly being used for pharmaceutical development in the industry. There is growing recognition that such AI constitutes DURC, yet there is a dearth of industry and governmental guidance.
Pages: e12-e15
Citations: 0
What does it mean for a clinical AI to be just: conflicts between local fairness and being fit-for-purpose?
IF 3.4 · CAS Zone 2 (Philosophy)
Journal of Medical Ethics · Pub Date: 2026-05-05 · DOI: 10.1136/jme-2023-109675
Michal Pruski
Abstract: There have been repeated calls to ensure that clinical artificial intelligence (AI) is not discriminatory, that is, that it provides its intended benefit to all members of society irrespective of the status of any protected characteristics of individuals in whose healthcare the AI might participate. There have also been repeated calls to ensure that any clinical AI is tailored to the local population in which it is being used, to ensure that it is fit-for-purpose. Yet there might be a clash between these two calls, since tailoring an AI to a local population might reduce its effectiveness when the AI is used in the care of individuals who have characteristics not represented in the local population. Here, I explore the bioethical concept of local fairness as applied to clinical AI. I first introduce the discussion concerning fairness and inequalities in healthcare and how this problem has continued in attempts to develop AI-enhanced healthcare. I then discuss various technical aspects which might affect the implementation of local fairness. Next, I introduce some rule-of-law considerations to contextualise the issue better by drawing key parallels. I then discuss some potential technical solutions which have been proposed to address the issue of local fairness. Finally, I outline which solutions I consider most likely to contribute to a fit-for-purpose and fair AI.
Pages: e44-e49
Citations: 0