Should Physicians Take the Rap? Normative Analysis of Clinician Perspectives on Responsible Use of "Black Box" AI Tools.
Ben H Lang, Kristin Kostick-Quenet, Jared N Smith, Meghan Hurley, Rita Dexter, Jennifer Blumenthal-Barby
AJOB Empirical Bioethics, published 2025-05-12, pp. 1-12. DOI: 10.1080/23294515.2025.2497755
Abstract
Background: Increasing interest in deploying artificial intelligence tools in clinical contexts has raised several ethical questions of both normative and empirical interest. One such question in the literature is whether "responsibility gaps" (r-gaps) are created when clinicians utilize or rely on such tools for providing care, and if so, what to do about them. These gaps are particularly likely to arise when using opaque, "black box" AI tools. Compared to normative and legal analysis of AI-generated responsibility gaps in health care, little is known, empirically, about health care providers' views on this issue. The present study examines clinician perspectives on this issue in the context of black box AI decisional support systems (BBAI-DSS) in advanced heart failure.
Methods: Semi-structured interviews were conducted with 20 clinicians (14 cardiologists and 6 LVAD nurse coordinators). Interviews were transcribed, coded, and thematically analyzed for salient themes. All study procedures were approved by the local IRB.
Results: We found that all clinicians voiced that, if someone were responsible for the use and outcomes of black box AI, it would be physicians. We compare clinician perspectives on the existence of r-gaps, and on their impact on responsibility for errors or adverse outcomes when BBAI-DSS tools are used, against a taxonomy from the literature, finding that some clinicians acknowledge an r-gap while others deny its existence or its relevance to medical decision-making.
Conclusion: Clinicians varied in their views about the existence of r-gaps but were united in their ascriptions of physician responsibility for the use of BBAI-DSS in clinical care. It was at times unclear whether these were descriptive judgments, normative judgments (i.e., is it merely inevitable that physicians will be responsible, or is it morally appropriate that they be held responsible?), or both. We discuss the likely normative inadequacy of such a conception of physician responsibility for BBAI tool use.