{"title":"What Are Humans Doing in the Loop? Co-Reasoning and Practical Judgment When Using Machine Learning-Driven Decision Aids.","authors":"Sabine Salloch, Andreas Eriksen","doi":"10.1080/15265161.2024.2353800","DOIUrl":null,"url":null,"abstract":"<p><p>Within the ethical debate on Machine Learning-driven decision support systems (ML_CDSS), notions such as \"human in the loop\" or \"meaningful human control\" are often cited as being necessary for ethical legitimacy. In addition, ethical principles usually serve as the major point of reference in ethical guidance documents, stating that conflicts between principles need to be weighed and balanced against each other. Starting from a neo-Kantian viewpoint inspired by Onora O'Neill, this article makes a concrete suggestion of how to interpret the role of the \"human in the loop\" and to overcome the perspective of rivaling ethical principles in the evaluation of AI in health care. We argue that patients should be perceived as \"fellow workers\" and epistemic partners in the interpretation of ML_CDSS outputs. We further highlight that a meaningful process of integrating (rather than weighing and balancing) ethical principles is most appropriate in the evaluation of medical AI.</p>","PeriodicalId":50962,"journal":{"name":"American Journal of Bioethics","volume":" ","pages":"67-78"},"PeriodicalIF":17.0000,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"American Journal of Bioethics","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1080/15265161.2024.2353800","RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/5/20 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"ETHICS","Score":null,"Total":0}
Citations: 0
Abstract
Within the ethical debate on Machine Learning-driven decision support systems (ML_CDSS), notions such as "human in the loop" or "meaningful human control" are often cited as necessary for ethical legitimacy. In addition, ethical principles usually serve as the major point of reference in ethical guidance documents, which state that conflicting principles need to be weighed and balanced against each other. Starting from a neo-Kantian viewpoint inspired by Onora O'Neill, this article makes a concrete suggestion for how to interpret the role of the "human in the loop" and how to move beyond the view of ethical principles as rivals in the evaluation of AI in health care. We argue that patients should be perceived as "fellow workers" and epistemic partners in the interpretation of ML_CDSS outputs. We further highlight that a meaningful process of integrating (rather than weighing and balancing) ethical principles is most appropriate in the evaluation of medical AI.
About the journal:
The American Journal of Bioethics (AJOB) is a renowned global publication focused on bioethics that tackles pressing ethical challenges in the health sciences.
Committed to the original vision of bioethics, AJOB explores the social consequences of advances in biomedicine. It sparks meaningful discussions that have proved invaluable to a wide range of professionals, including judges, senators, journalists, scholars, and educators.
AJOB covers various areas of interest, such as the ethical implications of clinical research, ensuring access to healthcare services, and the responsible handling of medical records and data.
The journal welcomes contributions in the form of target articles presenting original research, open peer commentaries facilitating dialogue, book reviews, and responses to open peer commentaries.
By presenting insightful and authoritative content, AJOB continues to shape the field of bioethics and engage diverse stakeholders in crucial conversations about the intersection of medicine, ethics, and society.