Laura Arbelaez Ossa, Stephen R Milford, Michael Rost, Anja K Leist, David M Shaw, Bernice S Elger
{"title":"伦理视角下的人工智能:医疗保健领域人工智能指南的话语分析》。","authors":"Laura Arbelaez Ossa, Stephen R Milford, Michael Rost, Anja K Leist, David M Shaw, Bernice S Elger","doi":"10.1007/s11948-024-00486-0","DOIUrl":null,"url":null,"abstract":"<p><p>While the technologies that enable Artificial Intelligence (AI) continue to advance rapidly, there are increasing promises regarding AI's beneficial outputs and concerns about the challenges of human-computer interaction in healthcare. To address these concerns, institutions have increasingly resorted to publishing AI guidelines for healthcare, aiming to align AI with ethical practices. However, guidelines as a form of written language can be analyzed to recognize the reciprocal links between its textual communication and underlying societal ideas. From this perspective, we conducted a discourse analysis to understand how these guidelines construct, articulate, and frame ethics for AI in healthcare. We included eight guidelines and identified three prevalent and interwoven discourses: (1) AI is unavoidable and desirable; (2) AI needs to be guided with (some forms of) principles (3) trust in AI is instrumental and primary. These discourses signal an over-spillage of technical ideals to AI ethics, such as over-optimism and resulting hyper-criticism. This research provides insights into the underlying ideas present in AI guidelines and how guidelines influence the practice and alignment of AI with ethical, legal, and societal values expected to shape AI in healthcare.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"24"},"PeriodicalIF":2.7000,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11150179/pdf/","citationCount":"0","resultStr":"{\"title\":\"AI Through Ethical Lenses: A Discourse Analysis of Guidelines for AI in Healthcare.\",\"authors\":\"Laura Arbelaez Ossa, Stephen R Milford, Michael Rost, Anja K Leist, David M Shaw, Bernice S Elger\",\"doi\":\"10.1007/s11948-024-00486-0\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>While the technologies that enable Artificial Intelligence (AI) continue to advance rapidly, there are increasing promises regarding AI's beneficial outputs and concerns about the challenges of human-computer interaction in healthcare. To address these concerns, institutions have increasingly resorted to publishing AI guidelines for healthcare, aiming to align AI with ethical practices. However, guidelines as a form of written language can be analyzed to recognize the reciprocal links between its textual communication and underlying societal ideas. From this perspective, we conducted a discourse analysis to understand how these guidelines construct, articulate, and frame ethics for AI in healthcare. We included eight guidelines and identified three prevalent and interwoven discourses: (1) AI is unavoidable and desirable; (2) AI needs to be guided with (some forms of) principles (3) trust in AI is instrumental and primary. These discourses signal an over-spillage of technical ideals to AI ethics, such as over-optimism and resulting hyper-criticism. 
This research provides insights into the underlying ideas present in AI guidelines and how guidelines influence the practice and alignment of AI with ethical, legal, and societal values expected to shape AI in healthcare.</p>\",\"PeriodicalId\":49564,\"journal\":{\"name\":\"Science and Engineering Ethics\",\"volume\":\"30 3\",\"pages\":\"24\"},\"PeriodicalIF\":2.7000,\"publicationDate\":\"2024-06-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11150179/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Science and Engineering Ethics\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://doi.org/10.1007/s11948-024-00486-0\",\"RegionNum\":2,\"RegionCategory\":\"哲学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Science and Engineering Ethics","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1007/s11948-024-00486-0","RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0
AI Through Ethical Lenses: A Discourse Analysis of Guidelines for AI in Healthcare.
Abstract
While the technologies that enable Artificial Intelligence (AI) continue to advance rapidly, there are increasing promises regarding AI's beneficial outputs and concerns about the challenges of human-computer interaction in healthcare. To address these concerns, institutions have increasingly resorted to publishing AI guidelines for healthcare, aiming to align AI with ethical practices. However, guidelines as a form of written language can be analyzed to recognize the reciprocal links between their textual communication and underlying societal ideas. From this perspective, we conducted a discourse analysis to understand how these guidelines construct, articulate, and frame ethics for AI in healthcare. We included eight guidelines and identified three prevalent and interwoven discourses: (1) AI is unavoidable and desirable; (2) AI needs to be guided with (some forms of) principles; and (3) trust in AI is instrumental and primary. These discourses signal an over-spillage of technical ideals into AI ethics, such as over-optimism and resulting hyper-criticism. This research provides insights into the underlying ideas present in AI guidelines and how guidelines influence the practice and alignment of AI with the ethical, legal, and societal values expected to shape AI in healthcare.
About the journal:
Science and Engineering Ethics is an international multidisciplinary journal dedicated to exploring ethical issues associated with science and engineering, covering professional education, research and practice as well as the effects of technological innovations and research findings on society.
While the focus of this journal is on science and engineering, contributions from a broad range of disciplines, including social sciences and humanities, are welcomed. Areas of interest include, but are not limited to, ethics of new and emerging technologies, research ethics, computer ethics, energy ethics, animals and human subjects ethics, ethics education in science and engineering, ethics in design, biomedical ethics, values in technology and innovation.
We welcome contributions that deal with these issues from an international perspective, particularly from countries that are underrepresented in these discussions.