Key issues face AI deployment in cancer care

IF 503.1 | CAS Tier 1 (Medicine) | JCR Q1 (Oncology)
Mike Fillon
{"title":"在癌症护理中应用人工智能面临的关键问题。","authors":"Mike Fillon","doi":"10.3322/caac.21860","DOIUrl":null,"url":null,"abstract":"<p>With artificial intelligence (AI) erupting across all aspects of life, including health care, oncology is a logical field ripe for new applications. AI is already used in cancer care and diagnosis, such as tumor identification on x-rays and pathology slides. Beyond that, emerging technology is using AI to forecast the prognosis of patients and to assess their treatment options. One unknown is how oncologists feel about this trend, which includes possibly relinquishing some control over their profession and patients.</p><p>A new study asked 204 oncologists for their views on the rapidly developing AI tools. Specifically, they were asked about ethical issues that they face regarding the deployment of AI (e.g., whether they believed that AI could be used effectively in patient-care decisions). The main issue that the researchers investigated was to what degree patients should provide explicit informed consent for the use of AI during treatment decision-making. The study appears in <i>JAMA Network Open</i> (doi:10.1001/jamanetworkopen.2024.4077).</p><p>In the study, which was conducted from November 15, 2022 to July 31, 2023, a random sample of oncologists from across the country were asked 24 questions via traditional mail (which included a $25 gift card) about their views on the use of AI in clinical practice. Follow-ups with nonresponders were conducted via email and phone calls.</p><p>Issues covered bias, responsibilities, and whether they would be able to explain to patients how the technology was deployed in determining their care. There were 387 surveys sent to oncologists; 52.7% (<i>n</i> = 204) were completed. Those responding came from 37 states; 63.7% (<i>n</i> = 120) were male, and 62.7% (<i>n</i> = 128) identified as non-Hispanic White.</p><p>Very few respondents said that AI prognostic and clinical decision models could be used clinically when only researchers could explain them (13.2% of respondents [<i>n</i> = 27] for prognosis and 7.8% [<i>n</i> = 16] for clinical decisions).</p><p>For AI prognostic and clinical decision models that oncologists could explain, the percentages were much higher: 81.3% (<i>n</i> = 165) and 84.8% (<i>n</i> = 173), respectively. Fewer respondents—13.8% (<i>n</i> = 28) and 23.0% (<i>n</i> = 47), respectively—reported that the models also needed to be explainable by patients.</p><p>The survey also found that 36.8% of oncologists (<i>n</i> = 75) believed that if an AI system selected a treatment regimen different from what they would recommend, they would present both options and let the patient decide. Although that represented less than half of the respondents, it was the most common answer.</p><p>Regarding responsibility for medical or legal problems arising from AI use, 90.7% of respondents (<i>n</i> = 185) indicated that AI developers should be held accountable. 
This was considerably higher than the 47.1% (<i>n</i> = 96) who felt that the responsibility should be shared with physicians and the 43.1% (<i>n</i> = 88) who believed that it should be shared with hospitals.</p><p>Although 76.5% of respondents (<i>n</i> = 156) noted that oncologists should protect patients from biased AI tools (e.g., a nongeneralizable data set used to inform a patient’s care), only 27.9% (<i>n</i> = 57) believed that they could recognize AI models that reflected bias.</p><p>“This study is very important,” says Shiraj Sen, MD, PhD, a medical oncologist at Texas Oncology and a phase 1 investigator and the director of clinical research at NEXT Oncology in Dallas, Texas. He feels that the technology is being developed at a rate that far outpaces clinicians’ knowledge about the implications.</p><p>“While AI tools in oncology are being rapidly developed, few studies are capturing oncologists’ perspectives around who will be responsible for the ethical domains of its use.”</p><p>Dr Sen adds, “Now is the time for oncologists to begin to think through and discuss the nuances of this. This study helps highlight the differences in opinion many oncologists are already beginning to share and underscores the need for broader discussion as a community on how the responsibilities of decision-making will be shared between the oncologist and patient when AI-assisted tools are utilized.”</p><p>Study author Andrew Hantel, MD, an instructor in medicine at Harvard Medical School and a faculty member in the Divisions of Leukemia and Population Sciences at Dana-Farber Cancer Institute and the Harvard Medical School Center for Bioethics in Boston, Massachusetts, says that it is impossible to miss the rapid progress of AI, which has many implications for health care, and its blend of opportunities and challenges. He notes that as AI begins to affect cancer care delivery, understanding the ethical implications from those who will be asked to implement it—oncologists—is crucial.</p><p>This survey, Dr Hantel adds, is designed to bring data to this space and focuses on ethical concerns such as explainability, consent, responsibility, and equity. “Our intent was to present the views of practicing oncologists so that AI is deployed in an ethical way that meets the needs of oncologists and patients while addressing potential ethical dilemmas.”</p><p>Dr Hantel says that before this survey, stakeholder views on these ethical concerns were not known. In addition to its novelty, he adds, the study is important because they found consensus among oncologists on several fronts: the necessity for AI models to be explainable by oncologists, the importance of patient consent in AI’s use for treatment decisions, and a strong belief by oncologists that their professional role included safeguarding patients from biased AI.</p><p>“Surprisingly, a significant number of respondents indicated a lack of confidence in identifying biases in AI models. The alignment on these points underscores the urgent need for structured AI education and ethical guidelines within oncology.” He adds, “Interestingly, while oncologists did not think patients needed to be able to explain AI models, when we presented them with a scenario in which AI disagreed with their treatment recommendation, the most common response was to present both options to the patient and let them decide. 
This finding highlights that many physicians are unsure about how to act in relation to AI and counsel patients about such situations.”</p><p>Dr Sen believes that AI tools are headed in three main directions. First, there are treatment decisions. “Fortunately for patients, the emergence of novel therapeutic options is providing oncologists with multiple treatment options in a particular treatment setting for any one individual patient. However, often these treatment options have not been studied thoroughly. AI tools that can help incorporate prognostic factors, various biomarkers, and other patient-related factors may soon be able to help in this scenario.”</p><p>Second is radiographic response assessment. “Clinical trials with AI-assisted tools for radiographic response assessment on anti-cancer treatments are already underway. In the future, these tools may one day even help characterize tumor heterogeneity, predict treatment response, assess tumor aggressiveness, and help guide personalized treatment strategies.”</p><p>The final area, says Dr Sen, is clinical trial identification and assessment. “Fewer than 1 in 20 individuals with cancer will ever enroll into a clinical trial. AI tools may soon be able to help identify appropriate clinical trials for individual patients and even assist oncologists with a preliminary assessment of which trials a patient will be eligible for. These tools will help streamline the accessibility of clinical trials to individuals with advanced cancer and their oncologists.”</p><p>Dr Sen says that naturally there will be pitfalls and concerns with the accuracy of each of these applications. “Having extensive validation and intimate involvement of oncologists in the development of these tools may help curb these concerns. My advice on the topic of AI is for all oncologists to remain knowledgeable on AI tools as they develop. As was the case when we transitioned from paper charts to EMRs [electronic medical charts], the intentional use of AI tools can help an oncologist deliver high quality care efficiently and effectively if applied correctly.”</p><p>Dr Hantel says that for the ethical deployment of AI in oncology to occur, the priority must be the development of infrastructure that supports oncologist training as well as transparency, consent, accountability, and equity. “This means that infrastructure needs to be developed around cancer AI to ensure its ethical deployment.”</p><p>Dr Hantel continues that there is another important point the survey found that must be taken seriously: the need to understand the views of patients—especially those in historically marginalized and underrepresented groups—on these same issues. 
“We then need to develop and test the effectiveness of the ethics infrastructure for developing and deploying AI that maximizes benefits and minimizes harms and these other ethical issues, and educate clinicians about AI models and the ethics of their use.”</p>","PeriodicalId":137,"journal":{"name":"CA: A Cancer Journal for Clinicians","volume":"74 4","pages":"320-322"},"PeriodicalIF":503.1000,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.3322/caac.21860","citationCount":"0","resultStr":"{\"title\":\"Key issues face AI deployment in cancer care\",\"authors\":\"Mike Fillon\",\"doi\":\"10.3322/caac.21860\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>With artificial intelligence (AI) erupting across all aspects of life, including health care, oncology is a logical field ripe for new applications. AI is already used in cancer care and diagnosis, such as tumor identification on x-rays and pathology slides. Beyond that, emerging technology is using AI to forecast the prognosis of patients and to assess their treatment options. One unknown is how oncologists feel about this trend, which includes possibly relinquishing some control over their profession and patients.</p><p>A new study asked 204 oncologists for their views on the rapidly developing AI tools. Specifically, they were asked about ethical issues that they face regarding the deployment of AI (e.g., whether they believed that AI could be used effectively in patient-care decisions). The main issue that the researchers investigated was to what degree patients should provide explicit informed consent for the use of AI during treatment decision-making. The study appears in <i>JAMA Network Open</i> (doi:10.1001/jamanetworkopen.2024.4077).</p><p>In the study, which was conducted from November 15, 2022 to July 31, 2023, a random sample of oncologists from across the country were asked 24 questions via traditional mail (which included a $25 gift card) about their views on the use of AI in clinical practice. Follow-ups with nonresponders were conducted via email and phone calls.</p><p>Issues covered bias, responsibilities, and whether they would be able to explain to patients how the technology was deployed in determining their care. There were 387 surveys sent to oncologists; 52.7% (<i>n</i> = 204) were completed. Those responding came from 37 states; 63.7% (<i>n</i> = 120) were male, and 62.7% (<i>n</i> = 128) identified as non-Hispanic White.</p><p>Very few respondents said that AI prognostic and clinical decision models could be used clinically when only researchers could explain them (13.2% of respondents [<i>n</i> = 27] for prognosis and 7.8% [<i>n</i> = 16] for clinical decisions).</p><p>For AI prognostic and clinical decision models that oncologists could explain, the percentages were much higher: 81.3% (<i>n</i> = 165) and 84.8% (<i>n</i> = 173), respectively. Fewer respondents—13.8% (<i>n</i> = 28) and 23.0% (<i>n</i> = 47), respectively—reported that the models also needed to be explainable by patients.</p><p>The survey also found that 36.8% of oncologists (<i>n</i> = 75) believed that if an AI system selected a treatment regimen different from what they would recommend, they would present both options and let the patient decide. 
Although that represented less than half of the respondents, it was the most common answer.</p><p>Regarding responsibility for medical or legal problems arising from AI use, 90.7% of respondents (<i>n</i> = 185) indicated that AI developers should be held accountable. This was considerably higher than the 47.1% (<i>n</i> = 96) who felt that the responsibility should be shared with physicians and the 43.1% (<i>n</i> = 88) who believed that it should be shared with hospitals.</p><p>Although 76.5% of respondents (<i>n</i> = 156) noted that oncologists should protect patients from biased AI tools (e.g., a nongeneralizable data set used to inform a patient’s care), only 27.9% (<i>n</i> = 57) believed that they could recognize AI models that reflected bias.</p><p>“This study is very important,” says Shiraj Sen, MD, PhD, a medical oncologist at Texas Oncology and a phase 1 investigator and the director of clinical research at NEXT Oncology in Dallas, Texas. He feels that the technology is being developed at a rate that far outpaces clinicians’ knowledge about the implications.</p><p>“While AI tools in oncology are being rapidly developed, few studies are capturing oncologists’ perspectives around who will be responsible for the ethical domains of its use.”</p><p>Dr Sen adds, “Now is the time for oncologists to begin to think through and discuss the nuances of this. This study helps highlight the differences in opinion many oncologists are already beginning to share and underscores the need for broader discussion as a community on how the responsibilities of decision-making will be shared between the oncologist and patient when AI-assisted tools are utilized.”</p><p>Study author Andrew Hantel, MD, an instructor in medicine at Harvard Medical School and a faculty member in the Divisions of Leukemia and Population Sciences at Dana-Farber Cancer Institute and the Harvard Medical School Center for Bioethics in Boston, Massachusetts, says that it is impossible to miss the rapid progress of AI, which has many implications for health care, and its blend of opportunities and challenges. He notes that as AI begins to affect cancer care delivery, understanding the ethical implications from those who will be asked to implement it—oncologists—is crucial.</p><p>This survey, Dr Hantel adds, is designed to bring data to this space and focuses on ethical concerns such as explainability, consent, responsibility, and equity. “Our intent was to present the views of practicing oncologists so that AI is deployed in an ethical way that meets the needs of oncologists and patients while addressing potential ethical dilemmas.”</p><p>Dr Hantel says that before this survey, stakeholder views on these ethical concerns were not known. In addition to its novelty, he adds, the study is important because they found consensus among oncologists on several fronts: the necessity for AI models to be explainable by oncologists, the importance of patient consent in AI’s use for treatment decisions, and a strong belief by oncologists that their professional role included safeguarding patients from biased AI.</p><p>“Surprisingly, a significant number of respondents indicated a lack of confidence in identifying biases in AI models. 
The alignment on these points underscores the urgent need for structured AI education and ethical guidelines within oncology.” He adds, “Interestingly, while oncologists did not think patients needed to be able to explain AI models, when we presented them with a scenario in which AI disagreed with their treatment recommendation, the most common response was to present both options to the patient and let them decide. This finding highlights that many physicians are unsure about how to act in relation to AI and counsel patients about such situations.”</p><p>Dr Sen believes that AI tools are headed in three main directions. First, there are treatment decisions. “Fortunately for patients, the emergence of novel therapeutic options is providing oncologists with multiple treatment options in a particular treatment setting for any one individual patient. However, often these treatment options have not been studied thoroughly. AI tools that can help incorporate prognostic factors, various biomarkers, and other patient-related factors may soon be able to help in this scenario.”</p><p>Second is radiographic response assessment. “Clinical trials with AI-assisted tools for radiographic response assessment on anti-cancer treatments are already underway. In the future, these tools may one day even help characterize tumor heterogeneity, predict treatment response, assess tumor aggressiveness, and help guide personalized treatment strategies.”</p><p>The final area, says Dr Sen, is clinical trial identification and assessment. “Fewer than 1 in 20 individuals with cancer will ever enroll into a clinical trial. AI tools may soon be able to help identify appropriate clinical trials for individual patients and even assist oncologists with a preliminary assessment of which trials a patient will be eligible for. These tools will help streamline the accessibility of clinical trials to individuals with advanced cancer and their oncologists.”</p><p>Dr Sen says that naturally there will be pitfalls and concerns with the accuracy of each of these applications. “Having extensive validation and intimate involvement of oncologists in the development of these tools may help curb these concerns. My advice on the topic of AI is for all oncologists to remain knowledgeable on AI tools as they develop. As was the case when we transitioned from paper charts to EMRs [electronic medical charts], the intentional use of AI tools can help an oncologist deliver high quality care efficiently and effectively if applied correctly.”</p><p>Dr Hantel says that for the ethical deployment of AI in oncology to occur, the priority must be the development of infrastructure that supports oncologist training as well as transparency, consent, accountability, and equity. “This means that infrastructure needs to be developed around cancer AI to ensure its ethical deployment.”</p><p>Dr Hantel continues that there is another important point the survey found that must be taken seriously: the need to understand the views of patients—especially those in historically marginalized and underrepresented groups—on these same issues. 
“We then need to develop and test the effectiveness of the ethics infrastructure for developing and deploying AI that maximizes benefits and minimizes harms and these other ethical issues, and educate clinicians about AI models and the ethics of their use.”</p>\",\"PeriodicalId\":137,\"journal\":{\"name\":\"CA: A Cancer Journal for Clinicians\",\"volume\":\"74 4\",\"pages\":\"320-322\"},\"PeriodicalIF\":503.1000,\"publicationDate\":\"2024-07-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.3322/caac.21860\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"CA: A Cancer Journal for Clinicians\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.3322/caac.21860\",\"RegionNum\":1,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ONCOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"CA: A Cancer Journal for Clinicians","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.3322/caac.21860","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ONCOLOGY","Score":null,"Total":0}


With artificial intelligence (AI) erupting across all aspects of life, including health care, oncology is a logical field ripe for new applications. AI is already used in cancer care and diagnosis, such as tumor identification on x-rays and pathology slides. Beyond that, emerging technology is using AI to forecast the prognosis of patients and to assess their treatment options. One unknown is how oncologists feel about this trend, which includes possibly relinquishing some control over their profession and patients.

A new study asked 204 oncologists for their views on the rapidly developing AI tools. Specifically, they were asked about ethical issues that they face regarding the deployment of AI (e.g., whether they believed that AI could be used effectively in patient-care decisions). The main issue that the researchers investigated was to what degree patients should provide explicit informed consent for the use of AI during treatment decision-making. The study appears in JAMA Network Open (doi:10.1001/jamanetworkopen.2024.4077).

In the study, which was conducted from November 15, 2022, to July 31, 2023, a random sample of oncologists from across the country was asked 24 questions via traditional mail (the mailing included a $25 gift card) about their views on the use of AI in clinical practice. Follow-ups with nonresponders were conducted via email and phone calls.

Issues covered bias, responsibilities, and whether oncologists would be able to explain to patients how the technology was deployed in determining their care. Surveys were sent to 387 oncologists; 52.7% (n = 204) were completed. Respondents came from 37 states; 63.7% (n = 120) were male, and 62.7% (n = 128) identified as non-Hispanic White.

Very few respondents said that AI prognostic and clinical decision models could be used clinically when only researchers could explain them (13.2% of respondents [n = 27] for prognosis and 7.8% [n = 16] for clinical decisions).

For AI prognostic and clinical decision models that oncologists could explain, the percentages were much higher: 81.3% (n = 165) and 84.8% (n = 173), respectively. Fewer respondents—13.8% (n = 28) and 23.0% (n = 47), respectively—reported that the models also needed to be explainable by patients.

The survey also found that 36.8% of oncologists (n = 75) believed that if an AI system selected a treatment regimen different from what they would recommend, they would present both options and let the patient decide. Although that represented less than half of the respondents, it was the most common answer.

Regarding responsibility for medical or legal problems arising from AI use, 90.7% of respondents (n = 185) indicated that AI developers should be held accountable. This was considerably higher than the 47.1% (n = 96) who felt that the responsibility should be shared with physicians and the 43.1% (n = 88) who believed that it should be shared with hospitals.

Although 76.5% of respondents (n = 156) noted that oncologists should protect patients from biased AI tools (e.g., a nongeneralizable data set used to inform a patient’s care), only 27.9% (n = 57) believed that they could recognize AI models that reflected bias.
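
As a quick arithmetic check, the reported percentages follow directly from the raw counts measured against the 204 completed surveys; a few published figures use slightly smaller denominators because of item-level nonresponse. A minimal Python sketch (counts copied from the text above; the labels and dictionary structure are illustrative, not taken from the study's codebook):

```python
# Illustrative check of the survey arithmetic reported above.
# Counts come from the article text; percentages are computed
# against the 204 completed surveys. A few published figures use
# slightly smaller denominators because of item-level nonresponse.

SENT = 387       # surveys mailed to oncologists
COMPLETED = 204  # surveys completed

print(f"Response rate: {COMPLETED / SENT:.1%}")  # 52.7%

# Selected items as (count, reported %); labels are paraphrases.
items = {
    "Prognostic models usable if only researchers can explain them": (27, 13.2),
    "Decision models usable if only researchers can explain them": (16, 7.8),
    "Decision models must be explainable by the oncologist": (173, 84.8),
    "Would present both options and let the patient decide": (75, 36.8),
    "Developers accountable for AI-related problems": (185, 90.7),
}
for label, (count, reported) in items.items():
    computed = count / COMPLETED
    print(f"{label}: {computed:.1%} (reported {reported}%)")
```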

“This study is very important,” says Shiraj Sen, MD, PhD, a medical oncologist at Texas Oncology and a phase 1 investigator and the director of clinical research at NEXT Oncology in Dallas, Texas. He feels that the technology is being developed at a rate that far outpaces clinicians’ knowledge about the implications.

“While AI tools in oncology are being rapidly developed, few studies are capturing oncologists’ perspectives around who will be responsible for the ethical domains of its use.”

Dr Sen adds, “Now is the time for oncologists to begin to think through and discuss the nuances of this. This study helps highlight the differences in opinion many oncologists are already beginning to share and underscores the need for broader discussion as a community on how the responsibilities of decision-making will be shared between the oncologist and patient when AI-assisted tools are utilized.”

Study author Andrew Hantel, MD, an instructor in medicine at Harvard Medical School and a faculty member in the Divisions of Leukemia and Population Sciences at Dana-Farber Cancer Institute and the Harvard Medical School Center for Bioethics in Boston, Massachusetts, says that it is impossible to miss the rapid progress of AI, its many implications for health care, and its blend of opportunities and challenges. He notes that as AI begins to affect cancer care delivery, understanding the ethical implications from those who will be asked to implement it—oncologists—is crucial.

This survey, Dr Hantel adds, is designed to bring data to this space and focuses on ethical concerns such as explainability, consent, responsibility, and equity. “Our intent was to present the views of practicing oncologists so that AI is deployed in an ethical way that meets the needs of oncologists and patients while addressing potential ethical dilemmas.”

Dr Hantel says that before this survey, stakeholder views on these ethical concerns were not known. In addition to its novelty, he adds, the study is important because they found consensus among oncologists on several fronts: the necessity for AI models to be explainable by oncologists, the importance of patient consent in AI’s use for treatment decisions, and a strong belief by oncologists that their professional role included safeguarding patients from biased AI.

“Surprisingly, a significant number of respondents indicated a lack of confidence in identifying biases in AI models. The alignment on these points underscores the urgent need for structured AI education and ethical guidelines within oncology.” He adds, “Interestingly, while oncologists did not think patients needed to be able to explain AI models, when we presented them with a scenario in which AI disagreed with their treatment recommendation, the most common response was to present both options to the patient and let them decide. This finding highlights that many physicians are unsure about how to act in relation to AI and counsel patients about such situations.”

Dr Sen believes that AI tools are headed in three main directions. First, there are treatment decisions. “Fortunately for patients, the emergence of novel therapeutic options is providing oncologists with multiple treatment options in a particular treatment setting for any one individual patient. However, often these treatment options have not been studied thoroughly. AI tools that can help incorporate prognostic factors, various biomarkers, and other patient-related factors may soon be able to help in this scenario.”

Second is radiographic response assessment. “Clinical trials with AI-assisted tools for radiographic response assessment on anti-cancer treatments are already underway. In the future, these tools may one day even help characterize tumor heterogeneity, predict treatment response, assess tumor aggressiveness, and help guide personalized treatment strategies.”

The final area, says Dr Sen, is clinical trial identification and assessment. “Fewer than 1 in 20 individuals with cancer will ever enroll into a clinical trial. AI tools may soon be able to help identify appropriate clinical trials for individual patients and even assist oncologists with a preliminary assessment of which trials a patient will be eligible for. These tools will help streamline the accessibility of clinical trials to individuals with advanced cancer and their oncologists.”

Dr Sen says that naturally there will be pitfalls and concerns with the accuracy of each of these applications. “Having extensive validation and intimate involvement of oncologists in the development of these tools may help curb these concerns. My advice on the topic of AI is for all oncologists to remain knowledgeable on AI tools as they develop. As was the case when we transitioned from paper charts to EMRs [electronic medical records], the intentional use of AI tools can help an oncologist deliver high quality care efficiently and effectively if applied correctly.”

Dr Hantel says that for the ethical deployment of AI in oncology to occur, the priority must be the development of infrastructure that supports oncologist training as well as transparency, consent, accountability, and equity. “This means that infrastructure needs to be developed around cancer AI to ensure its ethical deployment.”

Dr Hantel continues that there is another important point the survey found that must be taken seriously: the need to understand the views of patients—especially those in historically marginalized and underrepresented groups—on these same issues. “We then need to develop and test the effectiveness of the ethics infrastructure for developing and deploying AI that maximizes benefits and minimizes harms and these other ethical issues, and educate clinicians about AI models and the ethics of their use.”
