{"title":"Key issues face AI deployment in cancer care","authors":"Mike Fillon","doi":"10.3322/caac.21860","DOIUrl":null,"url":null,"abstract":"<p>With artificial intelligence (AI) erupting across all aspects of life, including health care, oncology is a logical field ripe for new applications. AI is already used in cancer care and diagnosis, such as tumor identification on x-rays and pathology slides. Beyond that, emerging technology is using AI to forecast the prognosis of patients and to assess their treatment options. One unknown is how oncologists feel about this trend, which includes possibly relinquishing some control over their profession and patients.</p><p>A new study asked 204 oncologists for their views on the rapidly developing AI tools. Specifically, they were asked about ethical issues that they face regarding the deployment of AI (e.g., whether they believed that AI could be used effectively in patient-care decisions). The main issue that the researchers investigated was to what degree patients should provide explicit informed consent for the use of AI during treatment decision-making. The study appears in <i>JAMA Network Open</i> (doi:10.1001/jamanetworkopen.2024.4077).</p><p>In the study, which was conducted from November 15, 2022 to July 31, 2023, a random sample of oncologists from across the country were asked 24 questions via traditional mail (which included a $25 gift card) about their views on the use of AI in clinical practice. Follow-ups with nonresponders were conducted via email and phone calls.</p><p>Issues covered bias, responsibilities, and whether they would be able to explain to patients how the technology was deployed in determining their care. There were 387 surveys sent to oncologists; 52.7% (<i>n</i> = 204) were completed. Those responding came from 37 states; 63.7% (<i>n</i> = 120) were male, and 62.7% (<i>n</i> = 128) identified as non-Hispanic White.</p><p>Very few respondents said that AI prognostic and clinical decision models could be used clinically when only researchers could explain them (13.2% of respondents [<i>n</i> = 27] for prognosis and 7.8% [<i>n</i> = 16] for clinical decisions).</p><p>For AI prognostic and clinical decision models that oncologists could explain, the percentages were much higher: 81.3% (<i>n</i> = 165) and 84.8% (<i>n</i> = 173), respectively. Fewer respondents—13.8% (<i>n</i> = 28) and 23.0% (<i>n</i> = 47), respectively—reported that the models also needed to be explainable by patients.</p><p>The survey also found that 36.8% of oncologists (<i>n</i> = 75) believed that if an AI system selected a treatment regimen different from what they would recommend, they would present both options and let the patient decide. Although that represented less than half of the respondents, it was the most common answer.</p><p>Regarding responsibility for medical or legal problems arising from AI use, 90.7% of respondents (<i>n</i> = 185) indicated that AI developers should be held accountable. 
This was considerably higher than the 47.1% (<i>n</i> = 96) who felt that the responsibility should be shared with physicians and the 43.1% (<i>n</i> = 88) who believed that it should be shared with hospitals.</p><p>Although 76.5% of respondents (<i>n</i> = 156) noted that oncologists should protect patients from biased AI tools (e.g., a nongeneralizable data set used to inform a patient’s care), only 27.9% (<i>n</i> = 57) believed that they could recognize AI models that reflected bias.</p><p>“This study is very important,” says Shiraj Sen, MD, PhD, a medical oncologist at Texas Oncology and a phase 1 investigator and the director of clinical research at NEXT Oncology in Dallas, Texas. He feels that the technology is being developed at a rate that far outpaces clinicians’ knowledge about the implications.</p><p>“While AI tools in oncology are being rapidly developed, few studies are capturing oncologists’ perspectives around who will be responsible for the ethical domains of its use.”</p><p>Dr Sen adds, “Now is the time for oncologists to begin to think through and discuss the nuances of this. This study helps highlight the differences in opinion many oncologists are already beginning to share and underscores the need for broader discussion as a community on how the responsibilities of decision-making will be shared between the oncologist and patient when AI-assisted tools are utilized.”</p><p>Study author Andrew Hantel, MD, an instructor in medicine at Harvard Medical School and a faculty member in the Divisions of Leukemia and Population Sciences at Dana-Farber Cancer Institute and the Harvard Medical School Center for Bioethics in Boston, Massachusetts, says that it is impossible to miss the rapid progress of AI, which has many implications for health care, and its blend of opportunities and challenges. He notes that as AI begins to affect cancer care delivery, understanding the ethical implications from those who will be asked to implement it—oncologists—is crucial.</p><p>This survey, Dr Hantel adds, is designed to bring data to this space and focuses on ethical concerns such as explainability, consent, responsibility, and equity. “Our intent was to present the views of practicing oncologists so that AI is deployed in an ethical way that meets the needs of oncologists and patients while addressing potential ethical dilemmas.”</p><p>Dr Hantel says that before this survey, stakeholder views on these ethical concerns were not known. In addition to its novelty, he adds, the study is important because they found consensus among oncologists on several fronts: the necessity for AI models to be explainable by oncologists, the importance of patient consent in AI’s use for treatment decisions, and a strong belief by oncologists that their professional role included safeguarding patients from biased AI.</p><p>“Surprisingly, a significant number of respondents indicated a lack of confidence in identifying biases in AI models. The alignment on these points underscores the urgent need for structured AI education and ethical guidelines within oncology.” He adds, “Interestingly, while oncologists did not think patients needed to be able to explain AI models, when we presented them with a scenario in which AI disagreed with their treatment recommendation, the most common response was to present both options to the patient and let them decide. 
This finding highlights that many physicians are unsure about how to act in relation to AI and counsel patients about such situations.”</p><p>Dr Sen believes that AI tools are headed in three main directions. First, there are treatment decisions. “Fortunately for patients, the emergence of novel therapeutic options is providing oncologists with multiple treatment options in a particular treatment setting for any one individual patient. However, often these treatment options have not been studied thoroughly. AI tools that can help incorporate prognostic factors, various biomarkers, and other patient-related factors may soon be able to help in this scenario.”</p><p>Second is radiographic response assessment. “Clinical trials with AI-assisted tools for radiographic response assessment on anti-cancer treatments are already underway. In the future, these tools may one day even help characterize tumor heterogeneity, predict treatment response, assess tumor aggressiveness, and help guide personalized treatment strategies.”</p><p>The final area, says Dr Sen, is clinical trial identification and assessment. “Fewer than 1 in 20 individuals with cancer will ever enroll into a clinical trial. AI tools may soon be able to help identify appropriate clinical trials for individual patients and even assist oncologists with a preliminary assessment of which trials a patient will be eligible for. These tools will help streamline the accessibility of clinical trials to individuals with advanced cancer and their oncologists.”</p><p>Dr Sen says that naturally there will be pitfalls and concerns with the accuracy of each of these applications. “Having extensive validation and intimate involvement of oncologists in the development of these tools may help curb these concerns. My advice on the topic of AI is for all oncologists to remain knowledgeable on AI tools as they develop. As was the case when we transitioned from paper charts to EMRs [electronic medical charts], the intentional use of AI tools can help an oncologist deliver high quality care efficiently and effectively if applied correctly.”</p><p>Dr Hantel says that for the ethical deployment of AI in oncology to occur, the priority must be the development of infrastructure that supports oncologist training as well as transparency, consent, accountability, and equity. “This means that infrastructure needs to be developed around cancer AI to ensure its ethical deployment.”</p><p>Dr Hantel continues that there is another important point the survey found that must be taken seriously: the need to understand the views of patients—especially those in historically marginalized and underrepresented groups—on these same issues. 
“We then need to develop and test the effectiveness of the ethics infrastructure for developing and deploying AI that maximizes benefits and minimizes harms and these other ethical issues, and educate clinicians about AI models and the ethics of their use.”</p>","PeriodicalId":137,"journal":{"name":"CA: A Cancer Journal for Clinicians","volume":null,"pages":null},"PeriodicalIF":503.1000,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.3322/caac.21860","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"CA: A Cancer Journal for Clinicians","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.3322/caac.21860","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ONCOLOGY","Score":null,"Total":0}
Abstract
With artificial intelligence (AI) erupting across all aspects of life, including health care, oncology is a logical field ripe for new applications. AI is already used in cancer care and diagnosis, such as in tumor identification on x-rays and pathology slides. Beyond that, emerging tools use AI to forecast patients' prognoses and to assess their treatment options. One unknown is how oncologists feel about this trend, which could mean relinquishing some control over their profession and their patients' care.
A new study asked 204 oncologists for their views on rapidly developing AI tools. Specifically, they were asked about ethical issues that they face regarding the deployment of AI (e.g., whether they believed that AI could be used effectively in patient-care decisions). The main issue that the researchers investigated was the degree to which patients should provide explicit informed consent for the use of AI during treatment decision-making. The study appears in JAMA Network Open (doi:10.1001/jamanetworkopen.2024.4077).
In the study, which was conducted from November 15, 2022, to July 31, 2023, a random sample of oncologists from across the country was asked 24 questions about their views on the use of AI in clinical practice via traditional mail (the mailing included a $25 gift card). Follow-ups with nonresponders were conducted via email and phone calls.
The questions covered bias, responsibility, and whether oncologists would be able to explain to patients how the technology was deployed in determining their care. Surveys were sent to 387 oncologists, and 52.7% (n = 204) were completed. Respondents came from 37 states; 63.7% (n = 120) were male, and 62.7% (n = 128) identified as non-Hispanic White.
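For readers who want to check the arithmetic, the headline percentages follow directly from the reported counts. A minimal Python sketch (the helper name is illustrative, not from the study):

```python
def pct(numerator: int, denominator: int) -> float:
    """Percentage rounded to one decimal place, matching the survey's reporting."""
    return round(100 * numerator / denominator, 1)

completed, mailed = 204, 387
print(pct(completed, mailed))  # 52.7 -> the reported 52.7% response rate
print(pct(128, completed))     # 62.7 -> respondents identifying as non-Hispanic White
```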
Very few respondents said that AI prognostic and clinical decision models could be used clinically when only researchers could explain them (13.2% of respondents [n = 27] for prognosis and 7.8% [n = 16] for clinical decisions).
For AI prognostic and clinical decision models that oncologists could explain, the percentages were much higher: 81.3% (n = 165) and 84.8% (n = 173), respectively. Fewer respondents—13.8% (n = 28) and 23.0% (n = 47), respectively—reported that the models also needed to be explainable by patients.
The survey also found that 36.8% of oncologists (n = 75) said that if an AI system selected a treatment regimen different from the one they would recommend, they would present both options and let the patient decide. Although that represented less than half of the respondents, it was the most common answer.
Regarding responsibility for medical or legal problems arising from AI use, 90.7% of respondents (n = 185) indicated that AI developers should be held accountable. This was considerably higher than the 47.1% (n = 96) who felt that the responsibility should be shared with physicians and the 43.1% (n = 88) who believed that it should be shared with hospitals.
Although 76.5% of respondents (n = 156) noted that oncologists should protect patients from biased AI tools (e.g., a nongeneralizable data set used to inform a patient’s care), only 27.9% (n = 57) believed that they could recognize AI models that reflected bias.
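The "nongeneralizable data set" example can be made concrete. One basic check a clinician or informaticist could run is comparing subgroup representation in a model's training cohort against the clinic's own patient population. A minimal illustrative sketch; the shares and the flagging threshold are invented for illustration and do not come from the study:

```python
# Hypothetical subgroup shares in a model's training cohort vs. a clinic's
# patient population; all values are made up for illustration only.
training_shares = {"non-Hispanic White": 0.85, "Black": 0.05, "Hispanic": 0.06, "Asian": 0.04}
clinic_shares   = {"non-Hispanic White": 0.55, "Black": 0.20, "Hispanic": 0.15, "Asian": 0.10}

UNDERREPRESENTATION_RATIO = 0.5  # flag groups with < half their expected share

for group, clinic_share in clinic_shares.items():
    ratio = training_shares.get(group, 0.0) / clinic_share
    if ratio < UNDERREPRESENTATION_RATIO:
        print(f"{group}: training share is {ratio:.0%} of clinic share -- "
              f"model performance for this group warrants scrutiny")
```

A check like this would not prove a model is biased, but it flags where its predictions rest on thin evidence for a given group.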
“This study is very important,” says Shiraj Sen, MD, PhD, a medical oncologist at Texas Oncology and a phase 1 investigator and the director of clinical research at NEXT Oncology in Dallas, Texas. He feels that the technology is being developed at a rate that far outpaces clinicians’ knowledge about the implications.
“While AI tools in oncology are being rapidly developed, few studies are capturing oncologists’ perspectives around who will be responsible for the ethical domains of its use.”
Dr Sen adds, “Now is the time for oncologists to begin to think through and discuss the nuances of this. This study helps highlight the differences in opinion many oncologists are already beginning to share and underscores the need for broader discussion as a community on how the responsibilities of decision-making will be shared between the oncologist and patient when AI-assisted tools are utilized.”
Study author Andrew Hantel, MD, an instructor in medicine at Harvard Medical School and a faculty member in the Divisions of Leukemia and Population Sciences at Dana-Farber Cancer Institute and the Harvard Medical School Center for Bioethics in Boston, Massachusetts, says that the rapid progress of AI, with its many implications for health care and its blend of opportunities and challenges, is impossible to miss. He notes that as AI begins to affect cancer care delivery, understanding the ethical implications from those who will be asked to implement it—oncologists—is crucial.
This survey, Dr Hantel adds, is designed to bring data to this space and focuses on ethical concerns such as explainability, consent, responsibility, and equity. “Our intent was to present the views of practicing oncologists so that AI is deployed in an ethical way that meets the needs of oncologists and patients while addressing potential ethical dilemmas.”
Dr Hantel says that before this survey, stakeholder views on these ethical concerns were not known. In addition to its novelty, he adds, the study is important because the researchers found consensus among oncologists on several fronts: the necessity for AI models to be explainable by oncologists, the importance of patient consent in AI's use for treatment decisions, and a strong belief by oncologists that their professional role included safeguarding patients from biased AI.
“Surprisingly, a significant number of respondents indicated a lack of confidence in identifying biases in AI models. The alignment on these points underscores the urgent need for structured AI education and ethical guidelines within oncology.” He adds, “Interestingly, while oncologists did not think patients needed to be able to explain AI models, when we presented them with a scenario in which AI disagreed with their treatment recommendation, the most common response was to present both options to the patient and let them decide. This finding highlights that many physicians are unsure about how to act in relation to AI and counsel patients about such situations.”
Dr Sen believes that AI tools are headed in three main directions. First, there are treatment decisions. “Fortunately for patients, the emergence of novel therapeutic options is providing oncologists with multiple treatment options in a particular treatment setting for any one individual patient. However, often these treatment options have not been studied thoroughly. AI tools that can help incorporate prognostic factors, various biomarkers, and other patient-related factors may soon be able to help in this scenario.”
Second is radiographic response assessment. “Clinical trials with AI-assisted tools for radiographic response assessment on anti-cancer treatments are already underway. In the future, these tools may one day even help characterize tumor heterogeneity, predict treatment response, assess tumor aggressiveness, and help guide personalized treatment strategies.”
The final area, says Dr Sen, is clinical trial identification and assessment. “Fewer than 1 in 20 individuals with cancer will ever enroll into a clinical trial. AI tools may soon be able to help identify appropriate clinical trials for individual patients and even assist oncologists with a preliminary assessment of which trials a patient will be eligible for. These tools will help streamline the accessibility of clinical trials to individuals with advanced cancer and their oncologists.”
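As an illustration of the kind of preliminary eligibility screen Dr Sen describes, here is a deliberately simplified rule-based sketch. The trial names, criteria fields, and values are all hypothetical, and real trial-matching systems, AI-based or otherwise, weigh far more criteria than this:

```python
from dataclasses import dataclass

@dataclass
class Trial:
    name: str
    cancer_type: str
    min_age: int
    max_prior_lines: int  # maximum prior lines of therapy allowed

@dataclass
class Patient:
    age: int
    cancer_type: str
    prior_lines: int

def preliminary_matches(patient: Patient, trials: list[Trial]) -> list[str]:
    """Return the names of trials whose coarse criteria the patient meets.

    This is a screening aid only; actual eligibility requires full
    protocol review by the study team.
    """
    return [
        t.name
        for t in trials
        if t.cancer_type == patient.cancer_type
        and patient.age >= t.min_age
        and patient.prior_lines <= t.max_prior_lines
    ]

trials = [
    Trial("NCT-HYPOTHETICAL-1", "NSCLC", 18, 2),
    Trial("NCT-HYPOTHETICAL-2", "NSCLC", 18, 0),
]
print(preliminary_matches(Patient(64, "NSCLC", 1), trials))  # ['NCT-HYPOTHETICAL-1']
```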
Dr Sen says that naturally there will be pitfalls and concerns with the accuracy of each of these applications. “Having extensive validation and intimate involvement of oncologists in the development of these tools may help curb these concerns. My advice on the topic of AI is for all oncologists to remain knowledgeable on AI tools as they develop. As was the case when we transitioned from paper charts to EMRs [electronic medical records], the intentional use of AI tools can help an oncologist deliver high-quality care efficiently and effectively if applied correctly.”
Dr Hantel says that for the ethical deployment of AI in oncology to occur, the priority must be the development of infrastructure that supports oncologist training as well as transparency, consent, accountability, and equity. “This means that infrastructure needs to be developed around cancer AI to ensure its ethical deployment.”
Dr Hantel continues that there is another important point the survey found that must be taken seriously: the need to understand the views of patients—especially those in historically marginalized and underrepresented groups—on these same issues. “We then need to develop and test the effectiveness of the ethics infrastructure for developing and deploying AI that maximizes benefits and minimizes harms and these other ethical issues, and educate clinicians about AI models and the ethics of their use.”
About the Journal
CA: A Cancer Journal for Clinicians has been published by the American Cancer Society since 1950, making it one of the oldest peer-reviewed journals in oncology. It maintains the highest impact factor among all ISI-ranked journals. The journal effectively reaches a broad and diverse audience of health professionals, offering a unique platform to disseminate information on cancer prevention, early detection, various treatment modalities, palliative care, advocacy matters, quality-of-life topics, and more. As the premier journal of the American Cancer Society, it publishes mission-driven content that significantly influences patient care.