Artificial Intelligence in Health Care: A Rallying Cry for Critical Clinical Research and Ethical Thinking
S.M. Bentzen
Clinical Oncology, Volume 41, Article 103798. Published 2025-03-08. DOI: 10.1016/j.clon.2025.103798
Journal Impact Factor 3.2; JCR Q2 (Oncology)
https://www.sciencedirect.com/science/article/pii/S0936655525000536
Artificial intelligence (AI) will affect a large proportion of jobs in the short to medium term, especially in developed countries. The consequences will be felt across many sectors, including health care, a critical sector for the implementation of AI tools because glitches in algorithms or biases in training datasets may lead to suboptimal treatment and harm individual patients. The stakes are obviously higher for potentially life-threatening diseases such as cancer and for therapies that can cause severe or even fatal adverse events.
Over the last two decades, much of the research on AI in health care has focussed on diagnostic radiology and digital pathology, but a solid body of research is emerging on AI tools in the radiation oncology workflow. Many of these applications are relatively uncontroversial, although evidence of effectiveness, as opposed to efficiency, is still lacking, as is the ultimate bar: evidence of clinical utility. Proponents of AI argue that these algorithms should be implemented with robust human supervision. One challenge here is the deskilling effect associated with new technologies: as we become increasingly dependent on AI tools, we become less capable of assessing the quality of their output.
Much of this research appears almost old-fashioned in view of the rapid advances in generative artificial intelligence (GenAI). GenAI can draw on multiple types of data and produce output that is personalised and appears relevant in the given context. The rapid progress in large language models (LLMs) in particular has opened a wide field of potential applications that were out of reach just a few years ago. One LLM, Generative Pre-trained Transformer 4 (GPT-4), has been made widely accessible to end-users as ChatGPT-4 and passed a rigorous Turing test in a recent study. In this viewpoint, I argue for the necessity of independent academic research to establish evidence-based applications of AI in medicine. Algorithmic medicine is an intervention, similar to a new drug or a new medical device. We should be especially concerned about under-represented minorities and rare or atypical clinical cases that may drown in petabyte-sized training sets. A huge educational push is needed to ensure that end-users of AI in health care understand the strengths and weaknesses of algorithmic medicine. Finally, we need to address the ethical boundaries for where and when GenAI can replace humans in the relationship between patients and healthcare providers.
Journal overview:
Clinical Oncology is an international cancer journal covering all aspects of the clinical management of cancer patients, reflecting a multidisciplinary approach to therapy. Papers, editorials and reviews are published on all types of malignant disease, embracing pathology, diagnosis and treatment, including radiotherapy, chemotherapy, surgery, combined-modality treatment and palliative care. Research and review papers covering epidemiology, radiobiology, radiation physics, tumour biology and immunology are also published, together with letters to the editor, case reports and book reviews.