Title: Large language models for improving cancer diagnosis and management in primary health care settings
Authors: Albert Andrew, Ethan Tizzard
DOI: 10.1016/j.glmedi.2024.100157
Journal: Journal of Medicine, Surgery, and Public Health, Volume 4, Article 100157
Publication date: 2024-12-01
Publication type: Journal Article
URL: https://www.sciencedirect.com/science/article/pii/S2949916X24001105
Citations: 0
Abstract
Cancer remains a leading cause of death globally, but diagnosing and treating it is often challenging. Barriers such as multiple consultations, overburdened healthcare systems, and limited cancer-specific training among primary health care clinicians significantly delay diagnoses and worsen outcomes. To address these challenges, health care must enhance patient and clinician knowledge while minimising diagnostic and treatment delays. Emerging technologies, particularly artificial intelligence (AI), hold great promise in revolutionising cancer care by improving diagnosis, education, and patient management. Large language models (LLMs) such as ChatGPT offer exciting potential to enhance cancer care in three key areas: clinical decision-making, patient education and engagement, and access to oncology research. Studies suggest that ChatGPT-4's oncology-related performance approaches that of medical professionals, enabling it to assist in decision-making, improve outcomes, and streamline cancer care. These tools can help clinicians rule out potential cancer diagnoses based on symptoms and history, reducing unnecessary tests and consultations. Additionally, specialised LLMs can provide accessible, understandable information for patients while disseminating cutting-edge research to clinicians. Despite their potential, LLMs face notable limitations. Output quality varies with the type of cancer or treatment, the specificity of questions, and how they are phrased. Many LLMs produce responses requiring advanced literacy, limiting accessibility. Moreover, AI bias remains a concern; training on biased data could perpetuate healthcare inequalities, leading to harmful recommendations. Accountability is another critical issue: the propensity of LLMs to produce errors in their outputs raises questions about responsibility, highlighting the need for safeguards and clear frameworks to ensure equitable and reliable AI integration into cancer care.