{"title":"Prospectives and drawbacks of ChatGPT in healthcare and clinical medicine","authors":"Khadija Alam, Akhil Kumar, F. N. U. Samiullah","doi":"10.1007/s43681-024-00434-5","DOIUrl":null,"url":null,"abstract":"<div><p>The large language model (LLM) ChatGPT-3.5, a member of generative pre-training transformer (GPT) models created by artificial intelligence (AI), is an updated and finely tuned version of previously launched AI chatbots. It is trained on a large volume of text data available on the internet, can produce human-like responses to a range of prompts and inquiries, and interprets and conversationally creates text, making it suitable for participating in interactive human conversations on a range of topics. Since its release in November 2022, ChatGPT has gained quick popularity not only among the general population but also among healthcare workers and researchers in all fields owing to its versatile, potent, and reliable benefits regarding scientific writing and medical education. Consequently, many studies have been done on ChatGPT regarding scientific research and medical education that have greatly enlightened medical professionals about its efficient use and probable risks. However, its effectiveness in healthcare and clinical medicine is still being determined, given its criticism regarding the authenticity of its diagnostic decisions. This review aims to highlight its viable usage in clinical practice, such as virtual assistance and patient communication, and the potential associated drawbacks, including a lack of human judgment and interactivity, data quality, accountability, and transparency, the risk of overreliance, and medico-legal and ethical considerations.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 2","pages":"767 - 773"},"PeriodicalIF":0.0000,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-024-00434-5","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The large language model (LLM) ChatGPT-3.5, a member of the generative pre-trained transformer (GPT) family of models developed with artificial intelligence (AI), is an updated and fine-tuned successor to previously launched AI chatbots. Trained on a large volume of text data available on the internet, it can produce human-like responses to a wide range of prompts and inquiries and can interpret and generate text conversationally, making it suitable for interactive human conversations on many topics. Since its release in November 2022, ChatGPT has quickly gained popularity not only among the general population but also among healthcare workers and researchers in all fields, owing to its versatility and its benefits for scientific writing and medical education. Consequently, many studies on ChatGPT in scientific research and medical education have informed medical professionals about its effective use and probable risks. However, its effectiveness in healthcare and clinical medicine is still being determined, given criticism regarding the reliability of its diagnostic decisions. This review aims to highlight its viable uses in clinical practice, such as virtual assistance and patient communication, and the potential drawbacks, including a lack of human judgment and interactivity; concerns about data quality, accountability, and transparency; the risk of overreliance; and medico-legal and ethical considerations.