{"title":"Application of large language models in healthcare: A bibliometric analysis.","authors":"Lanping Zhang, Qing Zhao, Dandan Zhang, Meijuan Song, Yu Zhang, Xiufen Wang","doi":"10.1177/20552076251324444","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>The objective is to provide an overview of the application of large language models (LLMs) in healthcare by employing a bibliometric analysis methodology.</p><p><strong>Method: </strong>We performed a comprehensive search for peer-reviewed English-language articles using PubMed and Web of Science. The selected articles were subsequently clustered and analyzed textually, with a focus on lexical co-occurrences, country-level and inter-author collaborations, and other relevant factors. This textual analysis produced high-level concept maps that illustrate specific terms and their interconnections.</p><p><strong>Findings: </strong>Our final sample comprised 371 English-language journal articles. The study revealed a sharp rise in the number of publications related to the application of LLMs in healthcare. However, the development is geographically imbalanced, with a higher concentration of articles originating from developed countries like the United States, Italy, and Germany, which also exhibit strong inter-country collaboration. LLMs are applied across various specialties, with researchers investigating their use in medical education, diagnosis, treatment, administrative reporting, and enhancing doctor-patient communication. Nonetheless, significant concerns persist regarding the risks and ethical implications of LLMs, including the potential for gender and racial bias, as well as the lack of transparency in the training datasets, which can lead to inaccurate or misleading responses.</p><p><strong>Conclusion: </strong>While the application of LLMs in healthcare is promising, the widespread adoption of LLMs in practice requires further improvements in their standardization and accuracy. It is critical to establish clear accountability guidelines, develop a robust regulatory framework, and ensure that training datasets are based on evidence-based sources to minimize risk and ensure ethical and reliable use.</p>","PeriodicalId":51333,"journal":{"name":"DIGITAL HEALTH","volume":"11 ","pages":"20552076251324444"},"PeriodicalIF":2.9000,"publicationDate":"2025-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11873863/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"DIGITAL HEALTH","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1177/20552076251324444","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Abstract
Objective: To provide an overview of the application of large language models (LLMs) in healthcare through a bibliometric analysis.
Method: We performed a comprehensive search for peer-reviewed English-language articles in PubMed and Web of Science. The selected articles were then clustered and subjected to textual analysis, focusing on lexical co-occurrence, country-level and inter-author collaboration, and other relevant factors. This analysis produced high-level concept maps illustrating specific terms and their interconnections.
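The paper does not publish its analysis code, but the lexical co-occurrence step it describes is straightforward to illustrate. Below is a minimal, hypothetical Python sketch of how term pairs can be counted across article records to yield the edges of a concept map; the variable names and toy data are assumptions, not the authors' actual pipeline, and real inputs would be terms extracted from the titles or abstracts of the sampled articles.

```python
from collections import Counter
from itertools import combinations

# Hypothetical toy input: keyword lists extracted from article records.
# In a real bibliometric study these would come from the titles/abstracts
# of the sampled PubMed / Web of Science articles.
article_keywords = [
    ["llm", "medical education", "chatgpt"],
    ["llm", "diagnosis", "chatgpt"],
    ["llm", "diagnosis", "ethics"],
]

# Count how often each pair of distinct terms appears in the same article.
cooccurrence = Counter()
for terms in article_keywords:
    for a, b in combinations(sorted(set(terms)), 2):
        cooccurrence[(a, b)] += 1

# High-count pairs become the edges of a concept map; node size typically
# reflects a term's overall frequency.
for (a, b), n in cooccurrence.most_common():
    print(f"{a} -- {b}: {n}")
```

In practice, tools such as VOSviewer or CiteSpace perform this counting, apply a frequency threshold, and lay the resulting term network out visually; the sketch above shows only the underlying co-occurrence tally.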
Findings: Our final sample comprised 371 English-language journal articles. The study revealed a sharp rise in the number of publications on the application of LLMs in healthcare. However, this growth is geographically imbalanced: articles are concentrated in developed countries such as the United States, Italy, and Germany, which also exhibit strong inter-country collaboration. LLMs are applied across a range of specialties, with researchers investigating their use in medical education, diagnosis, treatment, administrative reporting, and doctor-patient communication. Nonetheless, significant concerns persist regarding the risks and ethical implications of LLMs, including the potential for gender and racial bias and the lack of transparency in training datasets, which can lead to inaccurate or misleading responses.
Conclusion: While the application of LLMs in healthcare is promising, widespread adoption in practice requires further improvements in standardization and accuracy. It is critical to establish clear accountability guidelines, develop a robust regulatory framework, and ground training datasets in evidence-based sources to minimize risk and support ethical, reliable use.