Qi-Chen Yang, Yan-Mei Zeng, Hong Wei, Cheng Chen, Qian Ling, Xiao-Yu Wang, Xu Chen, Yi Shao
Evaluating multiple large language models on orbital diseases
Frontiers in Cell and Developmental Biology, vol. 13, p. 1574378. Published 2025-07-07 (eCollection 2025).
DOI: 10.3389/fcell.2025.1574378
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12277337/pdf/
Citations: 0
Abstract
Humans avoid mistakes through continuous learning, error correction, and the accumulation of experience, a process that is time-consuming, laborious, and often full of detours. To assist human learning, large language models (LLMs) such as ChatGPT (Generative Pre-trained Transformer) have been developed to generate human-like responses to a wide range of problems. In this study, we assessed the potential of LLMs as assistants for answering questions about orbital diseases. We assembled a dataset of 100 orbital-disease questions, with corresponding answers, drawn from examinations administered to ophthalmology residents and medical students. Five LLMs were tested and compared: GPT-4, GPT-3.5, PaLM2, Claude 2, and SenseNova. The best-performing LLM was then compared against ophthalmologists and medical students. GPT-4 and PaLM2 showed a higher average correlation than the other LLMs. GPT-4 also produced a broader range of accurate responses, attained the highest average score of all the LLMs, and showed the highest confidence during the test. GPT-4 outperformed the medical students but fell short of the ophthalmologists. Overall, the findings indicate that GPT-4 performed best within the orbital domain of ophthalmology. With further refinement through training, LLMs have considerable potential to serve as comprehensive tools alongside medical students and ophthalmologists.
About the journal:
Frontiers in Cell and Developmental Biology is a broad-scope, interdisciplinary open-access journal, focusing on the fundamental processes of life, led by Prof Amanda Fisher and supported by a geographically diverse, high-quality editorial board.
The journal welcomes submissions across a wide spectrum of cell and developmental biology, covering intracellular and extracellular dynamics, with sections focusing on signaling, adhesion, migration, cell death and survival, and membrane trafficking. Additionally, the journal offers sections dedicated to cutting-edge fundamental and translational research in molecular medicine and stem cell biology.
Through collaborative, rigorous, and transparent peer review, the journal publishes fundamental and applied research of the highest scientific quality, and advanced article-level metrics measure the real-time impact and influence of each publication.