{"title":"DeepSeek-R1和chatgpt - 40在全国医师执业资格考试中的表现比较研究","authors":"Jin Wu, Zhiheng Wang, Yifan Qin","doi":"10.1007/s10916-025-02213-z","DOIUrl":null,"url":null,"abstract":"<p><p>Large Language Models (LLMs) have a significant impact on medical education due to their advanced natural language processing capabilities. ChatGPT-4o (Chat Generative Pre-trained Transformer), a mainstream Western LLM, demonstrates powerful multimodal abilities. DeepSeek-R1, a newly released free and open-source LLM from China, demonstrates capabilities on par with ChatGPT-4o across various domains. This study aims to evaluate the performance of DeepSeek-R1 and ChatGPT-4o on the Chinese National Medical Licensing Examination (CNMLE) and explore the performance differences of LLMs from distinct linguistic environments in Chinese medical education. We evaluated both LLMs using 600 multiple-choice questions from the written part of 2024 CNMLE, covering four units. The questions were categorized into low- and high-difficulty groups according to difficulty. The primary outcome was the overall accuracy rate of each LLM. The secondary outcomes included accuracy within each of the four units and within the two difficulty-level groups. DeepSeek-R1 achieved a statistically significantly higher overall accuracy of 92.0% compared to ChatGPT-4o's 87.2% (P < 0.05). In the low-difficulty group, DeepSeek-R1 demonstrated an accuracy rate of 95.9%, which was significantly higher than ChatGPT-4o's 92.0% (P < 0.05). No statistically significant differences were observed between the models in any of the four units or in the high-difficulty group (P > 0.05). DeepSeek-R1 demonstrated a performance advantage on CNMLE.</p>","PeriodicalId":16338,"journal":{"name":"Journal of Medical Systems","volume":"49 1","pages":"74"},"PeriodicalIF":5.7000,"publicationDate":"2025-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Performance of DeepSeek-R1 and ChatGPT-4o on the Chinese National Medical Licensing Examination: A Comparative Study.\",\"authors\":\"Jin Wu, Zhiheng Wang, Yifan Qin\",\"doi\":\"10.1007/s10916-025-02213-z\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Large Language Models (LLMs) have a significant impact on medical education due to their advanced natural language processing capabilities. ChatGPT-4o (Chat Generative Pre-trained Transformer), a mainstream Western LLM, demonstrates powerful multimodal abilities. DeepSeek-R1, a newly released free and open-source LLM from China, demonstrates capabilities on par with ChatGPT-4o across various domains. This study aims to evaluate the performance of DeepSeek-R1 and ChatGPT-4o on the Chinese National Medical Licensing Examination (CNMLE) and explore the performance differences of LLMs from distinct linguistic environments in Chinese medical education. We evaluated both LLMs using 600 multiple-choice questions from the written part of 2024 CNMLE, covering four units. The questions were categorized into low- and high-difficulty groups according to difficulty. The primary outcome was the overall accuracy rate of each LLM. The secondary outcomes included accuracy within each of the four units and within the two difficulty-level groups. DeepSeek-R1 achieved a statistically significantly higher overall accuracy of 92.0% compared to ChatGPT-4o's 87.2% (P < 0.05). 
In the low-difficulty group, DeepSeek-R1 demonstrated an accuracy rate of 95.9%, which was significantly higher than ChatGPT-4o's 92.0% (P < 0.05). No statistically significant differences were observed between the models in any of the four units or in the high-difficulty group (P > 0.05). DeepSeek-R1 demonstrated a performance advantage on CNMLE.</p>\",\"PeriodicalId\":16338,\"journal\":{\"name\":\"Journal of Medical Systems\",\"volume\":\"49 1\",\"pages\":\"74\"},\"PeriodicalIF\":5.7000,\"publicationDate\":\"2025-06-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Medical Systems\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1007/s10916-025-02213-z\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"HEALTH CARE SCIENCES & SERVICES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Medical Systems","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1007/s10916-025-02213-z","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Performance of DeepSeek-R1 and ChatGPT-4o on the Chinese National Medical Licensing Examination: A Comparative Study.
Large Language Models (LLMs) have a significant impact on medical education due to their advanced natural language processing capabilities. ChatGPT-4o (Chat Generative Pre-trained Transformer), a mainstream Western LLM, demonstrates powerful multimodal abilities. DeepSeek-R1, a newly released free and open-source LLM from China, demonstrates capabilities on par with ChatGPT-4o across various domains. This study aims to evaluate the performance of DeepSeek-R1 and ChatGPT-4o on the Chinese National Medical Licensing Examination (CNMLE) and to explore how LLMs developed in different linguistic environments perform in Chinese medical education. We evaluated both LLMs using 600 multiple-choice questions from the written part of the 2024 CNMLE, covering four units. The questions were divided into low- and high-difficulty groups. The primary outcome was the overall accuracy rate of each LLM. The secondary outcomes were the accuracy rates within each of the four units and within the two difficulty groups. DeepSeek-R1 achieved a significantly higher overall accuracy than ChatGPT-4o (92.0% vs. 87.2%, P < 0.05). In the low-difficulty group, DeepSeek-R1 reached an accuracy of 95.9%, significantly higher than ChatGPT-4o's 92.0% (P < 0.05). No statistically significant differences were observed between the models in any of the four units or in the high-difficulty group (P > 0.05). Overall, DeepSeek-R1 demonstrated a performance advantage on the CNMLE.
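The headline comparison (92.0% vs. 87.2% overall accuracy on 600 questions per model) is a comparison of two proportions. The abstract does not state which statistical test the authors used, so the sketch below is illustrative only: it reconstructs approximate correct-answer counts from the rounded percentages (an assumption) and applies a chi-square test of independence with scipy; both the reconstructed counts and the choice of test are assumptions, not the authors' method.

# Illustrative sketch only: the paper does not specify its statistical test.
# Correct-answer counts are reconstructed from rounded percentages (assumption).
from scipy.stats import chi2_contingency

N = 600                               # written-part questions answered by each model
deepseek_correct = round(0.920 * N)   # ~552 of 600 (92.0%)
chatgpt_correct = round(0.872 * N)    # ~523 of 600 (87.2%)

# 2x2 contingency table: rows = model, columns = correct / incorrect
table = [
    [deepseek_correct, N - deepseek_correct],
    [chatgpt_correct, N - chatgpt_correct],
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.4f}")
# A P value below 0.05 would be consistent with the reported significant
# difference in overall accuracy between DeepSeek-R1 and ChatGPT-4o.

The same kind of two-proportion comparison applies to the secondary outcomes (per-unit and per-difficulty-group accuracy), with correspondingly smaller question counts per stratum.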
Journal description:
Journal of Medical Systems provides a forum for the presentation and discussion of the increasingly extensive applications of new systems techniques and methods in hospital, clinic, and physician's office administration; pathology, radiology, and pharmaceutical delivery systems; medical records storage and retrieval; and ancillary patient-support systems. The journal publishes informative articles, essays, and studies across the entire scale of medical systems, from large hospital programs to novel small-scale medical services. Education is an integral part of this amalgamation of sciences, and selected articles are published in this area. Since existing medical systems are constantly being modified to fit particular circumstances and to solve specific problems, the journal includes a special section devoted to status reports on current installations.