{"title":"日本国家护理考试大型语言模型的性能评价。","authors":"Tomoki Kuribara, Kengo Hirayama, Kenji Hirata","doi":"10.1177/20552076251346571","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>Large language models (LLMs) are increasingly used in healthcare, with the potential for various applications. However, the performance of different LLMs on nursing license exams and their tendencies to make errors remain unclear. This study aimed to evaluate the accuracy of LLMs on basic nursing knowledge and identify trends in incorrect answers.</p><p><strong>Methods: </strong>The dataset consisted of 692 questions from the Japanese national nursing examinations over the past 3 years (2021-2023) that were structured with 240 multiple-choice questions per year and a total score of 300 points. The LLMs tested were ChatGPT-3.5, ChatGPT-4, and Microsoft Copilot. Questions were manually entered into each LLM, and their answers were collected. Accuracy rates were calculated to assess whether the LLMs could pass the exam, and deductive content analysis and Chi-squared tests were conducted to identify the tendency of incorrect answers.</p><p><strong>Results: </strong>For over 3 years, the mean total score and standard deviation (SD) using ChatGPT-3.5, ChatGPT-4, and Microsoft Copilot was 180.3 ± 22.2, 251.0 ± 13.1, and 256.7 ± 14.0, respectively. ChatGPT-4 and Microsoft Copilot showed sufficient accuracy rates to pass the examinations for all the years. All LLMs made more mistakes in the health support and social security system domains (<i>p</i> < 0.01).</p><p><strong>Conclusions: </strong>ChatGPT-4 and Microsoft Copilot may perform better than Chat GPT-3.5, and LLMs could incorrectly answer questions about laws and demographic data specific to a particular country.</p>","PeriodicalId":51333,"journal":{"name":"DIGITAL HEALTH","volume":"11 ","pages":"20552076251346571"},"PeriodicalIF":2.9000,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12117227/pdf/","citationCount":"0","resultStr":"{\"title\":\"Performance evaluation of large language models for the national nursing examination in Japan.\",\"authors\":\"Tomoki Kuribara, Kengo Hirayama, Kenji Hirata\",\"doi\":\"10.1177/20552076251346571\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objectives: </strong>Large language models (LLMs) are increasingly used in healthcare, with the potential for various applications. However, the performance of different LLMs on nursing license exams and their tendencies to make errors remain unclear. This study aimed to evaluate the accuracy of LLMs on basic nursing knowledge and identify trends in incorrect answers.</p><p><strong>Methods: </strong>The dataset consisted of 692 questions from the Japanese national nursing examinations over the past 3 years (2021-2023) that were structured with 240 multiple-choice questions per year and a total score of 300 points. The LLMs tested were ChatGPT-3.5, ChatGPT-4, and Microsoft Copilot. Questions were manually entered into each LLM, and their answers were collected. 
Accuracy rates were calculated to assess whether the LLMs could pass the exam, and deductive content analysis and Chi-squared tests were conducted to identify the tendency of incorrect answers.</p><p><strong>Results: </strong>For over 3 years, the mean total score and standard deviation (SD) using ChatGPT-3.5, ChatGPT-4, and Microsoft Copilot was 180.3 ± 22.2, 251.0 ± 13.1, and 256.7 ± 14.0, respectively. ChatGPT-4 and Microsoft Copilot showed sufficient accuracy rates to pass the examinations for all the years. All LLMs made more mistakes in the health support and social security system domains (<i>p</i> < 0.01).</p><p><strong>Conclusions: </strong>ChatGPT-4 and Microsoft Copilot may perform better than Chat GPT-3.5, and LLMs could incorrectly answer questions about laws and demographic data specific to a particular country.</p>\",\"PeriodicalId\":51333,\"journal\":{\"name\":\"DIGITAL HEALTH\",\"volume\":\"11 \",\"pages\":\"20552076251346571\"},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2025-05-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12117227/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"DIGITAL HEALTH\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1177/20552076251346571\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"HEALTH CARE SCIENCES & SERVICES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"DIGITAL HEALTH","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1177/20552076251346571","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Performance evaluation of large language models for the national nursing examination in Japan.
Objectives: Large language models (LLMs) are increasingly used in healthcare and have potential for a wide range of applications. However, the performance of different LLMs on nursing licensure examinations, and the kinds of errors they tend to make, remains unclear. This study aimed to evaluate the accuracy of LLMs on basic nursing knowledge and to identify trends in their incorrect answers.
Methods: The dataset consisted of 692 questions from the Japanese national nursing examinations of the past 3 years (2021-2023); each annual examination comprises 240 multiple-choice questions with a total score of 300 points. The LLMs tested were ChatGPT-3.5, ChatGPT-4, and Microsoft Copilot. Questions were entered manually into each LLM, and the answers were collected. Accuracy rates were calculated to assess whether each LLM could pass the examination, and deductive content analysis and Chi-squared tests were conducted to identify tendencies in incorrect answers.
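As an illustration of the analysis described above, the following is a minimal sketch (not the authors' code) of computing per-domain accuracy rates and running a Chi-squared test of independence on a contingency table of correct/incorrect counts. The domain names and counts here are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the accuracy and Chi-squared analysis described in Methods.
# Domain names and (correct, incorrect) counts are hypothetical examples.
from scipy.stats import chi2_contingency

# Hypothetical (correct, incorrect) counts per exam domain for one LLM.
domain_counts = {
    "basic_nursing": (210, 30),
    "clinical_nursing": (190, 50),
    "health_support_social_security": (140, 100),
}

# Accuracy rate per domain: correct answers / total questions in that domain.
for domain, (correct, incorrect) in domain_counts.items():
    total = correct + incorrect
    print(f"{domain}: {correct / total:.1%} ({correct}/{total})")

# Chi-squared test of independence: do incorrect-answer rates differ
# across domains? Rows = domains, columns = (correct, incorrect).
table = [list(counts) for counts in domain_counts.values()]
chi2, p_value, dof, _expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```

A significant p-value under this setup would indicate, as reported in the Results, that error rates are not uniform across domains.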
Results: Over the 3 years, the mean total scores and standard deviations (SDs) for ChatGPT-3.5, ChatGPT-4, and Microsoft Copilot were 180.3 ± 22.2, 251.0 ± 13.1, and 256.7 ± 14.0 points, respectively. ChatGPT-4 and Microsoft Copilot achieved accuracy rates sufficient to pass the examination in every year. All LLMs made more mistakes in the health support and social security system domains (p < 0.01).
Conclusions: ChatGPT-4 and Microsoft Copilot may perform better than ChatGPT-3.5, and LLMs may answer incorrectly on questions about laws and demographic data specific to a particular country.