Title: Performance of generative pre-trained transformer-4 on the certification test for mental health management: A factorial design
Authors: Kazuhiro Watanabe, Yasuhiro Tsutsui, Takao Tsutsui, Takenori Yamauchi, Mitsuo Uchida, Yuriko Hachiya, Ilsung Kim, Mako Iida, Kotaro Imamura, Asuka Sakuraya, Norito Kawakami
Journal: Sangyo eiseigaku zasshi = Journal of occupational health
Published: 2024-09-13
DOI: 10.1539/sangyoeisei.2024-017-B
Citations: 0
Abstract
Objective: This study aimed to investigate the performance of generative pre-trained transformer-4 (GPT-4) on the Certification Test for Mental Health Management and whether tuned prompts could improve its performance.
Methods: This study used a 3 × 2 factorial design to examine performance according to test difficulty (courses) and prompt conditions. We prepared 200 multiple-choice questions per course (600 questions overall) using the Certification Test for Mental Health Management (levels I-III), along with essay questions from the level I test of the previous four examinations. Two conditions were used: a simple prompt condition, in which the questions were used as prompts verbatim, and a tuned prompt condition, in which prompting techniques were applied to obtain better answers. GPT-4 (gpt-4-0613) was adopted and implemented using the OpenAI API.
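The abstract does not reproduce the study's actual prompt wording, but the two conditions can be illustrated with a minimal sketch. Assume the simple condition passes the question text as-is, while the tuned condition adds an instruction constraining the model to commit to exactly one choice (the choice labels, wording, and system message below are hypothetical):

```python
# Sketch of the two prompt conditions (hypothetical wording; the
# study's actual prompt text is not given in the abstract).

def simple_prompt(question: str, choices: list[str]) -> list[dict]:
    """Simple condition: the question and its choices are used as the
    prompt verbatim, with no extra instructions."""
    body = question + "\n" + "\n".join(
        f"{label}. {text}" for label, text in zip("ABCD", choices)
    )
    return [{"role": "user", "content": body}]

def tuned_prompt(question: str, choices: list[str]) -> list[dict]:
    """Tuned condition: prepends a system message that constrains the
    output format, so the model always commits to a single choice."""
    messages = simple_prompt(question, choices)
    messages.insert(0, {
        "role": "system",
        "content": (
            "You are taking a multiple-choice certification test. "
            "Answer with exactly one letter (A, B, C, or D) and "
            "nothing else."
        ),
    })
    return messages

# The study called gpt-4-0613 via the OpenAI API; with the current
# Python SDK such a call would look like:
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(
#       model="gpt-4-0613",
#       messages=tuned_prompt(question, choices),
#   )
#   answer = resp.choices[0].message.content
```

Constraining the output format in this way is consistent with the Results below: the tuned condition did not raise scores, but it eliminated answers that were scored incorrect because the model failed to pick a choice.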
Results: The simple prompt condition scores were 74.5, 71.5, and 64.0 for levels III, II, and I, respectively. The tuned and simple prompt condition scores did not differ significantly (OR = 1.03, 95% CI: 0.65-1.62, p = 0.908). In the simple prompt condition, some answers were scored incorrect because GPT-4 failed to select a single choice, whereas no such incorrect answers occurred in the tuned prompt condition. The average score for the essay questions under the simple prompt condition was 22.5 out of 50 points (45.0%).
Conclusion: GPT-4 had a sufficient knowledge network for occupational mental health, surpassing the passing criteria for the level II and III tests. For the level I test, which required accurately describing more advanced knowledge, GPT-4 did not meet the criteria; external information may be needed when using GPT-4 at this level. Although the tuned prompts did not significantly improve performance, they showed promise for avoiding unintended outputs and standardizing output formats. UMIN trial registration: UMIN-CTR ID = UMIN000053582.