Michał Bielówka, Jakub Kufel, Marcin Rojek, Dominika Kaczyńska, Łukasz Czogalik, Adam Mitręga, Wiktoria Bartnikowska, Dominika Kondoł, Kacper Palkij, Sylwia Mielcarska
{"title":"调查分析--ChatGPT 在波兰病理学专业考试中脱颖而出的能力。","authors":"Michał Bielówka, Jakub Kufel, Marcin Rojek, Dominika Kaczyńska, Łukasz Czogalik, Adam Mitręga, Wiktoria Bartnikowska, Dominika Kondoł, Kacper Palkij, Sylwia Mielcarska","doi":"10.5114/pjp.2024.143091","DOIUrl":null,"url":null,"abstract":"<p><p>This study evaluates the effectiveness of the ChatGPT-3.5 language model in providing correct answers to pathomorphology questions as required by the State Speciality Examination (PES). Artificial intelligence (AI) in medicine is generating increasing interest, but its potential needs thorough evaluation. A set of 119 exam questions by type and subtype were used, which were posed to the ChatGPT-3.5 model. Performance was analysed with regard to the success rate in different question categories and subtypes. ChatGPT-3.5 achieved a performance of 45.38%, which is significantly below the minimum PES pass threshold. The results achieved varied by question type and subtype, with better results in questions requiring \"comprehension and critical thinking\" than \"memory\". The analysis shows that, although ChatGPT-3.5 can be a useful teaching tool, its performance in providing correct answers to pathomorphology questions is significantly lower than that of human respondents. This conclusion highlights the need to further improve the AI model, taking into account the specificities of the medical field. Artificial intelligence can be helpful, but it cannot fully replace the experience and knowledge of specialists.</p>","PeriodicalId":49692,"journal":{"name":"Polish Journal of Pathology","volume":null,"pages":null},"PeriodicalIF":0.7000,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An investigative analysis - ChatGPT's capability to excel in the Polish speciality exam in pathology.\",\"authors\":\"Michał Bielówka, Jakub Kufel, Marcin Rojek, Dominika Kaczyńska, Łukasz Czogalik, Adam Mitręga, Wiktoria Bartnikowska, Dominika Kondoł, Kacper Palkij, Sylwia Mielcarska\",\"doi\":\"10.5114/pjp.2024.143091\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>This study evaluates the effectiveness of the ChatGPT-3.5 language model in providing correct answers to pathomorphology questions as required by the State Speciality Examination (PES). Artificial intelligence (AI) in medicine is generating increasing interest, but its potential needs thorough evaluation. A set of 119 exam questions by type and subtype were used, which were posed to the ChatGPT-3.5 model. Performance was analysed with regard to the success rate in different question categories and subtypes. ChatGPT-3.5 achieved a performance of 45.38%, which is significantly below the minimum PES pass threshold. The results achieved varied by question type and subtype, with better results in questions requiring \\\"comprehension and critical thinking\\\" than \\\"memory\\\". The analysis shows that, although ChatGPT-3.5 can be a useful teaching tool, its performance in providing correct answers to pathomorphology questions is significantly lower than that of human respondents. This conclusion highlights the need to further improve the AI model, taking into account the specificities of the medical field. 
Artificial intelligence can be helpful, but it cannot fully replace the experience and knowledge of specialists.</p>\",\"PeriodicalId\":49692,\"journal\":{\"name\":\"Polish Journal of Pathology\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.7000,\"publicationDate\":\"2024-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Polish Journal of Pathology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.5114/pjp.2024.143091\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"PATHOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Polish Journal of Pathology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.5114/pjp.2024.143091","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"PATHOLOGY","Score":null,"Total":0}
An investigative analysis - ChatGPT's capability to excel in the Polish speciality exam in pathology.
This study evaluates the effectiveness of the ChatGPT-3.5 language model in providing correct answers to pathomorphology questions as required by the State Speciality Examination (PES). Artificial intelligence (AI) in medicine is generating increasing interest, but its potential needs thorough evaluation. A set of 119 examination questions, categorised by type and subtype, was posed to the ChatGPT-3.5 model, and performance was analysed as the success rate within each question category and subtype. ChatGPT-3.5 answered 45.38% of the questions correctly, which is significantly below the minimum PES pass threshold. Results varied by question type and subtype, with better performance on questions requiring "comprehension and critical thinking" than on those requiring "memory". The analysis shows that, although ChatGPT-3.5 can be a useful teaching tool, its performance in providing correct answers to pathomorphology questions is significantly lower than that of human respondents. This conclusion highlights the need to further improve the AI model, taking into account the specificities of the medical field. Artificial intelligence can be helpful, but it cannot fully replace the experience and knowledge of specialists.
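The abstract does not specify how the questions were presented to the model or how the per-category scores were tallied. As an illustration only, the sketch below shows one way such an evaluation could be scripted against the OpenAI chat API, assuming the questions are stored locally with category labels and single-letter correct answers; the file name, field names, prompt wording, and category labels are hypothetical and are not taken from the study.

```python
# Illustrative sketch: scoring a multiple-choice question set against a chat model
# and breaking accuracy down by question category. All input formats are assumed.
import json
from collections import defaultdict

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_model(question: str, options: dict[str, str]) -> str:
    """Pose one single-best-answer question and return the letter the model picks."""
    prompt = question + "\n" + "\n".join(f"{k}. {v}" for k, v in options.items())
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer with a single letter (A-E) only."},
            {"role": "user", "content": prompt},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()[:1].upper()


# Hypothetical input format:
# [{"question": ..., "options": {"A": ..., ...}, "answer": "C", "category": "memory"}, ...]
with open("pes_pathology_questions.json", encoding="utf-8") as f:
    questions = json.load(f)

correct: dict[str, int] = defaultdict(int)
total: dict[str, int] = defaultdict(int)
for q in questions:
    total[q["category"]] += 1
    if ask_model(q["question"], q["options"]) == q["answer"]:
        correct[q["category"]] += 1

for category in total:
    print(f"{category}: {correct[category]}/{total[category]} "
          f"({100 * correct[category] / total[category]:.2f}%)")
print(f"overall: {sum(correct.values())}/{sum(total.values())} "
      f"({100 * sum(correct.values()) / sum(total.values()):.2f}%)")
```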
Journal information:
Polish Journal of Pathology is an official journal of the Polish Association of Pathologists and the Polish Branch of the International Academy of Pathology. Over the last 18 years it has published more than 360 original papers and scientific reports, often cited in peer-reviewed international journals. The newly extended Scientific Board of the quarterly comprises researchers with recognised achievements in pathomorphology and biology, including molecular biology and cytogenetics, as well as clinical oncology. Polish scientists working abroad who are international authorities have also been invited. Apart from publishing scientific reports, the journal also plays a didactic and training role.