{"title":"Can an artificial intelligence chatbot be the author of a scholarly article?","authors":"Ju Yoen Lee","doi":"10.3352/jeehp.2022.20.6","DOIUrl":"https://doi.org/10.3352/jeehp.2022.20.6","url":null,"abstract":"At the end of 2022, the appearance of ChatGPT, an artificial intelligence (AI) chatbot with amazing writing ability, caused a great sensation in academia. The chatbot turned out to be very capable, but also capable of deception, and the news broke that several researchers had listed the chatbot (including its earlier version) as co-authors of their academic papers. In response, Nature and Science expressed their position that this chatbot cannot be listed as an author in the papers they publish. Since an AI chatbot is not a human being, in the current legal system, the text automatically generated by an AI chatbot cannot be a copyrighted work; thus, an AI chatbot cannot be an author of a copyrighted work. Current AI chatbots such as ChatGPT are much more advanced than search engines in that they produce original text, but they still remain at the level of a search engine in that they cannot take responsibility for their writing. For this reason, they also cannot be authors from the perspective of research ethics.","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135892320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Are ChatGPT’s knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination?: a descriptive study","authors":"Sun Huh","doi":"10.3352/jeehp.2023.20.01","DOIUrl":"https://doi.org/10.3352/jeehp.2023.20.01","url":null,"abstract":"This study aimed to compare the knowledge and interpretation ability of ChatGPT, a language model of artificial general intelligence, with those of medical students in Korea by administering a parasitology examination to both ChatGPT and medical students. The examination consisted of 79 items and was administered to ChatGPT on January 1, 2023. The examination results were analyzed in terms of ChatGPT’s overall performance score, its correct answer rate by the items’ knowledge level, and the acceptability of its explanations of the items. ChatGPT’s performance was lower than that of the medical students, and ChatGPT’s correct answer rate was not related to the items’ knowledge level. However, there was a relationship between acceptable explanations and correct answers. In conclusion, ChatGPT’s knowledge and interpretation ability for this parasitology examination were not yet comparable to those of medical students in Korea.","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2023-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45226107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancement of the technical and non-technical skills of nurse anesthesia students using the Anesthetic List Management Assessment Tool in Iran: a quasi-experimental study.","authors":"Ali Khalafi, Maedeh Kordnejad, Vahid Saidkhani","doi":"10.3352/jeehp.2023.20.19","DOIUrl":"https://doi.org/10.3352/jeehp.2023.20.19","url":null,"abstract":"<p><strong>Purpose: </strong>This study investigated the effect of evaluations based on the Anesthetic List Management Assessment Tool (ALMAT) form on improving the technical and non-technical skills of final-year nurse anesthesia students at Ahvaz Jundishapur University of Medical Sciences (AJUMS).</p><p><strong>Methods: </strong>This was a quasi-experimental study with a pre-test and post-test design. It included 45 final-year nurse anesthesia students of AJUMS and lasted for 3 months. The technical and non-technical skills of the intervention group were assessed at 4 university hospitals using formative-feedback evaluation based on the ALMAT form, from induction of anesthesia until reaching mastery and independence. Finally, the students’ degree of improvement in technical and non-technical skills was compared between the intervention and control groups. Statistical tests (the independent t-test, paired t-test, and Mann-Whitney test) were used to analyze the data.</p><p><strong>Results: </strong>The rate of improvement in post-test scores of technical skills was significantly higher in the intervention group than in the control group (P<0.0001). Similarly, the students in the intervention group received significantly higher post-test scores for non-technical skills than the students in the control group (P<0.0001).</p><p><strong>Conclusion: </strong>The findings of this study showed that the use of ALMAT as a formative-feedback evaluation method to evaluate technical and non-technical skills had a significant effect on improving these skills and was effective in helping students learn and reach mastery and independence.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"19"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10352009/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9833205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What impacts students' satisfaction the most from Medicine Student Experience Questionnaire in Australia: a validity study.","authors":"Pin-Hsiang Huang, Gary Velan, Greg Smith, Melanie Fentoullis, Sean Edward Kennedy, Karen Jane Gibson, Kerry Uebel, Boaz Shulruf","doi":"10.3352/jeehp.2023.20.2","DOIUrl":"https://doi.org/10.3352/jeehp.2023.20.2","url":null,"abstract":"<p><strong>Purpose: </strong>This study evaluated the validity of student feedback derived from Medicine Student Experience Questionnaire (MedSEQ), as well as the predictors of students' satisfaction in the Medicine program.</p><p><strong>Methods: </strong>Data from MedSEQ applying to the University of New South Wales Medicine program in 2017, 2019, and 2021 were analyzed. Confirmatory factor analysis (CFA) and Cronbach's α were used to assess the construct validity and reliability of MedSEQ respectively. Hierarchical multiple linear regressions were used to identify the factors that most impact students' overall satisfaction with the program.</p><p><strong>Results: </strong>A total of 1,719 students (34.50%) responded to MedSEQ. CFA showed good fit indices (root mean square error of approximation=0.051; comparative fit index=0.939; chi-square/degrees of freedom=6.429). All factors yielded good (α>0.7) or very good (α>0.8) levels of reliability, except the \"online resources\" factor, which had acceptable reliability (α=0.687). A multiple linear regression model with only demographic characteristics explained 3.8% of the variance in students' overall satisfaction, whereas the model adding 8 domains from MedSEQ explained 40%, indicating that 36.2% of the variance was attributable to students' experience across the 8 domains. Three domains had the strongest impact on overall satisfaction: \"being cared for,\" \"satisfaction with teaching,\" and \"satisfaction with assessment\" (β=0.327, 0.148, 0.148, respectively; all with P<0.001).</p><p><strong>Conclusion: </strong>MedSEQ has good construct validity and high reliability, reflecting students' satisfaction with the Medicine program. Key factors impacting students' satisfaction are the perception of being cared for, quality teaching irrespective of the mode of delivery and fair assessment tasks which enhance learning.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"2"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9986309/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10866573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficacy and limitations of ChatGPT as a biostatistical problem-solving tool in medical education in Serbia: a descriptive study","authors":"Aleksandra Ignjatović, Lazar Stevanović","doi":"10.3352/jeehp.2023.20.28","DOIUrl":"10.3352/jeehp.2023.20.28","url":null,"abstract":"Purpose This study aimed to assess the performance of ChatGPT (GPT-3.5 and GPT-4) as a study tool in solving biostatistical problems and to identify any potential drawbacks that might arise from using ChatGPT in medical education, particularly in solving practical biostatistical problems. Methods ChatGPT was tested to evaluate its ability to solve biostatistical problems from the Handbook of Medical Statistics by Peacock and Peacock in this descriptive study. Tables from the problems were transformed into textual questions. Ten biostatistical problems were randomly chosen and used as text-based input for conversation with ChatGPT (versions 3.5 and 4). Results GPT-3.5 solved 5 practical problems in the first attempt, related to categorical data, cross-sectional study, measuring reliability, probability properties, and the t-test. GPT-3.5 failed to provide correct answers regarding analysis of variance, the chi-square test, and sample size within 3 attempts. GPT-4 also solved a task related to the confidence interval in the first attempt and solved all questions within 3 attempts, with precise guidance and monitoring. Conclusion The assessment of both versions of ChatGPT performance in 10 biostatistical problems revealed that GPT-3.5 and 4’s performance was below average, with correct response rates of 5 and 6 out of 10 on the first attempt. GPT-4 succeeded in providing all correct answers within 3 attempts. These findings indicate that students must be aware that this tool, even when providing and calculating different statistical analyses, can be wrong, and they should be aware of ChatGPT’s limitations and be careful when incorporating this model into medical education.","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"28"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10646144/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41239759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adequacy of the examination-based licensing system and a training-based licensing system for midwifery license according to changes in childbirth medical infrastructure in Korea: a survey-based descriptive study","authors":"Yun Mi Kim, Sun Hee Lee, Sun Ok Lee, Mi Young An, Bu Youn Kim, Jum Mi Park","doi":"10.3352/jeehp.2023.20.15","DOIUrl":"https://doi.org/10.3352/jeehp.2023.20.15","url":null,"abstract":"<p><strong>Purpose: </strong>The number of Korean midwifery licensing examination applicants has steadily decreased due to the low birth rate and lack of training institutions for midwives. This study aimed to evaluate the adequacy of the examination-based licensing system and the possibility of a training-based licensing system.</p><p><strong>Methods: </strong>A survey questionnaire was developed and dispatched to 230 professionals from December 28, 2022 to January 13, 2023, through an online form using Google Surveys. Descriptive statistics were used to analyze the results.</p><p><strong>Results: </strong>Responses from 217 persons (94.3%) were analyzed after excluding incomplete responses. Out of the 217 participants, 198 (91.2%) agreed with maintaining the current examination-based licensing system; 94 (43.3%) agreed with implementing a training-based licensing system to cover the examination costs due to the decreasing number of applicants; 132 (60.8%) agreed with establishing a midwifery education evaluation center for a training-based licensing system; 163 (75.1%) said that the quality of midwifery might be lowered if midwives were produced only by a training-based licensing system, and 197 (90.8%) said that the training of midwives as birth support personnel should be promoted in Korea.</p><p><strong>Conclusion: </strong>Favorable results were reported for the examination-based licensing system; however, if a training-based licensing system is implemented, it will be necessary to establish a midwifery education evaluation center to manage the quality of midwives. As the annual number of candidates for the Korean midwifery licensing examination has been approximately 10 in recent years, it is necessary to consider more actively granting midwifery licenses through a training-based licensing system.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"15"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10325871/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9761228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implementation strategy for introducing a clinical skills examination to the Korean Oriental Medicine Licensing Examination: a mixed-method modified Delphi study","authors":"Chan-Young Kwon, Sanghoon Lee, Min Hwangbo, Chungsik Cho, Sangwoo Shin, Dong-Hyeon Kim, Aram Jeong, Hye-Yoon Lee","doi":"10.3352/jeehp.2023.20.23","DOIUrl":"10.3352/jeehp.2023.20.23","url":null,"abstract":"Purpose This study investigated the validity of introducing a clinical skills examination (CSE) to the Korean Oriental Medicine Licensing Examination through a mixed-method modified Delphi study. Methods A 3-round Delphi study was conducted between September and November 2022. The expert panel comprised 21 oriental medicine education experts who were officially recommended by relevant institutions and organizations. The questionnaires included potential content for the CSE and a detailed implementation strategy. Subcommittees were formed to discuss concerns around the introduction of the CSE, which were collected as open-ended questions. In this study, a 66.7% or greater agreement rate was defined as achieving a consensus. Results The expert panel’s evaluation of the proposed clinical presentations and basic clinical skills suggested their priorities. Of the 10 items investigated for building a detailed implementation strategy for the introduction of the CSE to the Korean Oriental Medicine Licensing Examination, a consensus was achieved on 9. However, the agreement rate on the timing of the introduction of the CSE was low. Concerns around 4 clinical topics were discussed in the subcommittees, and potential solutions were proposed. Conclusion This study offers preliminary data and raises some concerns that can be used as a reference while discussing the introduction of the CSE to the Korean Oriental Medicine Licensing Examination.","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"23"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10432826/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10024447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Journal of Educational Evaluation for Health Professions received the Journal Impact Factor, 4.4 for the first time on June 28, 2023.","authors":"Sun Huh","doi":"10.3352/jeehp.2023.20.21","DOIUrl":"10.3352/jeehp.2023.20.21","url":null,"abstract":"","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"21"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10432825/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10027332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Relationships between undergraduate medical students' attitudes toward communication skills learning and demographics in Zambia: a survey-based descriptive study.","authors":"Mercy Ijeoma Okwudili Ezeala, John Volk","doi":"10.3352/jeehp.2023.20.16","DOIUrl":"https://doi.org/10.3352/jeehp.2023.20.16","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to detect relationships between undergraduate students' attitudes toward communication skills learning and demographic variables (such as age, academic year, and gender). Understanding these relationships could provide information for communication skills facilitators and curriculum planners on structuring course delivery and integrating communication skills training into the medical curriculum.</p><p><strong>Methods: </strong>The descriptive study involved a survey of 369 undergraduate students from 2 medical schools in Zambia who participated in communication skills training stratified by academic year using the Communication Skills Attitude Scale. Data were collected between October and December 2021 and analyzed using IBM SPSS for Windows version 28.0.</p><p><strong>Results: </strong>One-way analysis of variance revealed a significant difference in attitude between at least 5 academic years. There was a significant difference in attitudes between the 2nd and 5th academic years (t=5.95, P<0.001). No significant difference in attitudes existed among the academic years on the negative subscale; the 2nd and 3rd (t=3.82, P=0.004), 4th (t=3.61, P=0.011), 5th (t=8.36, P<0.001), and 6th (t=4.20, P=0.001) academic years showed significant differences on the positive subscale. Age showed no correlation with attitudes. There was a more favorable attitude to learning communication skills among the women participants than among the men participants (P=0.006).</p><p><strong>Conclusion: </strong>Despite positive general attitudes toward learning communication skills, the difference in attitude between the genders, academic years 2 and 5, and the subsequent classes suggests a re-evaluation of the curriculum and teaching methods to facilitate appropriate course structure according to the academic years and a learning process that addresses gender differences.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"16"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10315251/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10103011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of medical school faculty members' educational performance in Korea in 2022 through analysis of the promotion regulations: a mixed methods study.","authors":"Hye Won Jang, Janghee Park","doi":"10.3352/jeehp.2023.20.7","DOIUrl":"https://doi.org/10.3352/jeehp.2023.20.7","url":null,"abstract":"<p><strong>Purpose: </strong>To ensure faculty members' active participation in education in response to growing demand, medical schools should clearly describe educational activities in their promotion regulations. This study analyzed the status of how medical education activities are evaluated in promotion regulations in 2022, in Korea.</p><p><strong>Methods: </strong>Data were collected from promotion regulations retrieved by searching the websites of 22 medical schools/universities in August 2022. To categorize educational activities and evaluation methods, the Association of American Medical Colleges framework for educational activities was utilized. Correlations between medical schools' characteristics and the evaluation of medical educational activities were analyzed.</p><p><strong>Results: </strong>We defined 6 categories, including teaching, development of education products, education administration and service, scholarship in education, student affairs, and others, and 20 activities with 57 sub-activities. The average number of included activities was highest in the development of education products category and lowest in the scholarship in education category. The weight adjustment factors of medical educational activities were the characteristics of the target subjects and faculty members, the number of involved faculty members, and the difficulty of activities. Private medical schools tended to have more educational activities in the regulations than public medical schools. The greater the number of faculty members, the greater the number of educational activities in the education administration and service categories.</p><p><strong>Conclusion: </strong>Medical schools included various medical education activities and their evaluation methods in promotion regulations in Korea. This study provides basic data for improving the rewarding system for efforts of medical faculty members in education.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"7"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10067332/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9240555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}