Journal of Educational Evaluation for Health Professions — Latest Articles

The irtQ R package: a user-friendly tool for item response theory-based test data analysis and calibration.
IF 4.4
Journal of Educational Evaluation for Health Professions Pub Date : 2024-09-12 DOI: 10.3352/jeehp.2024.21.23
Hwanggyu Lim, Kyung Seok Kang

Abstract: Computerized adaptive testing (CAT) has become a widely adopted test design for high-stakes licensing and certification exams, particularly in the health professions in the United States, due to its ability to tailor test difficulty in real time, reducing testing time while providing precise ability estimates. A key component of CAT is item response theory (IRT), which facilitates the dynamic selection of items based on examinees' ability levels during a test. Accurate estimation of item and ability parameters is essential for successful CAT implementation, necessitating convenient and reliable software to ensure precise parameter estimation. This paper introduces the irtQ R package, which simplifies IRT-based analysis and item calibration under unidimensional IRT models. While it does not directly simulate CAT, it provides essential tools to support CAT development, including parameter estimation using marginal maximum likelihood estimation via the expectation-maximization algorithm, pretest item calibration through fixed item parameter calibration and fixed ability parameter calibration methods, and examinee ability estimation. The package also enables users to compute item and test characteristic curves and information functions necessary for evaluating the psychometric properties of a test. This paper illustrates the key features of the irtQ package through examples using simulated datasets, demonstrating its utility in IRT applications such as test data analysis and ability scoring. By providing a user-friendly environment for IRT analysis, irtQ significantly enhances the capacity for efficient adaptive testing research and operations. Finally, the paper highlights additional core functionalities of irtQ, emphasizing its broader applicability to the development and operation of IRT-based assessments.

Citations: 0
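The item characteristic and information functions mentioned in the abstract can be sketched generically. This is not the irtQ API (irtQ is an R package); it is a minimal Python illustration, under the common 2PL model, of the quantities that drive item selection in CAT:

```python
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

# An item is most informative near its difficulty (theta == b),
# where P = 0.5 and information peaks at a^2 / 4.
print(p_2pl(0.0, a=1.2, b=0.0))             # 0.5 at theta == b
print(item_information(1.0, a=1.2, b=1.0))  # peak: 1.2^2 / 4 = 0.36
```

An adaptive algorithm would, at each step, pick the unadministered item maximizing `item_information` at the current ability estimate.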
Insights into undergraduate medical student selection tools: a systematic review and meta-analysis.
IF 4.4
Journal of Educational Evaluation for Health Professions Pub Date : 2024-09-12 DOI: 10.3352/jeehp.2024.21.22
Pin-Hsiang Huang, Arash Arianpoor, Silas Taylor, Jenzel Gonzales, Boaz Shulruf

Abstract:
Purpose: Evaluating medical school selection tools is vital for evidence-based student selection. With previous reviews revealing knowledge gaps, this meta-analysis offers insights into the effectiveness of these selection tools.
Methods: A systematic review and meta-analysis were conducted applying the following criteria: peer-reviewed articles available in English, published from 2010, which include empirical data linking performance in selection tools with assessment and dropout outcomes of undergraduate-entry medical programs. Systematic reviews, meta-analyses, general opinion pieces, or commentaries were excluded. Effect sizes (ESs) of the predictability of academic and clinical performance within and by the end of the medicine program were extracted, and the pooled ESs were presented.
Results: Sixty-seven out of 2,212 articles were included, which yielded 236 ESs. Previous academic achievement predicted medical program academic performance (Cohen's d=0.697 in the early program; 0.619 at the end of the program) and clinical exams (0.545 at the end of the program). Within aptitude tests, verbal reasoning and quantitative reasoning predicted academic achievement in the early program and in the last years (0.704 and 0.643, respectively). Overall aptitude tests predicted academic achievement in both the early and last years (0.550 and 0.371, respectively). Neither panel interviews, multiple mini-interviews, nor situational judgment tests (SJTs) yielded a statistically significant pooled ES.
Conclusion: Current evidence suggests that learning outcomes are predicted by previous academic achievement and aptitude tests. The predictive value of SJTs, and topics such as selection algorithms, features of interviews (e.g., the content of the questions), and the way interviewers' reports are used, warrant further research.

Citations: 0
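The pooled effect sizes reported above can be illustrated with standard fixed-effect inverse-variance pooling — a generic meta-analysis sketch, not necessarily the authors' exact model:

```python
def pooled_effect(effects):
    """Fixed-effect inverse-variance pooling.

    effects: list of (d, var) pairs, one per study, where d is the
    study's effect size (e.g., Cohen's d) and var its sampling variance.
    Returns (pooled effect, variance of the pooled effect).
    """
    weights = [1.0 / v for _, v in effects]           # precision weights
    pooled = sum(w * d for (d, _), w in zip(effects, weights)) / sum(weights)
    return pooled, 1.0 / sum(weights)

# Two equally precise studies: the pool is their mean, with halved variance.
d, v = pooled_effect([(0.6, 0.1), (0.8, 0.1)])
print(d, v)  # 0.7 0.05
```

More precise studies (smaller variance) pull the pooled estimate toward themselves; the pooled variance is always smaller than any single study's.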
Importance, performance frequency, and predicted future importance of dietitians' jobs by practicing dietitians in Korea: a survey study
IF 4.4
Journal of Educational Evaluation for Health Professions Pub Date : 2024-01-02 DOI: 10.3352/jeehp.2024.21.1
C. Sohn, Sooyoun Kwon, Won Gyoung Kim, Kyung-Eun Lee, Sun-Young Lee, Seungmin Lee
Abstract:
Purpose: This study aimed to explore the perceptions held by practicing dietitians of the importance of their tasks performed in current work environments, the frequency at which those tasks are performed, and predictions about the importance of those tasks in future work environments.
Methods: This was a cross-sectional survey study. An online survey was administered to 350 practicing dietitians. They were asked to assess the importance, performance frequency, and predicted changes in the importance of 27 tasks using a 5-point scale. Descriptive statistics were calculated, and the means of the variables were compared across categorized work environments using analysis of variance.
Results: The importance scores of all surveyed tasks were higher than 3.0, except for the marketing management task. Self-development, nutrition education/counseling, menu planning, food safety management, and documentation/data management were all rated higher than 4.0. The highest performance frequency score was related to documentation/data management. The importance scores of all duties, except for professional development, differed significantly by workplace. As for predictions about the future importance of the tasks surveyed, dietitians responded that the importance of all 27 tasks would either remain at current levels or increase in the future.
Conclusion: Twenty-seven tasks were confirmed to represent dietitians' job functions in various workplaces. These tasks can be used to improve the test specifications of the Korean Dietitian Licensing Examination and the curriculum of dietetic education programs.

Citations: 0
Validation of the 21st Century Skills Assessment Scale for public health students in Thailand: a methodological study.
IF 9.3
Journal of Educational Evaluation for Health Professions Pub Date : 2024-01-01 Epub Date: 2024-12-10 DOI: 10.3352/jeehp.2024.21.37
Suphawadee Panthumas, Kaung Zaw, Wirin Kittipichai
Abstract:
Purpose: This study aimed to develop and validate the 21st Century Skills Assessment Scale (21CSAS) for Thai public health (PH) undergraduate students using the Partnership for 21st Century Skills framework.
Methods: A cross-sectional survey was conducted among 727 first- to fourth-year PH undergraduate students from 4 autonomous universities in Thailand. Data were collected using self-administered questionnaires between January and March 2023. Exploratory factor analysis (EFA) was used to explore the underlying dimensions of the 21CSAS, while confirmatory factor analysis (CFA) was conducted to test the hypothesized factor structure using Mplus software (Muthén & Muthén). Reliability and item discrimination were assessed using Cronbach's α and the corrected item-total correlation, respectively.
Results: EFA performed on a dataset of 300 students revealed a 20-item scale with a 6-factor structure: (1) creativity and innovation; (2) critical thinking and problem-solving; (3) information, media, and technology; (4) communication and collaboration; (5) initiative and self-direction; and (6) social and cross-cultural skills. The rotated eigenvalues ranged from 2.12 to 1.73. CFA performed on another dataset of 427 students confirmed a good model fit (χ2/degrees of freedom=2.67, comparative fit index=0.93, Tucker-Lewis index=0.91, root mean square error of approximation=0.06, standardized root mean square residual=0.06), explaining 34%-71% of the variance in the items. Item loadings ranged from 0.58 to 0.84. The 21CSAS had a Cronbach's α of 0.92.
Conclusion: The 21CSAS proved to be a valid and reliable tool for assessing 21st-century skills among Thai PH undergraduate students. These findings provide insights for educational systems to inform policy, practice, and research regarding 21st-century skills among undergraduate students.

Citations: 0
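The reported Cronbach's α of 0.92 comes from the standard internal-consistency formula, which can be sketched as follows (a generic textbook computation, not the study's software):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items: list of item score lists, each of length n_respondents.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1.0 - sum(var(it) for it in items) / var(totals))

# Two perfectly correlated items -> alpha = 1.0
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # 1.0
```

Alpha rises toward 1 as items covary strongly; a constant (zero-variance) item contributes nothing and drags alpha down.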
Validation of the Blended Learning Usability Evaluation–Questionnaire (BLUE-Q) through an innovative Bayesian questionnaire validation approach: a methodological study
IF 9.3
Journal of Educational Evaluation for Health Professions Pub Date : 2024-01-01 Epub Date: 2024-11-07 DOI: 10.3352/jeehp.2024.21.31
Anish Kumar Arora, Charo Rodriguez, Tamara Carver, Hao Zhang, Tibor Schuster
Abstract:
Purpose: The primary aim of this study is to validate the Blended Learning Usability Evaluation–Questionnaire (BLUE-Q) for use in the field of health professions education through a Bayesian approach. As Bayesian questionnaire validation remains elusive, a secondary aim of this article is to serve as a simplified tutorial for engaging in such validation practices in health professions education.
Methods: A total of 10 health education-based experts in blended learning were recruited to participate in a 30-minute interviewer-administered survey. On a 5-point Likert scale, experts rated how well they perceived each item of the BLUE-Q to reflect its underlying usability domain (i.e., effectiveness, efficiency, satisfaction, accessibility, organization, and learner experience). Ratings were descriptively analyzed and converted into beta prior distributions. Participants were also given the option to provide qualitative comments for each item.
Results: After reviewing the computed expert prior distributions, 31 quantitative items were identified as having a probability of "low endorsement" and were thus removed from the questionnaire. Additionally, qualitative comments were used to revise the phrasing and order of items to ensure clarity and logical flow. The BLUE-Q's final version comprises 23 Likert-scale items and 6 open-ended items.
Conclusion: Questionnaire validation can generally be a complex, time-consuming, and costly process, inhibiting many from engaging in proper validation practices. In this study, we demonstrate that a Bayesian questionnaire validation approach can be a simple, resource-efficient, yet rigorous solution for validating a tool for content and item-domain correlation through the elicitation of domain expert endorsement ratings.

Citations: 0
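The idea of turning expert Likert ratings into beta distributions and flagging "low endorsement" items can be sketched with a simple Beta-Bernoulli model. The rating cutoff of 4, the endorsement-rate threshold of 0.7, and the Monte Carlo approach are illustrative assumptions here, not the BLUE-Q study's actual rules:

```python
import random

def low_endorsement_prob(ratings, cutoff=4, theta=0.7, n_draws=100_000, seed=0):
    """Beta-Bernoulli sketch of Bayesian item screening.

    Treat each expert rating >= cutoff as an 'endorse' success, form a
    Beta(1 + successes, 1 + failures) distribution over the item's true
    endorsement rate, and estimate P(rate < theta) by Monte Carlo.
    (cutoff and theta are hypothetical screening parameters.)
    """
    random.seed(seed)
    successes = sum(r >= cutoff for r in ratings)
    failures = len(ratings) - successes
    draws = (random.betavariate(1 + successes, 1 + failures)
             for _ in range(n_draws))
    return sum(d < theta for d in draws) / n_draws

# 9 of 10 experts endorse: low probability the true rate is below 0.7,
# so the item would be retained under this sketch.
print(low_endorsement_prob([5, 4, 5, 4, 4, 5, 4, 4, 5, 2]))
```

An item whose posterior mass sits mostly below the threshold would be removed, mirroring the 31-item reduction described in the Results.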
The legality and appropriateness of keeping Korean Medical Licensing Examination items confidential: a comparative analysis and review of court rulings
IF 9.3
Journal of Educational Evaluation for Health Professions Pub Date : 2024-01-01 Epub Date: 2024-10-15 DOI: 10.3352/jeehp.2024.21.28
Jae Sun Kim, Dae Un Hong, Ju Yoen Lee
Abstract: This study examines the legality and appropriateness of keeping the multiple-choice question items of the Korean Medical Licensing Examination (KMLE) confidential. Through an analysis of cases from the United States, Canada, and Australia, where medical licensing exams are conducted using item banks and computer-based testing, we found that exam items are kept confidential to ensure fairness and prevent cheating. In Korea, the Korea Health Personnel Licensing Examination Institute (KHPLEI) has been disclosing KMLE questions despite concerns over exam integrity. Korean courts have consistently ruled that multiple-choice question items prepared by public institutions are non-public information under Article 9(1)(v) of the Korea Official Information Disclosure Act (KOIDA), which exempts disclosure if it significantly hinders the fairness of exams or research and development. The Constitutional Court of Korea has upheld this provision. Given the time and cost involved in developing high-quality items and the need to accurately assess examinees' abilities, there are compelling reasons to keep KMLE items confidential. As a public institution responsible for selecting qualified medical practitioners, the KHPLEI should establish its disclosure policy based on a balanced assessment of public interest, without influence from specific groups. We conclude that KMLE questions qualify as non-public information under KOIDA, and the KHPLEI may choose to maintain their confidentiality to ensure exam fairness and efficiency.

Citations: 0
Development and psychometric evaluation of a 360-degree evaluation instrument to assess medical students' performance in clinical settings at the emergency medicine department in Iran: a methodological study
IF 4.4
Journal of Educational Evaluation for Health Professions Pub Date : 2024-01-01 Epub Date: 2024-04-01 DOI: 10.3352/jeehp.2024.21.7
Golnaz Azami, Sanaz Aazami, Boshra Ebrahimy, Payam Emami
Abstract:
Background: In the Iranian context, no 360-degree evaluation tool has been developed to assess the performance of prehospital medical emergency students in clinical settings. This article describes the development of a 360-degree evaluation tool and presents its first psychometric evaluation.
Methods: There were 2 steps in this study: step 1 involved developing the instrument (i.e., generating the items), and step 2 constituted the psychometric evaluation of the instrument. We performed exploratory and confirmatory factor analyses and also evaluated the instrument's face, content, and convergent validity and reliability.
Results: The instrument contains 55 items across 6 domains: leadership, management, and teamwork (19 items); consciousness and responsiveness (14 items); clinical and interpersonal communication skills (8 items); integrity (7 items); knowledge and accountability (4 items); and loyalty and transparency (3 items). The instrument was confirmed to be a valid measure, as the 6 domains had eigenvalues over Kaiser's criterion of 1 and in combination explained 60.1% of the variance (Bartlett's test of sphericity [1,485]=19,867.99, P<0.01). Furthermore, this study provided evidence for the instrument's convergent validity and internal consistency (α=0.98), suggesting its suitability for assessing student performance.
Conclusion: We found good evidence for the validity and reliability of the instrument. Our instrument can be used to make future evaluations of student performance in the clinical setting more structured, transparent, informative, and comparable.

Citations: 0
Development of examination objectives for the Korean paramedic and emergency medical technician examination: a survey study.
IF 9.3
Journal of Educational Evaluation for Health Professions Pub Date : 2024-01-01 Epub Date: 2024-06-12 DOI: 10.3352/jeehp.2024.21.13
Tai-Hwan Uhm, Heakyung Choi, Seok Hwan Hong, Hyungsub Kim, Minju Kang, Keunyoung Kim, Hyejin Seo, Eunyoung Ki, Hyeryeong Lee, Heejeong Ahn, Uk-jin Choi, Sang Woong Park
Abstract:
Purpose: The duties of paramedics and emergency medical technicians (P&EMTs) are continuously changing due to developments in medical systems. This study presents evaluation goals for P&EMTs by analyzing their work, especially the tasks that new P&EMTs (with less than 3 years' experience) find difficult, to foster the training of P&EMTs who could adapt to emergency situations after graduation.
Methods: A questionnaire was created based on prior job analyses of P&EMTs. The survey questions were reviewed through focus group interviews, from which 253 task elements were derived. A survey was conducted from July 10, 2023 to October 13, 2023 on the frequency, importance, and difficulty of the 6 occupations in which P&EMTs were employed.
Results: The P&EMTs' most common tasks involved obtaining patients' medical histories and measuring vital signs, whereas the most important task was cardiopulmonary resuscitation (CPR). The task elements that the P&EMTs found most difficult were newborn delivery and infant CPR. New paramedics reported that treating patients with fractures, poisoning, and childhood fever was difficult, while new EMTs reported that they had difficulty keeping diaries, managing ambulances, and controlling infection.
Conclusion: Communication was the most important item for P&EMTs, whereas CPR was the most important skill. It is important for P&EMTs to have knowledge of all tasks; however, they also need to master frequently performed tasks and those that pose difficulties in the field. By deriving goals for evaluating P&EMTs, changes could be made to their education, thereby making it possible to train more capable P&EMTs.

Citations: 0
Redesigning a faculty development program for clinical teachers in Indonesia: a before-and-after study.
IF 9.3
Journal of Educational Evaluation for Health Professions Pub Date : 2024-01-01 Epub Date: 2024-06-13 DOI: 10.3352/jeehp.2024.21.14
Rita Mustika, Nadia Greviana, Dewi Anggraeni Kusumoningrum, Anyta Pinasthika
Abstract:
Purpose: Faculty development (FD) is important to support teaching, including for clinical teachers. The Faculty of Medicine Universitas Indonesia (FMUI) has conducted a clinical teacher training program developed by the medical education department since 2008, both for FMUI teachers and for those at other centers in Indonesia. However, participation is often challenging due to clinical, administrative, and research obligations. The coronavirus disease 2019 pandemic amplified the urge to transform this program. This study aimed to redesign and evaluate an FD program for clinical teachers that focuses on their needs and current situation.
Methods: A 5-step design thinking framework (empathizing, defining, ideating, prototyping, and testing) was used with a pre/post-test design. Design thinking made it possible to develop a participant-focused program, while the pre/post-test design enabled an assessment of the program's effectiveness.
Results: Seven medical educationalists and 4 senior and 4 junior clinical teachers participated in a group discussion in the empathize phase of design thinking. The research team formed a prototype of a 3-day blended learning course, with an asynchronous component using the Moodle learning management system and a synchronous component using the Zoom platform. Pre/post-testing was done in 2 rounds, with 107 and 330 participants, respectively. Evaluations of the first round provided feedback for improving the prototype for the second round.
Conclusion: Design thinking enabled an innovative-creative process of redesigning FD that emphasized participants' needs. The pre/post-testing showed that the program was effective. Combining asynchronous and synchronous learning expands access and increases flexibility. This approach could also apply to other FD programs.

Citations: 0
Comparison of real data and simulated data analysis of a stopping rule based on the standard error of measurement in computerized adaptive testing for medical examinations in Korea: a psychometric study.
IF 9.3
Journal of Educational Evaluation for Health Professions Pub Date : 2024-01-01 Epub Date: 2024-07-09 DOI: 10.3352/jeehp.2024.21.18
Dong Gi Seo, Jeongwook Choi, Jinha Kim
Abstract:
Purpose: This study aimed to compare and evaluate the efficiency and accuracy of computerized adaptive testing (CAT) under 2 stopping rules (standard error of measurement [SEM]=0.3 and 0.25) using both real and simulated data in medical examinations in Korea.
Methods: This study employed post-hoc simulation and real data analysis to explore the optimal stopping rule for CAT in medical examinations. The real data were obtained from the responses of 3rd-year medical students during examinations in 2020 at Hallym University College of Medicine. Simulated data were generated using estimated parameters from a real item bank in R. Outcome variables included the number of examinees passing or failing with SEM values of 0.25 and 0.30, the number of items administered, and the correlation. The consistency of the real CAT results was evaluated by examining the consistency of pass/fail decisions based on a cut score of 0.0. The efficiency of all CAT designs was assessed by comparing the average number of items administered under both stopping rules.
Results: Both SEM 0.25 and SEM 0.30 provided a good balance between accuracy and efficiency in CAT. The real data showed minimal differences in pass/fail outcomes between the 2 SEM conditions, with a high correlation (r=0.99) between ability estimates. The simulation results confirmed these findings, indicating similar average item numbers between real and simulated data.
Conclusion: The findings suggest that both SEM 0.25 and 0.30 are effective termination criteria in the context of the Rasch model, balancing accuracy and efficiency in CAT.

Citations: 0
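The SEM-based stopping rule can be sketched directly: in IRT, the standard error of the ability estimate is approximately the inverse square root of the accumulated test information, so testing stops once enough information has been gathered. The per-item information values below are illustrative, not from the study's item bank:

```python
import math

def should_stop(item_infos, se_threshold=0.30):
    """SEM stopping rule sketch for CAT.

    item_infos: information contributed by each administered item at the
    current ability estimate. SE(theta) ~ 1 / sqrt(total information);
    stop once SE falls to the threshold (0.30 or 0.25 in the study).
    """
    total_info = sum(item_infos)
    if total_info == 0:
        return False  # nothing administered yet
    return 1.0 / math.sqrt(total_info) <= se_threshold

# SE <= 0.30 requires total information >= 1 / 0.30^2 ~ 11.1
print(should_stop([0.9] * 12))  # 10.8 total info -> False, keep testing
print(should_stop([0.9] * 13))  # 11.7 total info -> True, stop
```

Tightening the threshold from 0.30 to 0.25 raises the required information to 1/0.25² = 16, which is why the stricter rule administers more items for a modest gain in precision.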