{"title":"Correlation between task-based checklists and global rating scores in undergraduate objective structured clinical examinations in Saudi Arabia: a 1-year comparative study.","authors":"Uzma Khan, Yasir Naseem Khan","doi":"10.3352/jeehp.2025.22.19","DOIUrl":"10.3352/jeehp.2025.22.19","url":null,"abstract":"<p><strong>Purpose: </strong>This study investigated the correlation between task-based checklist scores and global rating scores (GRS) in objective structured clinical examinations (OSCEs) for fourth-year undergraduate medical students and aimed to determine whether both methods can be reliably used in a standard setting.</p><p><strong>Methods: </strong>A comparative observational study was conducted at Al Rayan College of Medicine, Saudi Arabia, involving 93 fourth-year students during the 2023-2024 academic year. OSCEs from 2 General Practice courses were analyzed, each comprising 10 stations assessing clinical competencies. Students were scored using both task-specific checklists and holistic 5-point GRS. Reliability was evaluated using Cronbach's α, and the relationship between the 2 scoring methods was assessed using the coefficient of determination (R2). Ethical approval and informed consent were obtained.</p><p><strong>Results: </strong>The mean OSCE score was 76.7 in Course 1 (Cronbach's α=0.85) and 73.0 in Course 2 (Cronbach's α=0.81). R2 values varied by station and competency. Strong correlations were observed in procedural and management skills (R2 up to 0.87), while weaker correlations appeared in history-taking stations (R2 as low as 0.35). The variability across stations highlighted the context-dependence of alignment between checklist and GRS methods.</p><p><strong>Conclusion: </strong>Both checklists and GRS exhibit reliable psychometric properties. Their combined use improves validity in OSCE scoring, but station-specific application is recommended. 
Checklists may anchor pass/fail decisions, while GRS may assist in assessing borderline performance. This hybrid model increases fairness and reflects clinical authenticity in competency-based assessment.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"22 ","pages":"19"},"PeriodicalIF":3.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12365684/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144776471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
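The two statistics the abstract above relies on — Cronbach's α for station reliability and R² for checklist–GRS agreement — are standard formulas. A minimal pure-Python sketch with invented toy data (not the study's actual data or analysis code):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for rows = examinees, columns = OSCE stations:
    alpha = k/(k-1) * (1 - sum(station variances) / variance of totals)."""
    k = len(scores[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    station_vars = sum(var([row[j] for row in scores]) for j in range(k))
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - station_vars / total_var)

def r_squared(x, y):
    """Coefficient of determination: squared Pearson correlation between
    checklist totals (x) and global ratings (y)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

if __name__ == "__main__":
    # Toy data: 4 examinees x 3 stations (checklist) plus a 5-point GRS.
    checklist = [[7, 8, 6], [9, 9, 8], [5, 6, 5], [8, 7, 7]]
    grs = [3, 5, 2, 4]
    totals = [sum(row) for row in checklist]
    print(cronbach_alpha(checklist))
    print(r_squared(totals, grs))
```

Interpreting α per course and R² per station, as the study does, only requires running these on the corresponding score matrices.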
{"title":"Leveraging feedback mechanisms to improve the quality of objective structured clinical examinations in Singapore: an exploratory action research study.","authors":"Han Ting Jillian Yeo, Dujeepa Dasharatha Samarasekera, Michael Dean","doi":"10.3352/jeehp.2025.22.28","DOIUrl":"https://doi.org/10.3352/jeehp.2025.22.28","url":null,"abstract":"<p><strong>Purpose: </strong>Variability in examiner scoring threatens the fairness and reliability of objective structured clinical examinations (OSCEs). While examiner standardization exists, there is currently no structured, psychometric-informed, individualized feedback mechanism for examiners. This study explored the feasibility and perceived value of such a mechanism using an action research approach to co-design and iteratively refine examiner feedback reports.</p><p><strong>Methods: </strong>Two exploratory cycles were conducted between November 2023 and June 2024 with phase 4 OSCE examiners at the Yong Loo Lin School of Medicine. In cycle 1, psychometric analyses of examiner scoring for a phase 4 OSCE informed the design of individualized reports, which were evaluated through interviews. Revisions were made to the format of the report and implemented in cycle 2, where examiner responses were again collected. Data were analyzed thematically, supported by reflective logs and field notes.</p><p><strong>Results: </strong>Nine examiners participated in cycle 1 and 7 in cycle 2. In cycle 1, examiners highlighted challenges in interpreting complex terminology, leading to report refinements such as glossaries and visual graphs. In cycle 2, examiners demonstrated greater confidence in applying feedback, requested longitudinal reports, and shifted from initial resistance to reflective engagement. 
Across cycles, the reports improved credibility, neutrality, and examiner self-regulation.</p><p><strong>Conclusion: </strong>This exploratory study suggests that psychometric-informed feedback reports can facilitate examiner reflection and transparency in OSCEs. While the findings highlight feasibility and examiner acceptance, longitudinal delivery of feedback, collection of quantitative outcome data, and larger samples are needed to establish whether such reports improve scoring consistency and assessment fairness.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"22 ","pages":"28"},"PeriodicalIF":3.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145193038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pharmacy students’ perspective on remote flipped classrooms in Malaysia: a qualitative study.","authors":"Wei Jin Wong, Shaun Wen Huey Lee, Ronald Fook Seng Lee","doi":"10.3352/jeehp.2025.22.2","DOIUrl":"10.3352/jeehp.2025.22.2","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to explore pharmacy students’ perceptions of remote flipped classrooms in Malaysia, focusing on their learning experiences and identifying areas for potential improvement to inform future educational strategies.</p><p><strong>Methods: </strong>A qualitative approach was employed, utilizing inductive thematic analysis. Twenty Bachelor of Pharmacy students (18 women, 2 men; age range, 19–24 years) from Monash University participated in 8 focus group discussions over 2 rounds during the coronavirus disease 2019 pandemic. Participants were recruited via convenience sampling. The focus group discussions, led by experienced academics, were conducted in English via Zoom, recorded, and transcribed for analysis using NVivo. Themes were identified through emergent coding and iterative discussions to ensure thematic saturation.</p><p><strong>Results: </strong>Five major themes emerged: flexibility, communication, technological challenges, skill-based learning challenges, and time-based effects. Students appreciated the flexibility of accessing and reviewing pre-class materials at their convenience. Increased engagement through anonymous question submission was noted, yet communication difficulties and lack of non-verbal cues in remote workshops were significant drawbacks. Technological issues, such as internet connectivity problems, hindered learning, especially during assessments. Skill-based learning faced challenges in remote settings, including lab activities and clinical examinations. 
Additionally, prolonged remote learning led to feelings of isolation, fatigue, and a desire to return to in-person interactions.</p><p><strong>Conclusion: </strong>Remote flipped classrooms offer flexibility and engagement benefits but present notable challenges related to communication, technology, and skill-based learning. To improve remote education, institutions should integrate robust technological support, enhance communication strategies, and incorporate virtual simulations for practical skills. Balancing asynchronous and synchronous methods while addressing academic success and socioemotional wellness is essential for effective remote learning environments.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"22 ","pages":"2"},"PeriodicalIF":9.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12055608/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142980335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessing genetic and genomic literacy concepts among Albanian nursing and midwifery students: a cross-sectional study.","authors":"Elona Gaxhja, Mitilda Gugu, Angelo Dante, Armelda Teta, Armela Kapaj, Liljana Ramasaco","doi":"10.3352/jeehp.2025.22.13","DOIUrl":"https://doi.org/10.3352/jeehp.2025.22.13","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to adapt and validate the Albanian version of the Genomic Nursing Concept Inventory (GNCI) and to assess the level of genomic literacy among nursing and midwifery students.</p><p><strong>Methods: </strong>Data were collected via a monocentric online cross-sectional study using the Albanian version of the GNCI. Participants included first-, second-, and third-year nursing and midwifery students. Demographic data such as age, sex, year level, and prior exposure to genetics were collected. The Kruskal-Wallis, Mann-Whitney U, and chi-square tests were used to compare demographic characteristics and GNCI scores between groups.</p><p><strong>Results: </strong>Among the 715 participants, most were female (88.5%) with a median age of 19 years. Most respondents (65%) had not taken a genetics course, and 83.5% had not attended any related training. The mean score was 7.49, corresponding to a scale difficulty of 24.38% correct responses.</p><p><strong>Conclusion: </strong>The findings reveal a low foundational knowledge of genetics/genomics among future nurses and midwives. 
It is essential to enhance learning strategies and update curricula to prepare a competent healthcare workforce in precision health.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"22 ","pages":"13"},"PeriodicalIF":9.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144250211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proposal for setting a passing score for the Korean Nursing Licensing Examination.","authors":"Janghee Park, Mi Kyoung Yim, Sujin Shin, Rhayun Song, Jun-Ah Song, Inyoung Lee, Heejeong Kim, Minjae Lee","doi":"10.3352/jeehp.2025.22.25","DOIUrl":"https://doi.org/10.3352/jeehp.2025.22.25","url":null,"abstract":"<p><strong>Purpose: </strong>The Korean Nursing Licensing Examination (KNLE) is planning to transition to a computer-based test (CBT). This study aims to propose a reasonable and efficient method for setting passing scores.</p><p><strong>Methods: </strong>A standard setting (passing score setting) analysis was conducted using an expert panel over the past 3 years of the national nursing examination. The standard-setting method was modified from Angoff, and the validity of the passing score was verified through the Hofstee method. The standard-setting workshop was conducted in 2 stages: first, a pilot workshop for 2 subjects, followed by a second workshop where 6 additional subjects were selected based on the pilot results. For items with an actual correct answer rate of 90% or higher, the estimated correct answer rate for minimum competency was calculated using the observed correct answer rate. A survey and discussion with the expert panel were also conducted regarding the standard-setting procedures and results.</p><p><strong>Results: </strong>The passing score for the national nursing examination was calculated using the new method, and the score was slightly higher than the existing score. The nursing subject had similar results; however, the legal subjects varied.</p><p><strong>Conclusion: </strong>The modified Angoff and Hofstee methods were successfully applied to the KNLE. Using the actual correct answer rate as an indicator to derive expected minimum competency was shown to be effective. 
This approach could streamline future standard-setting processes, particularly when converting to CBT.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"22 ","pages":"25"},"PeriodicalIF":3.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145150586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
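In Angoff-type standard setting as described in the abstract above, each panelist estimates, per item, the probability that a minimally competent examinee answers correctly, and the cut score is the sum of the mean estimates across items. A minimal sketch with invented judgments (the KNLE modification also substitutes observed correct-answer rates for very easy items, which is not modeled here):

```python
def angoff_cut_score(judgments):
    """judgments[p][i] = panelist p's estimated probability that a
    minimally competent examinee answers item i correctly.
    Cut score = sum over items of the mean panelist estimate."""
    n_panelists = len(judgments)
    n_items = len(judgments[0])
    item_means = [
        sum(judgments[p][i] for p in range(n_panelists)) / n_panelists
        for i in range(n_items)
    ]
    return sum(item_means)

if __name__ == "__main__":
    # Two hypothetical panelists rating three items.
    panel = [[0.6, 0.7, 0.9],
             [0.8, 0.7, 0.7]]
    print(angoff_cut_score(panel))  # roughly 2.2 of 3 items
```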
{"title":"Reliability and construct validation of the Blended Learning Usability Evaluation-Questionnaire with interprofessional clinicians in Canada: a methodological study.","authors":"Anish Kumar Arora, Jeff Myers, Tavis Apramian, Kulamakan Kulasegaram, Daryl Bainbridge, Hsien Seow","doi":"10.3352/jeehp.2025.22.5","DOIUrl":"10.3352/jeehp.2025.22.5","url":null,"abstract":"<p><strong>Purpose: </strong>To generate Cronbach's alpha and further mixed methods construct validity evidence for the Blended Learning Usability Evaluation-Questionnaire (BLUE-Q).</p><p><strong>Methods: </strong>Forty interprofessional clinicians completed the BLUE-Q after finishing a 3-month long blended learning professional development program in Ontario, Canada. Reliability was assessed with Cronbach's α for each of the 3 sections of the BLUE-Q and for all quantitative items together. Construct validity was evaluated through the Grand-Guillaume-Perrenoud et al. framework, which consists of 3 elements: congruence, convergence, and credibility. To compare quantitative and qualitative results, descriptive statistics, including means and standard deviations for each Likert scale item of the BLUE-Q were calculated.</p><p><strong>Results: </strong>Cronbach's α was 0.95 for the pedagogical usability section, 0.85 for the synchronous modality section, 0.93 for the asynchronous modality section, and 0.96 for all quantitative items together. Mean ratings (with standard deviations) were 4.77 (0.506) for pedagogy, 4.64 (0.654) for synchronous learning, and 4.75 (0.536) for asynchronous learning. Of the 239 qualitative comments received, 178 were identified as substantive, of which 88% were considered congruent and 79% were considered convergent with the high means. Among all congruent responses, 69% were considered confirming statements and 31% were considered clarifying statements, suggesting appropriate credibility. 
Analysis of the clarifying statements assisted in identifying 5 categories of suggestions for program improvement.</p><p><strong>Conclusion: </strong>The BLUE-Q demonstrates high reliability and appropriate construct validity in the context of a blended learning program with interprofessional clinicians, making it a valuable tool for comprehensive program evaluation, quality improvement, and evaluative research in health professions education.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"22 ","pages":"5"},"PeriodicalIF":9.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11955914/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143711423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impact of accreditation on medical education quality improvement in 82 medical schools in Japan: a descriptive study.","authors":"Nobuo Nara","doi":"10.3352/jeehp.2025.22.22","DOIUrl":"https://doi.org/10.3352/jeehp.2025.22.22","url":null,"abstract":"","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"22 ","pages":"22"},"PeriodicalIF":3.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145114481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The effect of strengthening nurse practitioners' competency in occupational health services for agricultural workers exposed to pesticides in primary care units, Thailand: a before-and-after study.","authors":"Napamon Pumsopa, Ann Jirapongsuwan, Surintorn Kalampakorn, Sukhontha Siri","doi":"10.3352/jeehp.2025.22.14","DOIUrl":"10.3352/jeehp.2025.22.14","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to evaluate the effect of the Strengthening Nurse Practitioners' Competency in Occupational Health Service (SNPCOHS) program. It was hypothesized that nurse practitioners (NPs) participating in the program would demonstrate increased competency in providing occupational health services to agricultural workers exposed to pesticides in primary care units (PCUs) compared to their baseline competency and to a comparison group.</p><p><strong>Methods: </strong>A quasi-experimental study was conducted between August and December 2023. The 4-week intervention included 5 hours of an e-learning program, 3 hours of online discussion, and 2 hours dedicated to completing an assignment. The program was evaluated at 3 time points: pre-intervention, post-intervention (week 4), and follow-up (week 8). Sixty NPs volunteered to participate, with 30 in the experimental group and 30 in the comparison group. Data on demographics, professional attributes, knowledge, skills, and perceived self-efficacy were collected using self-administered questionnaires via Google Forms. 
Data analysis involved descriptive statistics, independent t-tests, and repeated measures analysis of variance.</p><p><strong>Results: </strong>The experimental group demonstrated significantly higher mean scores in professional attributes, knowledge, skills, and perceived self-efficacy in providing occupational health services to agricultural workers exposed to pesticides compared to the comparison group at both week 4 and week 8 post-intervention.</p><p><strong>Conclusion: </strong>The SNPCOHS program is well-suited for self-directed learning for nurses in PCUs, supporting effective occupational health service delivery. It should be disseminated and supported as an e-learning resource for NPs in PCUs (Thai Clinical Trials Registry: TCTR20250115004).</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"22 ","pages":"14"},"PeriodicalIF":9.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12138529/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144217236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of a virtual objective structured clinical examination in the metaverse (Second Life) to assess the clinical skills in emergency radiology of medical students in Spain: a cross-sectional study.","authors":"Alba Virtudes Perez-Baena, Teodoro Rudolphi-Solero, Rocio Lorenzo-Alvarez, Dolores Dominguez-Pinos, Miguel Jose Ruiz-Gomez, Francisco Sendra-Portero","doi":"10.3352/jeehp.2025.22.12","DOIUrl":"10.3352/jeehp.2025.22.12","url":null,"abstract":"<p><strong>Purpose: </strong>The objective structured clinical examination (OSCE) is an effective but resource-intensive tool for assessing clinical competence. This study hypothesized that implementing a virtual OSCE in the Second Life (SL) platform in the metaverse as a cost-effective alternative will effectively assess and enhance clinical skills in emergency radiology while being feasible and well-received. The aim was to evaluate a virtual radiology OSCE in SL as a formative assessment, focusing on feasibility, educational impact, and students' perceptions.</p><p><strong>Methods: </strong>Two virtual 6-station OSCE rooms dedicated to emergency radiology were developed in SL. Sixth-year medical students completed the OSCE during a 1-hour session in 2022-2023, followed by feedback including a correction checklist, individual scores, and group comparisons. Students completed a questionnaire with Likert-scale questions, a 10-point rating, and open-ended comments. Quantitative data were analyzed using the Student t-test and the Mann-Whitney U test, and qualitative data through thematic analysis.</p><p><strong>Results: </strong>In total, 163 students participated, achieving mean scores of 5.1±1.4 and 4.9±1.3 (out of 10) in the 2 virtual OSCE rooms, respectively (P=0.287). One hundred seventeen students evaluated the OSCE, praising the teaching staff (9.3±1.0), project organization (8.8±1.2), OSCE environment (8.7±1.5), training usefulness (8.6±1.5), and formative self-assessment (8.5±1.4). 
Likert-scale questions and students' open-ended comments highlighted the virtual environment's attractiveness, case selection, self-evaluation usefulness, project excellence, and training impact. Technical difficulties were reported by 13 students (8%).</p><p><strong>Conclusion: </strong>This study demonstrated the feasibility of incorporating formative OSCEs in SL as a useful teaching tool for undergraduate radiology education, which was cost-effective and highly valued by students.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"22 ","pages":"12"},"PeriodicalIF":9.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12202975/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144250212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The irtQ R package: a user-friendly tool for item response theory-based test data analysis and calibration.","authors":"Hwanggyu Lim, Kyung Seok Kang","doi":"10.3352/jeehp.2024.21.23","DOIUrl":"https://doi.org/10.3352/jeehp.2024.21.23","url":null,"abstract":"Computerized adaptive testing (CAT) has become a widely adopted test design for high-stakes licensing and certification exams, particularly in the health professions in the United States, due to its ability to tailor test difficulty in real time, reducing testing time while providing precise ability estimates. A key component of CAT is item response theory (IRT), which facilitates the dynamic selection of items based on examinees' ability levels during a test. Accurate estimation of item and ability parameters is essential for successful CAT implementation, necessitating convenient and reliable software to ensure precise parameter estimation. This paper introduces the irtQ R package, which simplifies IRT-based analysis and item calibration under unidimensional IRT models. While it does not directly simulate CAT, it provides essential tools to support CAT development, including parameter estimation using marginal maximum likelihood estimation via the expectation-maximization algorithm, pretest item calibration through fixed item parameter calibration and fixed ability parameter calibration methods, and examinee ability estimation. The package also enables users to compute item and test characteristic curves and information functions necessary for evaluating the psychometric properties of a test. This paper illustrates the key features of the irtQ package through examples using simulated datasets, demonstrating its utility in IRT applications such as test data analysis and ability scoring. By providing a user-friendly environment for IRT analysis, irtQ significantly enhances the capacity for efficient adaptive testing research and operations. 
Finally, the paper highlights additional core functionalities of irtQ, emphasizing its broader applicability to the development and operation of IRT-based assessments.","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"154 1","pages":"23"},"PeriodicalIF":4.4,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142176646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
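The item characteristic curves and information functions the irtQ abstract refers to have closed forms under unidimensional IRT models; a self-contained Python sketch of the 2-parameter logistic (2PL) case (irtQ itself is an R package, so this only illustrates the underlying formulas, not its API):

```python
import math

def icc_2pl(theta, a, b):
    """2PL item characteristic curve: probability of a correct response
    at ability theta, for discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_info_2pl(theta, a, b):
    """2PL item information: a^2 * P * (1 - P); peaks at theta = b."""
    p = icc_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

if __name__ == "__main__":
    # At theta == b the 2PL probability is exactly 0.5,
    # and information is maximal (a^2 / 4).
    print(round(icc_2pl(0.0, 1.2, 0.0), 3))        # 0.5
    print(round(item_info_2pl(0.0, 1.2, 0.0), 3))  # 0.36
```

In CAT item selection, the next item administered is typically the one maximizing this information at the examinee's current ability estimate.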