Comparison Between Serial and Independent Questions: A Psychometric and Methodological Approach.

Journal of Medical Education and Curricular Development · Impact Factor 1.6 · JCR Q2 (Education, Scientific Disciplines)
Víctor Hugo Olmedo Canchola, José Gamaliel Velazco González, Gustavo Quiroga Martínez
DOI: 10.1177/23821205251359701 · Published 2025-07-15 (eCollection 2025) · Vol. 12, article 23821205251359701
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12267950/pdf/
Citations: 0

Abstract

Objective: To determine whether statistical and psychometric outcomes differ between tests composed of serial and independent questions. Specific goals include assessing which format provides better reliability and validity, understanding response patterns, and comparing difficulty and discrimination indices under classical test theory.
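For context on the classical test theory statistics the study compares, the following is a minimal Python sketch, using only NumPy and a small hypothetical 0/1 response matrix, of how item difficulty, corrected item-total discrimination, and Cronbach's alpha are conventionally computed; the data and variable names are illustrative and not taken from the study.

```python
import numpy as np

# Hypothetical 0/1 response matrix: rows = examinees, columns = items.
# Values are illustrative only; the study's data are not reproduced here.
responses = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [0, 1, 1, 1, 1],
    [1, 1, 1, 0, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
])

total_scores = responses.sum(axis=1)

# Item difficulty (p): proportion of examinees answering each item correctly.
difficulty = responses.mean(axis=0)

# Item discrimination: point-biserial correlation between each item and the
# total score with that item removed (corrected item-total correlation).
rest_scores = total_scores[:, None] - responses
discrimination = np.array([
    np.corrcoef(responses[:, j], rest_scores[:, j])[0, 1]
    for j in range(responses.shape[1])
])

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
k = responses.shape[1]
item_variances = responses.var(axis=0, ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_scores.var(ddof=1))

print("difficulty:", np.round(difficulty, 3))
print("discrimination:", np.round(discrimination, 3))
print("Cronbach's alpha:", round(alpha, 3))
```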

Methodology: The study used a single-group design with spiral counterbalancing, so examinees answered items in both formats within a single 220-item exam. Of these, 200 were independent questions and 20 were organized into 4 clinical cases of 5 related items each. The exam was administered by computer to anesthesiologists undergoing certification or recertification.
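The abstract does not describe how the counterbalancing was implemented. Purely as an illustration of what a single-group spiral counterbalance over 200 independent items and 4 five-item case blocks could look like, the hypothetical sketch below rotates the positions of the case blocks across examinees so that serial items appear early, mid, and late about equally often; all identifiers and the form-building logic are assumptions, not the authors' procedure.

```python
import random

# Hypothetical item identifiers: 200 independent items and 4 clinical cases
# with 5 linked items each (IDs are illustrative, not the study's item bank).
independent_items = [f"IND-{i:03d}" for i in range(1, 201)]
cases = {f"CASE-{c}": [f"CASE-{c}-Q{q}" for q in range(1, 6)] for c in "ABCD"}

def build_form(examinee_index: int) -> list[str]:
    """Assemble one 220-item form, rotating where the case blocks appear.

    Each case block stays intact (its 5 items remain contiguous and ordered),
    but the starting block rotates with the examinee index, so that across the
    group each case is encountered at different points in the exam.
    """
    form = independent_items.copy()
    random.Random(examinee_index).shuffle(form)  # per-form order of independent items
    case_blocks = list(cases.values())
    # Rotate which case block comes first for this examinee (the "spiral").
    rotation = examinee_index % len(case_blocks)
    case_blocks = case_blocks[rotation:] + case_blocks[:rotation]
    # Spread the 4 blocks roughly evenly through the 200 independent items.
    for slot, block in enumerate(case_blocks, start=1):
        insert_at = slot * len(independent_items) // (len(case_blocks) + 1)
        form[insert_at:insert_at] = block
    return form

form = build_form(examinee_index=7)
assert len(form) == 220
```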

Results: Data from 2109 candidates showed a significant difference in internal consistency, with Cronbach's alpha of .790 for independent questions and .527 for serial questions. A moderate positive correlation (r = .488) was observed between scores in the 2 formats. No significant difference was found in difficulty or discrimination indices between the formats.
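As an illustration of how the reported correlation can be computed and read, here is a small sketch using SciPy's pearsonr on hypothetical per-examinee subscores (not the study's data); with the reported r = .488, the shared variance r² works out to about .24.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-examinee subscores for the two formats (not the study data):
# proportion correct on the 200 independent items and on the 20 serial items.
rng = np.random.default_rng(0)
independent_scores = rng.uniform(0.4, 0.95, size=50)
serial_scores = 0.5 * independent_scores + rng.normal(0, 0.1, size=50)

r, p_value = pearsonr(independent_scores, serial_scores)
shared_variance = r ** 2  # proportion of variance the two subscores share

print(f"r = {r:.3f}, p = {p_value:.4f}, shared variance = {shared_variance:.3f}")
# With the article's reported r = .488, the shared variance is about .238,
# i.e. the two formats overlap on roughly 24% of score variance.
```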

Discussion: Independent questions showed higher reliability, likely because their items do not depend on one another, making them more suitable for high-stakes exams. Serial questions, while valuable for assessing integrative reasoning, introduce dependency that reduces consistency and may skew outcomes when the initial question is answered incorrectly. Despite similar difficulty and discrimination indices, this item dependency limits the suitability of serial questions for high-stakes testing.

Conclusions: Independent questions provide a more reliable format for high-stakes exams, but serial questions can enhance assessments by probing various aspects of clinical reasoning within a single case. A balanced approach incorporating both formats may optimize the reliability and validity of medical certification exams, leveraging the strengths of each question type.
