Comparing Holistic and Mixed-Approach Rubrics for Academic Poster Quality

Impact Factor 3.8 · CAS Region 4 (Education) · JCR Q1, EDUCATION, SCIENTIFIC DISCIPLINES
Michael J. Peeters, M. Ken Cor, Ashley N. Castleberry, Michael J. Gonyeau
American Journal of Pharmaceutical Education, Volume 89, Issue 4, Article 101379
DOI: 10.1016/j.ajpe.2025.101379
Published: 2025-02-28
URL: https://www.sciencedirect.com/science/article/pii/S0002945925000245
Citations: 0

Abstract

Objective

Poster quality at academic conferences varies widely. Furthermore, the few poster-quality rubrics in the literature have limited psychometric evidence. We therefore compared holistic vs mixed-approach scoring using a recently created poster rubric, scored by multiple raters, to evaluate validation evidence and time-to-score utility.

Methods

Sixty research posters were randomly selected from an academic conference’s online poster repository. Using a previously created rubric (and without rubric training), 4 pharmacy education faculty members with varying levels of poster-related experience scored each poster. Initially, each rater holistically scored the posters, providing a single overall score for each. Approximately 1 month later, the raters scored the posters again using a mixed approach, assigning 4 sub-scores and a new overall score. We used Generalizability Theory to assess the effect of rater experience and the Rasch Measurement Model to examine rating scale effectiveness and construct validation. Time-to-score for each poster was also compared.
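The Generalizability-Theory reliability analysis for a fully crossed posters × raters design can be sketched in Python. This is an illustrative reconstruction under standard one-facet G-study formulas, not the authors' analysis code; the simulated poster scores and the `g_coefficient` helper are hypothetical.

```python
import numpy as np

def g_coefficient(scores):
    """Relative G-coefficient for a fully crossed posters x raters design.

    scores: 2-D array, rows = posters (objects of measurement),
    columns = raters (the single facet).
    """
    n_p, n_r = scores.shape
    grand = scores.mean()
    p_means = scores.mean(axis=1)   # per-poster means
    r_means = scores.mean(axis=0)   # per-rater means
    # Mean squares from the two-way ANOVA decomposition (no replication)
    ms_p = n_r * ((p_means - grand) ** 2).sum() / (n_p - 1)
    resid = scores - p_means[:, None] - r_means[None, :] + grand
    ms_res = (resid ** 2).sum() / ((n_p - 1) * (n_r - 1))
    # Variance components (negative estimates truncated at zero)
    var_res = ms_res
    var_p = max((ms_p - ms_res) / n_r, 0.0)
    # Relative G-coefficient: poster variance over poster variance
    # plus rater-by-poster error averaged over the n_r raters
    return var_p / (var_p + var_res / n_r)

# Hypothetical data mirroring the study design: 60 posters, 4 raters
rng = np.random.default_rng(0)
true_quality = rng.normal(0.0, 1.0, size=60)
scores = true_quality[:, None] + rng.normal(0.0, 0.5, size=(60, 4))
print(g_coefficient(scores))
```

A larger rater-by-poster residual (for example, from less experienced raters disagreeing more) shrinks the coefficient, which is the pattern the Results section reports.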

Results

Generalizability Theory showed greater reliability with more experienced raters or when using the mixed approach. Rasch analysis indicated that rating scales functioned better with the mixed approach, and Wright maps of the construct provided useful measurement validation evidence. Raters reported scoring more quickly (30–60 s per poster) with holistic scoring, though differences in rater experience affected reliability. Meanwhile, mixed-approach scoring was slightly slower (60–90 s per poster), but the impact of rater experience was reduced.

Conclusion

Scoring was slightly faster with the holistic approach than with the mixed-approach rubric; however, differences in rater experience were lessened using the mixed approach. The mixed approach was preferable because it allowed for quick scoring while reducing the need for prior training. This rubric could be used by students and new faculty when creating posters, or by poster-competition judges. Furthermore, mixed-approach rubrics may be applied beyond posters, including to oral presentations or objective structured clinical examination stations.
Source journal
CiteScore: 4.30
Self-citation rate: 15.20%
Annual articles: 114
About the journal: The Journal accepts unsolicited manuscripts that have not been published and are not under consideration for publication elsewhere. The Journal only considers material related to pharmaceutical education for publication. Authors must prepare manuscripts to conform to the Journal style (Author Instructions). All manuscripts are subject to peer review and approval by the editor prior to acceptance for publication. Reviewers are assigned by the editor with the advice of the editorial board as needed. Manuscripts are submitted and processed online (Submit a Manuscript) using Editorial Manager, an online manuscript tracking system that facilitates communication between the editorial office, editor, associate editors, reviewers, and authors. After a manuscript is accepted, it is scheduled for publication in an upcoming issue of the Journal. All manuscripts are formatted and copyedited, and returned to the author for review and approval of the changes. Approximately 2 weeks prior to publication, the author receives an electronic proof of the article for final review and approval. Authors are not assessed page charges for publication.