{"title":"Comparing Analytic and Mixed-Approach Rubrics for Academic Poster Quality","authors":"Michael J. Peeters , Michael J. Gonyeau","doi":"10.1016/j.ajpe.2025.101372","DOIUrl":null,"url":null,"abstract":"<div><h3>Objective</h3><div>Although there has been great interest in rubrics in recent decades, there are different types (with different advantages and disadvantages). Here, we examined and compared the use of analytic rubrics (AR) and mixed-approach rubric (MAR) types to assess the quality of research posters at an academic conference.</div></div><div><h3>Methods</h3><div>A previous systematic review identified 12 rubrics. We compared 2 notable ARs (AR1 and AR2) with a newer MAR. Sixty randomly selected research posters were downloaded from an academic conference poster repository. Two experienced academicians independently scored all posters using the AR1, AR2, and MAR. The time to score was also noted. For inter-rater reliability of scores from each rubric, traditional intraclass correlations and modern/advanced Rasch measurement were examined and compared between AR1, AR2, and MAR.</div></div><div><h3>Results</h3><div>The scores for poster quality varied using all rubrics. For traditional indexes of inter-rater reliability, all rubrics had equal or similar intraclass correlations using agreement, whereas AR1 and AR2 were slightly higher using consistency. The modern Rasch measurement showed that the single-item MAR reliably separated posters into 2 distinct groups (low quality vs high quality), which is the same as the 9-item AR2 but better than the 9-item AR1. Furthermore, the MAR’s single-item rating scale functioned well, whereas AR1 had 1 misfunctioning item rating scale and AR2 had 4 misfunctioning item rating scales. Notably, the MAR was quicker to score than the AR1 or AR2.</div></div><div><h3>Conclusion</h3><div>This MAR measured similar or better than 2 ARs and was quicker to score. This investigation illuminated common misconceptions that ARs are more accurate and a best use of time for effective measurement.</div></div>","PeriodicalId":55530,"journal":{"name":"American Journal of Pharmaceutical Education","volume":"89 3","pages":"Article 101372"},"PeriodicalIF":3.8000,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"American Journal of Pharmaceutical Education","FirstCategoryId":"95","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0002945925000178","RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}
Abstract
Objective
Although there has been great interest in rubrics in recent decades, there are different types, each with advantages and disadvantages. Here, we examined and compared analytic rubrics (ARs) and a mixed-approach rubric (MAR) for assessing the quality of research posters at an academic conference.
Methods
A previous systematic review identified 12 rubrics. We compared 2 notable ARs (AR1 and AR2) with a newer MAR. Sixty randomly selected research posters were downloaded from an academic conference poster repository. Two experienced academicians independently scored all posters using AR1, AR2, and MAR; the time to score was also recorded. For inter-rater reliability of scores from each rubric, traditional intraclass correlations and modern Rasch measurement were examined and compared across AR1, AR2, and MAR.
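As context for these methods, the sketch below illustrates the two intraclass correlation variants the study compares: ICC "agreement" (absolute agreement, two-way random effects) and ICC "consistency" (two-way mixed effects). The data are simulated, not from the study; the formulas are the standard Shrout-Fleiss two-way forms, and the 60-poster, 2-rater shape simply mirrors the study design.

```python
import numpy as np

def icc_agreement_consistency(scores):
    """scores: (n_targets, k_raters) array of rubric totals.
    Returns (ICC(2,1) absolute agreement, ICC(3,1) consistency)."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2)  # between posters
    ss_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2)  # between raters
    ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    icc_agree = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
    icc_consist = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e)
    return icc_agree, icc_consist

# Hypothetical data: 60 posters, 2 raters (mirroring the study design)
rng = np.random.default_rng(0)
true_quality = rng.normal(50, 10, size=(60, 1))
ratings = true_quality + rng.normal(0, 5, size=(60, 2))  # add rater noise
ratings[:, 1] += 3  # a systematic rater offset
print(icc_agreement_consistency(ratings))
```

The simulated rater offset shows why the two indexes can diverge, as in the Results below: a systematic difference between raters lowers absolute agreement but leaves relative consistency untouched.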
Results
The scores for poster quality varied across all rubrics. For traditional indexes of inter-rater reliability, all rubrics had equal or similar intraclass correlations using agreement, whereas AR1 and AR2 were slightly higher using consistency. Modern Rasch measurement showed that the single-item MAR reliably separated posters into 2 distinct groups (low quality vs high quality), matching the 9-item AR2 and surpassing the 9-item AR1. Furthermore, the MAR’s single-item rating scale functioned well, whereas AR1 had 1 malfunctioning item rating scale and AR2 had 4 malfunctioning item rating scales. Notably, the MAR was quicker to score than the AR1 or AR2.
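Two Rasch ideas behind these results can be illustrated with a short, self-contained sketch. This is not the authors' analysis (which would typically use dedicated Rasch software); it shows two standard constructs: Wright's separation/strata formula, which converts a reliability coefficient into the number of statistically distinct groups (the "2 distinct groups" above), and the Andrich rating-scale model, under which a "functioning" rating scale has ordered thresholds so that every category is most probable somewhere along the measure. All numeric inputs below are illustrative assumptions.

```python
import numpy as np

def rasch_strata(separation_reliability):
    """Wright's separation index G = sqrt(R / (1 - R)) and the number of
    statistically distinct strata H = (4G + 1) / 3 implied by reliability R."""
    g = np.sqrt(separation_reliability / (1.0 - separation_reliability))
    return g, (4.0 * g + 1.0) / 3.0

def rating_scale_probs(theta, delta, taus):
    """Andrich rating-scale model: P(X = x) for categories 0..m given a
    person measure theta, item difficulty delta, and thresholds taus (len m)."""
    taus = np.asarray(taus, dtype=float)
    m = len(taus)
    # log-numerator for category x: x*(theta - delta) minus first x thresholds
    logits = np.array([x * (theta - delta) - taus[:x].sum() for x in range(m + 1)])
    expn = np.exp(logits - logits.max())  # subtract max for numerical stability
    return expn / expn.sum()

# A reliability of ~0.61 implies G = 1.25, i.e., H = 2 distinct strata
print(rasch_strata(0.61))
# Ordered thresholds: the middle category peaks near theta = 0 (functioning)
print(rating_scale_probs(theta=0.0, delta=0.0, taus=[-1.0, 1.0]))
# Disordered thresholds: the middle category is never most probable
# along theta, one common signature of a malfunctioning rating scale
print(rating_scale_probs(theta=0.0, delta=0.0, taus=[1.0, -1.0]))
```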
Conclusion
This MAR measured similarly to or better than the 2 ARs and was quicker to score. This investigation challenged the common misconceptions that ARs are more accurate and a better use of time for effective measurement.
Journal Overview
The Journal accepts unsolicited manuscripts that have not been published and are not under consideration for publication elsewhere. The Journal only considers material related to pharmaceutical education for publication. Authors must prepare manuscripts to conform to the Journal style (Author Instructions). All manuscripts are subject to peer review and approval by the editor prior to acceptance for publication. Reviewers are assigned by the editor with the advice of the editorial board as needed. Manuscripts are submitted and processed online (Submit a Manuscript) using Editorial Manager, an online manuscript tracking system that facilitates communication between the editorial office, editor, associate editors, reviewers, and authors.
After a manuscript is accepted, it is scheduled for publication in an upcoming issue of the Journal. All manuscripts are formatted, copyedited, and returned to the author for review and approval of the changes. Approximately 2 weeks prior to publication, the author receives an electronic proof of the article for final review and approval. Authors are not assessed page charges for publication.