Unsupervised Optical Mark Recognition on Answer Sheets for Massive Printed Multiple-Choice Tests.

Journal of Imaging · IF 2.7 · JCR Q3 (Imaging Science & Photographic Technology)
Yahir Hernández-Mier, Marco Aurelio Nuño-Maganda, Said Polanco-Martagón, Guadalupe Acosta-Villarreal, Rubén Posada-Gómez
{"title":"Unsupervised Optical Mark Recognition on Answer Sheets for Massive Printed Multiple-Choice Tests.","authors":"Yahir Hernández-Mier, Marco Aurelio Nuño-Maganda, Said Polanco-Martagón, Guadalupe Acosta-Villarreal, Rubén Posada-Gómez","doi":"10.3390/jimaging11090308","DOIUrl":null,"url":null,"abstract":"<p><p>The large-scale evaluation of multiple-choice tests is a challenging task from the perspective of image processing. A typical instrument is a multiple-choice question test that employs an answer sheet with circles or squares. Once students have finished the test, the answer sheets are digitized and sent to a processing center for scoring. Operators compute each exam score manually, but this task requires considerable time. While it is true that mature algorithms exist for detecting circles under controlled conditions, they may fail in real-life applications, even when using controlled conditions for image acquisition of the answer sheets. This paper proposes a desktop application for optical mark recognition (OMR) on the scanned multiple-choice question (MCQ) test answer sheets. First, we compiled a set of answer sheet images corresponding to 6029 exams (totaling 564,040 four-option answers) applied in 2024 in Tamaulipas, Mexico. Subsequently, we developed an image-processing module that extracts answers from the answer sheets and an interface for operators to perform analysis by selecting the folder containing the exams and generating results in a tabulated format. We evaluated the image-processing module, achieving a percentage of 96.15% of exams graded without error and 99.95% of 4-option answers classified correctly. 
We obtained these percentages by comparing the answers generated through our system with those generated by human operators, who took an average of 2 min to produce the answers for a single answer sheet, while the automated version took an average of 1.04 s.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 9","pages":""},"PeriodicalIF":2.7000,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12470569/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Imaging","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/jimaging11090308","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

The large-scale evaluation of multiple-choice tests is a challenging task from the perspective of image processing. A typical instrument is a multiple-choice question test that employs an answer sheet with circles or squares. Once students have finished the test, the answer sheets are digitized and sent to a processing center for scoring. Operators compute each exam score manually, but this task requires considerable time. Although mature circle-detection algorithms exist, they may fail in real-life applications, even when image acquisition of the answer sheets is performed under controlled conditions. This paper proposes a desktop application for optical mark recognition (OMR) on scanned multiple-choice question (MCQ) test answer sheets. First, we compiled a set of answer sheet images corresponding to 6029 exams (totaling 564,040 four-option answers) administered in 2024 in Tamaulipas, Mexico. Subsequently, we developed an image-processing module that extracts answers from the answer sheets, and an interface that lets operators run the analysis by selecting the folder containing the exams and generating results in tabulated format. We evaluated the image-processing module: it graded 96.15% of exams without error and classified 99.95% of four-option answers correctly. We obtained these percentages by comparing the answers generated by our system with those produced by human operators, who took an average of 2 min to produce the answers for a single answer sheet, while the automated version took an average of 1.04 s.
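The paper's exact decision logic is not reproduced on this page. As a rough illustration of the final step such an OMR module typically performs, the sketch below classifies one four-option question from per-bubble "fill ratios" (the fraction of dark pixels inside each detected circle). The threshold values, the ambiguity rule, and the function name are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of OMR answer classification (not the paper's method).
# Each detected bubble yields a fill ratio in [0, 1]; the row is classified
# as the most-filled option, or rejected as blank/ambiguous.

OPTIONS = "ABCD"
MARK_THRESHOLD = 0.45   # assumed: a bubble counts as filled above this ratio
AMBIGUITY_GAP = 0.15    # assumed: top two ratios must differ by this much

def classify_answer(fill_ratios):
    """Return the selected option letter, or None if blank or ambiguous."""
    ranked = sorted(zip(fill_ratios, OPTIONS), reverse=True)
    (best, letter), (second, _) = ranked[0], ranked[1]
    if best < MARK_THRESHOLD:          # no bubble filled enough -> blank row
        return None
    if best - second < AMBIGUITY_GAP:  # two bubbles similarly dark -> flag
        return None
    return letter

print(classify_answer([0.05, 0.82, 0.07, 0.10]))  # clearly marked 'B'
print(classify_answer([0.05, 0.08, 0.06, 0.09]))  # blank row -> None
```

Rows returning `None` would be routed to a human operator for review, which is one plausible way a system like this keeps the per-answer error rate as low as the 99.95% reported in the abstract.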

Source journal: Journal of Imaging (Medicine: Radiology, Nuclear Medicine and Imaging)
CiteScore: 5.90 · Self-citation rate: 6.20% · Articles per year: 303 · Review time: 7 weeks