{"title":"Quality of multiple choice question items: item analysis","authors":"Ayenew Takele Alemu, Hiwot Tesfa, Addisu Mulugeta, E. Fenta, Mahider Awoke Belay","doi":"10.18203/issn.2454-2156.intjscirep20241316","DOIUrl":null,"url":null,"abstract":"Background: There are different types of exam formats for educational assessment. Multiple choice questions (MCQs) are frequently utilized assessment tools in health education. Considering the reliability and validity in developing MCQ items is vital. Educators often face the difficulty of developing credible distractors in MCQ items. Poorly constructed MCQ items make an exam easier or too difficult to be answered correctly by students as intended learning objectives. Checking the quality of MCQ items is overlooked and too little is known about it. Therefore, this study aimed to assess the quality of MCQ items using the item response theory model. \nMethods: A descriptive cross-sectional study was conducted among MCQ items of public health courses administered to 2nd year nursing students at Injibara university. A total of 50 MCQ items and 200 alternatives were evaluated for statistical item analysis. The quality of MCQ items was assessed by difficulty index (DIF), discrimination index (DI), and distractor efficiency (DE) using students’ exam responses. Microsoft excel sheet and SPSS version 26 were used for data management and analysis. \nResults: Post-exam item analysis showed that 11 (22%) and 22 (44%) MCQs had too difficult and poor ranges for difficulty and discriminating powers respectively. The overall DE was 71.3%. About forty (20%) distractors were non-functional. Only 8 (16%) MCQs fulfilled the recommended criteria for all-DIF, DI, and DE parameters. \nConclusions: The desirable criteria for quality parameters of MCQ items were satisfied only in a few items. The result implies the need for quality improvement. Continuous trainings are required to improve the instructors’ skills to construct quality educational assessment tools.","PeriodicalId":14297,"journal":{"name":"International Journal of Scientific Reports","volume":"54 20","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Scientific Reports","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.18203/issn.2454-2156.intjscirep20241316","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Background: There are different exam formats for educational assessment, and multiple choice questions (MCQs) are among the most frequently used assessment tools in health education. Considering reliability and validity when developing MCQ items is vital. Educators often struggle to develop credible distractors for MCQ items, and poorly constructed items make an exam too easy or too difficult for students to answer correctly in line with the intended learning objectives. Checking the quality of MCQ items is often overlooked, and too little is known about it. Therefore, this study aimed to assess the quality of MCQ items using the item response theory model.
Methods: A descriptive cross-sectional study was conducted on the MCQ items of public health courses administered to second-year nursing students at Injibara University. A total of 50 MCQ items and their 200 alternatives were evaluated through statistical item analysis. The quality of the MCQ items was assessed by the difficulty index (DIF), discrimination index (DI), and distractor efficiency (DE), calculated from students' exam responses. Microsoft Excel and SPSS version 26 were used for data management and analysis.
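The abstract does not give computational details, but the three indices it names (DIF, DI, DE) have conventional classical-test-theory definitions. Below is a minimal Python sketch under common assumptions: upper and lower 27% scorer groups for the discrimination index, DIF = (H + L) / 2g expressed as a percentage, and the usual threshold of at least 5% response share for a "functional" distractor. The function name, the 4-option format, and the random example data are illustrative, not taken from the paper.

```python
import numpy as np

def item_analysis(responses, key, n_options=4, group_frac=0.27):
    """Classical item analysis for one MCQ exam (illustrative sketch).

    responses : (n_students, n_items) array of chosen option indices (0-based)
    key       : (n_items,) array of correct option indices
    Returns per-item difficulty index (DIF, %), discrimination index (DI),
    and distractor efficiency (DE, %).
    """
    scores = (responses == key).astype(int)       # 1 = correct, 0 = wrong
    totals = scores.sum(axis=1)                   # total score per student
    order = np.argsort(totals)                    # students sorted low -> high
    n = len(order)
    g = max(1, int(round(group_frac * n)))        # size of upper/lower groups
    low, high = order[:g], order[-g:]

    dif, di, de = [], [], []
    for j in range(responses.shape[1]):
        h = scores[high, j].sum()                 # correct in upper group
        l = scores[low, j].sum()                  # correct in lower group
        dif.append(100.0 * (h + l) / (2 * g))     # difficulty index, %
        di.append((h - l) / g)                    # discrimination index

        # A distractor is "functional" if >= 5% of examinees choose it.
        counts = np.bincount(responses[:, j], minlength=n_options)
        distractors = [k for k in range(n_options) if k != key[j]]
        functional = sum(counts[k] >= 0.05 * n for k in distractors)
        de.append(100.0 * functional / len(distractors))

    return np.array(dif), np.array(di), np.array(de)

# Hypothetical usage: 100 students (the abstract does not state the number)
# answering the 50-item, 4-option exam described in the paper.
rng = np.random.default_rng(0)
resp = rng.integers(0, 4, size=(100, 50))
key = rng.integers(0, 4, size=50)
dif, di, de = item_analysis(resp, key)
```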
Results: Post-exam item analysis showed that 11 (22%) MCQs fell in the "too difficult" range of the difficulty index and 22 (44%) fell in the "poor" range of the discrimination index. The overall DE was 71.3%. Forty (20%) distractors were non-functional. Only 8 (16%) MCQs fulfilled the recommended criteria for all three parameters (DIF, DI, and DE).
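The "recommended criteria" are not spelled out in the abstract. Commonly cited cutoffs in the MCQ item-analysis literature are a DIF between 30% and 70%, a DI of at least 0.20, and a DE of 100% (no non-functional distractors); a hypothetical flagging step using those assumed cutoffs, continuing from the arrays returned by the sketch above, might be:

```python
def classify(dif, di, de, dif_range=(30.0, 70.0), di_min=0.20):
    """Flag items meeting commonly used quality cutoffs (assumed here,
    not confirmed by the paper): DIF in 30-70%, DI >= 0.20, DE == 100%."""
    ok_dif = (dif >= dif_range[0]) & (dif <= dif_range[1])
    ok_di = di >= di_min
    ok_de = de == 100.0
    return ok_dif & ok_di & ok_de   # True = item meets all three criteria

n_good = classify(dif, di, de).sum()  # count of items passing all criteria
```

This mirrors in structure, not in value, the paper's count of 8 (16%) items meeting all three parameters.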
Conclusions: Only a few items satisfied the desirable criteria for the quality parameters of MCQ items. This result implies the need for quality improvement. Continuous training is required to improve instructors' skills in constructing quality educational assessment tools.