{"title":"阅读理解考试试题分析教师队伍与培训教育","authors":"Viator Lumban Raja","doi":"10.54367/kairos.v4i1.847","DOIUrl":null,"url":null,"abstract":"It is not uncommon to put a blame on the students when they fail in the semester examination. The examiner or the one who constructs the test is rarely blamed or questioned why such a thing can happen. There is never a question whether the test is valid or reliable. In other words, the test itself is never evaluated in order to know if it meets the level of difficulty and power of discrimination. Madsen (1983: 180) says that item analysis tells us three things: (1) how difficult each item is, (2)whether or not the question discriminated or tells the difference between high and low students, (3) which distracters are working as they should.  This reading comprehension examination consists of 44 items, 35 items of reading comprehension and 9 items of vocabulary. The number of test takers are 18 students. The result of the analysis shows that only 5 students (27.7%) can do the test within average, meaning they can answer the test 50% correct of the total test items. This belongs to moderate category, not high nor excellent. Of the 44 test items, 33(75%) are bad items in that they do not fulfill one or both of the requirements concerning the level of difficulty and power of discrimination. And only 11 items (25%) meet the requirements of level of difficulty and power of discrimination. Regarding the distracters, there are 20 items (45.45%) whose distracters are not chosen either one or two. There are two items (4.54%), 25 and 34, the correct answer of which is not chosen by the test takers, including the high and low group. In short, these 20 items needs revising in term of distracters. Revision is made to those items whose distracters are not chosen and those which do not fulfill the requirements of level of difficulty and power of discrimination. Distracters which look too easy are changed, and those which are not totally chosen are revised. ","PeriodicalId":184113,"journal":{"name":"Kairos English Language Teaching Journal","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"TEST ITEM ANALYSIS OF READING COMPREHENSION EXAMINATION FACULTY OF TEACHERS AND TRAINING EDUCATION\",\"authors\":\"Viator Lumban Raja\",\"doi\":\"10.54367/kairos.v4i1.847\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"It is not uncommon to put a blame on the students when they fail in the semester examination. The examiner or the one who constructs the test is rarely blamed or questioned why such a thing can happen. There is never a question whether the test is valid or reliable. In other words, the test itself is never evaluated in order to know if it meets the level of difficulty and power of discrimination. Madsen (1983: 180) says that item analysis tells us three things: (1) how difficult each item is, (2)whether or not the question discriminated or tells the difference between high and low students, (3) which distracters are working as they should.  This reading comprehension examination consists of 44 items, 35 items of reading comprehension and 9 items of vocabulary. The number of test takers are 18 students. The result of the analysis shows that only 5 students (27.7%) can do the test within average, meaning they can answer the test 50% correct of the total test items. 
This belongs to moderate category, not high nor excellent. Of the 44 test items, 33(75%) are bad items in that they do not fulfill one or both of the requirements concerning the level of difficulty and power of discrimination. And only 11 items (25%) meet the requirements of level of difficulty and power of discrimination. Regarding the distracters, there are 20 items (45.45%) whose distracters are not chosen either one or two. There are two items (4.54%), 25 and 34, the correct answer of which is not chosen by the test takers, including the high and low group. In short, these 20 items needs revising in term of distracters. Revision is made to those items whose distracters are not chosen and those which do not fulfill the requirements of level of difficulty and power of discrimination. Distracters which look too easy are changed, and those which are not totally chosen are revised. \",\"PeriodicalId\":184113,\"journal\":{\"name\":\"Kairos English Language Teaching Journal\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-07-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Kairos English Language Teaching Journal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.54367/kairos.v4i1.847\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Kairos English Language Teaching Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.54367/kairos.v4i1.847","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
TEST ITEM ANALYSIS OF READING COMPREHENSION EXAMINATION FACULTY OF TEACHERS AND TRAINING EDUCATION
It is not uncommon to put the blame on students when they fail the semester examination. The examiner, or the person who constructs the test, is rarely blamed or asked why such a thing can happen. The question of whether the test is valid or reliable is never raised. In other words, the test itself is never evaluated to find out whether its items meet the requirements of level of difficulty and power of discrimination. Madsen (1983: 180) says that item analysis tells us three things: (1) how difficult each item is, (2) whether or not the question discriminates, that is, tells the difference between high and low students, and (3) which distracters are working as they should. This reading comprehension examination consists of 44 items: 35 items of reading comprehension and 9 items of vocabulary. The number of test takers is 18 students. The result of the analysis shows that only 5 students (27.7%) scored at or above the average, meaning they answered at least 50% of the total test items correctly. This falls into the moderate category, neither high nor excellent. Of the 44 test items, 33 (75%) are bad items in that they do not fulfill one or both of the requirements concerning level of difficulty and power of discrimination, and only 11 items (25%) meet both requirements. Regarding the distracters, there are 20 items (45.45%) for which one or two of the distracters are not chosen at all. There are two items (4.54%), items 25 and 34, whose correct answer is not chosen by any of the test takers, including those in the high and low groups. In short, these 20 items need revising in terms of their distracters. Revision is made to those items whose distracters are not chosen and to those which do not fulfill the requirements of level of difficulty and power of discrimination. Distracters which look too easy are changed, and those which are not chosen at all are revised.
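As a rough illustration of the kind of classical item analysis described above, the sketch below computes a difficulty (facility) value, a high/low discrimination index, and distracter counts for each item. The function name, the data layout, and the 50% high/low grouping are assumptions made for illustration; they are not taken from the paper itself.

```python
# Minimal sketch of classical item analysis (difficulty, discrimination,
# distracter counts). Names, data layout, and grouping are illustrative
# assumptions, not the study's actual procedure.
from collections import Counter

def item_analysis(responses, key, group_fraction=0.5):
    """responses: one answer string per student, e.g. ["ABCDA...", ...];
    key: the answer key string; group_fraction: share of students placed
    in each of the high and low groups (0.5 splits 18 takers into 9 and 9)."""
    # Total score per student, then rank students from highest to lowest.
    scores = [sum(r[i] == key[i] for i in range(len(key))) for r in responses]
    ranked = [r for _, r in sorted(zip(scores, responses), reverse=True)]
    n_group = max(1, int(len(ranked) * group_fraction))
    high, low = ranked[:n_group], ranked[-n_group:]

    results = []
    for i, correct in enumerate(key):
        # Difficulty (facility): proportion of all test takers answering correctly.
        p = sum(r[i] == correct for r in responses) / len(responses)
        # Discrimination: high-group correct minus low-group correct, per group size.
        d = (sum(r[i] == correct for r in high)
             - sum(r[i] == correct for r in low)) / n_group
        # How often each wrong option (distracter) was actually chosen.
        distracters = Counter(r[i] for r in responses if r[i] != correct)
        results.append((i + 1, round(p, 2), round(d, 2), dict(distracters)))
    return results
```

On output like this, an item would be flagged for revision when its difficulty falls outside an acceptable band (commonly around 0.30 to 0.70), when its discrimination index is low or negative, or when some distracters attract no test takers at all; these mirror the three criteria the abstract applies, though the exact cut-off values used in the study are not stated here and the figures above are only common rules of thumb.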