Y. Setyowati, S. Susanto, A. Munir
World Journal on Educational Technology: Current Issues, published 2022-09-30
DOI: 10.18844/wjet.v14i5.7296
A revised Bloom's taxonomy evaluation of formal written language test items
This paper aims to assess the appropriateness of test items in language tests according to Bloom's Taxonomy. Thirty written language tests created by EFL lecturers were analyzed. Document analysis was applied: the data were categorized and examined. In the remembering tests, crucial questions were applied, while comprehension tests involved finding specific examples or data, identifying general concepts or ideas, and abstracting themes. Applying tests required completing particular projects or solving problems, analyzing tests involved conducting a SWOT analysis, evaluating tests required demonstrating a strategic plan, and, lastly, creating tests asked test-takers to create new things or ideas, generalize, and draw conclusions. The findings showed that test items at the remembering level stood at 66%, understanding at 16%, and applying at 2%, while the analyzing level accounted for 9%, evaluating for 2%, and creating for 5%. This reveals a disparity between the use of lower-order thinking skills (LOTS) and higher-order thinking skills (HOTS). Hence, Bloom's taxonomy was not well distributed across the language tests.
Keywords: test items, formal written language tests, Revised Bloom taxonomy