{"title":"教学改进与图书馆价值展示的信息素养评估:本地测试与商业测试的比较","authors":"Kathy E Clarke, C. Radcliff","doi":"10.29242/lac.2018.74","DOIUrl":null,"url":null,"abstract":"This paper describes two types of fixed-choice information literacy tests, one locally created and one nationally developed. The Madison Research Essentials Skills Test (MREST) is part of a tutorial-test model for first-year library instruction at James Madison University. Students must pass the test before they can move to sophomore status. This testing process relies on a collaborative model between JMU Libraries, the General Education program, and the Center for Assessment Research Studies (CARS). On the national level, the recently created Threshold Achievement Test for Information Literacy (TATIL) is based on the ACRL Framework for Information Literacy and in four test modules measures both information literacy knowledge and dispositions. TATIL was created by librarians and other educators and can be used to guide instructional program changes, for external and internal reporting and to give students recommendations for improving their information literacy. The decision to use a test and to choose which approach to take can be informed by comparing the benefits and limitations of these testing options. Tests have been used to assess information literacy for many years. Whether it is a quick test created for local use after instructional sessions, an institutional test to ensure that skills have been acquired or for longitudinal study of student knowledge, or a standardized test offering multi-institutional comparisons of results, this assessment method has a long history and a strong presence in library assessment. This paper explores two types of fixed-choice tests, one locally created and one commercially sponsored, which can be used for program improvement. Fixed-choice tests are one method among many for assessing achievement and ability. The benefits and limitations of standardized tests are well documented.1 Despite criticisms, tests are in wide use by colleges and universities, professional organizations, and testing companies. Well-written tests are effective, versatile, and can measure both lower-order and higher-order thinking skills.2 Fixed-choice tests are relatively easy to administer and use a format that students are familiar with. They offer an efficient way to conduct large-scale assessment and typically provide results both for individual students and for groups of students such as seniors, science majors, or student athletes. Test results facilitate comparisons among groups and across time and ideally suggest improvements to instruction programs that will lead to improved learning outcomes. Fixed-choice tests come with challenges and assumptions as well. For information literacy testing that is not graded as part of a course, test-takers may lack the motivation to try their best, thereby producing results that do not fully reflect their knowledge and abilities. Test designers can address this challenge with appropriate messages and other techniques. 
Costs associated with testing can act as a barrier, whether those costs are time, expertise, or money.","PeriodicalId":193553,"journal":{"name":"Proceedings of the 2018 Library Assessment Conference: Building Effective, Sustainable, Practical Assessment: December 5–7, 2018, Houston, TX","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Information Literacy Assessment for Instruction Improvement and Demonstration of Library Value: Comparing Locally-Grown and Commercially-Created Tests\",\"authors\":\"Kathy E Clarke, C. Radcliff\",\"doi\":\"10.29242/lac.2018.74\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper describes two types of fixed-choice information literacy tests, one locally created and one nationally developed. The Madison Research Essentials Skills Test (MREST) is part of a tutorial-test model for first-year library instruction at James Madison University. Students must pass the test before they can move to sophomore status. This testing process relies on a collaborative model between JMU Libraries, the General Education program, and the Center for Assessment Research Studies (CARS). On the national level, the recently created Threshold Achievement Test for Information Literacy (TATIL) is based on the ACRL Framework for Information Literacy and in four test modules measures both information literacy knowledge and dispositions. TATIL was created by librarians and other educators and can be used to guide instructional program changes, for external and internal reporting and to give students recommendations for improving their information literacy. The decision to use a test and to choose which approach to take can be informed by comparing the benefits and limitations of these testing options. Tests have been used to assess information literacy for many years. Whether it is a quick test created for local use after instructional sessions, an institutional test to ensure that skills have been acquired or for longitudinal study of student knowledge, or a standardized test offering multi-institutional comparisons of results, this assessment method has a long history and a strong presence in library assessment. This paper explores two types of fixed-choice tests, one locally created and one commercially sponsored, which can be used for program improvement. Fixed-choice tests are one method among many for assessing achievement and ability. The benefits and limitations of standardized tests are well documented.1 Despite criticisms, tests are in wide use by colleges and universities, professional organizations, and testing companies. Well-written tests are effective, versatile, and can measure both lower-order and higher-order thinking skills.2 Fixed-choice tests are relatively easy to administer and use a format that students are familiar with. They offer an efficient way to conduct large-scale assessment and typically provide results both for individual students and for groups of students such as seniors, science majors, or student athletes. Test results facilitate comparisons among groups and across time and ideally suggest improvements to instruction programs that will lead to improved learning outcomes. Fixed-choice tests come with challenges and assumptions as well. 
For information literacy testing that is not graded as part of a course, test-takers may lack the motivation to try their best, thereby producing results that do not fully reflect their knowledge and abilities. Test designers can address this challenge with appropriate messages and other techniques. Costs associated with testing can act as a barrier, whether those costs are time, expertise, or money.\",\"PeriodicalId\":193553,\"journal\":{\"name\":\"Proceedings of the 2018 Library Assessment Conference: Building Effective, Sustainable, Practical Assessment: December 5–7, 2018, Houston, TX\",\"volume\":\"22 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2018 Library Assessment Conference: Building Effective, Sustainable, Practical Assessment: December 5–7, 2018, Houston, TX\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.29242/lac.2018.74\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2018 Library Assessment Conference: Building Effective, Sustainable, Practical Assessment: December 5–7, 2018, Houston, TX","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.29242/lac.2018.74","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Information Literacy Assessment for Instruction Improvement and Demonstration of Library Value: Comparing Locally-Grown and Commercially-Created Tests
This paper describes two types of fixed-choice information literacy tests, one locally created and one nationally developed. The Madison Research Essentials Skills Test (MREST) is part of a tutorial-test model for first-year library instruction at James Madison University. Students must pass the test before they can advance to sophomore status. This testing process relies on a collaborative model among JMU Libraries, the General Education program, and the Center for Assessment Research Studies (CARS). At the national level, the recently created Threshold Achievement Test for Information Literacy (TATIL) is based on the ACRL Framework for Information Literacy and, across four test modules, measures both information literacy knowledge and dispositions. TATIL was created by librarians and other educators and can be used to guide changes to instructional programs, to support external and internal reporting, and to give students recommendations for improving their information literacy. The decision to use a test, and the choice of which approach to take, can be informed by comparing the benefits and limitations of these testing options.

Tests have been used to assess information literacy for many years. Whether it is a quick test created for local use after instructional sessions, an institutional test to ensure that skills have been acquired or to study student knowledge longitudinally, or a standardized test offering multi-institutional comparisons of results, this assessment method has a long history and a strong presence in library assessment. This paper explores two types of fixed-choice tests, one locally created and one commercially sponsored, which can be used for program improvement.

Fixed-choice tests are one method among many for assessing achievement and ability. The benefits and limitations of standardized tests are well documented.1 Despite criticisms, tests are in wide use by colleges and universities, professional organizations, and testing companies. Well-written tests are effective, versatile, and able to measure both lower-order and higher-order thinking skills.2 Fixed-choice tests are relatively easy to administer and use a format with which students are familiar. They offer an efficient way to conduct large-scale assessment and typically provide results both for individual students and for groups of students, such as seniors, science majors, or student athletes. Test results facilitate comparisons among groups and across time and, ideally, suggest improvements to instruction programs that will lead to better learning outcomes.

Fixed-choice tests come with challenges and assumptions as well. For information literacy testing that is not graded as part of a course, test-takers may lack the motivation to do their best, producing results that do not fully reflect their knowledge and abilities. Test designers can address this challenge with appropriate messaging and other techniques. Costs associated with testing can also act as a barrier, whether those costs are measured in time, expertise, or money.