Information Literacy Assessment for Instruction Improvement and Demonstration of Library Value: Comparing Locally-Grown and Commercially-Created Tests

Kathy E. Clarke, C. Radcliff
DOI: 10.29242/lac.2018.74
Published in: Proceedings of the 2018 Library Assessment Conference: Building Effective, Sustainable, Practical Assessment: December 5–7, 2018, Houston, TX
Publication date: October 1, 2019

Abstract

This paper describes two types of fixed-choice information literacy tests, one locally created and one nationally developed. The Madison Research Essentials Skills Test (MREST) is part of a tutorial-test model for first-year library instruction at James Madison University. Students must pass the test before they can advance to sophomore status. This testing process relies on a collaborative model involving JMU Libraries, the General Education program, and the Center for Assessment and Research Studies (CARS). At the national level, the recently created Threshold Achievement Test for Information Literacy (TATIL) is based on the ACRL Framework for Information Literacy and, across four test modules, measures both information literacy knowledge and dispositions. TATIL was created by librarians and other educators and can be used to guide instructional program changes, to support external and internal reporting, and to give students recommendations for improving their information literacy. The decision to use a test, and which approach to take, can be informed by comparing the benefits and limitations of these testing options.

Tests have been used to assess information literacy for many years. Whether it is a quick test created for local use after instructional sessions, an institutional test to verify that skills have been acquired or to study student knowledge longitudinally, or a standardized test offering multi-institutional comparisons of results, this assessment method has a long history and a strong presence in library assessment. This paper explores two types of fixed-choice tests, one locally created and one commercially sponsored, that can be used for program improvement.

Fixed-choice tests are one method among many for assessing achievement and ability. The benefits and limitations of standardized tests are well documented.1 Despite criticisms, tests are in wide use by colleges and universities, professional organizations, and testing companies. Well-written tests are effective and versatile, and they can measure both lower-order and higher-order thinking skills.2 Fixed-choice tests are relatively easy to administer and use a format with which students are familiar. They offer an efficient way to conduct large-scale assessment and typically provide results both for individual students and for groups of students such as seniors, science majors, or student athletes. Test results facilitate comparisons among groups and across time and, ideally, suggest improvements to instruction programs that will lead to improved learning outcomes.

Fixed-choice tests come with challenges and assumptions as well. For information literacy testing that is not graded as part of a course, test-takers may lack the motivation to try their best, thereby producing results that do not fully reflect their knowledge and abilities. Test designers can address this challenge with appropriate messaging and other techniques. Costs associated with testing can also act as a barrier, whether those costs are time, expertise, or money.