Mnemonic evaluative frameworks in scholarly publications: A cited reference analysis across disciplines and AI-mediated contexts

IF 2.3 · CAS Tier 3 (Management) · Q2 · INFORMATION SCIENCE & LIBRARY SCIENCE
Robert Tomaszewski
{"title":"Mnemonic evaluative frameworks in scholarly publications: A cited reference analysis across disciplines and AI-mediated contexts","authors":"Robert Tomaszewski","doi":"10.1016/j.acalib.2025.103113","DOIUrl":null,"url":null,"abstract":"<div><div>Mnemonic evaluative frameworks have become central to information literacy instruction for assessing information credibility. Widely used tools such as CRAAP, CARS, ACT UP, and SIFT remain underrepresented in scholarly literature and insufficiently aligned with emerging information challenges. This study uses cited reference analysis in the Scopus database to examine 16 mnemonic evaluative frameworks across 280 peer-reviewed journal articles, conference papers, and review articles. Citation patterns were analyzed by year, discipline, institutional affiliation, and source title. Findings reveal that while legacy models like CRAAP and CARS retain the most citations, newer frameworks such as SIFT and RADAR are proportionally more cited in AI-related literature. A subset of 49 AI-focused citing documents indicates a disciplinary shift from Library and Information Sciences toward Computer Science, Engineering, Business, and Decision Sciences since 2022. These results highlight the need for adaptive, systems-aware models that address credibility challenges associated with generative AI and algorithmic curation. In response, this study introduces the CAT Test (<em>Check, Ask, Think</em>), a three-part evaluative framework designed to help learners assess AI-generated content by corroborating claims, interrogating model reasoning, and reflecting on platform influence. The findings inform instructional design and contribute to ongoing conversations about algorithmic transparency and credibility in academic librarianship.</div></div>","PeriodicalId":47762,"journal":{"name":"Journal of Academic Librarianship","volume":"51 5","pages":"Article 103113"},"PeriodicalIF":2.3000,"publicationDate":"2025-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Academic Librarianship","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0099133325001090","RegionNum":3,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Mnemonic evaluative frameworks have become central to information literacy instruction for assessing information credibility. Widely used tools such as CRAAP, CARS, ACT UP, and SIFT remain underrepresented in scholarly literature and insufficiently aligned with emerging information challenges. This study uses cited reference analysis in the Scopus database to examine 16 mnemonic evaluative frameworks across 280 peer-reviewed journal articles, conference papers, and review articles. Citation patterns were analyzed by year, discipline, institutional affiliation, and source title. Findings reveal that while legacy models like CRAAP and CARS retain the most citations, newer frameworks such as SIFT and RADAR are proportionally more cited in AI-related literature. A subset of 49 AI-focused citing documents indicates a disciplinary shift from Library and Information Sciences toward Computer Science, Engineering, Business, and Decision Sciences since 2022. These results highlight the need for adaptive, systems-aware models that address credibility challenges associated with generative AI and algorithmic curation. In response, this study introduces the CAT Test (Check, Ask, Think), a three-part evaluative framework designed to help learners assess AI-generated content by corroborating claims, interrogating model reasoning, and reflecting on platform influence. The findings inform instructional design and contribute to ongoing conversations about algorithmic transparency and credibility in academic librarianship.
Source journal

Journal of Academic Librarianship (Information Science & Library Science)
CiteScore: 5.30
Self-citation rate: 15.40%
Annual output: 120 articles
Review time: 29 days

Journal description: The Journal of Academic Librarianship, an international and refereed journal, publishes articles that focus on problems and issues germane to college and university libraries. JAL provides a forum for authors to present research findings and, where applicable, their practical applications and significance; analyze policies, practices, issues, and trends; speculate about the future of academic librarianship; and present analytical bibliographic essays and philosophical treatises. JAL also brings to the attention of its readers information about hundreds of new and recently published books in library and information science, management, scholarly communication, and higher education. In addition, JAL covers management and discipline-based software and information policy developments.