{"title":"学术出版物中的助记评估框架:跨学科和人工智能介导背景下的被引参考分析","authors":"Robert Tomaszewski","doi":"10.1016/j.acalib.2025.103113","DOIUrl":null,"url":null,"abstract":"<div><div>Mnemonic evaluative frameworks have become central to information literacy instruction for assessing information credibility. Widely used tools such as CRAAP, CARS, ACT UP, and SIFT remain underrepresented in scholarly literature and insufficiently aligned with emerging information challenges. This study uses cited reference analysis in the Scopus database to examine 16 mnemonic evaluative frameworks across 280 peer-reviewed journal articles, conference papers, and review articles. Citation patterns were analyzed by year, discipline, institutional affiliation, and source title. Findings reveal that while legacy models like CRAAP and CARS retain the most citations, newer frameworks such as SIFT and RADAR are proportionally more cited in AI-related literature. A subset of 49 AI-focused citing documents indicates a disciplinary shift from Library and Information Sciences toward Computer Science, Engineering, Business, and Decision Sciences since 2022. These results highlight the need for adaptive, systems-aware models that address credibility challenges associated with generative AI and algorithmic curation. In response, this study introduces the CAT Test (<em>Check, Ask, Think</em>), a three-part evaluative framework designed to help learners assess AI-generated content by corroborating claims, interrogating model reasoning, and reflecting on platform influence. The findings inform instructional design and contribute to ongoing conversations about algorithmic transparency and credibility in academic librarianship.</div></div>","PeriodicalId":47762,"journal":{"name":"Journal of Academic Librarianship","volume":"51 5","pages":"Article 103113"},"PeriodicalIF":2.3000,"publicationDate":"2025-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Mnemonic evaluative frameworks in scholarly publications: A cited reference analysis across disciplines and AI-mediated contexts\",\"authors\":\"Robert Tomaszewski\",\"doi\":\"10.1016/j.acalib.2025.103113\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Mnemonic evaluative frameworks have become central to information literacy instruction for assessing information credibility. Widely used tools such as CRAAP, CARS, ACT UP, and SIFT remain underrepresented in scholarly literature and insufficiently aligned with emerging information challenges. This study uses cited reference analysis in the Scopus database to examine 16 mnemonic evaluative frameworks across 280 peer-reviewed journal articles, conference papers, and review articles. Citation patterns were analyzed by year, discipline, institutional affiliation, and source title. Findings reveal that while legacy models like CRAAP and CARS retain the most citations, newer frameworks such as SIFT and RADAR are proportionally more cited in AI-related literature. A subset of 49 AI-focused citing documents indicates a disciplinary shift from Library and Information Sciences toward Computer Science, Engineering, Business, and Decision Sciences since 2022. These results highlight the need for adaptive, systems-aware models that address credibility challenges associated with generative AI and algorithmic curation. 
In response, this study introduces the CAT Test (<em>Check, Ask, Think</em>), a three-part evaluative framework designed to help learners assess AI-generated content by corroborating claims, interrogating model reasoning, and reflecting on platform influence. The findings inform instructional design and contribute to ongoing conversations about algorithmic transparency and credibility in academic librarianship.</div></div>\",\"PeriodicalId\":47762,\"journal\":{\"name\":\"Journal of Academic Librarianship\",\"volume\":\"51 5\",\"pages\":\"Article 103113\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2025-07-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Academic Librarianship\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0099133325001090\",\"RegionNum\":3,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"INFORMATION SCIENCE & LIBRARY SCIENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Academic Librarianship","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0099133325001090","RegionNum":3,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
Mnemonic evaluative frameworks in scholarly publications: A cited reference analysis across disciplines and AI-mediated contexts
Mnemonic evaluative frameworks have become central to information literacy instruction for assessing information credibility. Widely used tools such as CRAAP, CARS, ACT UP, and SIFT remain underrepresented in scholarly literature and insufficiently aligned with emerging information challenges. This study uses cited reference analysis in the Scopus database to examine 16 mnemonic evaluative frameworks across 280 peer-reviewed journal articles, conference papers, and review articles. Citation patterns were analyzed by year, discipline, institutional affiliation, and source title. Findings reveal that while legacy models like CRAAP and CARS retain the most citations, newer frameworks such as SIFT and RADAR are proportionally more cited in AI-related literature. A subset of 49 AI-focused citing documents indicates a disciplinary shift from Library and Information Sciences toward Computer Science, Engineering, Business, and Decision Sciences since 2022. These results highlight the need for adaptive, systems-aware models that address credibility challenges associated with generative AI and algorithmic curation. In response, this study introduces the CAT Test (Check, Ask, Think), a three-part evaluative framework designed to help learners assess AI-generated content by corroborating claims, interrogating model reasoning, and reflecting on platform influence. The findings inform instructional design and contribute to ongoing conversations about algorithmic transparency and credibility in academic librarianship.
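The abstract describes a cited reference analysis in which citing documents are tallied by framework, year, discipline, and source. As a rough illustration of that kind of tabulation only (not the author's actual workflow, data, or column names, which are not given in the abstract), a minimal pandas sketch with made-up placeholder records might look like this:

```python
# Illustrative sketch: tallying citing documents per framework and tracking the
# disciplinary spread by year. All records and field names below are invented
# placeholders, not data from the study.
import pandas as pd

citing_docs = pd.DataFrame(
    [
        # framework, publication year, subject area, AI-related flag
        {"framework": "CRAAP", "year": 2019, "subject": "Library and Information Sciences", "ai_related": False},
        {"framework": "SIFT",  "year": 2023, "subject": "Computer Science",                 "ai_related": True},
        {"framework": "RADAR", "year": 2024, "subject": "Decision Sciences",                "ai_related": True},
        {"framework": "CARS",  "year": 2021, "subject": "Library and Information Sciences", "ai_related": False},
    ]
)

# Overall citation counts per framework.
overall_counts = citing_docs["framework"].value_counts()

# Counts within the AI-related subset, to compare proportional visibility.
ai_counts = citing_docs.loc[citing_docs["ai_related"], "framework"].value_counts()

# Disciplinary distribution by year, e.g. to look for a post-2022 shift
# away from Library and Information Sciences.
by_year_subject = citing_docs.groupby(["year", "subject"]).size().unstack(fill_value=0)

print(overall_counts)
print(ai_counts)
print(by_year_subject)
```

With a full export of the 280 citing documents in place of the placeholder rows, the same group-by tallies would yield the kinds of counts by framework, year, and subject area that the abstract reports.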
Journal Introduction:
The Journal of Academic Librarianship, an international and refereed journal, publishes articles that focus on problems and issues germane to college and university libraries. JAL provides a forum for authors to present research findings and, where applicable, their practical applications and significance; analyze policies, practices, issues, and trends; speculate about the future of academic librarianship; and present analytical bibliographic essays and philosophical treatises. JAL also brings to the attention of its readers information about hundreds of new and recently published books in library and information science, management, scholarly communication, and higher education. In addition, JAL covers management and discipline-based software as well as information policy developments.