{"title":"AILIS 1.0: A new framework to measure AI literacy in library and information science (LIS)","authors":"Michela Montesi , Belén Álvarez Bornstein , Núria Bautista Puig , Manuel Blázquez Ochando , Alicia Sánchez Díez","doi":"10.1016/j.acalib.2025.103118","DOIUrl":null,"url":null,"abstract":"<div><div>As artificial intelligence (AI) becomes more embedded in academic and professional settings, assessing and improving AI literacy among current and future information professionals is increasingly important. However, research in this area within Library and Information Science (LIS) remains exploratory, and more evidence is needed to guide training and curriculum design. This study assesses AI literacy among LIS students and librarians, highlighting key areas and groups for targeted training.</div><div>To this end, the AILIS 1.0 questionnaire was developed from existing AI literacy tools in higher education and adapted to the LIS context with expert input. It was administered to 163 respondents at the Complutense University of Madrid (Spain). Descriptive statistics and non-parametric tests were used to examine gender and group differences. To further validate the findings, three focus groups with LIS undergraduates were conducted.</div><div>Functioning, Ethics, and Evaluation emerged as core dimensions of AI literacy. Functioning scores correlated strongly with all other dimensions except self-assessed Usage. Overall, library professionals outperformed students, particularly in Ethics and Usage. However, students, especially first-years, reported higher self-efficacy despite lower performance, indicating a tendency to overestimate their AI literacy, as confirmed by focus groups.</div><div>The research underscores the need for educational strategies in AI literacy and greater involvement of educators and professionals. The higher AI literacy shown by librarians should encourage professionals to take more active roles in AI literacy training. 
Finally, results highlight the potential of AILIS 1.0 as a diagnostic tool, but also as a framework to evaluate AI literacy within LIS.</div></div>","PeriodicalId":47762,"journal":{"name":"Journal of Academic Librarianship","volume":"51 5","pages":"Article 103118"},"PeriodicalIF":2.3000,"publicationDate":"2025-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Academic Librarianship","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0099133325001144","RegionNum":3,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
Citations: 0
Abstract
As artificial intelligence (AI) becomes more embedded in academic and professional settings, assessing and improving AI literacy among current and future information professionals is increasingly important. However, research in this area within Library and Information Science (LIS) remains exploratory, and more evidence is needed to guide training and curriculum design. This study assesses AI literacy among LIS students and librarians, highlighting key areas and groups for targeted training.
To this end, the AILIS 1.0 questionnaire was developed from existing AI literacy tools in higher education and adapted to the LIS context with expert input. It was administered to 163 respondents at the Complutense University of Madrid (Spain). Descriptive statistics and non-parametric tests were used to examine gender and group differences. To further validate the findings, three focus groups with LIS undergraduates were conducted.
Functioning, Ethics, and Evaluation emerged as core dimensions of AI literacy. Functioning scores correlated strongly with all other dimensions except self-assessed Usage. Overall, library professionals outperformed students, particularly in Ethics and Usage. However, students, especially first-years, reported higher self-efficacy despite lower performance, indicating a tendency to overestimate their AI literacy, as confirmed by focus groups.
The research underscores the need for educational strategies in AI literacy and greater involvement of educators and professionals. The higher AI literacy shown by librarians should encourage professionals to take more active roles in AI literacy training. Finally, results highlight the potential of AILIS 1.0 as a diagnostic tool, but also as a framework to evaluate AI literacy within LIS.
About the journal:
The Journal of Academic Librarianship, an international and refereed journal, publishes articles that focus on problems and issues germane to college and university libraries. JAL provides a forum for authors to present research findings and, where applicable, their practical applications and significance; analyze policies, practices, issues, and trends; speculate about the future of academic librarianship; and present analytical bibliographic essays and philosophical treatises. JAL also brings to the attention of its readers information about hundreds of new and recently published books in library and information science, management, scholarly communication, and higher education. In addition, JAL covers management and discipline-based software and information policy developments.