Development of the “Scale for the assessment of non-experts’ AI literacy” – An exploratory factor analysis

IF 5.8 Q1 PSYCHOLOGY, EXPERIMENTAL
Matthias Carl Laupichler , Alexandra Aster , Nicolas Haverkamp , Tobias Raupach
Computers in Human Behavior Reports, Volume 12, Article 100338. Published 2023-09-27. DOI: 10.1016/j.chbr.2023.100338
Citations: 0

Abstract


Artificial Intelligence competencies will become increasingly important in the near future. Therefore, it is essential that the AI literacy of individuals can be assessed in a valid and reliable way. This study presents the development of the “Scale for the assessment of non-experts' AI literacy” (SNAIL). An existing AI literacy item set was distributed as an online questionnaire to a heterogeneous group of non-experts (i.e., individuals without a formal AI or computer science education). Based on the data collected, an exploratory factor analysis was conducted to investigate the underlying latent factor structure. The results indicated that a three-factor model had the best model fit. The individual factors reflected AI competencies in the areas of “Technical Understanding”, “Critical Appraisal”, and “Practical Application”. In addition, eight items from the original questionnaire were deleted based on high intercorrelations and low communalities to reduce the length of the questionnaire. The final SNAIL-questionnaire consists of 31 items that can be used to assess the AI literacy of individual non-experts or specific groups and is also designed to enable the evaluation of AI literacy courses’ teaching effectiveness.
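The factor-analytic procedure the abstract describes — extracting latent factors from item responses, inspecting eigenvalues, and flagging low-communality items for deletion — can be illustrated with a minimal, self-contained sketch. This is not the authors' actual analysis pipeline (which would typically involve a rotation method and dedicated EFA tooling); the synthetic data, the 0.30 communality cutoff, and all variable names below are hypothetical, chosen only to mirror the reported three-factor structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: 300 respondents answer 9 Likert-type items
# driven by 3 latent factors (3 items each), mirroring the three-factor
# structure the SNAIL study reports. All values here are made up.
n, items_per_factor, n_factors = 300, 3, 3
latent = rng.normal(size=(n, n_factors))
loadings_true = np.zeros((items_per_factor * n_factors, n_factors))
for f in range(n_factors):
    loadings_true[f * items_per_factor:(f + 1) * items_per_factor, f] = 0.8
X = latent @ loadings_true.T + 0.6 * rng.normal(size=(n, n_factors * items_per_factor))

# Eigendecompose the item correlation matrix (principal-components-style
# extraction; eigh returns ascending eigenvalues, so reverse the order).
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Unrotated loadings for the first k factors; an item's communality is
# the sum of its squared loadings across the retained factors.
k = 3
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])
communalities = (loadings ** 2).sum(axis=1)

# Items with low communality are candidates for deletion, analogous to
# the eight items dropped from the original pool (0.30 is an assumed cutoff).
low = np.where(communalities < 0.30)[0]
print("eigenvalues:", np.round(eigvals[:4], 2))
print("communalities:", np.round(communalities, 2))
print("low-communality items:", low)
```

With this synthetic structure, the first three eigenvalues exceed 1 while the fourth falls below it (the Kaiser criterion), consistent with retaining a three-factor solution; in practice one would also apply a rotation (e.g., oblimin) before interpreting the factors.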
