MAILS - Meta AI literacy scale: Development and testing of an AI literacy questionnaire based on well-founded competency models and psychological change- and meta-competencies
Astrid Carolus, Martin J. Koch, Samantha Straka, Marc Erich Latoschik, Carolin Wienrich
{"title":"MAILS - Meta AI literacy scale: Development and testing of an AI literacy questionnaire based on well-founded competency models and psychological change- and meta-competencies","authors":"Astrid Carolus , Martin J. Koch , Samantha Straka , Marc Erich Latoschik , Carolin Wienrich","doi":"10.1016/j.chbah.2023.100014","DOIUrl":null,"url":null,"abstract":"<div><p>Valid measurement of AI literacy is important for the selection of personnel, identification of shortages in skill and knowledge, and evaluation of AI literacy interventions. A questionnaire is missing that is deeply grounded in the existing literature on AI literacy, is modularly applicable depending on the goals, and includes further psychological competencies in addition to the typical facets of AIL. This paper presents the development and validation of a questionnaire considering the desiderata described above. We derived items to represent different facets of AI literacy and psychological competencies, such as problem-solving, learning, and emotion regulation in regard to AI. We collected data from 300 German-speaking adults to confirm the factorial structure. The result is the Meta AI Literacy Scale (MAILS) for AI literacy with the facets Use & apply AI, Understand AI, Detect AI, and AI Ethics and the ability to Create AI as a separate construct, and AI Self-efficacy in learning and problem-solving and AI Self-management (i.e., AI persuasion literacy and emotion regulation). This study contributes to the research on AI literacy by providing a measurement instrument relying on profound competency models. Psychological competencies are included particularly important in the context of pervasive change through AI systems.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"1 2","pages":"Article 100014"},"PeriodicalIF":0.0000,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882123000142","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Valid measurement of AI literacy is important for personnel selection, the identification of skill and knowledge gaps, and the evaluation of AI literacy interventions. However, no existing questionnaire is deeply grounded in the literature on AI literacy (AIL), can be applied modularly depending on the assessment goal, and covers further psychological competencies in addition to the typical facets of AIL. This paper presents the development and validation of a questionnaire that meets these desiderata. We derived items representing different facets of AI literacy as well as psychological competencies related to AI, such as problem-solving, learning, and emotion regulation. We collected data from 300 German-speaking adults to confirm the factorial structure. The result is the Meta AI Literacy Scale (MAILS), which measures AI literacy with the facets Use & apply AI, Understand AI, Detect AI, and AI Ethics; the ability to Create AI as a separate construct; AI Self-efficacy in learning and problem-solving; and AI Self-management (i.e., AI persuasion literacy and emotion regulation). This study contributes to research on AI literacy by providing a measurement instrument grounded in well-founded competency models. The inclusion of psychological competencies is particularly important in the context of the pervasive change driven by AI systems.
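To illustrate what confirming the factorial structure of a multi-facet scale like MAILS can look like in practice, the sketch below specifies a simplified confirmatory factor analysis in Python with the semopy package. The factor names follow the facets listed above, but the item identifiers (use1 … man3), the three-items-per-facet layout, and the data file name are hypothetical placeholders, not the published MAILS items or the authors' actual analysis.

```python
# Minimal CFA sketch for a multi-facet AI literacy scale, assuming the
# semopy package and a CSV of item responses. Item names (use1 ... man3)
# and "mails_responses.csv" are hypothetical, not the real MAILS items.
import pandas as pd
import semopy

# lavaan-style model description: each latent facet is measured by
# three placeholder indicator items.
MODEL_DESC = """
UseApply      =~ use1 + use2 + use3
Understand    =~ und1 + und2 + und3
Detect        =~ det1 + det2 + det3
Ethics        =~ eth1 + eth2 + eth3
CreateAI      =~ cre1 + cre2 + cre3
SelfEfficacy  =~ eff1 + eff2 + eff3
SelfManage    =~ man1 + man2 + man3
"""

def run_cfa(csv_path: str) -> pd.DataFrame:
    """Fit the CFA model and return common fit indices (CFI, RMSEA, ...)."""
    data = pd.read_csv(csv_path)
    model = semopy.Model(MODEL_DESC)
    model.fit(data)                  # maximum-likelihood estimation by default
    return semopy.calc_stats(model)  # one-row DataFrame of fit statistics

if __name__ == "__main__":
    stats = run_cfa("mails_responses.csv")
    print(stats[["DoF", "chi2", "CFI", "RMSEA"]])
```

In a real validation study, the fit indices returned here (e.g., CFI and RMSEA) would be compared against conventional cutoffs to judge whether the hypothesized facet structure is consistent with the observed responses.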