Mohammad Nayef Ayasrah, Mohamad Ahmad Saleem Khasawneh, Mazen Omar Almulla, Amoura Hassan Aboutaleb
{"title":"约旦中学生ai整合元认知学习弹性量表(AIIMLR)的设计与验证:来自网络分析视角的见解","authors":"Mohammad Nayef Ayasrah, Mohamad Ahmad Saleem Khasawneh, Mazen Omar Almulla, Amoura Hassan Aboutaleb","doi":"10.1111/jcal.70127","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>One area that has been dramatically changed by artificial intelligence (AI) is educational environments. Chatbots, Recommender Systems, Adaptive Learning Systems and Large Language Models have been emerging as practical tools for facilitating learning. However, using such tools appropriately is challenging. In this regard, the construct of metacognitive learning resilience has been receiving growing attention, especially in the face of uncertainties and adversities associated with AI-supported learning.</p>\n </section>\n \n <section>\n \n <h3> Objectives</h3>\n \n <p>The current research aimed to develop and evaluate the psychometric properties of the AI-Integrated Metacognitive Learning Resilience Scale (AIIMLR Scale). This scale was developed to assess students' ability to cognitively and emotionally manage learning challenges in AI-enhanced learning settings.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>This study, which had a mixed-method research design, was performed in Jordan in 2025. A pool of items, developed based on a systematic review of theoretical literature and semi-structured interviews, was used. Then, content validation and the pilot phase were used to modify items. Exploratory factor analysis (EFA), confirmatory factor analysis (CFA), exploratory graph analysis (EGA) and Random Forest Modelling (RFM) were used to assess construct validity of this scale. In addition, Cronbach's alpha (<i>α</i>) and McDonald's omega (<i>ω</i>) were used to assess reliability. Finally, the intraclass correlation coefficient (ICC) was performed in addition to evaluating test–retest reliability.</p>\n </section>\n \n <section>\n \n <h3> Results and Conclusions</h3>\n \n <p>EFA results revealed six factors: Self-Awareness and Metacognitive Regulation in AI-Mediated Learning; Cognitive Adaptability in Dynamic AI-Based Learning Contexts; Emotional Stability During AI-Integrated Learning Challenges; Strategic Perseverance in AI-Supported Problem-Solving; Motivational Resilience Amid AI-Driven Learning Difficulties; and Reflective Recalibration of Learning through AI Feedback. These six factors collectively explained 66.21% of the total variance. CFA fit indices (CFI = 0.917, RMSEA = 0.079) and reliability indicators, including Cronbach's alpha (0.897–0.948), McDonald's omega (0.892–0.950) and Composite Reliability (CR: 0.888–0.954), were all within acceptable ranges. Moreover, convergent and discriminant validity were confirmed using the Average Variance Extracted (AVE). The measurement invariance test across gender indicated that the scale maintains stable measurement properties for both males and females. 
Findings suggest that the AIIMLR Scale is a valid and reliable tool for assessing metacognitive learning resilience in AI-enhanced educational settings.</p>\n </section>\n </div>","PeriodicalId":48071,"journal":{"name":"Journal of Computer Assisted Learning","volume":"41 5","pages":""},"PeriodicalIF":4.6000,"publicationDate":"2025-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Design and Validation of the AI-Integrated Metacognitive Learning Resilience Scale (AIIMLR Scale) for Secondary School Students in Jordan: Insights From the Network Analysis Perspective\",\"authors\":\"Mohammad Nayef Ayasrah, Mohamad Ahmad Saleem Khasawneh, Mazen Omar Almulla, Amoura Hassan Aboutaleb\",\"doi\":\"10.1111/jcal.70127\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <h3> Background</h3>\\n \\n <p>One area that has been dramatically changed by artificial intelligence (AI) is educational environments. Chatbots, Recommender Systems, Adaptive Learning Systems and Large Language Models have been emerging as practical tools for facilitating learning. However, using such tools appropriately is challenging. In this regard, the construct of metacognitive learning resilience has been receiving growing attention, especially in the face of uncertainties and adversities associated with AI-supported learning.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Objectives</h3>\\n \\n <p>The current research aimed to develop and evaluate the psychometric properties of the AI-Integrated Metacognitive Learning Resilience Scale (AIIMLR Scale). This scale was developed to assess students' ability to cognitively and emotionally manage learning challenges in AI-enhanced learning settings.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Methods</h3>\\n \\n <p>This study, which had a mixed-method research design, was performed in Jordan in 2025. A pool of items, developed based on a systematic review of theoretical literature and semi-structured interviews, was used. Then, content validation and the pilot phase were used to modify items. Exploratory factor analysis (EFA), confirmatory factor analysis (CFA), exploratory graph analysis (EGA) and Random Forest Modelling (RFM) were used to assess construct validity of this scale. In addition, Cronbach's alpha (<i>α</i>) and McDonald's omega (<i>ω</i>) were used to assess reliability. Finally, the intraclass correlation coefficient (ICC) was performed in addition to evaluating test–retest reliability.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Results and Conclusions</h3>\\n \\n <p>EFA results revealed six factors: Self-Awareness and Metacognitive Regulation in AI-Mediated Learning; Cognitive Adaptability in Dynamic AI-Based Learning Contexts; Emotional Stability During AI-Integrated Learning Challenges; Strategic Perseverance in AI-Supported Problem-Solving; Motivational Resilience Amid AI-Driven Learning Difficulties; and Reflective Recalibration of Learning through AI Feedback. These six factors collectively explained 66.21% of the total variance. CFA fit indices (CFI = 0.917, RMSEA = 0.079) and reliability indicators, including Cronbach's alpha (0.897–0.948), McDonald's omega (0.892–0.950) and Composite Reliability (CR: 0.888–0.954), were all within acceptable ranges. Moreover, convergent and discriminant validity were confirmed using the Average Variance Extracted (AVE). 
The measurement invariance test across gender indicated that the scale maintains stable measurement properties for both males and females. Findings suggest that the AIIMLR Scale is a valid and reliable tool for assessing metacognitive learning resilience in AI-enhanced educational settings.</p>\\n </section>\\n </div>\",\"PeriodicalId\":48071,\"journal\":{\"name\":\"Journal of Computer Assisted Learning\",\"volume\":\"41 5\",\"pages\":\"\"},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2025-09-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Computer Assisted Learning\",\"FirstCategoryId\":\"95\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/jcal.70127\",\"RegionNum\":2,\"RegionCategory\":\"教育学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Computer Assisted Learning","FirstCategoryId":"95","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/jcal.70127","RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Design and Validation of the AI-Integrated Metacognitive Learning Resilience Scale (AIIMLR Scale) for Secondary School Students in Jordan: Insights From the Network Analysis Perspective
Background
Educational environments are among the areas that have been dramatically changed by artificial intelligence (AI). Chatbots, recommender systems, adaptive learning systems and large language models have emerged as practical tools for facilitating learning. However, using such tools appropriately is challenging. In this regard, the construct of metacognitive learning resilience has been receiving growing attention, especially in the face of the uncertainties and adversities associated with AI-supported learning.
Objectives
The current research aimed to develop the AI-Integrated Metacognitive Learning Resilience Scale (AIIMLR Scale) and evaluate its psychometric properties. The scale was designed to assess students' ability to cognitively and emotionally manage learning challenges in AI-enhanced learning settings.
Methods
This study, which used a mixed-methods research design, was performed in Jordan in 2025. An item pool was developed on the basis of a systematic review of the theoretical literature and semi-structured interviews. Items were then refined through content validation and a pilot phase. Exploratory factor analysis (EFA), confirmatory factor analysis (CFA), exploratory graph analysis (EGA) and Random Forest Modelling (RFM) were used to assess the construct validity of the scale. In addition, Cronbach's alpha (α) and McDonald's omega (ω) were used to assess reliability. Finally, the intraclass correlation coefficient (ICC) was computed to evaluate test–retest reliability.
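The abstract does not report the software used for these analyses. As an illustration only, a minimal Python sketch of this kind of validation pipeline (EFA, internal consistency and ICC) is given below; the factor_analyzer and pingouin packages, the file names and the column names (student, session, total_score) are assumptions introduced here for illustration, not details taken from the study.

# Illustrative sketch only; the study's actual toolchain is not reported.
# Assumes `aiimlr_items.csv` holds one column of Likert responses per item and
# `aiimlr_retest_long.csv` holds long-format test-retest data with hypothetical
# columns 'student', 'session' and 'total_score'.
import pandas as pd
import pingouin as pg
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("aiimlr_items.csv")

# Exploratory factor analysis: extract six factors with an oblique rotation,
# mirroring the six-factor solution reported in the abstract.
efa = FactorAnalyzer(n_factors=6, rotation="oblimin")
efa.fit(items)
loadings = pd.DataFrame(efa.loadings_, index=items.columns)
_, _, cumulative = efa.get_factor_variance()
print(f"Cumulative variance explained: {cumulative[-1]:.2%}")

# Internal consistency (Cronbach's alpha) for the items loading on the first factor.
factor1_items = loadings[loadings[0].abs() >= 0.40].index
alpha, ci = pg.cronbach_alpha(data=items[factor1_items])
print(f"Factor 1 alpha = {alpha:.3f} (95% CI {ci[0]:.3f}-{ci[1]:.3f})")

# Test-retest reliability via the intraclass correlation coefficient (ICC).
retest = pd.read_csv("aiimlr_retest_long.csv")
icc = pg.intraclass_corr(data=retest, targets="student",
                         raters="session", ratings="total_score")
print(icc[["Type", "ICC", "CI95%"]])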
Results and Conclusions
EFA results revealed six factors: Self-Awareness and Metacognitive Regulation in AI-Mediated Learning; Cognitive Adaptability in Dynamic AI-Based Learning Contexts; Emotional Stability During AI-Integrated Learning Challenges; Strategic Perseverance in AI-Supported Problem-Solving; Motivational Resilience Amid AI-Driven Learning Difficulties; and Reflective Recalibration of Learning through AI Feedback. These six factors collectively explained 66.21% of the total variance. CFA fit indices (CFI = 0.917, RMSEA = 0.079) and reliability indicators, including Cronbach's alpha (0.897–0.948), McDonald's omega (0.892–0.950) and Composite Reliability (CR: 0.888–0.954), were all within acceptable ranges. Moreover, convergent and discriminant validity were confirmed using the Average Variance Extracted (AVE). The measurement invariance test across gender indicated that the scale maintains stable measurement properties for both males and females. Findings suggest that the AIIMLR Scale is a valid and reliable tool for assessing metacognitive learning resilience in AI-enhanced educational settings.
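The abstract reports AVE-based convergent and discriminant validity and composite reliability without reproducing the underlying formulas. For reference, the conventional definitions (following Fornell and Larcker), computed from the standardized loadings \lambda_i of the k items on a factor, are:

\mathrm{AVE} = \frac{\sum_{i=1}^{k} \lambda_i^{2}}{k},
\qquad
\mathrm{CR} = \frac{\left(\sum_{i=1}^{k} \lambda_i\right)^{2}}{\left(\sum_{i=1}^{k} \lambda_i\right)^{2} + \sum_{i=1}^{k}\left(1 - \lambda_i^{2}\right)}

Convergent validity is typically inferred when AVE exceeds about 0.50, and discriminant validity when the square root of each factor's AVE exceeds that factor's correlations with the other factors; the specific cut-offs used in the study are not stated in the abstract.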
About the Journal
The Journal of Computer Assisted Learning is an international peer-reviewed journal covering the whole range of uses of information and communication technology to support learning and knowledge exchange. It aims to provide a medium for communication among researchers as well as a channel linking researchers, practitioners, and policy makers. JCAL is also a rich source of material for master's and PhD students in areas such as educational psychology, the learning sciences, instructional technology, instructional design, collaborative learning, intelligent learning systems, learning analytics, open, distance and networked learning, and educational evaluation and assessment. This is the case for formal (e.g., schools), non-formal (e.g., workplace learning) and informal learning (e.g., museums and libraries) situations and environments. Volumes often include one Special Issue, which provides readers with a broad and in-depth perspective on a specific topic.

First published in 1985, JCAL continues to aim at making the outcomes of contemporary research and experience accessible. During this period there have been major technological advances offering new opportunities and approaches in the use of a wide range of technologies to support learning and knowledge transfer more generally. There is currently much emphasis on the use of network functionality and the challenges its appropriate use poses to teachers/tutors working with students locally and at a distance.

JCAL welcomes:
- Empirical reports, single studies or programmatic series of studies on the use of computers and information technologies in learning and assessment
- Critical and original meta-reviews of the literature on the use of computers for learning
- Empirical studies on the design and development of innovative technology-based systems for learning
- Conceptual articles on issues relating to the Aims and Scope