Seul Chan Lee, Tiju Baby, Rattawut Vongvit, Jieun Lee, Young Woo Kim, Min Chul Cha, Sol Hee Yoon
Technology in Society, Volume 84, Article 103059. DOI: 10.1016/j.techsoc.2025.103059. Published: 2025-09-17. Available at https://www.sciencedirect.com/science/article/pii/S0160791X25002490.
Development and validation of Generative AI Competence Scale (GenAIComp) among university students
The rapid development of Generative Artificial Intelligence (Generative AI) across several sectors underscores the need for a systematic tool to evaluate AI competence. Current digital literacy frameworks lack AI-specific competencies, resulting in inconsistencies in the assessment of AI competence. This study aims to establish a standardized assessment framework for Generative AI competence by identifying key skill factors and empirically validating a structured evaluation tool called the Generative AI Competence Scale (GenAIComp). The proposed GenAIComp comprises five essential factors: Information and Data Literacy, Communication and Collaboration, Digital Content Creation, Safety and Ethics, and Problem-Solving. A quantitative approach was employed, incorporating expert validation, pilot testing, and extensive empirical evaluation involving 1000 participants, principally university students. The factor analysis confirmed a robust five-factor structure with strong psychometric properties. The final model demonstrated excellent fit indices, confirming its reliability and validity in assessing Generative AI competence across the five key factors. The results demonstrate that educational background considerably affects AI competence, with individuals from technical disciplines showing greater aptitude for problem-solving and content generation. Gender-based disparities were noted, with males achieving marginally higher scores on several factors, though with minimal effect sizes. Correlation analysis indicated that perceived AI expertise and frequency of AI use significantly influenced competence, especially in data literacy and problem-solving, but showed weaker correlations with ethical awareness. GenAIComp provides a reliable tool for assessing AI competence, helping educators, industry experts, and policymakers design AI training programs and integrate AI literacy into curricula, thereby supporting the advancement of AI technology in society.
Future research should explore its applicability across cultures and include performance-based assessments to enhance the evaluation of AI competence.
Journal Introduction:
Technology in Society is a global journal dedicated to fostering discourse at the crossroads of technological change and the social, economic, business, and philosophical transformation of our world. The journal aims to provide scholarly contributions that empower decision-makers to thoughtfully and intentionally navigate the decisions shaping this dynamic landscape. A common thread across these fields is the role of technology in society, influencing economic, political, and cultural dynamics. Scholarly work in Technology in Society delves into the social forces shaping technological decisions and the societal choices regarding technology use. This encompasses scholarly and theoretical approaches (history and philosophy of science and technology, technology forecasting, economic growth, and policy, ethics), applied approaches (business innovation, technology management, legal and engineering), and developmental perspectives (technology transfer, technology assessment, and economic development). Detailed information about the journal's aims and scope on specific topics can be found in Technology in Society Briefings, accessible via our Special Issues and Article Collections.