Tao Xu, Qian Chen, Zhiwei Zhang, Bingyu Dong, Hui Zhang, Jie Bai, Yun Zhou
{"title":"通过教学视频中的类人行为和动态视觉线索最大化人工智能生成教师的有效性:来自眼动追踪研究的证据","authors":"Tao Xu , Qian Chen , Zhiwei Zhang , Bingyu Dong , Hui Zhang , Jie Bai , Yun Zhou","doi":"10.1016/j.iheduc.2025.101034","DOIUrl":null,"url":null,"abstract":"<div><div>The rapid advancement of Artificial Intelligence Generated Content (AIGC) has revolutionized the production of instructional videos and lectures in MOOCs, opening a new chapter in their use for education. This technology enables the generation of talking instructors from text and images or clips of real people. However, the impact of behavioral fidelity and visual cues provided by these generated instructors on learning outcomes in instructional videos remains unknown. This study addresses this gap by employing a 2 × 2 within-subjects design (behavioral fidelity: high/low; visual cues: with/without) with 36 college student participants. We assessed learning performance, social presence, satisfaction, cognitive load, humanness, and motivation, and analyzed eye movements. Results revealed that high behavioral fidelity increased social presence and more effectively engages students, leading to better transfer performance. Visual cues were found to facilitate the organization and integration of information. The generated instructors were most effective when resembling human behavior and using dynamic visual cues. However, further refinement is needed for the voice and gestures of these instructors. The findings extend the social agency theory related to animated and human instructors to the context of AI-generated instructors. We recommend using instructors with high behavioral fidelity in instructional videos, featuring realistic beat gestures, body movements, facial expressions, and eye contact. Our study offers valuable insights into the implications of using AIGC in educational settings.</div></div>","PeriodicalId":48186,"journal":{"name":"Internet and Higher Education","volume":"67 ","pages":"Article 101034"},"PeriodicalIF":6.4000,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Maximizing effectiveness of AI-generated instructors through human-like behavior and dynamic visual cues in instructional videos: Evidence from an eye-tracking study\",\"authors\":\"Tao Xu , Qian Chen , Zhiwei Zhang , Bingyu Dong , Hui Zhang , Jie Bai , Yun Zhou\",\"doi\":\"10.1016/j.iheduc.2025.101034\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The rapid advancement of Artificial Intelligence Generated Content (AIGC) has revolutionized the production of instructional videos and lectures in MOOCs, opening a new chapter in their use for education. This technology enables the generation of talking instructors from text and images or clips of real people. However, the impact of behavioral fidelity and visual cues provided by these generated instructors on learning outcomes in instructional videos remains unknown. This study addresses this gap by employing a 2 × 2 within-subjects design (behavioral fidelity: high/low; visual cues: with/without) with 36 college student participants. We assessed learning performance, social presence, satisfaction, cognitive load, humanness, and motivation, and analyzed eye movements. Results revealed that high behavioral fidelity increased social presence and more effectively engages students, leading to better transfer performance. Visual cues were found to facilitate the organization and integration of information. 
The generated instructors were most effective when resembling human behavior and using dynamic visual cues. However, further refinement is needed for the voice and gestures of these instructors. The findings extend the social agency theory related to animated and human instructors to the context of AI-generated instructors. We recommend using instructors with high behavioral fidelity in instructional videos, featuring realistic beat gestures, body movements, facial expressions, and eye contact. Our study offers valuable insights into the implications of using AIGC in educational settings.</div></div>\",\"PeriodicalId\":48186,\"journal\":{\"name\":\"Internet and Higher Education\",\"volume\":\"67 \",\"pages\":\"Article 101034\"},\"PeriodicalIF\":6.4000,\"publicationDate\":\"2025-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Internet and Higher Education\",\"FirstCategoryId\":\"95\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1096751625000430\",\"RegionNum\":1,\"RegionCategory\":\"教育学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Internet and Higher Education","FirstCategoryId":"95","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1096751625000430","RegionNum":1,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Maximizing effectiveness of AI-generated instructors through human-like behavior and dynamic visual cues in instructional videos: Evidence from an eye-tracking study
The rapid advancement of Artificial Intelligence Generated Content (AIGC) has revolutionized the production of instructional videos and lectures in MOOCs, opening a new chapter in their use for education. This technology enables the generation of talking instructors from text and images or clips of real people. However, the impact of the behavioral fidelity and visual cues provided by these generated instructors on learning outcomes in instructional videos remains unknown. This study addresses this gap by employing a 2 × 2 within-subjects design (behavioral fidelity: high/low; visual cues: with/without) with 36 college student participants. We assessed learning performance, social presence, satisfaction, cognitive load, humanness, and motivation, and analyzed eye movements. Results revealed that high behavioral fidelity increased social presence and more effectively engaged students, leading to better transfer performance. Visual cues were found to facilitate the organization and integration of information. The generated instructors were most effective when resembling human behavior and using dynamic visual cues. However, further refinement is needed for the voice and gestures of these instructors. The findings extend the social agency theory related to animated and human instructors to the context of AI-generated instructors. We recommend using instructors with high behavioral fidelity in instructional videos, featuring realistic beat gestures, body movements, facial expressions, and eye contact. Our study offers valuable insights into the implications of using AIGC in educational settings.
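The abstract describes a 2 × 2 within-subjects design but does not include the authors' analysis code. As a rough illustration only, the sketch below shows how such a design could be tested with a repeated-measures ANOVA in Python using statsmodels; the data are simulated and the column names (participant, fidelity, cues, transfer_score) are hypothetical, not taken from the study.

```python
# Illustrative sketch of a 2 x 2 repeated-measures ANOVA for a
# within-subjects design (behavioral fidelity x visual cues).
# Simulated data; not the authors' analysis.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for p in (f"p{i:02d}" for i in range(1, 37)):          # 36 participants
    for fidelity in ("high", "low"):                    # factor 1
        for cues in ("with", "without"):                # factor 2
            rows.append({
                "participant": p,
                "fidelity": fidelity,
                "cues": cues,
                # Simulated transfer score; real scores would come from tests.
                "transfer_score": rng.normal(loc=70, scale=10),
            })
df = pd.DataFrame(rows)

# Two-way repeated-measures ANOVA on the dependent variable.
result = AnovaRM(df, depvar="transfer_score", subject="participant",
                 within=["fidelity", "cues"]).fit()
print(result.summary())
```

With real data, the main effects of fidelity and cues and their interaction would be read from the resulting F-table; eye-movement measures could be analyzed in the same way by swapping the dependent variable.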
Journal introduction:
The Internet and Higher Education is a quarterly peer-reviewed journal focused on contemporary issues and future trends in online learning, teaching, and administration within post-secondary education. It welcomes contributions from diverse academic disciplines worldwide and provides a platform for theory papers, research studies, critical essays, editorials, reviews, case studies, and social commentary.