Evaluation of teachers' explaining skills: An approach based on artificial intelligence and AI causal analysis

Science Talks · Pub Date: 2025-12-01 · Epub Date: 2025-09-19 · DOI: 10.1016/j.sctalk.2025.100491
Nafi'atun Amaliya, Harjono, Sri Susilogati Sumarti, Ella Kusumastuti, Dimas Gilang Ramadhani
Science Talks, Volume 16, Article 100491. Available at: https://www.sciencedirect.com/science/article/pii/S2772569325000738
Citations: 0

Abstract

This study develops an innovative assessment system based on Artificial Intelligence (AI) and causal inference to objectively evaluate teachers' explaining skills. Traditional assessments often suffer from subjectivity and inconsistency, limiting their effectiveness in capturing the complex and dynamic nature of teaching skills. To overcome these limitations, this research integrates a black-box AI model (Google Gemini 1.5 Flash LLM) with a glass-box approach (multivariate linear regression) to provide transparent and interpretable assessments. Using 200 video transcripts from various teaching contexts, the results indicate a high correlation (r = 0.91) between the AI-generated scores and regression model predictions, confirming the validity and reliability of the approach. Causal analysis through Directed Acyclic Graphs (DAG) and Propensity Score Matching (PSM) identifies critical teaching indicators, such as "Adaptation to Student Understanding" (ATE = 0.41) and "Wait Time" (ATE = 0.48), which significantly impact teaching effectiveness. Counterfactual simulations further reveal potential score improvements of up to 0.6 points when these key areas are enhanced. The proposed method provides systematic, transparent, and actionable insights, contributing significantly to improving educational quality through precise and evidence-based teacher evaluations. This research also aligns with the Sustainable Development Goals (SDGs), particularly SDG 4 (Quality Education), SDG 9 (Industry, Innovation, and Infrastructure), and SDG 10 (Reduced Inequalities), by promoting equitable, innovative, and high-quality education practices.
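The validation step the abstract describes, comparing LLM-generated scores against a glass-box multivariate linear regression on the same indicators, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the indicator set, weights, and synthetic data are invented for the example.

```python
# Sketch of the black-box vs. glass-box validation: fit an ordinary
# least-squares regression on per-transcript indicator ratings and
# report Pearson's r against the (here simulated) AI scores.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical indicator ratings (e.g. clarity, examples, wait time),
# one row per teaching-video transcript, on a 1-5 scale.
n_transcripts, n_indicators = 200, 5
X = rng.uniform(1, 5, size=(n_transcripts, n_indicators))

# Stand-in for the LLM rater: a weighted sum of indicators plus noise.
true_weights = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
ai_scores = X @ true_weights + rng.normal(0, 0.15, n_transcripts)

# Glass-box model: multivariate linear regression via least squares.
A = np.column_stack([np.ones(n_transcripts), X])  # add intercept column
coef, *_ = np.linalg.lstsq(A, ai_scores, rcond=None)
predicted = A @ coef

# Agreement between the two scorers, reported as Pearson's r.
r = np.corrcoef(ai_scores, predicted)[0, 1]
print(f"Pearson r between AI scores and regression predictions: {r:.2f}")
```

With real data, a high r (the paper reports 0.91) indicates the opaque AI scores are largely explained by the interpretable indicator weights, which is what makes the regression a useful "glass box" for the LLM.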
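The causal step, estimating the average treatment effect (ATE) of an indicator such as "Wait Time" via propensity score matching, can likewise be sketched. Everything below is hypothetical: the confounders, the data-generating process, and the true effect size (+0.5) are invented, and the matching estimator shown (treated units matched to nearest-propensity controls) is one common PSM variant, not necessarily the authors' exact procedure.

```python
# Sketch of propensity score matching: estimate the effect of a binary
# teaching practice ("adequate wait time") on an explaining-skill score,
# adjusting for confounders that also drive who uses the practice.
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical confounders (e.g. lesson difficulty, class size, rapport).
Z = rng.normal(size=(n, 3))

# Treatment assignment depends on the confounders, so a naive
# treated-vs-control mean difference would be biased.
p_treat = 1 / (1 + np.exp(-(Z @ np.array([0.8, -0.5, 0.3]))))
T = rng.random(n) < p_treat

# Outcome: skill score with a true treatment effect of +0.5.
Y = 3.0 + 0.4 * Z[:, 0] + 0.2 * Z[:, 1] + 0.5 * T + rng.normal(0, 0.2, n)

# 1. Estimate propensity scores with a small logistic regression,
#    fitted by gradient ascent on the log-likelihood.
w = np.zeros(Z.shape[1])
for _ in range(500):
    p = 1 / (1 + np.exp(-(Z @ w)))
    w += 0.1 * Z.T @ (T - p) / n
prop = 1 / (1 + np.exp(-(Z @ w)))

# 2. Match each treated unit to the control unit with the closest
#    estimated propensity score.
treated, control = np.where(T)[0], np.where(~T)[0]
matches = control[
    np.argmin(np.abs(prop[treated, None] - prop[None, control]), axis=1)
]

# 3. Effect estimate: mean outcome difference across matched pairs
#    (strictly the effect on the treated, a standard PSM estimate).
ate = np.mean(Y[treated] - Y[matches])
print(f"Estimated effect of wait time: {ate:.2f}")
```

Because matching balances the confounders between groups, the estimate recovers (approximately) the true +0.5 effect, whereas the raw treated-minus-control difference would absorb the confounding through Z.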