{"title":"从解释到行动:学生成绩反馈的零起点、理论驱动的 LLM 框架","authors":"Vinitra Swamy, Davide Romano, Bhargav Srinivasa Desikan, Oana-Maria Camburu, Tanja Käser","doi":"arxiv-2409.08027","DOIUrl":null,"url":null,"abstract":"Recent advances in eXplainable AI (XAI) for education have highlighted a\ncritical challenge: ensuring that explanations for state-of-the-art AI models\nare understandable for non-technical users such as educators and students. In\nresponse, we introduce iLLuMinaTE, a zero-shot, chain-of-prompts LLM-XAI\npipeline inspired by Miller's cognitive model of explanation. iLLuMinaTE is\ndesigned to deliver theory-driven, actionable feedback to students in online\ncourses. iLLuMinaTE navigates three main stages - causal connection,\nexplanation selection, and explanation presentation - with variations drawing\nfrom eight social science theories (e.g. Abnormal Conditions, Pearl's Model of\nExplanation, Necessity and Robustness Selection, Contrastive Explanation). We\nextensively evaluate 21,915 natural language explanations of iLLuMinaTE\nextracted from three LLMs (GPT-4o, Gemma2-9B, Llama3-70B), with three different\nunderlying XAI methods (LIME, Counterfactuals, MC-LIME), across students from\nthree diverse online courses. Our evaluation involves analyses of explanation\nalignment to the social science theory, understandability of the explanation,\nand a real-world user preference study with 114 university students containing\na novel actionability simulation. We find that students prefer iLLuMinaTE\nexplanations over traditional explainers 89.52% of the time. Our work provides\na robust, ready-to-use framework for effectively communicating hybrid\nXAI-driven insights in education, with significant generalization potential for\nother human-centric fields.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"From Explanations to Action: A Zero-Shot, Theory-Driven LLM Framework for Student Performance Feedback\",\"authors\":\"Vinitra Swamy, Davide Romano, Bhargav Srinivasa Desikan, Oana-Maria Camburu, Tanja Käser\",\"doi\":\"arxiv-2409.08027\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent advances in eXplainable AI (XAI) for education have highlighted a\\ncritical challenge: ensuring that explanations for state-of-the-art AI models\\nare understandable for non-technical users such as educators and students. In\\nresponse, we introduce iLLuMinaTE, a zero-shot, chain-of-prompts LLM-XAI\\npipeline inspired by Miller's cognitive model of explanation. iLLuMinaTE is\\ndesigned to deliver theory-driven, actionable feedback to students in online\\ncourses. iLLuMinaTE navigates three main stages - causal connection,\\nexplanation selection, and explanation presentation - with variations drawing\\nfrom eight social science theories (e.g. Abnormal Conditions, Pearl's Model of\\nExplanation, Necessity and Robustness Selection, Contrastive Explanation). We\\nextensively evaluate 21,915 natural language explanations of iLLuMinaTE\\nextracted from three LLMs (GPT-4o, Gemma2-9B, Llama3-70B), with three different\\nunderlying XAI methods (LIME, Counterfactuals, MC-LIME), across students from\\nthree diverse online courses. 
Our evaluation involves analyses of explanation\\nalignment to the social science theory, understandability of the explanation,\\nand a real-world user preference study with 114 university students containing\\na novel actionability simulation. We find that students prefer iLLuMinaTE\\nexplanations over traditional explainers 89.52% of the time. Our work provides\\na robust, ready-to-use framework for effectively communicating hybrid\\nXAI-driven insights in education, with significant generalization potential for\\nother human-centric fields.\",\"PeriodicalId\":501541,\"journal\":{\"name\":\"arXiv - CS - Human-Computer Interaction\",\"volume\":\"1 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Human-Computer Interaction\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.08027\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Human-Computer Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08027","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
From Explanations to Action: A Zero-Shot, Theory-Driven LLM Framework for Student Performance Feedback
Recent advances in eXplainable AI (XAI) for education have highlighted a
critical challenge: ensuring that explanations for state-of-the-art AI models
are understandable to non-technical users such as educators and students. In
response, we introduce iLLuMinaTE, a zero-shot, chain-of-prompts LLM-XAI
pipeline inspired by Miller's cognitive model of explanation. iLLuMinaTE is
designed to deliver theory-driven, actionable feedback to students in online
courses. iLLuMinaTE navigates three main stages - causal connection,
explanation selection, and explanation presentation - with variations drawing
from eight social science theories (e.g. Abnormal Conditions, Pearl's Model of
Explanation, Necessity and Robustness Selection, Contrastive Explanation). We
extensively evaluate 21,915 natural language explanations generated by
iLLuMinaTE using three LLMs (GPT-4o, Gemma2-9B, Llama3-70B) and three different
underlying XAI methods (LIME, Counterfactuals, MC-LIME), across students from
three diverse online courses. Our evaluation involves analyses of explanation
alignment with the underlying social science theory and of explanation
understandability, as well as a real-world user preference study with 114
university students that includes a novel actionability simulation. We find that students prefer iLLuMinaTE
explanations over traditional explainers 89.52% of the time. Our work provides
a robust, ready-to-use framework for effectively communicating hybrid
XAI-driven insights in education, with significant generalization potential for
other human-centric fields.
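
A minimal Python sketch of how the three-stage, chain-of-prompts design described in the abstract might be wired up. The prompt wording, the StudentContext fields, and the call_llm helper are illustrative assumptions, not the paper's actual prompts or implementation.

# Hypothetical sketch of a three-stage, chain-of-prompts pipeline in the spirit of
# iLLuMinaTE (causal connection -> explanation selection -> explanation presentation).
from dataclasses import dataclass

@dataclass
class StudentContext:
    course: str       # name of the online course
    xai_output: str   # serialized XAI result for this student (LIME, counterfactuals, ...)
    theory: str       # chosen social science theory, e.g. "Contrastive Explanation"

def call_llm(prompt: str) -> str:
    """Placeholder for a zero-shot call to an LLM such as GPT-4o, Gemma2-9B, or Llama3-70B."""
    raise NotImplementedError("Connect this to an LLM provider.")

def generate_feedback(ctx: StudentContext) -> str:
    # Stage 1: causal connection - link the XAI evidence to plausible causes of performance.
    causes = call_llm(
        f"Course: {ctx.course}\nXAI output: {ctx.xai_output}\n"
        "List the learning behaviours most plausibly driving this predicted performance."
    )
    # Stage 2: explanation selection - keep only the causes the chosen theory would select.
    selected = call_llm(
        f"Candidate causes:\n{causes}\n"
        f"Select which causes to explain, following the selection criteria of {ctx.theory}."
    )
    # Stage 3: explanation presentation - turn the selection into actionable student feedback.
    return call_llm(
        f"Selected causes:\n{selected}\n"
        "Write concise, actionable feedback addressed directly to the student."
    )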