{"title":"让科学、技术、工程和数学教师了解人工智能:使用可解释的人工智能解读课堂话语分析","authors":"Deliang Wang;Gaowei Chen","doi":"10.1109/TE.2024.3421606","DOIUrl":null,"url":null,"abstract":"Contributions: To address the interpretability issues in artificial intelligence (AI)-powered classroom discourse models, we employ explainable AI methods to unpack classroom discourse analysis from deep learning-based models and evaluate the effects of model explanations on STEM teachers. Background: Deep learning techniques have been used to automatically analyze classroom dialogue to provide feedback for teachers. However, these complex models operate as black boxes, lacking clear explanations of the analysis, which may lead teachers, particularly those lacking AI knowledge, to distrust the models and hinder their teaching practice. Therefore, it is crucial to address the interpretability issue in AI-powered classroom discourse models. Research Questions: How to explain deep learning-based classroom discourse models using explainable AI methods? What is the effect of these explanations on teachers’ trust in and technology acceptance of the models? How do teachers perceive the explanations of deep learning-based classroom discourse models? Method: Two explainable AI methods were employed to interpret deep learning-based models that analyzed teacher and student talk moves. A pilot study was conducted, involving seven STEM teachers interested in learning talk moves and receiving classroom discourse analysis. The study assessed changes in teachers’ trust and technology acceptance before and after receiving model explanations. Teachers’ perceptions of the model explanations were investigated. Findings: The AI-powered classroom discourse models were effectively explained using explainable AI methods. The model explanations enhanced teachers’ trust and technology acceptance of the classroom discourse models. The seven STEM teachers expressed satisfaction with the explanations and provided their perception of model explanations.","PeriodicalId":55011,"journal":{"name":"IEEE Transactions on Education","volume":"67 6","pages":"907-918"},"PeriodicalIF":2.1000,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Making AI Accessible for STEM Teachers: Using Explainable AI for Unpacking Classroom Discourse Analysis\",\"authors\":\"Deliang Wang;Gaowei Chen\",\"doi\":\"10.1109/TE.2024.3421606\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Contributions: To address the interpretability issues in artificial intelligence (AI)-powered classroom discourse models, we employ explainable AI methods to unpack classroom discourse analysis from deep learning-based models and evaluate the effects of model explanations on STEM teachers. Background: Deep learning techniques have been used to automatically analyze classroom dialogue to provide feedback for teachers. However, these complex models operate as black boxes, lacking clear explanations of the analysis, which may lead teachers, particularly those lacking AI knowledge, to distrust the models and hinder their teaching practice. Therefore, it is crucial to address the interpretability issue in AI-powered classroom discourse models. Research Questions: How to explain deep learning-based classroom discourse models using explainable AI methods? What is the effect of these explanations on teachers’ trust in and technology acceptance of the models? 
How do teachers perceive the explanations of deep learning-based classroom discourse models? Method: Two explainable AI methods were employed to interpret deep learning-based models that analyzed teacher and student talk moves. A pilot study was conducted, involving seven STEM teachers interested in learning talk moves and receiving classroom discourse analysis. The study assessed changes in teachers’ trust and technology acceptance before and after receiving model explanations. Teachers’ perceptions of the model explanations were investigated. Findings: The AI-powered classroom discourse models were effectively explained using explainable AI methods. The model explanations enhanced teachers’ trust and technology acceptance of the classroom discourse models. The seven STEM teachers expressed satisfaction with the explanations and provided their perception of model explanations.\",\"PeriodicalId\":55011,\"journal\":{\"name\":\"IEEE Transactions on Education\",\"volume\":\"67 6\",\"pages\":\"907-918\"},\"PeriodicalIF\":2.1000,\"publicationDate\":\"2024-07-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Education\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10605115/\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"EDUCATION, SCIENTIFIC DISCIPLINES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Education","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10605115/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}
Making AI Accessible for STEM Teachers: Using Explainable AI for Unpacking Classroom Discourse Analysis
Contributions: To address the interpretability issues in artificial intelligence (AI)-powered classroom discourse models, we employ explainable AI methods to unpack the classroom discourse analysis produced by deep learning-based models and evaluate the effects of model explanations on STEM teachers. Background: Deep learning techniques have been used to analyze classroom dialogue automatically and provide feedback for teachers. However, these complex models operate as black boxes, offering no clear explanation of their analyses, which may lead teachers, particularly those without AI expertise, to distrust the models and hinder their use in teaching practice. It is therefore crucial to address the interpretability issue in AI-powered classroom discourse models. Research Questions: How can deep learning-based classroom discourse models be explained using explainable AI methods? What effect do these explanations have on teachers' trust in, and technology acceptance of, the models? How do teachers perceive the explanations of deep learning-based classroom discourse models? Method: Two explainable AI methods were employed to interpret deep learning-based models that analyze teacher and student talk moves. A pilot study was conducted with seven STEM teachers interested in learning talk moves and receiving classroom discourse analysis. The study assessed changes in teachers' trust and technology acceptance before and after they received model explanations, and investigated their perceptions of those explanations. Findings: The AI-powered classroom discourse models were effectively explained using explainable AI methods. The model explanations enhanced teachers' trust in and technology acceptance of the classroom discourse models, and the seven STEM teachers expressed satisfaction with the explanations and shared their perceptions of them.
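The abstract does not name the two explainable AI methods or the underlying model architecture. As a hedged illustration of the general approach (post-hoc, local explanation of a talk-move classifier's predictions), the sketch below applies LIME, a widely used model-agnostic text explainer, to a stand-in classifier; the scikit-learn pipeline, the toy utterances, and the talk-move labels are illustrative assumptions, not the authors' actual setup.

```python
# A minimal sketch of post-hoc explanation for a talk-move classifier.
# LIME, the toy utterances, and the two talk-move labels below are
# illustrative assumptions; they are not the methods or data from the paper.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: utterances tagged with two example talk moves.
utterances = [
    "Can you say more about why you chose that method?",
    "Do you agree or disagree with that reasoning, and why?",
    "So you are saying the current increases with the voltage?",
    "Who can restate what Maya just said in their own words?",
]
labels = [0, 0, 1, 1]  # 0 = "press for reasoning", 1 = "revoice/restate"

# Stand-in for the deep learning model: any classifier exposing predict_proba.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, labels)

# LIME perturbs the utterance, queries the model on the perturbations, and
# fits a local surrogate that attributes the prediction to individual words.
explainer = LimeTextExplainer(class_names=["press", "revoice"])
explanation = explainer.explain_instance(
    "Can you explain why you think the current increases?",
    model.predict_proba,
    num_features=5,
)
print(explanation.as_list())  # [(word, weight), ...] for the explained class
```

In a teacher-facing tool, the word-weight pairs would typically be rendered as highlights over the transcript, so that a teacher can see which parts of an utterance drove the model's talk-move label.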
Journal Introduction:
The IEEE Transactions on Education (ToE) publishes significant and original scholarly contributions to education in electrical and electronics engineering, computer engineering, computer science, and other fields within the scope of interest of IEEE. Contributions must address discovery, integration, and/or application of knowledge in education in these fields. Articles must support contributions and assertions with compelling evidence and provide explicit, transparent descriptions of the processes through which the evidence is collected, analyzed, and interpreted. While the characteristics of compelling evidence cannot be described to address every conceivable situation, generally, assessment of the work being reported must go beyond student self-report and attitudinal data.