{"title":"Causal-mechanical explanations in biology: Applying automated assessment for personalized learning in the science classroom","authors":"Moriah Ariely, Tanya Nazaretsky, Giora Alexandron","doi":"10.1002/tea.21929","DOIUrl":null,"url":null,"abstract":"<p>One of the core practices of science is constructing scientific explanations. However, numerous studies have shown that constructing scientific explanations poses significant challenges to students. Proper assessment of scientific explanations is costly and time-consuming, and teachers often do not have a clear definition of the educational goals for formulating scientific explanations. Consequently, teachers struggle to support their students in this process. It is hoped that recent advances in machine learning (ML) and its application to educational technologies can assist teachers and learners in analyzing student responses and providing automated formative feedback according to well-defined pedagogical criteria. In this study, we present a method to automate the entire assessment-feedback process. First, we developed a causal-mechanical (CM)-based grading rubric and applied it to student responses to two open-ended items. Second, we used unsupervised ML tools to identify patterns in student responses. Those patterns enable the definition of “meta-categories” of explanation types and the design of personalized feedback adapted to each category. Third, we designed an in-class intervention with personalized formative feedback that matches the response patterns. We used natural language processing and ML algorithms to assess students' explanations and provide feedback. Findings from a controlled experiment demonstrated that a CM-based grading scheme can be used to identify meaningful patterns and inform the design of formative feedback that promotes student ability to construct explanations in biology. We discuss possible implications for automated assessment and personalized teaching and learning of scientific writing in K-12 science education.</p>","PeriodicalId":48369,"journal":{"name":"Journal of Research in Science Teaching","volume":"61 8","pages":"1858-1889"},"PeriodicalIF":3.6000,"publicationDate":"2024-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/tea.21929","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Research in Science Teaching","FirstCategoryId":"95","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/tea.21929","RegionNum":1,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Abstract
One of the core practices of science is constructing scientific explanations. However, numerous studies have shown that constructing scientific explanations poses significant challenges to students. Proper assessment of scientific explanations is costly and time-consuming, and teachers often lack a clear definition of the educational goals for formulating scientific explanations. Consequently, teachers struggle to support their students in this process. Recent advances in machine learning (ML) and its application to educational technologies may assist teachers and learners by analyzing student responses and providing automated formative feedback according to well-defined pedagogical criteria. In this study, we present a method to automate the entire assessment-feedback process. First, we developed a causal-mechanical (CM)-based grading rubric and applied it to student responses to two open-ended items. Second, we used unsupervised ML tools to identify patterns in student responses. These patterns enable the definition of "meta-categories" of explanation types and the design of personalized feedback adapted to each category. Third, we designed an in-class intervention with personalized formative feedback matched to the response patterns. We used natural language processing and ML algorithms to assess students' explanations and provide feedback. Findings from a controlled experiment demonstrated that a CM-based grading scheme can be used to identify meaningful patterns and inform the design of formative feedback that promotes students' ability to construct explanations in biology. We discuss possible implications for automated assessment and personalized teaching and learning of scientific writing in K-12 science education.
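To make the pipeline the abstract describes more concrete, the sketch below clusters free-text student explanations to surface recurring response patterns and attaches a feedback message to each pattern. The paper does not specify its algorithms at this level of detail, so TF-IDF features, k-means, and all example texts, cluster counts, and feedback messages here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of pattern discovery: cluster student explanations
# with TF-IDF + k-means, then map each cluster to teacher-authored feedback.
# All texts, the cluster count, and the messages are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy student explanations (stand-ins for real open-ended responses).
responses = [
    "The enzyme changes shape so the substrate no longer binds.",
    "High temperature denatures the enzyme, breaking the reaction chain.",
    "The plant grows because it gets more sun.",
    "More sunlight means more photosynthesis, so more glucose is produced.",
]

# Step 1: represent each response as a TF-IDF vector.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(responses)

# Step 2: unsupervised clustering to surface recurring response patterns.
n_patterns = 2  # assumed number of "meta-categories" for this toy data
labels = KMeans(n_clusters=n_patterns, n_init=10, random_state=0).fit_predict(X)

# Step 3: one teacher-authored feedback message per pattern (hypothetical).
feedback = {
    0: "Good start -- now link each step in your causal chain explicitly.",
    1: "You named the outcome; add the mechanism that produces it.",
}

for text, label in zip(responses, labels):
    print(f"[pattern {label}] {text}\n  feedback: {feedback[label]}")
```

Once responses have been graded against a rubric, a supervised text classifier can automate the assessment and route each new response to the feedback authored for its meta-category. Again, logistic regression over TF-IDF features and the category names below are assumed stand-ins for whichever NLP/ML models and rubric categories the study actually used.

```python
# Hypothetical sketch of automated assessment: train a classifier on
# rubric-labeled responses, then route new responses to category feedback.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data labeled with assumed rubric meta-categories.
train_texts = [
    "More light drives more photosynthesis, which yields more glucose for growth.",
    "Heat unfolds the enzyme, so the active site no longer fits the substrate.",
    "The plant grows taller.",
    "The reaction just stops working.",
]
train_labels = ["full_mechanism", "full_mechanism", "outcome_only", "outcome_only"]

model = make_pipeline(TfidfVectorizer(stop_words="english"),
                      LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

# Teacher-authored feedback per meta-category (hypothetical).
feedback_by_category = {
    "full_mechanism": "Strong causal chain -- check that every link names its mechanism.",
    "outcome_only": "You stated what happens; now explain step by step why it happens.",
}

new_response = "The enzyme stops working when it gets hot."
category = model.predict([new_response])[0]
print(category, "->", feedback_by_category[category])
```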
About the Journal
Journal of Research in Science Teaching, the official journal of NARST: A Worldwide Organization for Improving Science Teaching and Learning Through Research, publishes reports for science education researchers and practitioners on issues of science teaching and learning and science education policy. Scholarly manuscripts within the domain of the Journal of Research in Science Teaching include, but are not limited to: investigations employing qualitative, ethnographic, historical, survey, philosophical, case-study, quantitative, experimental, quasi-experimental, data-mining, or data-analytics approaches; position papers; policy perspectives; critical reviews of the literature; and comments and criticism.