Exploring new depths: Applying machine learning for the analysis of student argumentation in chemistry
Paul P. Martin, David Kranz, Peter Wulff, Nicole Graulich
Journal of Research in Science Teaching, published 2023-09-20. DOI: 10.1002/tea.21903 (https://onlinelibrary.wiley.com/doi/10.1002/tea.21903)
Abstract:
Constructing arguments is essential in science subjects like chemistry. For example, students in organic chemistry should learn to argue about the plausibility of competing chemical reactions by drawing on various sources of evidence and justifying the derived information with reasoning. In doing so, students face significant challenges in coherently structuring their arguments and integrating chemical concepts. For this reason, a reliable assessment of students' argumentation is critical. However, as arguments are usually elicited in open-ended tasks, scoring them manually is resource-intensive and conceptually difficult. To augment human diagnostic capabilities, artificial intelligence techniques such as machine learning and natural language processing offer novel possibilities for in-depth analysis of students' argumentation. In this study, we evaluated students' written arguments about the plausibility of competing chemical reactions using a methodological approach called computational grounded theory. Using an unsupervised clustering technique, we examined students' argumentation patterns in detail, providing new insights into the modes of reasoning and levels of granularity in students' written accounts. Based on this analysis, we developed a holistic 20-category rubric that combines the data-driven clusters with a theory-driven framework to automate the analysis of the identified argumentation patterns. Pre-trained large language models in conjunction with deep neural networks yielded almost perfect machine-human score agreement and well-interpretable results, underscoring the potential of state-of-the-art deep learning techniques for analyzing students' argument complexity. The findings demonstrate an approach that combines human and computer-based analysis to uncover written argumentation.
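To make the clustering step concrete, the following is a minimal sketch of the kind of unsupervised analysis the abstract describes: embedding written arguments and grouping them into candidate argumentation patterns. The encoder checkpoint, the cluster count, and the sample responses are illustrative assumptions, not the authors' actual setup.

```python
# Hedged sketch: embed student arguments and cluster them into candidate
# argumentation patterns. The model name, cluster count, and sample texts
# are assumptions for illustration only.
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers
from sklearn.cluster import KMeans

# Hypothetical student responses arguing about competing reaction pathways.
arguments = [
    "The SN1 pathway is plausible because the tertiary carbocation is stabilized by hyperconjugation.",
    "Reaction B proceeds faster since bromide is a better leaving group than chloride.",
    "Steric bulk around the electrophilic carbon hinders backside attack, so SN2 is unlikely.",
    "Because the solvent is polar protic, ionization to the carbocation is favored.",
]

# Encode each written argument as a dense sentence embedding.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(arguments)

# Group the embeddings; in practice the number of clusters would be chosen
# with diagnostics (e.g., silhouette scores) and the resulting clusters
# inspected by human coders, as computational grounded theory prescribes.
labels = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(embeddings)

for label, text in sorted(zip(labels, arguments)):
    print(label, "-", text)
```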
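The automated scoring step pairs a pre-trained language model with a classification head over the 20 rubric categories. Below is a sketch under assumed names: the checkpoint is a placeholder, the head is untrained here, and the study's actual architecture may differ.

```python
# Hedged sketch: score an argument against a 20-category rubric with a
# pre-trained transformer plus a classification head. The checkpoint is an
# assumption; the head below is randomly initialized and would be
# fine-tuned on human-scored arguments before any real use.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_RUBRIC_CATEGORIES = 20  # matches the abstract's holistic 20-category rubric

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=NUM_RUBRIC_CATEGORIES
)
model.eval()

argument = ("The tertiary substrate reacts faster because the intermediate "
            "carbocation is more stable.")
inputs = tokenizer(argument, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, 20)

print("Predicted rubric category:", logits.argmax(dim=-1).item())
```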
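"Almost perfect" machine-human agreement is conventionally read on the Landis and Koch scale as a Cohen's kappa of 0.81 or above. A short worked check with hypothetical score vectors:

```python
# Hedged sketch: quantify machine-human agreement with Cohen's kappa.
# The score vectors are toy data, not the study's results.
from sklearn.metrics import cohen_kappa_score

human_codes = [3, 1, 2, 0, 3, 2, 1, 1]    # hypothetical human rubric codes
machine_codes = [3, 1, 2, 0, 3, 2, 1, 2]  # hypothetical model predictions

kappa = cohen_kappa_score(human_codes, machine_codes)
print(f"Cohen's kappa: {kappa:.2f}")  # ~0.83 here, i.e., "almost perfect" (>= 0.81)
```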
Journal introduction:
Journal of Research in Science Teaching, the official journal of NARST: A Worldwide Organization for Improving Science Teaching and Learning Through Research, publishes reports for science education researchers and practitioners on issues of science teaching and learning and science education policy. Scholarly manuscripts within the domain of the Journal of Research in Science Teaching include, but are not limited to, investigations employing qualitative, ethnographic, historical, survey, philosophical, case study, quantitative, experimental, quasi-experimental, data mining, and data analytics approaches; position papers; policy perspectives; critical reviews of the literature; and comments and criticism.