Undergraduate and graduate students' conceptual understanding of model classification outcomes under the lens of scientific argumentation
Lucas Wiese, Hector E. Will Pinto, Alejandra J. Magana
DOI: 10.1002/cae.22734 · Published 2024-03-19 (Journal Article)
https://onlinelibrary.wiley.com/doi/10.1002/cae.22734
Abstract
Recent advancements in artificial intelligence (AI) and machine learning (ML) have driven research and development across multiple industries to meet national economic and technological demands. Consequently, companies are investing in AI, ML, and data analytics workforce development efforts to digitalize operations and enhance global competitiveness. Evidence-based educational research around ML is therefore essential to prepare the future workforce for complex AI challenges. This study explored students' conceptual understanding of ML through a scientific argumentation framework, examining how they used evidence and reasoning to support claims about their ML models. This framework allowed us to gain insight into students' conceptualizations and to scaffold student learning via a cognitive apprenticeship model. Thirty students in a mechanical engineering classroom at Purdue University experimented with neural network ML models within a computational notebook to create visual claims (ML models) accompanied by textual explanations of their evidence and reasoning. We qualitatively analyzed these learning artifacts to examine their underfit, fit, and overfit models and the accompanying explanations. We found that some students tended toward technical explanations, while others relied on visual explanations. Students whose explanations were predominantly technical were more proficient at generating correctly fit models but offered little explanatory evidence. Conversely, students whose explanations were predominantly visual provided evidence but lacked technical reasoning and were less accurate in identifying fit models. We discuss implications for both groups of students and offer future research directions examining how positive pedagogical elements of learning design can optimize ML educational materials and AI workforce development.
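The paper does not reproduce the students' notebook code, but the underfit/fit/overfit distinction at the center of their argumentation tasks can be illustrated concretely. The following is a minimal, hypothetical sketch (not the study's material) assuming a scikit-learn setup: it trains neural networks of varying capacity on synthetic data and prints the train/test errors a student could cite as evidence for a fit claim.

```python
# Illustrative sketch only: the study's notebook code is not published.
# Assumes scikit-learn and synthetic data to demonstrate the
# underfit / fit / overfit outcomes students were asked to argue about.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.2, size=200)  # noisy sine wave

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Vary model capacity: too little underfits; a large network trained to
# convergence can chase the noise and overfit.
for label, hidden in [("underfit", (1,)), ("fit", (16,)), ("overfit", (256, 256))]:
    model = MLPRegressor(hidden_layer_sizes=hidden, max_iter=5000, random_state=0)
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    test_err = mean_squared_error(y_test, model.predict(X_test))
    # Evidence for a "correctly fit" claim: train and test error both low and close.
    print(f"{label:8s} train MSE={train_err:.3f}  test MSE={test_err:.3f}")
```

In the study's terms, a plot of each model's predictions would serve as the visual claim, while the printed error comparison is the kind of technical evidence that visually dominant explainers tended to omit.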