Undergraduate and graduate students' conceptual understanding of model classification outcomes under the lens of scientific argumentation

IF 2.0 · CAS Tier 3 (Engineering & Technology) · JCR Q3 · COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS
Lucas Wiese, Hector E. Will Pinto, Alejandra J. Magana
{"title":"科学论证视角下本科生和研究生对模型分类结果的概念理解","authors":"Lucas Wiese,&nbsp;Hector E. Will Pinto,&nbsp;Alejandra J. Magana","doi":"10.1002/cae.22734","DOIUrl":null,"url":null,"abstract":"<p>Recent advancements in artificial intelligence (AI) and machine learning (ML) have driven research and development across multiple industries to meet national economic and technological demands. Consequently, companies are investing in AI, ML, and data analytics workforce development efforts to digitalize operations and enhance global competitiveness. As such, evidence-based educational research around ML is essential to provide a foundation for the future workforce as they face complex AI challenges. This study explored students' conceptual ML understanding through a scientific argumentation framework, where we examined how they used evidence and reasoning to support claims about their ML models. This framework lets us gain insight into students' conceptualizations and helped scaffold student learning via a cognitive apprenticeship model. Thirty students in a mechanical engineering classroom at Purdue University experimented with neural network ML models within a computational notebook to create visual claims (ML models) with textual explanations of their evidence and reasoning. Accordingly, we qualitatively analyzed their learning artifacts to examine their underfit, fit, and overfit models and explanations. It was found that some students tended toward technical explanations while others used visual explanations. Students with technically dominant explanations had higher proficiency in generating correctly fit models but lacked explanatory evidence. Conversely, students with visually dominant explanations provided evidence but lacked technical reasoning and were less accurate in identifying fit models. We discuss implications for both groups of students and offer future research directions to examine how positive pedagogical elements of learning design can optimize ML educational material and AI workforce development.</p>","PeriodicalId":50643,"journal":{"name":"Computer Applications in Engineering Education","volume":null,"pages":null},"PeriodicalIF":2.0000,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cae.22734","citationCount":"0","resultStr":"{\"title\":\"Undergraduate and graduate students' conceptual understanding of model classification outcomes under the lens of scientific argumentation\",\"authors\":\"Lucas Wiese,&nbsp;Hector E. Will Pinto,&nbsp;Alejandra J. Magana\",\"doi\":\"10.1002/cae.22734\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Recent advancements in artificial intelligence (AI) and machine learning (ML) have driven research and development across multiple industries to meet national economic and technological demands. Consequently, companies are investing in AI, ML, and data analytics workforce development efforts to digitalize operations and enhance global competitiveness. As such, evidence-based educational research around ML is essential to provide a foundation for the future workforce as they face complex AI challenges. This study explored students' conceptual ML understanding through a scientific argumentation framework, where we examined how they used evidence and reasoning to support claims about their ML models. 
This framework lets us gain insight into students' conceptualizations and helped scaffold student learning via a cognitive apprenticeship model. Thirty students in a mechanical engineering classroom at Purdue University experimented with neural network ML models within a computational notebook to create visual claims (ML models) with textual explanations of their evidence and reasoning. Accordingly, we qualitatively analyzed their learning artifacts to examine their underfit, fit, and overfit models and explanations. It was found that some students tended toward technical explanations while others used visual explanations. Students with technically dominant explanations had higher proficiency in generating correctly fit models but lacked explanatory evidence. Conversely, students with visually dominant explanations provided evidence but lacked technical reasoning and were less accurate in identifying fit models. We discuss implications for both groups of students and offer future research directions to examine how positive pedagogical elements of learning design can optimize ML educational material and AI workforce development.</p>\",\"PeriodicalId\":50643,\"journal\":{\"name\":\"Computer Applications in Engineering Education\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.0000,\"publicationDate\":\"2024-03-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cae.22734\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Applications in Engineering Education\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/cae.22734\",\"RegionNum\":3,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Applications in Engineering Education","FirstCategoryId":"5","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/cae.22734","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

Recent advancements in artificial intelligence (AI) and machine learning (ML) have driven research and development across multiple industries to meet national economic and technological demands. Consequently, companies are investing in AI, ML, and data analytics workforce development efforts to digitalize operations and enhance global competitiveness. As such, evidence-based educational research around ML is essential to provide a foundation for the future workforce as they face complex AI challenges. This study explored students' conceptual ML understanding through a scientific argumentation framework, where we examined how they used evidence and reasoning to support claims about their ML models. This framework lets us gain insight into students' conceptualizations and helped scaffold student learning via a cognitive apprenticeship model. Thirty students in a mechanical engineering classroom at Purdue University experimented with neural network ML models within a computational notebook to create visual claims (ML models) with textual explanations of their evidence and reasoning. Accordingly, we qualitatively analyzed their learning artifacts to examine their underfit, fit, and overfit models and explanations. It was found that some students tended toward technical explanations while others used visual explanations. Students with technically dominant explanations had higher proficiency in generating correctly fit models but lacked explanatory evidence. Conversely, students with visually dominant explanations provided evidence but lacked technical reasoning and were less accurate in identifying fit models. We discuss implications for both groups of students and offer future research directions to examine how positive pedagogical elements of learning design can optimize ML educational material and AI workforce development.
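
The underfit/fit/overfit distinction at the center of the study can be made concrete with a short notebook-style experiment. The sketch below is not the authors' actual notebook (the abstract does not reproduce it); it is a minimal illustration, assuming scikit-learn's MLPClassifier on a synthetic two-class dataset, with hypothetical capacity and training settings chosen to elicit each of the three outcomes. The train-versus-test accuracy gap it prints is the kind of numerical evidence a student could pair with a decision-boundary plot when arguing how well a model fits.

# Minimal sketch (not the paper's notebook): three neural-network
# capacity/training settings chosen to illustrate underfit, fit,
# and overfit models. All hyperparameters here are hypothetical.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic nonlinear classification task (a stand-in for the
# classroom dataset, which the abstract does not specify).
X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Illustrative settings: too little capacity and training, a
# reasonable middle ground, and excess capacity with weak regularization.
configs = {
    "underfit": dict(hidden_layer_sizes=(1,), max_iter=30),
    "fit":      dict(hidden_layer_sizes=(16,), max_iter=2000),
    "overfit":  dict(hidden_layer_sizes=(256, 256), max_iter=5000, alpha=1e-7),
}

for name, params in configs.items():
    model = MLPClassifier(random_state=0, **params).fit(X_tr, y_tr)
    # Low scores on both sets signal underfitting; a large
    # train-test gap is the numerical signature of overfitting.
    print(f"{name:>8}: train={model.score(X_tr, y_tr):.2f}  "
          f"test={model.score(X_te, y_te):.2f}")

Comparing the printed scores mirrors the paper's argumentation framework: the accuracy values are the evidence, and the inference from "large train-test gap" to "overfit" is the reasoning that supports a claim about the model.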

Source journal
Computer Applications in Engineering Education (JCR category: Engineering, Multidisciplinary)
CiteScore: 7.20
Self-citation rate: 10.30%
Annual articles: 100
Review time: 6-12 weeks
Journal description: Computer Applications in Engineering Education provides a forum for publishing peer-reviewed, timely information on the innovative uses of computers, the Internet, and software tools in engineering education. Besides new courses and software tools, the CAE journal covers areas that support the integration of technology-based modules in the engineering curriculum and promotes discussion of the assessment and dissemination issues associated with these new implementation methods.