Artificial Intelligence and Responsible Adoption in Engineering Education: Evidence, Concerns, and a Constructive Path Forward

Magdy F. Iskander, Alejandra J. Magana
{"title":"Artificial Intelligence and Responsible Adoption in Engineering Education: Evidence, Concerns, and a Constructive Path Forward","authors":"Magdy F. Iskander,&nbsp;Alejandra J. Magana","doi":"10.1002/cae.70186","DOIUrl":null,"url":null,"abstract":"<p>The rapid integration of generative artificial intelligence (AI) into educational practice has generated both enthusiasm and apprehension. For <i>Computer Applications in Engineering Education</i> (CAE), a journal founded on the premise that computational technologies can enhance learning effectiveness, the present moment represents not a disruption of mission, but an inflection point. Among the most frequently expressed concerns is academic integrity, and consequently the potential erosion of critical thinking skills. Has generative AI fundamentally increased cheating, or has it primarily transformed the mechanisms through which academic misconduct may occur?</p><p>A balanced examination of available evidence suggests a more nuanced picture than public discourse often conveys.</p><p>Recent survey data confirm that generative AI use among students is widespread. The Higher Education Policy Institute [<span>1</span>] reports that over 90% of surveyed UK students use generative AI tools for academic purposes. Similarly, the College Board [<span>2</span>] reports that more than 80% of US high school students use generative AI for school-related work. Even adult learners have reported using AI for academic work [<span>3</span>]. AI use is no longer peripheral; it is mainstream.</p><p>Large-scale submission analytics further demonstrate measurable AI integration into student work. Turnitin [<span>4</span>] reports that approximately 17% of global submissions exhibit substantial AI-writing indicators. Yet adoption alone does not equate to misconduct.</p><p>Emerging empirical research suggests that academic dishonesty rates may not have dramatically increased following the release of large language models. A recent study in <i>Computers &amp; Education</i> found that self-reported cheating behaviors among secondary students remained statistically comparable pre- and post-ChatGPT introduction, suggesting transformation rather than explosion of misconduct patterns (e.g., comparative analyses reported in 2024). Similarly, scholars writing in the <i>Journal of Engineering Education</i> argue that generative AI challenges assessment design more than it fundamentally alters student ethics [<span>5, 6</span>].</p><p>Educator concern nevertheless remains high. The 2025 AI Index Report from Stanford's Institute for Human-Centered AI identified academic integrity and misuse as primary concerns among teachers and administrators [<span>7</span>]. The central tension is therefore not only uncertainty about AI use, but also uncertainty about assessment resilience.</p><p>On the other hand, recent studies indicate that students in higher education use AI tools, but lack structured support and formal training skills [<span>8</span>]. Students want clearer institutional support, guidance, and preparation for responsible AI use and future careers. [<span>9</span>]. In contrast, other studies have reported on students' feelings of guilt, shame, and fear of using generative AI for academic work [<span>10</span>]. Thus, it is imperative that educators take action and deliver concrete guidance to students.</p><p>AI-detection systems have been rapidly deployed. 
However, vendors and standards organizations caution against treating automated outputs as definitive evidence. The National Institute of Standards and Technology [<span>11</span>] emphasized broader reliability and risk-management challenges inherent in evolving AI systems. False positives, paraphrasing, hybrid human–AI writing, and model drift complicate enforcement decisions.</p><p>Peer-reviewed discussions in engineering education similarly caution that reliance on detection technologies may produce procedural fairness concerns and may inadvertently penalize multilingual or stylistically distinctive writers [<span>6</span>]. Detection tools may serve as preliminary screening mechanisms, but they cannot replace sound pedagogical design.</p><p>International policy guidance increasingly advocates for governance frameworks grounded in transparency and AI literacy rather than prohibition. UNESCO [<span>12</span>] recommends clear institutional policies, disclosure practices, and educator capacity building.</p><p>Such strategies shift evaluation from determining whether AI was used to determining whether understanding has been demonstrated.</p><p>Engineering education occupies a uniquely advantageous position in this transition. For more than three decades, CAE has promoted simulation-driven learning, computational modeling, and digital laboratories. Generative AI may be understood as a continuation of this computational trajectory.</p><p>The core pedagogical question is not whether students consult AI systems, but whether they use them without forgoing learning, and whether assessments effectively measure modeling judgment, parameter selection, validation reasoning, and design trade-offs. These competencies resist superficial outsourcing.</p><p>As Magana et al. [<span>13</span>] argue in the <i>Journal of Engineering Education</i>, generative AI can be integrated productively into engineering research and learning workflows when guided by structured pedagogical frameworks. Engineering education, therefore, may serve as a proving ground for responsible AI integration rather than a casualty of its misuse.</p><p>Students can also be equipped with strategies that provide them with learning agency when using generative AI, so that they can develop self-regulated learning in this context. That is, students develop agency when they feel confident when using generative AI (dispositional agency), when they have access to generative AI tools, along with institutional support (positional), and when they have motivation, goals, and choice in their uses of generative AI (motivational) [<span>14</span>]. Once students develop such forms of learning agency, they can develop the capability to self-regulate their learning when using such tools, so that they plan, monitor, and evaluate the consequences of using them for academic work, without sacrificing their learning [<span>15</span>].</p><p>Rather than framing generative AI solely as a threat to academic integrity, CAE advocates for principled innovation grounded in evidence, transparency, and pedagogical rigor. The responsibility before engineering educators is not to retreat from technological change, but to shape it.</p><p>Generative AI is unlikely to recede from educational environments. 
The central question is therefore not whether AI will be present, but whether engineering education will lead in defining its responsible, pedagogical, and effective use.</p><p>Since its founding in 1992, CAE has consistently advanced the thoughtful integration of computational tools, simulation environments, multimedia learning modules, virtual laboratories, and data-driven instructional strategies. Each technological wave—from desktop computing to web-based learning, from CAD systems to high-fidelity modeling—initially raised concerns about rigor, dependency, and integrity. In each case, engineering education responded not by lowering standards, but by refining them.</p><p>Generative AI represents the next phase in this computational evolution.</p><p>In doing so, CAE does not position itself as reacting to the AI wave, but as continuing a long-standing mission: advancing digital technologies to enhance learning effectiveness and elevate engineering education globally.</p><p>The integrity of engineering education will not be preserved by resisting AI, but by embedding it within principled, research-based pedagogy. The opportunity before us is not merely to manage risk, but to define standards.</p><p>CAE stands committed to leading this effort—thoughtfully, rigorously, and with the dignity befitting a journal that has served the field for over three decades.</p><p>The challenge is real.</p><p>The opportunity is greater.</p><p>The responsibility is ours.</p><p>The authors declare no conflicts of interest.</p><p>The data that support the findings of this study are available from the corresponding author upon reasonable request.</p>","PeriodicalId":50643,"journal":{"name":"Computer Applications in Engineering Education","volume":"34 3","pages":""},"PeriodicalIF":2.2000,"publicationDate":"2026-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cae.70186","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Applications in Engineering Education","FirstCategoryId":"5","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/cae.70186","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
引用次数: 0

Abstract

The rapid integration of generative artificial intelligence (AI) into educational practice has generated both enthusiasm and apprehension. For Computer Applications in Engineering Education (CAE), a journal founded on the premise that computational technologies can enhance learning effectiveness, the present moment represents not a disruption of mission, but an inflection point. Among the most frequently expressed concerns are academic integrity and, with it, the potential erosion of critical thinking skills. Has generative AI fundamentally increased cheating, or has it primarily transformed the mechanisms through which academic misconduct may occur?

A balanced examination of available evidence suggests a more nuanced picture than public discourse often conveys.

Recent survey data confirm that generative AI use among students is widespread. The Higher Education Policy Institute [1] reports that over 90% of surveyed UK students use generative AI tools for academic purposes. Similarly, the College Board [2] reports that more than 80% of US high school students use generative AI for school-related work. Even adult learners have reported using AI for academic work [3]. AI use is no longer peripheral; it is mainstream.

Large-scale submission analytics further demonstrate measurable AI integration into student work. Turnitin [4] reports that approximately 17% of global submissions exhibit substantial AI-writing indicators. Yet adoption alone does not equate to misconduct.

Emerging empirical research suggests that academic dishonesty rates may not have dramatically increased following the release of large language models. A recent study in Computers & Education found that self-reported cheating behaviors among secondary students remained statistically comparable pre- and post-ChatGPT introduction, suggesting transformation rather than explosion of misconduct patterns (e.g., comparative analyses reported in 2024). Similarly, scholars writing in the Journal of Engineering Education argue that generative AI challenges assessment design more than it fundamentally alters student ethics [5, 6].

Educator concern nevertheless remains high. The 2025 AI Index Report from Stanford's Institute for Human-Centered AI identified academic integrity and misuse as primary concerns among teachers and administrators [7]. The central tension is therefore not only uncertainty about AI use, but also uncertainty about assessment resilience.

On the other hand, recent studies indicate that students in higher education use AI tools but lack structured support and formal training in their use [8]. Students want clearer institutional support, guidance, and preparation for responsible AI use and future careers [9]. At the same time, other studies have reported students' feelings of guilt, shame, and fear when using generative AI for academic work [10]. Thus, it is imperative that educators take action and deliver concrete guidance to students.

AI-detection systems have been rapidly deployed. However, vendors and standards organizations caution against treating automated outputs as definitive evidence. The National Institute of Standards and Technology [11] emphasized broader reliability and risk-management challenges inherent in evolving AI systems. False positives, paraphrasing, hybrid human–AI writing, and model drift complicate enforcement decisions.
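
To illustrate why false positives alone complicate enforcement, consider a simple base-rate sketch. The figures below are illustrative assumptions (submission volume, detector error rates), not reported statistics from any vendor; only the approximate 17% AI-writing share echoes the Turnitin figure cited above.

```python
# Illustrative base-rate sketch: even a small false-positive rate can
# produce many wrongful flags when most submissions are human-written.
# All numbers are assumptions for illustration, not vendor statistics.

total_submissions = 10_000      # assumed institutional submission volume
ai_share = 0.17                 # share with substantial AI-writing indicators (cf. Turnitin figure)
false_positive_rate = 0.01      # assumed detector false-positive rate on human writing
true_positive_rate = 0.90       # assumed detector sensitivity on AI-assisted writing

human_written = total_submissions * (1 - ai_share)
ai_assisted = total_submissions * ai_share

false_flags = human_written * false_positive_rate   # human work wrongly flagged
true_flags = ai_assisted * true_positive_rate       # AI-assisted work correctly flagged

precision = true_flags / (true_flags + false_flags)
print(f"Falsely flagged human submissions: {false_flags:.0f}")
print(f"Share of flags that are correct:  {precision:.1%}")
```

Even under these generous assumptions, dozens of human-authored submissions per ten thousand would be flagged, which is why automated outputs are better treated as screening signals than as definitive evidence.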

Peer-reviewed discussions in engineering education similarly caution that reliance on detection technologies may produce procedural fairness concerns and may inadvertently penalize multilingual or stylistically distinctive writers [6]. Detection tools may serve as preliminary screening mechanisms, but they cannot replace sound pedagogical design.

International policy guidance increasingly advocates for governance frameworks grounded in transparency and AI literacy rather than prohibition. UNESCO [12] recommends clear institutional policies, disclosure practices, and educator capacity building.

Such strategies shift evaluation from determining whether AI was used to determining whether understanding has been demonstrated.

Engineering education occupies a uniquely advantageous position in this transition. For more than three decades, CAE has promoted simulation-driven learning, computational modeling, and digital laboratories. Generative AI may be understood as a continuation of this computational trajectory.

The core pedagogical question is not whether students consult AI systems, but whether they use them without forgoing learning, and whether assessments effectively measure modeling judgment, parameter selection, validation reasoning, and design trade-offs. These competencies resist superficial outsourcing.

As Magana et al. [13] argue in the Journal of Engineering Education, generative AI can be integrated productively into engineering research and learning workflows when guided by structured pedagogical frameworks. Engineering education, therefore, may serve as a proving ground for responsible AI integration rather than a casualty of its misuse.

Students can also be equipped with strategies that give them learning agency when using generative AI, so that they can develop self-regulated learning in this context. That is, students develop agency when they feel confident using generative AI (dispositional agency), when they have access to generative AI tools along with institutional support (positional agency), and when they have motivation, goals, and choice in their uses of generative AI (motivational agency) [14]. Once students develop these forms of learning agency, they can build the capability to self-regulate their learning with such tools, planning, monitoring, and evaluating the consequences of using them for academic work without sacrificing their learning [15].

Rather than framing generative AI solely as a threat to academic integrity, CAE advocates for principled innovation grounded in evidence, transparency, and pedagogical rigor. The responsibility before engineering educators is not to retreat from technological change, but to shape it.

Generative AI is unlikely to recede from educational environments. The central question is therefore not whether AI will be present, but whether engineering education will lead in defining its responsible, pedagogical, and effective use.

Since its founding in 1992, CAE has consistently advanced the thoughtful integration of computational tools, simulation environments, multimedia learning modules, virtual laboratories, and data-driven instructional strategies. Each technological wave—from desktop computing to web-based learning, from CAD systems to high-fidelity modeling—initially raised concerns about rigor, dependency, and integrity. In each case, engineering education responded not by lowering standards, but by refining them.

Generative AI represents the next phase in this computational evolution.

In doing so, CAE does not position itself as reacting to the AI wave, but as continuing a long-standing mission: advancing digital technologies to enhance learning effectiveness and elevate engineering education globally.

The integrity of engineering education will not be preserved by resisting AI, but by embedding it within principled, research-based pedagogy. The opportunity before us is not merely to manage risk, but to define standards.

CAE stands committed to leading this effort—thoughtfully, rigorously, and with the dignity befitting a journal that has served the field for over three decades.

The challenge is real.

The opportunity is greater.

The responsibility is ours.

The authors declare no conflicts of interest.

The data that support the findings of this study are available from the corresponding author upon reasonable request.
