Ambiguous regulations for dealing with AI in higher education can lead to moral hazards among students

Michelle Smit, Reinhard F. Wagner, Taryn Jane Bond-Barnard
DOI: 10.1016/j.plas.2025.100187
Journal: Project Leadership and Society, Volume 6, Article 100187
Publication date: 2025-07-23
URL: https://www.sciencedirect.com/science/article/pii/S2666721525000122

Abstract

The aim of this study was to investigate the ethical dilemmas and expectations surrounding the use of generative AI in academic work within a South African hybrid online master's program in engineering management. Central to this program is its strong focus on project management and engineering leadership, intended to drive ethical decision-making in students' professional contexts. A total of 102 current and graduated students from the master's program were surveyed to explore their use of generative AI tools, such as ChatGPT and Grammarly, in both professional and academic contexts. The survey showed that 98% of students actively use generative AI, demonstrating an awareness of its potential and usefulness. At the same time, anecdotal evidence suggests moral hazards, including plagiarism, undisclosed AI use, and reliance on AI without independent reasoning. 94% of students seek clear institutional policies at the university and program levels to guide the ethical use of AI in academia.
Students with higher confidence in their academic writing tend to perceive the quality of AI-generated content as slightly inferior to their own work, highlighting varying levels of dependency across the cohort. Furthermore, many students believe that universities should adopt explicit guidelines defining when and how AI tools are appropriate for academic work. These findings suggest that the absence of clear policies exacerbates ethical conflicts, affecting both educators and students.
The results of this research underscore the urgency of developing transparent guidelines to safeguard academic integrity while embracing the potential of generative AI. By framing the findings within moral hazard theory, this study highlights the risks of over-reliance on AI tools and opens avenues for future research into their responsible integration in higher education.