Ambiguous regulations for dealing with AI in higher education can lead to moral hazards among students

Michelle Smit, Reinhard F. Wagner, Taryn Jane Bond-Barnard

Project Leadership and Society, Volume 6, Article 100187 (23 July 2025). DOI: 10.1016/j.plas.2025.100187. Available at: https://www.sciencedirect.com/science/article/pii/S2666721525000122
Citations: 0
Abstract
The aim of this study was to investigate the ethical dilemmas and expectations surrounding the use of generative AI in academic work within a South Africa-based hybrid online master's program in engineering management. Central to this program is its strong focus on project management and engineering leadership, which prepares students to drive ethical decision-making in their professional contexts. A total of 102 current students and graduates of the master's program were surveyed to explore their use of generative AI tools, such as ChatGPT and Grammarly, in both professional and academic contexts. The survey showed that 98% of students actively use generative AI, demonstrating an awareness of its potential and usefulness. However, anecdotal evidence suggests moral hazards, including plagiarism, undisclosed AI use, and reliance on AI without independent reasoning. Moreover, 94% of students seek clear institutional policies at the university and program levels to guide the ethical use of AI in academia.
Students with higher confidence in their academic writing tend to perceive the quality of AI-generated content as slightly inferior to their own work, highlighting varying levels of dependency across the cohort. Furthermore, many students believe that universities should adopt explicit guidelines to define when and how AI tools are appropriate for academic work. These findings suggest that the absence of clear policies exacerbates ethical conflicts, affecting both educators and students.
The results of this research underscore the urgency of developing transparent guidelines to safeguard academic integrity while embracing the potential of generative AI. By framing the findings within moral hazard theory, this study highlights the risks of over-reliance on AI tools and opens avenues for future research into their responsible integration in higher education.