Borderline Disaster: An Empirical Study on Student Usage of GenAI in a Law Assignment

Armin Alimardani

IEEE Transactions on Technology and Society, vol. 6, no. 2, pp. 210–219, published 2025-02-25. DOI: 10.1109/TTS.2025.3540978. https://ieeexplore.ieee.org/document/10903146/
This empirical study examines the outcomes of integrating Generative AI (GenAI) into a law assignment at the School of Law, University of Wollongong, Australia. Despite receiving instruction on the importance of verifying GenAI outputs, along with feedback on their attempts to use these tools effectively, a notable portion of students included fabricated or inaccurate AI-generated information in their assignments. This overreliance on AI outputs suggests that instruction and guided practice alone may not sufficiently mitigate the risks associated with the inappropriate use of GenAI. A particularly concerning issue is the difficulty of identifying AI-generated inaccuracies in assessment tasks, which often requires considerable time and effort. Consequently, such errors may go unnoticed, potentially allowing students to bypass the development of essential skills, such as critical thinking and the ability to independently evaluate the accuracy, credibility, and relevance of information. Addressing overreliance on GenAI will require robust strategies implemented across the entire duration of a student's university degree to ensure students engage with AI tools effectively and responsibly.