How Should College Education Respond to Large Language Models?

Charles la Shure
{"title":"How Should College Education Respond to Large Language Models?","authors":"Charles la Shure","doi":"10.52723/jkl.48.007","DOIUrl":null,"url":null,"abstract":"The release of ChatGPT to the public at the end of last year had many in the field of education worried. In response, this paper explored the future of college education and artificial intelligence (AI). First, a proper understanding of how large language models (LLMs) “train” and “learn,” along with their abilities and limitations, was established. Simply put, while LLMs produce plausible linguistic output, they are “stochastic parrots” that have no actual understanding of language. Next, we examined the dangers of generative AI and discovered that they might help in the creation and dissemination of misinformation. Even if these AI are not used with malicious intent, the fact that their training data sets are drawn from the internet—which reflects majority thinking—means that they can perpetuate and amplify social inequality and hegemonic stereotypes and biases. On the other hand, if we consider what is missing from the training data, it is only natural that marginalized voices should be even more marginalized. In addition, leaving the issue of the socially vulnerable aside, LLMs can only be trained on digital data, meaning analog data is ignored. This is in line with the idea of “the destruction of history” put forth by Joseph Weizenbaum, an early critic who warned of the dangers of artificial intelligence. We then discussed the relationship between humans and machines and considered which relationships were problematic and which were desirable. Researchers in the aviation industry recognized the problem of automation bias from an early date, but this phenomenon can be seen in other areas of society as well. Put simply, if a human places too much trust in a machine, they abdicate their decision-making responsibility to that machine and thus fail to respond quickly to solve any problems that may arise should that machine malfunction. LLMs do not endanger lives in the same way that airplanes do, but a similar bias can be seen with them as well. A more important issue, though, is the fact that people are no longer seen as whole human beings but as computers. This tendency was evident long before the advent of computers, for example in the attempts to quantify human intelligence through IQ tests, but it is a problem we must be particularly wary of in the age of AI. Lastly, we considered means for college education to find its way in the present situation. Educators in the US in particular, while dealing with ChatGPT, have pinpointed not the LLMs themselves but the “transactional nature” of education as the problem. That is, they argue that education has long since become less a process of learning and more a transaction in which students receive grades and degrees. Given this transactional environment, it is no wonder that student would rely too much on ChatGPT. This over-reliance, however, comes with side effects: not learning how to think properly, a lack of sufficient academic information, and learning an AI-based writing style. In response, US educators have proposed both “stick” (strategies that make it difficult for students to use LLMs) and “carrot” (strategies that encourage students to learn like human beings, not algorithms) solutions, but the heart of the matter seems to be a sense of responsibility. 
Creating an educational environment in which students can develop a sense of responsibility for themselves is the path forward for education in the age of AI. If we do this, LLMs can become a useful tool rather than an enemy to fear.","PeriodicalId":202851,"journal":{"name":"The Society Of Korean Literature","volume":"121 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Society Of Korean Literature","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.52723/jkl.48.007","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The release of ChatGPT to the public at the end of last year had many in the field of education worried. In response, this paper explored the future of college education and artificial intelligence (AI). First, a proper understanding of how large language models (LLMs) “train” and “learn,” along with their abilities and limitations, was established. Simply put, while LLMs produce plausible linguistic output, they are “stochastic parrots” that have no actual understanding of language.

Next, we examined the dangers of generative AI and found that such systems can aid in the creation and dissemination of misinformation. Even if these AI systems are not used with malicious intent, the fact that their training data sets are drawn from the internet—which reflects majority thinking—means that they can perpetuate and amplify social inequality and hegemonic stereotypes and biases. On the other hand, if we consider what is missing from the training data, it is only natural that marginalized voices should become even more marginalized. In addition, leaving the issue of the socially vulnerable aside, LLMs can only be trained on digital data, meaning that analog data is ignored. This is in line with the idea of “the destruction of history” put forth by Joseph Weizenbaum, an early critic who warned of the dangers of artificial intelligence.

We then discussed the relationship between humans and machines and considered which relationships are problematic and which are desirable. Researchers in the aviation industry recognized the problem of automation bias early on, but the phenomenon can be seen in other areas of society as well. Put simply, when humans place too much trust in a machine, they abdicate their decision-making responsibility to it and thus fail to respond quickly to any problems that arise should the machine malfunction. LLMs do not endanger lives in the way that airplanes do, but a similar bias can be seen with them as well. A more important issue, though, is that people are no longer seen as whole human beings but as computers. This tendency was evident long before the advent of computers, for example in the attempts to quantify human intelligence through IQ tests, but it is a problem we must be particularly wary of in the age of AI.

Lastly, we considered how college education might find its way in the present situation. Educators in the US in particular, in dealing with ChatGPT, have pinpointed not the LLMs themselves but the “transactional nature” of education as the problem. That is, they argue that education has long since become less a process of learning and more a transaction in which students receive grades and degrees. Given this transactional environment, it is no wonder that students would rely too much on ChatGPT. This over-reliance, however, comes with side effects: students fail to learn how to think properly, lack sufficient academic information, and pick up an AI-based writing style. In response, US educators have proposed both “stick” solutions (strategies that make it difficult for students to use LLMs) and “carrot” solutions (strategies that encourage students to learn like human beings, not algorithms), but the heart of the matter seems to be a sense of responsibility. Creating an educational environment in which students can develop a sense of responsibility for themselves is the path forward for education in the age of AI. If we do this, LLMs can become a useful tool rather than an enemy to fear.