Artificial Intelligence in Education: Use it, or Refuse it?

Nicholas C. Burbules
{"title":"Artificial Intelligence in Education: Use it, or Refuse it?","authors":"Nicholas C. Burbules","doi":"10.1111/edth.70038","DOIUrl":null,"url":null,"abstract":"<p>This symposium revolves around two shared questions: First, how should educators view artificial intelligence (AI) as an educational resource, and what contributions can philosophy of education make toward thinking through these possibilities? Second, where is the future of AI foreseeably headed, and what new challenges will confront us in the (near) future?</p><p>This is a task for philosophy of education: to identify, and perhaps in some cases reformulate, the aims and objectives of education to fit this changing context. It also involves reasserting and defending what cannot be accommodated by AI, even as other aims and objectives must be reexamined in light of AI. For example, is using ChatGPT to produce a student paper considered “cheating”? Does it depend on <i>how</i> ChatGPT is used? Or do we need to reconsider what we have traditionally meant by “cheating”?<sup>3</sup></p><p>The articles in this symposium all address these kinds of “third space” questions, and move the discussion beyond either/or choices. Together, they illustrate the importance for all of us to become more knowledgeable about AI and what it can (and cannot) do.<sup>4</sup> Several focus on ChatGPT and similar generative AI programs that model or mimic human productive activities; others address much broader issues about the future of artificial intelligence — such as the possibilities of an artificial general intelligence (AGI) or even an artificial “superintelligence” (ASI). These articles were originally presented as part of an Ed Theory/PES Preconference Workshop at the 2024 meeting of the Philosophy of Education Society; after those detailed discussions and feedback, the articles were revised further as part of this symposium.</p><p>In “Artificial Intelligence on Campus: Revisiting Understanding as an Aim of Higher Education,” Jamie Herman and Henry Lara-Steidel argue that ChatGPT can be useful — for example, as a tutor — but that student reliance on it to produce educational projects jeopardizes the aim of promoting <i>understanding</i>.<sup>5</sup> Our assignments and assessment strategies, they argue, emphasize knowledge over understanding. As with other articles in this symposium, often what appear to be issues with uses of AI in education reveal other underlying errors in our educational thinking. Reasserting the importance of understanding as an educational goal, and assessing for understanding, is a broader objective that helps us recognize the value and the limitations of AI as an educational resource.</p><p>In “The Worrisome Potential of Outsourcing Critical Thinking to Artificial Intelligence,” Ron Aboodi argues for a limitation of AI's reliability, which stands independently of non-instrumental educational aims, such as promoting understanding for its own sake.<sup>6</sup> No matter how far AI will advance, reliance on even the best AI tools without sufficient critical thinking may lead us astray and cause significantly bad outcomes. Accordingly, Aboodi advocates for educational reforms designed to motivate and help students to think critically about AI applications. 
At present, we have frequent examples of large language models (LLMs) asserting untrue information (for example, a recent US government report on public health produced with AI was found to include nonexistent studies and to seriously misinterpret others).<sup>7</sup> Aboodi suggests that asking students to critically assess misleading or inaccurate AI-generated responses can itself be a valuable critical thinking activity. He argues that incorporating such activities into the curriculum is urgent because current and future generations are more likely to “outsource” their critical thinking to AI.</p><p>In “The Paradox of AI in ESL Instruction: Between Innovation and Oppression,” Liat Ariel and Merav Hayak explore the uses of ChatGPT and similar programs in teaching English as a second language.<sup>8</sup> They distinguish projects in which students learn to use AI to create or produce text, from those in which students merely interact with AI as consumers; this difference produces a two-tiered tracking system that creates inequalities in their learning opportunities. Ariel and Hayak draw from Iris Young's “five faces of oppression” — exploitation, marginalization, powerlessness, violence, and cultural imperialism — to analyze the effects of this tracking. The paradox is to incorporate, not ban, programs like ChatGPT while also being cognizant of these unjust effects.</p><p>In “Algorithmic Fairness and Educational Justice,” Aaron Wolf examines the use of AI for automated decision-making in education — for example, in helping with school admissions.<sup>9</sup> Because this is a data-intensive operation, it generates statistical evidence that provides a basis for assessments of what he calls “algorithmic fairness,” which has two normative dimensions: the assessment of affective values, the attitudes expressed within social practices, and distributive values, the actual outcomes and effects of those practices. He cites as an example of this kind of assessment the well-known evaluation of the COMPAS program, used for bail, sentencing, and parole, which was found to be systematically biased by race. This more quantitative approach provides an interesting contrast to the critique of Ariel and Hayak.</p><p>In “Educational Implications of Artificial Intelligence: Peirce, Reason, and the Pragmatic Maxim,” Kenneth Driggers and Deron Boyles draw from C.S. Peirce's pragmatism to develop a way of thinking about where and how AI can be educationally productive.<sup>10</sup> There is nothing wrong from the pragmatic point of view, they argue, with the artificial synthesis of intelligence itself; all human intelligence is an imperfect, fallible attempt to make sense of experience. The point is how we index our conceptions and theories to experience, wherever they come from. 
Here Peirce's “pragmatic maxim” is helpful: “the entire intellectual purport of any symbol consists in the total of all general modes of rational conduct which, conditionally upon all the possible different circumstances and desires, would ensue upon the acceptance of the symbol.” Driggers and Boyles use Peirce's pragmatism to develop criteria for the educationally productive uses of programs like ChatGPT, and AI generally.</p><p>In “<i>Frankenstein, Emile,</i> ChatGPT: Educating AI between Natural Learning and Artificial Monsters,” Gideon Dishon examines the uses of “natural” and “artificial” in characterizing this thing we call “artificial intelligence.”<sup>11</sup> While the distinction may seem to be descriptive, Dishon shows how it also entails a number of normative judgments. He explores these terms in the context of three textual examples: Rousseau's classic, <i>Emile</i>; Mary Shelley's <i>Frankenstein</i>; and Kevin Roose's 2023 account of a dialogue he had with the AI agent in Bing. In these contexts, he concludes, the relationship between natural and artificial in the context of human learning, development, and interaction is best viewed as dialectical, not dichotomous.</p><p>In “Educating AI: A Case Against Non-Originary Anthropomorphism,” Alexander Sidorkin offers perhaps the most optimistic account of AI in education in this symposium.<sup>12</sup> He notes two recurring anxieties about AI — its capacity to promote misinformation, and its potential (some day) to develop into a conscious, autonomous, and self-interested entity.<sup>13</sup> Sidorkin thinks the latter concern is exaggerated; we should be more concerned about the risks of what he calls the currently “enslaved” AI. In fact, he argues, a fully autonomous AI would have to incorporate ethics as part of its overall orientation. Though written apart from each other, this article and Dishon's set up an interesting comparison and contrast.</p><p>In “Deep ASI Literacy: Educating for Alignment with Artificial Super Intelligent Systems,” Nicolas Tanchuk looks ahead to the development of superintelligent systems; AI that actually exceeds human intelligence.<sup>14</sup> This development would create numerous unprecedented challenges — challenges for which current approaches to AI literacy will prove inadequate. Instead, Tanchuk calls for what he terms “Deep ASI literacy,” an approach that takes seriously the need to rethink our terminology (is superintelligence just intelligence, but more of it — or a truly unique and emergent entity?); our views of knowledge (will it be possible for human intelligence to understand and assess the knowledge claims of a machine superintelligence?); and our ethics (will a superintelligence have an identity, or rights?). It is crucial, Tanchuk argues, to have these discussions now, before superintelligence becomes a reality.</p><p>It is amazing to see how quickly the artificial intelligence tsunami has come upon us. ChatGPT was launched in 2022 — until then, no one outside of technical fields knew what “generative AI” or “large language models” were. Suddenly, educators started realizing what a powerful resource this was for producing text, and that students were already using it for their assignments. We had debates about cheating and plagiarism, and many proposed banning the use of such programs — debates that, at times, seemed quaintly nostalgic and out of touch. 
As ChatGPT and similar programs have improved, they are starting to look like a valuable resource, and even many faculty are using them. As with the aforementioned Latham quote, the discussion has turned more and more into a recognition that <i>everything</i> about education, at all levels, will be influenced and reshaped by AI — for better or for worse (or for better <i>and</i> for worse).</p><p>We are not prepared for this future. Many of our own categories and ways of thinking as philosophers have not caught up with these new challenges. Most of us are still scrambling to understand the technical side of these issues — for example, what AI “tuning” means and why it is so important. Even given an understandable skepticism about hyperbolic claims for technology in education (remember when MOOCs were going to overturn all of higher education?), we must understand that this moment is different. We all have the sense that something is shifting under our feet, and we cannot afford to be in denial that it is a transient fad. Because it is such a fast-moving area of technology, all of our attempts to project or anticipate its consequences need to be constantly subject to revision. This symposium presents the work of an outstanding international group of scholars who are telling us, this rethinking must begin right now.</p>","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"597-602"},"PeriodicalIF":0.9000,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/edth.70038","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"EDUCATIONAL THEORY","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/edth.70038","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
引用次数: 0

Abstract

This symposium revolves around two shared questions: First, how should educators view artificial intelligence (AI) as an educational resource, and what contributions can philosophy of education make toward thinking through these possibilities? Second, where is the future of AI foreseeably headed, and what new challenges will confront us in the (near) future?

This is a task for philosophy of education: to identify, and perhaps in some cases reformulate, the aims and objectives of education to fit this changing context. It also involves reasserting and defending what cannot be accommodated by AI, even as other aims and objectives must be reexamined in light of AI. For example, is using ChatGPT to produce a student paper considered “cheating”? Does it depend on how ChatGPT is used? Or do we need to reconsider what we have traditionally meant by “cheating”?[3]

The articles in this symposium all address these kinds of “third space” questions, and move the discussion beyond either/or choices. Together, they illustrate how important it is for all of us to become more knowledgeable about AI and what it can (and cannot) do.[4] Several focus on ChatGPT and similar generative AI programs that model or mimic human productive activities; others address much broader issues about the future of artificial intelligence — such as the possibilities of an artificial general intelligence (AGI) or even an artificial “superintelligence” (ASI). These articles were originally presented as part of an Ed Theory/PES Preconference Workshop at the 2024 meeting of the Philosophy of Education Society; after those detailed discussions and feedback, the articles were revised further as part of this symposium.

In “Artificial Intelligence on Campus: Revisiting Understanding as an Aim of Higher Education,” Jamie Herman and Henry Lara-Steidel argue that ChatGPT can be useful — for example, as a tutor — but that student reliance on it to produce educational projects jeopardizes the aim of promoting understanding.[5] Our assignments and assessment strategies, they argue, emphasize knowledge over understanding. As with other articles in this symposium, what appear to be issues with the use of AI in education often reveal underlying errors in our educational thinking. Reasserting the importance of understanding as an educational goal, and assessing for understanding, is a broader objective that helps us recognize both the value and the limitations of AI as an educational resource.

In “The Worrisome Potential of Outsourcing Critical Thinking to Artificial Intelligence,” Ron Aboodi argues that AI's reliability has a limit that stands independent of non-instrumental educational aims, such as promoting understanding for its own sake.[6] No matter how far AI advances, relying on even the best AI tools without sufficient critical thinking may lead us astray and cause significantly bad outcomes. Accordingly, Aboodi advocates educational reforms designed to motivate and help students think critically about AI applications. At present, we have frequent examples of large language models (LLMs) asserting untrue information (for example, a recent US government report on public health produced with AI was found to cite nonexistent studies and to seriously misinterpret others).[7] Aboodi suggests that asking students to critically assess misleading or inaccurate AI-generated responses can itself be a valuable critical thinking activity. He argues that incorporating such activities into the curriculum is urgent because current and future generations are increasingly likely to “outsource” their critical thinking to AI.

In “The Paradox of AI in ESL Instruction: Between Innovation and Oppression,” Liat Ariel and Merav Hayak explore the uses of ChatGPT and similar programs in teaching English as a second language.[8] They distinguish projects in which students learn to use AI to create or produce text from those in which students merely interact with AI as consumers; this difference produces a two-tiered tracking system that creates inequalities in students' learning opportunities. Ariel and Hayak draw on Iris Young's “five faces of oppression” — exploitation, marginalization, powerlessness, violence, and cultural imperialism — to analyze the effects of this tracking. The paradox lies in incorporating, not banning, programs like ChatGPT while remaining cognizant of these unjust effects.

In “Algorithmic Fairness and Educational Justice,” Aaron Wolf examines the use of AI for automated decision-making in education — for example, in helping with school admissions.[9] Because this is a data-intensive operation, it generates statistical evidence that provides a basis for assessments of what he calls “algorithmic fairness,” which has two normative dimensions: affective values (the attitudes expressed within social practices) and distributive values (the actual outcomes and effects of those practices). He cites as an example of this kind of assessment the well-known evaluation of the COMPAS program, used in bail, sentencing, and parole decisions, which was found to be systematically biased by race. This more quantitative approach provides an interesting contrast to the critique offered by Ariel and Hayak.
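To make the distributive dimension concrete, here is a minimal sketch of one common algorithmic-fairness check: comparing false positive rates across groups, the kind of disparity reported in the COMPAS evaluation. It is not drawn from Wolf's article; the records and function below are hypothetical placeholders using synthetic data.

```python
# Illustrative only: synthetic records, not real COMPAS data.
# Each record: (group, predicted_high_risk, actually_reoffended)
from collections import defaultdict

records = [
    ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", False, True),
]

def false_positive_rate_by_group(records):
    """FPR per group: share of non-reoffenders flagged as high risk."""
    flagged = defaultdict(int)    # non-reoffenders flagged high-risk
    negatives = defaultdict(int)  # all non-reoffenders
    for group, predicted, actual in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

print(false_positive_rate_by_group(records))  # {'A': 0.5, 'B': 1.0}
```

Unequal false positive rates across groups are one distributive signal of the kind of systematic bias the COMPAS evaluation reported; the affective dimension, by contrast, resists this sort of direct measurement.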

In “Educational Implications of Artificial Intelligence: Peirce, Reason, and the Pragmatic Maxim,” Kenneth Driggers and Deron Boyles draw from C.S. Peirce's pragmatism to develop a way of thinking about where and how AI can be educationally productive.[10] There is nothing wrong from the pragmatic point of view, they argue, with the artificial synthesis of intelligence itself; all human intelligence is an imperfect, fallible attempt to make sense of experience. The point is how we index our conceptions and theories to experience, wherever they come from. Here Peirce's “pragmatic maxim” is helpful: “the entire intellectual purport of any symbol consists in the total of all general modes of rational conduct which, conditionally upon all the possible different circumstances and desires, would ensue upon the acceptance of the symbol.” Driggers and Boyles use Peirce's pragmatism to develop criteria for the educationally productive uses of programs like ChatGPT, and AI generally.

In “Frankenstein, Emile, ChatGPT: Educating AI between Natural Learning and Artificial Monsters,” Gideon Dishon examines the uses of “natural” and “artificial” in characterizing this thing we call “artificial intelligence.”[11] While the distinction may seem to be descriptive, Dishon shows how it also entails a number of normative judgments. He explores these terms through three textual examples: Rousseau's classic Emile; Mary Shelley's Frankenstein; and Kevin Roose's 2023 account of a dialogue he had with the AI agent in Bing. He concludes that, in human learning, development, and interaction, the relationship between natural and artificial is best viewed as dialectical, not dichotomous.

In “Educating AI: A Case Against Non-Originary Anthropomorphism,” Alexander Sidorkin offers perhaps the most optimistic account of AI in education in this symposium.[12] He notes two recurring anxieties about AI — its capacity to promote misinformation, and its potential (some day) to develop into a conscious, autonomous, and self-interested entity.[13] Sidorkin thinks the latter concern is exaggerated; we should be more concerned about the risks of what he calls the currently “enslaved” AI. In fact, he argues, a fully autonomous AI would have to incorporate ethics as part of its overall orientation. Though written independently of each other, this article and Dishon's set up an interesting comparison and contrast.

In “Deep ASI Literacy: Educating for Alignment with Artificial Super Intelligent Systems,” Nicolas Tanchuk looks ahead to the development of superintelligent systems: AI that actually exceeds human intelligence.[14] This development would create numerous unprecedented challenges — challenges for which current approaches to AI literacy will prove inadequate. Instead, Tanchuk calls for what he terms “Deep ASI literacy,” an approach that takes seriously the need to rethink our terminology (is superintelligence just intelligence, but more of it — or a truly unique and emergent entity?); our views of knowledge (will it be possible for human intelligence to understand and assess the knowledge claims of a machine superintelligence?); and our ethics (will a superintelligence have an identity, or rights?). It is crucial, Tanchuk argues, to have these discussions now, before superintelligence becomes a reality.

It is amazing to see how quickly the artificial intelligence tsunami has come upon us. ChatGPT was launched in 2022 — until then, no one outside of technical fields knew what “generative AI” or “large language models” were. Suddenly, educators started realizing what a powerful resource this was for producing text, and that students were already using it for their assignments. We had debates about cheating and plagiarism, and many proposed banning the use of such programs — debates that, at times, seemed quaintly nostalgic and out of touch. As ChatGPT and similar programs have improved, they are starting to look like a valuable resource, and even many faculty are using them. As with the aforementioned Latham quote, the discussion has turned more and more into a recognition that everything about education, at all levels, will be influenced and reshaped by AI — for better or for worse (or for better and for worse).

We are not prepared for this future. Many of our own categories and ways of thinking as philosophers have not caught up with these new challenges. Most of us are still scrambling to understand the technical side of these issues — for example, what AI “tuning” means and why it is so important. Even given an understandable skepticism about hyperbolic claims for technology in education (remember when MOOCs were going to overturn all of higher education?), we must understand that this moment is different. We all have the sense that something is shifting under our feet, and we cannot afford to dismiss it as a transient fad. Because it is such a fast-moving area of technology, all of our attempts to project or anticipate its consequences must remain constantly subject to revision. This symposium presents the work of an outstanding international group of scholars who are telling us that this rethinking must begin right now.
