{"title":"Artificial Intelligence in Education: Use it, or Refuse it?","authors":"Nicholas C. Burbules","doi":"10.1111/edth.70038","DOIUrl":null,"url":null,"abstract":"<p>This symposium revolves around two shared questions: First, how should educators view artificial intelligence (AI) as an educational resource, and what contributions can philosophy of education make toward thinking through these possibilities? Second, where is the future of AI foreseeably headed, and what new challenges will confront us in the (near) future?</p><p>This is a task for philosophy of education: to identify, and perhaps in some cases reformulate, the aims and objectives of education to fit this changing context. It also involves reasserting and defending what cannot be accommodated by AI, even as other aims and objectives must be reexamined in light of AI. For example, is using ChatGPT to produce a student paper considered “cheating”? Does it depend on <i>how</i> ChatGPT is used? Or do we need to reconsider what we have traditionally meant by “cheating”?<sup>3</sup></p><p>The articles in this symposium all address these kinds of “third space” questions, and move the discussion beyond either/or choices. Together, they illustrate the importance for all of us to become more knowledgeable about AI and what it can (and cannot) do.<sup>4</sup> Several focus on ChatGPT and similar generative AI programs that model or mimic human productive activities; others address much broader issues about the future of artificial intelligence — such as the possibilities of an artificial general intelligence (AGI) or even an artificial “superintelligence” (ASI). These articles were originally presented as part of an Ed Theory/PES Preconference Workshop at the 2024 meeting of the Philosophy of Education Society; after those detailed discussions and feedback, the articles were revised further as part of this symposium.</p><p>In “Artificial Intelligence on Campus: Revisiting Understanding as an Aim of Higher Education,” Jamie Herman and Henry Lara-Steidel argue that ChatGPT can be useful — for example, as a tutor — but that student reliance on it to produce educational projects jeopardizes the aim of promoting <i>understanding</i>.<sup>5</sup> Our assignments and assessment strategies, they argue, emphasize knowledge over understanding. As with other articles in this symposium, often what appear to be issues with uses of AI in education reveal other underlying errors in our educational thinking. Reasserting the importance of understanding as an educational goal, and assessing for understanding, is a broader objective that helps us recognize the value and the limitations of AI as an educational resource.</p><p>In “The Worrisome Potential of Outsourcing Critical Thinking to Artificial Intelligence,” Ron Aboodi argues for a limitation of AI's reliability, which stands independently of non-instrumental educational aims, such as promoting understanding for its own sake.<sup>6</sup> No matter how far AI will advance, reliance on even the best AI tools without sufficient critical thinking may lead us astray and cause significantly bad outcomes. Accordingly, Aboodi advocates for educational reforms designed to motivate and help students to think critically about AI applications. 
At present, we have frequent examples of large language models (LLMs) asserting untrue information (for example, a recent US government report on public health produced with AI was found to include nonexistent studies and to seriously misinterpret others).<sup>7</sup> Aboodi suggests that asking students to critically assess misleading or inaccurate AI-generated responses can itself be a valuable critical thinking activity. He argues that incorporating such activities into the curriculum is urgent because current and future generations are more likely to “outsource” their critical thinking to AI.</p><p>In “The Paradox of AI in ESL Instruction: Between Innovation and Oppression,” Liat Ariel and Merav Hayak explore the uses of ChatGPT and similar programs in teaching English as a second language.<sup>8</sup> They distinguish projects in which students learn to use AI to create or produce text, from those in which students merely interact with AI as consumers; this difference produces a two-tiered tracking system that creates inequalities in their learning opportunities. Ariel and Hayak draw from Iris Young's “five faces of oppression” — exploitation, marginalization, powerlessness, violence, and cultural imperialism — to analyze the effects of this tracking. The paradox is to incorporate, not ban, programs like ChatGPT while also being cognizant of these unjust effects.</p><p>In “Algorithmic Fairness and Educational Justice,” Aaron Wolf examines the use of AI for automated decision-making in education — for example, in helping with school admissions.<sup>9</sup> Because this is a data-intensive operation, it generates statistical evidence that provides a basis for assessments of what he calls “algorithmic fairness,” which has two normative dimensions: the assessment of affective values, the attitudes expressed within social practices, and distributive values, the actual outcomes and effects of those practices. He cites as an example of this kind of assessment the well-known evaluation of the COMPAS program, used for bail, sentencing, and parole, which was found to be systematically biased by race. This more quantitative approach provides an interesting contrast to the critique of Ariel and Hayak.</p><p>In “Educational Implications of Artificial Intelligence: Peirce, Reason, and the Pragmatic Maxim,” Kenneth Driggers and Deron Boyles draw from C.S. Peirce's pragmatism to develop a way of thinking about where and how AI can be educationally productive.<sup>10</sup> There is nothing wrong from the pragmatic point of view, they argue, with the artificial synthesis of intelligence itself; all human intelligence is an imperfect, fallible attempt to make sense of experience. The point is how we index our conceptions and theories to experience, wherever they come from. 
Here Peirce's “pragmatic maxim” is helpful: “the entire intellectual purport of any symbol consists in the total of all general modes of rational conduct which, conditionally upon all the possible different circumstances and desires, would ensue upon the acceptance of the symbol.” Driggers and Boyles use Peirce's pragmatism to develop criteria for the educationally productive uses of programs like ChatGPT, and AI generally.</p><p>In “<i>Frankenstein, Emile,</i> ChatGPT: Educating AI between Natural Learning and Artificial Monsters,” Gideon Dishon examines the uses of “natural” and “artificial” in characterizing this thing we call “artificial intelligence.”<sup>11</sup> While the distinction may seem to be descriptive, Dishon shows how it also entails a number of normative judgments. He explores these terms in the context of three textual examples: Rousseau's classic, <i>Emile</i>; Mary Shelley's <i>Frankenstein</i>; and Kevin Roose's 2023 account of a dialogue he had with the AI agent in Bing. In these contexts, he concludes, the relationship between natural and artificial in the context of human learning, development, and interaction is best viewed as dialectical, not dichotomous.</p><p>In “Educating AI: A Case Against Non-Originary Anthropomorphism,” Alexander Sidorkin offers perhaps the most optimistic account of AI in education in this symposium.<sup>12</sup> He notes two recurring anxieties about AI — its capacity to promote misinformation, and its potential (some day) to develop into a conscious, autonomous, and self-interested entity.<sup>13</sup> Sidorkin thinks the latter concern is exaggerated; we should be more concerned about the risks of what he calls the currently “enslaved” AI. In fact, he argues, a fully autonomous AI would have to incorporate ethics as part of its overall orientation. Though written apart from each other, this article and Dishon's set up an interesting comparison and contrast.</p><p>In “Deep ASI Literacy: Educating for Alignment with Artificial Super Intelligent Systems,” Nicolas Tanchuk looks ahead to the development of superintelligent systems; AI that actually exceeds human intelligence.<sup>14</sup> This development would create numerous unprecedented challenges — challenges for which current approaches to AI literacy will prove inadequate. Instead, Tanchuk calls for what he terms “Deep ASI literacy,” an approach that takes seriously the need to rethink our terminology (is superintelligence just intelligence, but more of it — or a truly unique and emergent entity?); our views of knowledge (will it be possible for human intelligence to understand and assess the knowledge claims of a machine superintelligence?); and our ethics (will a superintelligence have an identity, or rights?). It is crucial, Tanchuk argues, to have these discussions now, before superintelligence becomes a reality.</p><p>It is amazing to see how quickly the artificial intelligence tsunami has come upon us. ChatGPT was launched in 2022 — until then, no one outside of technical fields knew what “generative AI” or “large language models” were. Suddenly, educators started realizing what a powerful resource this was for producing text, and that students were already using it for their assignments. We had debates about cheating and plagiarism, and many proposed banning the use of such programs — debates that, at times, seemed quaintly nostalgic and out of touch. 
As ChatGPT and similar programs have improved, they are starting to look like a valuable resource, and even many faculty are using them. As with the aforementioned Latham quote, the discussion has turned more and more into a recognition that <i>everything</i> about education, at all levels, will be influenced and reshaped by AI — for better or for worse (or for better <i>and</i> for worse).</p><p>We are not prepared for this future. Many of our own categories and ways of thinking as philosophers have not caught up with these new challenges. Most of us are still scrambling to understand the technical side of these issues — for example, what AI “tuning” means and why it is so important. Even given an understandable skepticism about hyperbolic claims for technology in education (remember when MOOCs were going to overturn all of higher education?), we must understand that this moment is different. We all have the sense that something is shifting under our feet, and we cannot afford to be in denial that it is a transient fad. Because it is such a fast-moving area of technology, all of our attempts to project or anticipate its consequences need to be constantly subject to revision. This symposium presents the work of an outstanding international group of scholars who are telling us, this rethinking must begin right now.</p>","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"597-602"},"PeriodicalIF":0.9000,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/edth.70038","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"EDUCATIONAL THEORY","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/edth.70038","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Abstract
This symposium revolves around two shared questions: First, how should educators view artificial intelligence (AI) as an educational resource, and what contributions can philosophy of education make toward thinking through these possibilities? Second, where is the future of AI foreseeably headed, and what new challenges will confront us in the (near) future?
This is a task for philosophy of education: to identify, and perhaps in some cases reformulate, the aims and objectives of education to fit this changing context. It also involves reasserting and defending what cannot be accommodated by AI, even as other aims and objectives must be reexamined in light of AI. For example, is using ChatGPT to produce a student paper considered “cheating”? Does it depend on how ChatGPT is used? Or do we need to reconsider what we have traditionally meant by “cheating”?3
The articles in this symposium all address these kinds of “third space” questions, and move the discussion beyond either/or choices. Together, they illustrate how important it is for all of us to become more knowledgeable about AI and what it can (and cannot) do.4 Several focus on ChatGPT and similar generative AI programs that model or mimic human productive activities; others address much broader issues about the future of artificial intelligence — such as the possibilities of an artificial general intelligence (AGI) or even an artificial “superintelligence” (ASI). These articles were originally presented as part of an Ed Theory/PES Preconference Workshop at the 2024 meeting of the Philosophy of Education Society; after those detailed discussions and feedback, the articles were revised further for this symposium.
In “Artificial Intelligence on Campus: Revisiting Understanding as an Aim of Higher Education,” Jamie Herman and Henry Lara-Steidel argue that ChatGPT can be useful — for example, as a tutor — but that student reliance on it to produce educational projects jeopardizes the aim of promoting understanding.5 Our assignments and assessment strategies, they argue, emphasize knowledge over understanding. As with other articles in this symposium, what appear to be issues with the use of AI in education often reveal underlying errors in our educational thinking. Reasserting the importance of understanding as an educational goal, and assessing for understanding, is a broader objective that helps us recognize both the value and the limitations of AI as an educational resource.
In “The Worrisome Potential of Outsourcing Critical Thinking to Artificial Intelligence,” Ron Aboodi argues that AI's reliability has limits, a concern that stands independently of non-instrumental educational aims such as promoting understanding for its own sake.6 No matter how far AI advances, reliance on even the best AI tools without sufficient critical thinking may lead us astray and cause seriously bad outcomes. Accordingly, Aboodi advocates for educational reforms designed to motivate and help students to think critically about AI applications. At present, we have frequent examples of large language models (LLMs) asserting untrue information (for example, a recent US government report on public health produced with AI was found to include nonexistent studies and to seriously misinterpret others).7 Aboodi suggests that asking students to critically assess misleading or inaccurate AI-generated responses can itself be a valuable critical thinking activity. He argues that incorporating such activities into the curriculum is urgent because current and future generations are increasingly likely to “outsource” their critical thinking to AI.
In “The Paradox of AI in ESL Instruction: Between Innovation and Oppression,” Liat Ariel and Merav Hayak explore the uses of ChatGPT and similar programs in teaching English as a second language.8 They distinguish projects in which students learn to use AI to create or produce text from those in which students merely interact with AI as consumers; this difference produces a two-tiered tracking system that creates inequalities in students' learning opportunities. Ariel and Hayak draw on Iris Young's “five faces of oppression” — exploitation, marginalization, powerlessness, violence, and cultural imperialism — to analyze the effects of this tracking. The paradox lies in incorporating, rather than banning, programs like ChatGPT while remaining cognizant of these unjust effects.
In “Algorithmic Fairness and Educational Justice,” Aaron Wolf examines the use of AI for automated decision-making in education — for example, in helping with school admissions.9 Because this is a data-intensive operation, it generates statistical evidence that provides a basis for assessments of what he calls “algorithmic fairness,” which has two normative dimensions: affective values (the attitudes expressed within social practices) and distributive values (the actual outcomes and effects of those practices). As an example of this kind of assessment, he cites the well-known evaluation of the COMPAS program, used in bail, sentencing, and parole decisions, which was found to be systematically biased by race. This more quantitative approach provides an interesting contrast to Ariel and Hayak's critique.
In “Educational Implications of Artificial Intelligence: Peirce, Reason, and the Pragmatic Maxim,” Kenneth Driggers and Deron Boyles draw from C.S. Peirce's pragmatism to develop a way of thinking about where and how AI can be educationally productive.10 There is nothing wrong from the pragmatic point of view, they argue, with the artificial synthesis of intelligence itself; all human intelligence is an imperfect, fallible attempt to make sense of experience. The point is how we index our conceptions and theories to experience, wherever they come from. Here Peirce's “pragmatic maxim” is helpful: “the entire intellectual purport of any symbol consists in the total of all general modes of rational conduct which, conditionally upon all the possible different circumstances and desires, would ensue upon the acceptance of the symbol.” Driggers and Boyles use Peirce's pragmatism to develop criteria for the educationally productive uses of programs like ChatGPT, and AI generally.
In “Frankenstein, Emile, ChatGPT: Educating AI between Natural Learning and Artificial Monsters,” Gideon Dishon examines the uses of “natural” and “artificial” in characterizing this thing we call “artificial intelligence.”11 While the distinction may seem to be descriptive, Dishon shows how it also entails a number of normative judgments. He explores these terms in the context of three textual examples: Rousseau's classic, Emile; Mary Shelley's Frankenstein; and Kevin Roose's 2023 account of a dialogue he had with the AI agent in Bing. In these contexts, he concludes, the relationship between natural and artificial in the context of human learning, development, and interaction is best viewed as dialectical, not dichotomous.
In “Educating AI: A Case Against Non-Originary Anthropomorphism,” Alexander Sidorkin offers perhaps the most optimistic account of AI in education in this symposium.12 He notes two recurring anxieties about AI — its capacity to promote misinformation, and its potential (someday) to develop into a conscious, autonomous, and self-interested entity.13 Sidorkin thinks the latter concern is exaggerated; we should be more concerned about the risks of what he calls the currently “enslaved” AI. In fact, he argues, a fully autonomous AI would have to incorporate ethics as part of its overall orientation. Though written independently of each other, this article and Dishon's set up an interesting comparison and contrast.
In “Deep ASI Literacy: Educating for Alignment with Artificial Super Intelligent Systems,” Nicolas Tanchuk looks ahead to the development of superintelligent systems: AI that actually exceeds human intelligence.14 This development would create numerous unprecedented challenges — challenges for which current approaches to AI literacy will prove inadequate. Instead, Tanchuk calls for what he terms “Deep ASI literacy,” an approach that takes seriously the need to rethink our terminology (is superintelligence just intelligence, but more of it — or a truly unique and emergent entity?); our views of knowledge (will it be possible for human intelligence to understand and assess the knowledge claims of a machine superintelligence?); and our ethics (will a superintelligence have an identity, or rights?). It is crucial, Tanchuk argues, to have these discussions now, before superintelligence becomes a reality.
It is amazing to see how quickly the artificial intelligence tsunami has come upon us. ChatGPT was launched in 2022 — until then, few outside of technical fields knew what “generative AI” or “large language models” were. Suddenly, educators started realizing what a powerful resource this was for producing text, and that students were already using it for their assignments. We had debates about cheating and plagiarism, and many proposed banning the use of such programs — debates that, at times, seemed quaintly nostalgic and out of touch. As ChatGPT and similar programs have improved, they have come to look like a valuable resource, and many faculty themselves are using them. As with the aforementioned Latham quote, the discussion has turned more and more into a recognition that everything about education, at all levels, will be influenced and reshaped by AI — for better or for worse (or for better and for worse).
We are not prepared for this future. Many of our own categories and ways of thinking as philosophers have not caught up with these new challenges. Most of us are still scrambling to understand the technical side of these issues — for example, what AI “tuning” means and why it is so important. Even given an understandable skepticism about hyperbolic claims for technology in education (remember when MOOCs were going to overturn all of higher education?), we must understand that this moment is different. We all have the sense that something is shifting under our feet, and we cannot afford to dismiss it as a transient fad. Because it is such a fast-moving area of technology, all of our attempts to project or anticipate its consequences must remain constantly subject to revision. This symposium presents the work of an outstanding international group of scholars who are telling us that this rethinking must begin right now.
Journal Introduction
The general purposes of Educational Theory are to foster the continuing development of educational theory and to encourage wide and effective discussion of theoretical problems within the educational profession. In order to achieve these purposes, the journal is devoted to publishing scholarly articles and studies in the foundations of education, and in related disciplines outside the field of education, which contribute to the advancement of educational theory. It is the policy of the sponsoring organizations to maintain the journal as an open channel of communication and as an open forum for discussion.