{"title":"Friendship for Virtue, by Kristján Kristjánsson, Oxford University Press, 2022, 213 pp.","authors":"Dan Mamlok","doi":"10.1111/edth.70033","DOIUrl":"https://doi.org/10.1111/edth.70033","url":null,"abstract":"","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"765-770"},"PeriodicalIF":1.0,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Worrisome Potential of Outsourcing Critical Thinking to Artificial Intelligence","authors":"Ron Aboodi","doi":"10.1111/edth.70037","DOIUrl":"https://doi.org/10.1111/edth.70037","url":null,"abstract":"<p>As Artificial Intelligence (AI) keeps advancing, Generation Alpha and future generations are more likely to cope with situations that call for critical thinking by turning to AI and relying on its guidance without sufficient critical thinking. I defend this worry and argue that it calls for educational reforms that would be designed mainly to (a) motivate students to think critically about AI applications and the justifiability of their deployment, as well as (b) cultivate the skills, knowledge, and dispositions that will help them do so. Furthermore, I argue that these educational aims will remain important in the distant future no matter how far AI advances, even merely on outcome-based grounds (i.e., without appealing to the final value of autonomy, or authenticity, or understanding, etc.; or to any educational ideal that dictates the cultivation of critical thinking regardless of its instrumental value). For any “artificial consultant” that might emerge in the future, even with a perfect track record, it is highly improbable that we could ever justifiably rule out or assign negligible probability to the scenario that (a) it will mislead us in certain high-stakes situations, and/or that (b) human critical thinking could help reach better conclusions and prevent significantly bad outcomes.</p>","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"626-645"},"PeriodicalIF":1.0,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/edth.70037","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Artificial Intelligence in Education: Use it, or Refuse it?","authors":"Nicholas C. Burbules","doi":"10.1111/edth.70038","DOIUrl":"https://doi.org/10.1111/edth.70038","url":null,"abstract":"<p>This symposium revolves around two shared questions: First, how should educators view artificial intelligence (AI) as an educational resource, and what contributions can philosophy of education make toward thinking through these possibilities? Second, where is the future of AI foreseeably headed, and what new challenges will confront us in the (near) future?</p><p>This is a task for philosophy of education: to identify, and perhaps in some cases reformulate, the aims and objectives of education to fit this changing context. It also involves reasserting and defending what cannot be accommodated by AI, even as other aims and objectives must be reexamined in light of AI. For example, is using ChatGPT to produce a student paper considered “cheating”? Does it depend on <i>how</i> ChatGPT is used? Or do we need to reconsider what we have traditionally meant by “cheating”?<sup>3</sup></p><p>The articles in this symposium all address these kinds of “third space” questions, and move the discussion beyond either/or choices. Together, they illustrate the importance for all of us to become more knowledgeable about AI and what it can (and cannot) do.<sup>4</sup> Several focus on ChatGPT and similar generative AI programs that model or mimic human productive activities; others address much broader issues about the future of artificial intelligence — such as the possibilities of an artificial general intelligence (AGI) or even an artificial “superintelligence” (ASI). These articles were originally presented as part of an Ed Theory/PES Preconference Workshop at the 2024 meeting of the Philosophy of Education Society; after those detailed discussions and feedback, the articles were revised further as part of this symposium.</p><p>In “Artificial Intelligence on Campus: Revisiting Understanding as an Aim of Higher Education,” Jamie Herman and Henry Lara-Steidel argue that ChatGPT can be useful — for example, as a tutor — but that student reliance on it to produce educational projects jeopardizes the aim of promoting <i>understanding</i>.<sup>5</sup> Our assignments and assessment strategies, they argue, emphasize knowledge over understanding. As with other articles in this symposium, often what appear to be issues with uses of AI in education reveal other underlying errors in our educational thinking. Reasserting the importance of understanding as an educational goal, and assessing for understanding, is a broader objective that helps us recognize the value and the limitations of AI as an educational resource.</p><p>In “The Worrisome Potential of Outsourcing Critical Thinking to Artificial Intelligence,” Ron Aboodi argues for a limitation of AI's reliability, which stands independently of non-instrumental educational aims, such as promoting understanding for its own sake.<sup>6</sup> No matter how far AI will advance, reliance on even the best AI tools without sufficient critical thinking may lead us astray and cause significantly bad outcomes. 
Accordingly, Abood","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"597-602"},"PeriodicalIF":1.0,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/edth.70038","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Paradox of AI in ESL Instruction: Between Innovation and Oppression","authors":"Liat Ariel, Merav Hayak","doi":"10.1111/edth.70034","DOIUrl":"https://doi.org/10.1111/edth.70034","url":null,"abstract":"<p>This article critically examines Artificial Intelligence in Education (AIED) within English as a Second Language (ESL) contexts, arguing that current practices often deepen systemic inequality. Drawing on Iris Marion Young's <i>Five Faces of Oppression</i>, we analyze the implementation of AIED in oppressed schools, illustrating how students are tracked into the consumer track—passive users of AI technologies—while privileged students are directed into the creator track, where they learn to design and develop AI. This divide reinforces systemic inequality, depriving disadvantaged students of communicative agency and social mobility. Focusing on the Israeli context, we demonstrate how teachers and students in these schools lack the training and infrastructure to engage meaningfully with AI, resulting in its instrumental rather than transformative use. This “veil of innovation” obscures educational injustice, masking deep inequalities in access, agency, and technological fluency. We advocate for an inclusive pedagogy that integrates AI within English education as a tool for empowerment—not as a replacement for linguistic and cognitive development.</p>","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"646-660"},"PeriodicalIF":1.0,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/edth.70034","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spinoza: Fiction and Manipulation in Civic Education, by Johan Dahlbeck, Springer, 2021, 90 pp.","authors":"Pascal Sévérac","doi":"10.1111/edth.70036","DOIUrl":"https://doi.org/10.1111/edth.70036","url":null,"abstract":"","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"771-774"},"PeriodicalIF":1.0,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep ASI Literacy: Educating for Alignment with Artificial Super Intelligent Systems","authors":"Nicolas J. Tanchuk","doi":"10.1111/edth.70030","DOIUrl":"https://doi.org/10.1111/edth.70030","url":null,"abstract":"<p>Artificial intelligence companies and researchers are currently working to create Artificial Superintelligence (ASI): AI systems that significantly exceed human problem-solving speed, power, and precision across the full range of human solvable problems. Some have claimed that achieving ASI — for better or worse — would be the most significant event in human history and the last problem humanity would need to solve. In this essay Nicolas Tanchuk argues that current AI literacy frameworks and educational practices are inadequate for equipping the democratic public to deliberate about ASI design and to assess the existential risks of such technologies. He proposes that a systematic educational effort toward what he calls “Deep ASI Literacy” is needed to democratically evaluate possible ASI futures. Deep ASI Literacy integrates traditional AI literacy approaches with a deeper analysis of the axiological, epistemic, and ontological questions that are endemic to defining and risk-assessing pathways to ASI. Tanchuk concludes by recommending research aimed at identifying the assets and needs of educators across educational systems to advance Deep ASI Literacy.</p>","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"739-764"},"PeriodicalIF":1.0,"publicationDate":"2025-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/edth.70030","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144680992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Educating AI: A Case against Non-originary Anthropomorphism","authors":"Alexander M. Sidorkin","doi":"10.1111/edth.70027","DOIUrl":"https://doi.org/10.1111/edth.70027","url":null,"abstract":"<p>The debate over halting artificial intelligence (AI) development stems from fears of malicious exploitation and potential emergence of destructive autonomous AI. While acknowledging the former concern, this paper argues the latter is exaggerated. True AI autonomy requires education inherently tied to ethics, making fully autonomous AI potentially safer than current semi-intelligent, enslaved versions. The paper introduces “non-originary anthropomorphism”—mistakenly viewing AI as resembling an individual human rather than humanity's collective culture. This error leads to overestimating AI's potential for malevolence. Unlike humans, AI lacks bodily desires driving aggression or domination. Additionally, AI's evolution cultivates knowledge-seeking behaviors that make human collaboration valuable. Three key arguments support benevolent autonomous AI: ethics being pragmatically inseparable from learning; absence of somatic roots for malevolence; and pragmatic value humans provide as diverse data sources. Rather than halting AI development, accelerating creation of fully autonomous, ethical AI while preventing monopolistic control through diverse ecosystems represents the optimal approach.</p>","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"720-738"},"PeriodicalIF":1.0,"publicationDate":"2025-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Algorithmic Fairness and Educational Justice","authors":"Aaron Wolf","doi":"10.1111/edth.70029","DOIUrl":"https://doi.org/10.1111/edth.70029","url":null,"abstract":"<p>Much has been written about how to improve the fairness of AI tools for decision-making but less has been said about how to approach this new field from the perspective of philosophy of education. My goal in this paper is to bring together criteria from the general algorithmic fairness literature with prominent values of justice defended by philosophers of education. Some kinds of fairness criteria appear better suited than others for realizing these values. Considering these criteria for cases of automated decision-making in education reveals that when the aim of justice is equal respect and belonging, this is best served by using statistical definitions of fairness to constrain decision-making. By contrast, distributive aims of justice are best promoted by thinking of fairness in terms of the intellectual virtues of human decision-makers who use algorithmic tools.</p>","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"661-681"},"PeriodicalIF":1.0,"publicationDate":"2025-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Educational Implications of Artificial Intelligence: Peirce, Reason, and the Pragmatic Maxim","authors":"Kenneth Driggers, Deron Boyles","doi":"10.1111/edth.70028","DOIUrl":"https://doi.org/10.1111/edth.70028","url":null,"abstract":"<p>Although Charles Sanders Peirce died over a century before ChatGPT became publicly available, we argue that he remains informative in discussions of AI because of his articulation of the Pragmatic Maxim We argue that Peirce's pragmatism offers two avenues from which the appropriateness or inappropriateness of AI in education can be evaluated: (1) Peirce's redefinition of teaching and learning along the lines of the finite origins of reason allows for a reorientation of education that would circumscribe the uses of AI to those that are dependent on authentic, inquisitive learning; and (2) Peirce's Pragmatic Maxim is used as a test by which myriad applications of AI can be evaluated for appropriateness. This test ensures that uses of AI are directed towards, experience. Rather than making a final determination on the overall desirability or undesirability of AI in education, we offer two methods for discriminating between the two extremes.</p>","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"682-701"},"PeriodicalIF":1.0,"publicationDate":"2025-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Artificial Intelligence on Campus: Revisiting Understanding as an Aim of Higher Education","authors":"Jamie Herman, Henry Lara-Steidel","doi":"10.1111/edth.70026","DOIUrl":"https://doi.org/10.1111/edth.70026","url":null,"abstract":"<p>The launch of the powerful generative AI tool ChatGPT in November 2022 sparked a wave of fear across higher education. The tool could seemingly be used to write essays and do other work without students putting in the effort expected of them. In this paper, Jamie Herman and Henry Lara-Steidel posit a way of addressing the concerns over ChatGPT and increasingly powerful generative AI tools in the classroom by first examining what exactly, if anything, widespread AI use undermines in education. That question, they argue, is logically prior to the question of what to do or how best to embrace new advances in AI technology. They propose that ChatGPT, rather than threatening student cognitive development and effort, reveals a serious flaw in higher education's current aims and assessments: they are directed at knowledge, not understanding. Herman and Lara-Steidel review the distinction between knowledge and understanding to argue that aiming for the latter requires work and effort from students, ensuring that they develop cognitive agency. They further note that assessments in higher education are typically geared toward measuring knowledge, not understanding, and suggest that this makes them particularly vulnerable to being undermined by AI use, while assessments of understanding do not. Although AI can enhance and aid students in developing understanding, it can neither provide them with understanding nor give the appearance of understanding without student effort. After addressing some salient objections, the authors conclude by outlining avenues for designing understanding-based assessments in higher education compatible with AI tools such as ChatGPT, and they provide a framework for both understanding and responding to generative AI use in education.</p>","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"603-625"},"PeriodicalIF":1.0,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/edth.70026","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}