{"title":"校园人工智能:重新审视高等教育的目标——理解","authors":"Jamie Herman, Henry Lara-Steidel","doi":"10.1111/edth.70026","DOIUrl":null,"url":null,"abstract":"<p>The launch of the powerful generative AI tool ChatGPT in November 2022 sparked a wave of fear across higher education. The tool could seemingly be used to write essays and do other work without students putting in the effort expected of them. In this paper, Jamie Herman and Henry Lara-Steidel posit a way of addressing the concerns over ChatGPT and increasingly powerful generative AI tools in the classroom by first examining what exactly, if anything, widespread AI use undermines in education. That question, they argue, is logically prior to the question of what to do or how best to embrace new advances in AI technology. They propose that ChatGPT, rather than threatening student cognitive development and effort, reveals a serious flaw in higher education's current aims and assessments: they are directed at knowledge, not understanding. Herman and Lara-Steidel review the distinction between knowledge and understanding to argue that aiming for the latter requires work and effort from students, ensuring that they develop cognitive agency. They further note that assessments in higher education are typically geared toward measuring knowledge, not understanding, and suggest that this makes them particularly vulnerable to being undermined by AI use, while assessments of understanding do not. Although AI can enhance and aid students in developing understanding, it can neither provide them with understanding nor give the appearance of understanding without student effort. 
After addressing some salient objections, the authors conclude by outlining avenues for designing understanding-based assessments in higher education compatible with AI tools such as ChatGPT, and they provide a framework for both understanding and responding to generative AI use in education.</p>","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"603-625"},"PeriodicalIF":0.9000,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/edth.70026","citationCount":"0","resultStr":"{\"title\":\"Artificial Intelligence on Campus: Revisiting Understanding as an Aim of Higher Education\",\"authors\":\"Jamie Herman, Henry Lara-Steidel\",\"doi\":\"10.1111/edth.70026\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The launch of the powerful generative AI tool ChatGPT in November 2022 sparked a wave of fear across higher education. The tool could seemingly be used to write essays and do other work without students putting in the effort expected of them. In this paper, Jamie Herman and Henry Lara-Steidel posit a way of addressing the concerns over ChatGPT and increasingly powerful generative AI tools in the classroom by first examining what exactly, if anything, widespread AI use undermines in education. That question, they argue, is logically prior to the question of what to do or how best to embrace new advances in AI technology. They propose that ChatGPT, rather than threatening student cognitive development and effort, reveals a serious flaw in higher education's current aims and assessments: they are directed at knowledge, not understanding. Herman and Lara-Steidel review the distinction between knowledge and understanding to argue that aiming for the latter requires work and effort from students, ensuring that they develop cognitive agency. 
They further note that assessments in higher education are typically geared toward measuring knowledge, not understanding, and suggest that this makes them particularly vulnerable to being undermined by AI use, while assessments of understanding do not. Although AI can enhance and aid students in developing understanding, it can neither provide them with understanding nor give the appearance of understanding without student effort. After addressing some salient objections, the authors conclude by outlining avenues for designing understanding-based assessments in higher education compatible with AI tools such as ChatGPT, and they provide a framework for both understanding and responding to generative AI use in education.</p>\",\"PeriodicalId\":47134,\"journal\":{\"name\":\"EDUCATIONAL THEORY\",\"volume\":\"75 4\",\"pages\":\"603-625\"},\"PeriodicalIF\":0.9000,\"publicationDate\":\"2025-05-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/edth.70026\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"EDUCATIONAL THEORY\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/edth.70026\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"EDUCATIONAL THEORY","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/edth.70026","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Artificial Intelligence on Campus: Revisiting Understanding as an Aim of Higher Education
The launch of the powerful generative AI tool ChatGPT in November 2022 sparked a wave of fear across higher education. The tool could seemingly be used to write essays and do other work without students putting in the effort expected of them. In this paper, Jamie Herman and Henry Lara-Steidel posit a way of addressing the concerns over ChatGPT and increasingly powerful generative AI tools in the classroom by first examining what exactly, if anything, widespread AI use undermines in education. That question, they argue, is logically prior to the question of what to do or how best to embrace new advances in AI technology. They propose that ChatGPT, rather than threatening student cognitive development and effort, reveals a serious flaw in higher education's current aims and assessments: they are directed at knowledge, not understanding. Herman and Lara-Steidel review the distinction between knowledge and understanding to argue that aiming for the latter requires work and effort from students, ensuring that they develop cognitive agency. They further note that assessments in higher education are typically geared toward measuring knowledge, not understanding, and suggest that this makes them particularly vulnerable to being undermined by AI use, whereas assessments of understanding are not. Although AI can aid and enhance students' development of understanding, it can neither provide them with understanding nor give the appearance of understanding without student effort. After addressing some salient objections, the authors conclude by outlining avenues for designing understanding-based assessments in higher education compatible with AI tools such as ChatGPT, and they provide a framework for both understanding and responding to generative AI use in education.
Journal Introduction:
The general purposes of Educational Theory are to foster the continuing development of educational theory and to encourage wide and effective discussion of theoretical problems within the educational profession. In order to achieve these purposes, the journal is devoted to publishing scholarly articles and studies in the foundations of education, and in related disciplines outside the field of education, which contribute to the advancement of educational theory. It is the policy of the sponsoring organizations to maintain the journal as an open channel of communication and as an open forum for discussion.