Taking Responsibility for Meaning and Mattering: An Agential Realist Approach to Generative AI and Literacy

Impact Factor: 3.9 · CAS Tier 1 (Education) · JCR Q1, EDUCATION & EDUCATIONAL RESEARCH
Priya C. Kumar, Kelley Cotter, L. Y. Cabrera
{"title":"Taking Responsibility for Meaning and Mattering: An Agential Realist Approach to Generative AI and Literacy","authors":"Priya C. Kumar, Kelley Cotter, L. Y. Cabrera","doi":"10.1002/rrq.570","DOIUrl":null,"url":null,"abstract":"Questions and concerns about artificial intelligence (AI) technologies in education reached a fever pitch with the arrival of publicly accessible, user‐facing generative AI systems, especially ChatGPT. Many of these issues will require regulation and collective action to address. But when it comes to generative AI and literacy, we argue that posthuman perspectives can help literacy scholars and practitioners reframe some concerns into questions that open new areas of inquiry. Agential realism in particular offers a useful perspective for exploring how generative AI matters in literacy practices, not as a unilaterally destructive force, but as a set of phenomena that intra‐actively reconfigures literacy practices. As a sociocultural (and as we argue, sociotechnical) practice, literacy arises out of the entanglement of bodies, spaces, contexts, positions, histories, and technologies. Generative AI is another in a long line of technologies that reconfigures literacy practices. In this article, we briefly explain how generative AI systems work, focusing on text‐based systems called Large Language Models (LLMs), and suggest ways that generative AI may reconfigure the sociocultural practice of literacy. We then offer three provocations to shift discussions about generative AI and literacy (1) from concerns about intentionality to questions of responsibility, (2) from concerns about authenticity to questions of mattering, and (3) from concerns about imitation to questions of multifarious communication. We conclude by encouraging literacy scholars and practitioners to draw inspiration from critical literacy efforts to discover what matters when it comes to generative AI and literacy.","PeriodicalId":48160,"journal":{"name":"Reading Research Quarterly","volume":null,"pages":null},"PeriodicalIF":3.9000,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Reading Research Quarterly","FirstCategoryId":"95","ListUrlMain":"https://doi.org/10.1002/rrq.570","RegionNum":1,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
引用次数: 0

Abstract

Questions and concerns about artificial intelligence (AI) technologies in education reached a fever pitch with the arrival of publicly accessible, user‐facing generative AI systems, especially ChatGPT. Many of these issues will require regulation and collective action to address. But when it comes to generative AI and literacy, we argue that posthuman perspectives can help literacy scholars and practitioners reframe some concerns into questions that open new areas of inquiry. Agential realism in particular offers a useful perspective for exploring how generative AI matters in literacy practices, not as a unilaterally destructive force, but as a set of phenomena that intra‐actively reconfigures literacy practices. As a sociocultural (and as we argue, sociotechnical) practice, literacy arises out of the entanglement of bodies, spaces, contexts, positions, histories, and technologies. Generative AI is another in a long line of technologies that reconfigures literacy practices. In this article, we briefly explain how generative AI systems work, focusing on text‐based systems called Large Language Models (LLMs), and suggest ways that generative AI may reconfigure the sociocultural practice of literacy. We then offer three provocations to shift discussions about generative AI and literacy: (1) from concerns about intentionality to questions of responsibility, (2) from concerns about authenticity to questions of mattering, and (3) from concerns about imitation to questions of multifarious communication. We conclude by encouraging literacy scholars and practitioners to draw inspiration from critical literacy efforts to discover what matters when it comes to generative AI and literacy.
Source Journal
CiteScore: 10.50
Self-citation rate: 4.80%
Articles published: 32
Journal description: For more than 40 years, Reading Research Quarterly has been essential reading for those committed to scholarship on literacy among learners of all ages. The leading research journal in the field, each issue of RRQ includes:
• Reports of important studies
• Multidisciplinary research
• Various modes of investigation
• Diverse viewpoints on literacy practices, teaching, and learning