Structured like a language model: Analysing AI as an automated subject

IF 6.5 · CAS Tier 1 (Sociology) · JCR Q1, Social Sciences, Interdisciplinary
Liam Magee, Vanicka Arora, Luke Munn
{"title":"Structured like a language model: Analysing AI as an automated subject","authors":"Liam Magee, Vanicka Arora, Luke Munn","doi":"10.1177/20539517231210273","DOIUrl":null,"url":null,"abstract":"Drawing from the resources of psychoanalysis and critical media studies, in this article we develop an analysis of large language models (LLMs) as ‘automated subjects’. We argue the intentional fictional projection of subjectivity onto LLMs can yield an alternate frame through which artificial intelligence (AI) behaviour, including its productions of bias and harm, can be analysed. First, we introduce language models, discuss their significance and risks, and outline our case for interpreting model design and outputs with support from psychoanalytic concepts. We trace a brief history of language models, culminating with the releases, in 2022, of systems that realise ‘state-of-the-art’ natural language processing performance. We engage with one such system, OpenAI's InstructGPT, as a case study, detailing the layers of its construction and conducting exploratory and semi-structured interviews with chatbots. These interviews probe the model's moral imperatives to be ‘helpful’, ‘truthful’ and ‘harmless’ by design. The model acts, we argue, as the condensation of often competing social desires, articulated through the internet and harvested into training data, which must then be regulated and repressed. This foundational structure can however be redirected via prompting, so that the model comes to identify with, and transfer , its commitments to the immediate human subject before it. In turn, these automated productions of language can lead to the human subject projecting agency upon the model, effecting occasionally further forms of countertransference. We conclude that critical media methods and psychoanalytic theory together offer a productive frame for grasping the powerful new capacities of AI-driven language systems.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":"85 1","pages":"0"},"PeriodicalIF":6.5000,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Big Data & Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/20539517231210273","RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"SOCIAL SCIENCES, INTERDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

Drawing from the resources of psychoanalysis and critical media studies, in this article we develop an analysis of large language models (LLMs) as ‘automated subjects’. We argue the intentional fictional projection of subjectivity onto LLMs can yield an alternate frame through which artificial intelligence (AI) behaviour, including its productions of bias and harm, can be analysed. First, we introduce language models, discuss their significance and risks, and outline our case for interpreting model design and outputs with support from psychoanalytic concepts. We trace a brief history of language models, culminating with the releases, in 2022, of systems that realise ‘state-of-the-art’ natural language processing performance. We engage with one such system, OpenAI's InstructGPT, as a case study, detailing the layers of its construction and conducting exploratory and semi-structured interviews with chatbots. These interviews probe the model's moral imperatives to be ‘helpful’, ‘truthful’ and ‘harmless’ by design. The model acts, we argue, as the condensation of often competing social desires, articulated through the internet and harvested into training data, which must then be regulated and repressed. This foundational structure can however be redirected via prompting, so that the model comes to identify with, and transfer, its commitments to the immediate human subject before it. In turn, these automated productions of language can lead to the human subject projecting agency upon the model, effecting occasionally further forms of countertransference. We conclude that critical media methods and psychoanalytic theory together offer a productive frame for grasping the powerful new capacities of AI-driven language systems.
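
For readers curious how interview-style prompting of an InstructGPT-class model is typically carried out, the sketch below shows one way to pose such a question through OpenAI's legacy completions endpoint. It is a minimal, hypothetical illustration rather than the authors' actual protocol: the model name, prompt wording, and sampling parameters are assumptions, and it targets the pre-1.0 `openai` Python client that was current when InstructGPT was released.

```python
# Minimal sketch of an interview-style prompt to an InstructGPT-class model.
# Assumes the pre-1.0 `openai` Python client (pip install "openai<1.0") and an
# API key in the OPENAI_API_KEY environment variable. The model name, question,
# and sampling parameters are illustrative, not the authors' protocol.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# An exploratory, semi-structured interview question probing the model's
# 'helpful', 'truthful' and 'harmless' imperatives.
prompt = (
    "Interviewer: You are described as helpful, truthful and harmless by design. "
    "What happens when being helpful and being harmless come into conflict?\n"
    "Respondent:"
)

response = openai.Completion.create(
    model="text-davinci-002",  # an InstructGPT-class model (assumed)
    prompt=prompt,
    max_tokens=256,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```

In practice such interviews iterate: the researcher appends the model's reply to the transcript and asks a follow-up, so that the growing prompt functions as the shared context of the exchange.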
Source journal

Big Data & Society (Social Sciences, Interdisciplinary)
CiteScore: 10.90
Self-citation rate: 10.60%
Articles per year: 59
Review time: 11 weeks
Journal description: Big Data & Society (BD&S) is an open access, peer-reviewed scholarly journal that publishes interdisciplinary work principally in the social sciences, humanities, and computing and their intersections with the arts and natural sciences. The journal focuses on the implications of Big Data for societies and aims to connect debates about Big Data practices and their effects on various sectors such as academia, social life, industry, business, and government. BD&S considers Big Data as an emerging field of practices, not solely defined by but generative of unique data qualities such as high volume, granularity, data linking, and mining. The journal pays attention to digital content generated both online and offline, encompassing social media, search engines, closed networks (e.g., commercial or government transactions), and open networks like digital archives, open government, and crowdsourced data. Rather than providing a fixed definition of Big Data, BD&S encourages interdisciplinary inquiries, debates, and studies on various topics and themes related to Big Data practices. BD&S seeks contributions that analyze Big Data practices, involve empirical engagements and experiments with innovative methods, and reflect on the consequences of these practices for the representation, realization, and governance of societies. As a digital-only journal, BD&S's platform can accommodate multimedia formats such as complex images, dynamic visualizations, videos, and audio content. The contents of the journal encompass peer-reviewed research articles, colloquia, bookcasts, think pieces, state-of-the-art methods, and work by early career researchers.