Gnostic Undercurrents in Our Avatar Culture

Fiachra Long
{"title":"Gnostic Undercurrents in Our Avatar Culture","authors":"Fiachra Long","doi":"10.1353/stu.2023.a911716","DOIUrl":null,"url":null,"abstract":"Gnostic Undercurrents in Our Avatar Culture Fiachra Long (bio) We are sometimes attracted by a striking, colourful and convenient initiative, but like the apparent bargain that flatters to deceive, or the colourful mushroom that turns out to be poisonous, some level of discretion is advised. The emergence of ChatGPT as the lead Artificial Intelligence platform is striking, colourful and convenient, but a high level of discretion is urgently advised. Apparently advanced algorithms can master more facts and possible connections than the human brain, and so there is a temptation to hand over human decisions to these platforms, thus relegating the importance of human experience as a litmus-test of wise action. This concern may seem trivial when you are looking for the best deal on a hotel room. It may even seem reactionary when computer power promises rapid analysis of medical conditions and an accelerated production of cures. The concern now is that technology might push an increasing number of situations beyond the scope of human judgement. This poses the question: should certain choices be reserved to humans and not handed over to machines or are we moving inexorably to a stage where important decisions and choices are moving out of biology into the digital sphere? The computer challenge ChatGPT is a Large Language Model (LLM) Conversational Agent that can use conversational language to interact with its user either by text-input or speech-input (such as Alexa or Siri). Generative AI systems can appear to 'think' by linking input terms to many tags or tokens that 'suggest' themselves in response to the inputted spoken words or text. These responses are drawn from a vast number of word strings based on probability. Responses are likely to be plausible but, governed by currently trained programmes, sometimes wide of the mark and false. ChatGPT generates predictions based on the data available up to 2021. Developers are working to reduce machine 'hallucination' as far as possible. OpenAI has warned that students using [End Page 371] ChatGPT who presume accuracy of detail in ChatGPT results would need to check them carefully1 and this same advice is given in other reports.2 It is like having several spellings presented in a spell-check. These failings, however, are likely to be short-lived. ChatGPT was launched by OpenAI on 30 November 2022, free to all, and within two months had 100 million monthly users (Hu, 2023 as referenced in Gimpel et al.). Subscription versions such GBT-3.5 could manage 4000 tokens while GBT-4 (March 2023) already can manage 32000 tokens. These impressive advances in a few months look likely to accelerate. In the meantime, scientists need to be more careful in their use of psychological descriptors. Too many speak of machines being 'conscious' and 'thinking'. Indeed William Reville has written of one researcher, Blake Lemoine, who believed that his own Lamda AI was not only sentient but a persona worthy of legal rights.3 These fears may be wild and exaggerated, but in May 2023, Dr Geoffrey Hinton quit his lead researcher role on the ChatGPT project with Google because of worry about its misuse by malevolent actors. Other researchers expressed similar concerns, explaining that the field of AI development resembled an open-source scramble rather than a carefully choreographed process. 
'Pause the research', went the general cry. Pause until neuroscientists have time to assess what is happening. However judging by the deceit evident in the public sphere, not only in Russia's Ukraine policy, but in the fake news norm undermining the media in many places, it is unlikely that this appeal will be heeded. Unlikely too that malevolent players are not already involved. Meanwhile it is unclear whether young people would prefer a biological to a digital future since the former seems vulnerable and conflicted whereas the latter promises a form of immortality. Wikipedia tells us that 'an avatar is a graphical representation of a user or the user's character or persona'. The human imagination is toying with two kinds of avatar, two ways of imagining how human beings can interact with computers or 'conversational agents'. Leaving aside the issue of knowledge for the moment, I want to concentrate here...","PeriodicalId":488847,"journal":{"name":"Studies An Irish Quarterly Review","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Studies An Irish Quarterly Review","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1353/stu.2023.a911716","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

We are sometimes attracted by a striking, colourful and convenient initiative, but like the apparent bargain that flatters to deceive, or the colourful mushroom that turns out to be poisonous, some level of discretion is advised. The emergence of ChatGPT as the leading Artificial Intelligence platform is striking, colourful and convenient, but a high level of discretion is urgently advised. Apparently advanced algorithms can master more facts and possible connections than the human brain, and so there is a temptation to hand over human decisions to these platforms, thus diminishing the importance of human experience as a litmus test of wise action. This concern may seem trivial when you are looking for the best deal on a hotel room. It may even seem reactionary when computer power promises rapid analysis of medical conditions and an accelerated production of cures. The concern now is that technology might push an increasing number of situations beyond the scope of human judgement. This poses the question: should certain choices be reserved to humans and not handed over to machines, or are we moving inexorably to a stage where important decisions and choices are moving out of biology into the digital sphere?

The computer challenge

ChatGPT is a Large Language Model (LLM) conversational agent that can use conversational language to interact with its user by either text input or speech input (as with Alexa or Siri). Generative AI systems can appear to 'think' by linking input terms to many tags or tokens that 'suggest' themselves in response to the inputted spoken words or text. These responses are drawn from a vast number of word strings based on probability (a toy sketch of this sampling idea follows this excerpt). Responses are likely to be plausible but, given current training, sometimes wide of the mark and false. ChatGPT generates predictions based on the data available up to 2021. Developers are working to reduce machine 'hallucination' as far as possible. OpenAI has warned that students who presume accuracy of detail in ChatGPT results need to check them carefully,1 and the same advice is given in other reports.2 It is like having several spellings presented in a spell-check. These failings, however, are likely to be short-lived.

ChatGPT was launched by OpenAI on 30 November 2022, free to all, and within two months it had 100 million monthly users (Hu, 2023, as referenced in Gimpel et al.). Subscription versions such as GPT-3.5 could manage 4,000 tokens, while GPT-4 (March 2023) can already manage 32,000 tokens. These impressive advances in a few months look likely to accelerate.

In the meantime, scientists need to be more careful in their use of psychological descriptors. Too many speak of machines being 'conscious' and 'thinking'. Indeed, William Reville has written of one researcher, Blake Lemoine, who believed that Google's LaMDA AI was not only sentient but a persona worthy of legal rights.3 These fears may be wild and exaggerated, but in May 2023 Dr Geoffrey Hinton quit his role as a lead AI researcher at Google because of worry about the technology's misuse by malevolent actors. Other researchers expressed similar concerns, explaining that the field of AI development resembled an open-source scramble rather than a carefully choreographed process. 'Pause the research', went the general cry. Pause until neuroscientists have time to assess what is happening.

However, judging by the deceit evident in the public sphere, not only in Russia's Ukraine policy but also in the fake-news norm undermining the media in many places, it is unlikely that this appeal will be heeded. It is unlikely, too, that malevolent players are not already involved. Meanwhile, it is unclear whether young people would prefer a biological to a digital future, since the former seems vulnerable and conflicted whereas the latter promises a form of immortality. Wikipedia tells us that 'an avatar is a graphical representation of a user or the user's character or persona'. The human imagination is toying with two kinds of avatar, two ways of imagining how human beings can interact with computers or 'conversational agents'. Leaving aside the issue of knowledge for the moment, I want to concentrate here...
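
The passage above describes generative systems as drawing each next word from probability over vast numbers of observed word strings. Purely as an illustrative aside (not part of Long's article), here is a minimal Python sketch of that sampling idea, using a crude bigram table in place of a real language model; the tiny corpus and all names are invented for illustration.

```python
# A toy sketch of probabilistic next-word generation: each continuation is
# sampled from observed word strings, far simpler than any real LLM.
import random
from collections import defaultdict

corpus = (
    "the model predicts the next word "
    "the model samples the next token "
    "the next word follows the last word"
).split()

# Count which word follows which (a bigram table: the crudest 'language model').
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(seed: str, length: int = 8) -> str:
    """Extend `seed` by repeatedly sampling a statistically likely next word."""
    words = [seed]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:  # dead end: no observed continuation
            break
        # random.choice over the list samples in proportion to observed frequency
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # fluent-looking output, with no check that it is true
```

The output reads fluently because each step is statistically likely, yet nothing in the procedure checks for truth; that gap is, in miniature, the 'hallucination' problem the essay mentions.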