{"title":"Gnostic Undercurrents in Our Avatar Culture","authors":"Fiachra Long","doi":"10.1353/stu.2023.a911716","DOIUrl":null,"url":null,"abstract":"Gnostic Undercurrents in Our Avatar Culture Fiachra Long (bio) We are sometimes attracted by a striking, colourful and convenient initiative, but like the apparent bargain that flatters to deceive, or the colourful mushroom that turns out to be poisonous, some level of discretion is advised. The emergence of ChatGPT as the lead Artificial Intelligence platform is striking, colourful and convenient, but a high level of discretion is urgently advised. Apparently advanced algorithms can master more facts and possible connections than the human brain, and so there is a temptation to hand over human decisions to these platforms, thus relegating the importance of human experience as a litmus-test of wise action. This concern may seem trivial when you are looking for the best deal on a hotel room. It may even seem reactionary when computer power promises rapid analysis of medical conditions and an accelerated production of cures. The concern now is that technology might push an increasing number of situations beyond the scope of human judgement. This poses the question: should certain choices be reserved to humans and not handed over to machines or are we moving inexorably to a stage where important decisions and choices are moving out of biology into the digital sphere? The computer challenge ChatGPT is a Large Language Model (LLM) Conversational Agent that can use conversational language to interact with its user either by text-input or speech-input (such as Alexa or Siri). Generative AI systems can appear to 'think' by linking input terms to many tags or tokens that 'suggest' themselves in response to the inputted spoken words or text. These responses are drawn from a vast number of word strings based on probability. Responses are likely to be plausible but, governed by currently trained programmes, sometimes wide of the mark and false. ChatGPT generates predictions based on the data available up to 2021. Developers are working to reduce machine 'hallucination' as far as possible. OpenAI has warned that students using [End Page 371] ChatGPT who presume accuracy of detail in ChatGPT results would need to check them carefully1 and this same advice is given in other reports.2 It is like having several spellings presented in a spell-check. These failings, however, are likely to be short-lived. ChatGPT was launched by OpenAI on 30 November 2022, free to all, and within two months had 100 million monthly users (Hu, 2023 as referenced in Gimpel et al.). Subscription versions such GBT-3.5 could manage 4000 tokens while GBT-4 (March 2023) already can manage 32000 tokens. These impressive advances in a few months look likely to accelerate. In the meantime, scientists need to be more careful in their use of psychological descriptors. Too many speak of machines being 'conscious' and 'thinking'. Indeed William Reville has written of one researcher, Blake Lemoine, who believed that his own Lamda AI was not only sentient but a persona worthy of legal rights.3 These fears may be wild and exaggerated, but in May 2023, Dr Geoffrey Hinton quit his lead researcher role on the ChatGPT project with Google because of worry about its misuse by malevolent actors. Other researchers expressed similar concerns, explaining that the field of AI development resembled an open-source scramble rather than a carefully choreographed process. 
'Pause the research', went the general cry. Pause until neuroscientists have time to assess what is happening. However judging by the deceit evident in the public sphere, not only in Russia's Ukraine policy, but in the fake news norm undermining the media in many places, it is unlikely that this appeal will be heeded. Unlikely too that malevolent players are not already involved. Meanwhile it is unclear whether young people would prefer a biological to a digital future since the former seems vulnerable and conflicted whereas the latter promises a form of immortality. Wikipedia tells us that 'an avatar is a graphical representation of a user or the user's character or persona'. The human imagination is toying with two kinds of avatar, two ways of imagining how human beings can interact with computers or 'conversational agents'. Leaving aside the issue of knowledge for the moment, I want to concentrate here...","PeriodicalId":488847,"journal":{"name":"Studies An Irish Quarterly Review","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Studies An Irish Quarterly Review","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1353/stu.2023.a911716","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
We are sometimes attracted by a striking, colourful and convenient initiative, but, like the apparent bargain that flatters to deceive or the colourful mushroom that turns out to be poisonous, some level of discretion is advised. The emergence of ChatGPT as the leading Artificial Intelligence platform is striking, colourful and convenient, but a high level of discretion is urgently advised. Apparently advanced algorithms can master more facts and possible connections than the human brain, and so there is a temptation to hand over human decisions to these platforms, thus relegating the importance of human experience as a litmus test of wise action. This concern may seem trivial when you are looking for the best deal on a hotel room. It may even seem reactionary when computer power promises rapid analysis of medical conditions and an accelerated production of cures. The concern now is that technology might push an increasing number of situations beyond the scope of human judgement. This poses the question: should certain choices be reserved to humans and not handed over to machines, or are we moving inexorably towards a stage where important decisions and choices pass out of biology into the digital sphere?

The computer challenge

ChatGPT is a Large Language Model (LLM) conversational agent that can use conversational language to interact with its user by either text input or speech input (as with Alexa or Siri). Generative AI systems can appear to 'think' by linking input terms to many tags or tokens that 'suggest' themselves in response to the inputted spoken words or text. These responses are drawn from a vast number of word strings on the basis of probability. Responses are likely to be plausible but, governed by the programmes on which they are currently trained, sometimes wide of the mark and false.
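To make the probabilistic mechanism just described concrete, here is a minimal Python sketch of next-token sampling. It is an illustration under invented assumptions, not ChatGPT's actual implementation: the tiny vocabulary and the probabilities in NEXT_TOKEN, and the generate helper, are made up for this sketch, whereas a real model learns distributions over tens of thousands of tokens from vast corpora.

```python
import random

# Toy "language model": for each word, a probability distribution over
# plausible next words. These words and numbers are invented purely for
# illustration; a real LLM learns such distributions from vast corpora.
NEXT_TOKEN = {
    "the":      {"cat": 0.40, "dog": 0.35, "mushroom": 0.25},
    "cat":      {"sat": 0.60, "ran": 0.40},
    "dog":      {"barked": 0.70, "slept": 0.30},
    "mushroom": {"glowed": 0.50, "spread": 0.50},
    "sat":      {"quietly": 1.00},
}

def generate(start: str, max_tokens: int = 4) -> str:
    """Extend `start` one token at a time, sampling by probability."""
    tokens = [start]
    for _ in range(max_tokens):
        dist = NEXT_TOKEN.get(tokens[-1])
        if dist is None:  # no learned continuation: stop
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat quietly"
```

Nothing in this loop consults the world: each step simply samples a statistically likely continuation, which is why the output reads as plausible yet can be wide of the mark, the 'hallucination' discussed next.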
ChatGPT generates predictions based on the data available up to 2021, and developers are working to reduce machine 'hallucination' as far as possible. OpenAI has warned that students who presume accuracy of detail in ChatGPT results need to check them carefully,1 and the same advice is given in other reports.2 It is like being offered several candidate spellings in a spell-check. These failings, however, are likely to be short-lived.

ChatGPT was launched by OpenAI on 30 November 2022, free to all, and within two months it had 100 million monthly users (Hu, 2023, as referenced in Gimpel et al.). Subscription versions such as GPT-3.5 could manage 4,000 tokens, while GPT-4 (March 2023) can already manage 32,000. These impressive advances within a few months look likely to accelerate.

In the meantime, scientists need to be more careful in their use of psychological descriptors. Too many speak of machines being 'conscious' and 'thinking'. Indeed, William Reville has written of one researcher, Blake Lemoine, who believed that his own LaMDA AI was not only sentient but a persona worthy of legal rights.3 These fears may be wild and exaggerated, but in May 2023 Dr Geoffrey Hinton quit his role as a lead AI researcher at Google because of worry about the technology's misuse by malevolent actors. Other researchers expressed similar concerns, explaining that the field of AI development resembled an open-source scramble rather than a carefully choreographed process. 'Pause the research', went the general cry. Pause until neuroscientists have time to assess what is happening.

However, judging by the deceit evident in the public sphere, not only in Russia's Ukraine policy but in the fake-news norm undermining the media in many places, it is unlikely that this appeal will be heeded. It is unlikely, too, that malevolent players are not already involved. Meanwhile it is unclear whether young people would prefer a biological to a digital future, since the former seems vulnerable and conflicted whereas the latter promises a form of immortality. Wikipedia tells us that 'an avatar is a graphical representation of a user or the user's character or persona'. The human imagination is toying with two kinds of avatar, two ways of imagining how human beings can interact with computers or 'conversational agents'. Leaving aside the issue of knowledge for the moment, I want to concentrate here...