Artificial intelligence, consciousness and psychiatry

Impact Factor: 73.3 · CAS Zone 1 (Medicine) · JCR Q1 (Medicine)
World Psychiatry · Publication date: 2024-09-16 · DOI: 10.1002/wps.21222
Giulio Tononi, Charles Raison
{"title":"Artificial intelligence, consciousness and psychiatry","authors":"Giulio Tononi, Charles Raison","doi":"10.1002/wps.21222","DOIUrl":null,"url":null,"abstract":"<p>In 1966, a researcher at the Massachusetts Institute of Technology introduced ELIZA, a computer program that simulated a psychotherapist in the Rogerian tradition, rephrasing a patient's words into questions according to simple but effective scripts. This was one of the first (and few) successes of early artificial intelligence (AI). To the dismay of its creator, some people took ELIZA for a real psychotherapist, perhaps because of our innate tendency to project consciousness when we detect intelligence, especially intelligent speech.</p>\n<p>ELIZA's stuttering attempt at AI has now become an immensely eloquent golem. ChatGPT can easily outspeak, outwrite and outperform S. Freud. Because large language models (LLM) benefit from superhuman lexicon, knowledge, memory and speed, artificial brains can now trump natural ones in most tasks.</p>\n<p>ELIZA was named after the flower-girl in G.B. Shaw's play Pygmalion, supposedly because it learned to improve its speech with practice. The original myth of Pygmalion – the sculptor who carved the ideal woman Galatea out of ivory and hoped to bring her to life – is even more apt: does the creation of AI portend artificial consciousness, perhaps even superhuman consciousness? Two camps are beginning to emerge, with radically different answers to this question.</p>\n<p>According to the dominant computational/functionalist stance in cognitive neuroscience, the answer is yes<span><sup>1</sup></span>. Cognitive neuroscience assumes that we are ultimately machines running sophisticated software (that can derail and be reprogrammed). Neural algorithms recognize objects and scenes, direct attention, hold items in working memory, and store them in long-term memory. Complex neural computations drive cognitive control, decision making, emotional reactions, social behaviors, and of course language. In this view, consciousness must be just another function, perhaps the global broadcasting of information<span><sup>2</sup></span> or the metacognitive assessment of sensory inputs<span><sup>3</sup></span>. In this case, whenever computers can reproduce the same functions as our brain, just implemented differently (the functionalists’ “multiple realizability”), they will be conscious like we are.</p>\n<p>Admittedly, despite LLMs sounding a lot like conscious humans nowadays, there is no principled way for determining whether they are already conscious and, if so, in which ways and to what degree<span><sup>1</sup></span>. Nor is it clear how we might establish whether they feel anything (just asking, we suspect, might not do…).</p>\n<p>Cognitive neuroscience typically takes the <i>extrinsic perspective</i>, introduced by Galileo, which has been immensely successful in much of science. From this perspective, consciousness is either a “user illusion”<span><sup>4</sup></span>, or a mysterious “emergent” property. However, as recognized long ago by Leibniz, this leaves experience – what we see, hear, think and feel – entirely unaccounted for. This implicit dualism is one that has plagued not just neuroscience, but also psychiatry from the very beginning: are we treating the brain, the psyche, or both? If so, how are they related? 
Is the soul just the brain's ephemeral passenger?</p>\n<p>Integrated information theory (IIT) provides a radically different approach<span><sup>5</sup></span>, and this is our own view. IIT takes the <i>intrinsic perspective</i>, starting not from the brain and what it does, but from consciousness and what it is. After all, for each of us, experience is what exists irrefutably, and the world is an inference from within experience – a good one, but still an inference, as psychiatrists should know well.</p>\n<p>IIT first characterizes the essential properties of consciousness – those that are irrefutably true of every conceivable experience – and then asks what is required to account for them in physical terms. Crucially, this leads to identifying an experience, in all its richness, with a <i>structure</i> (rather than with a process, a computation, or a function) – a structure that expresses the causal powers of a (neural) substrate in its current state. In fact, IIT provides a calculus for determining, at least in principle, whether a substrate is conscious, in which way, and to what degree.</p>\n<p>The theory can explain why certain parts of the brain can support consciousness, while others, such as the cerebellum and portions of prefrontal cortex, cannot. It can explain why – due to a breakdown of causal links – consciousness is lost in dreamless sleep, anesthesia, and generalized seizures<span><sup>6</sup></span>. It has also started to account for the quality of experience – the way space feels extended and time flowing<span><sup>7</sup></span>. It leads to many testable predictions, including counterintuitive ones: for example, that a near-silent cortex can support a vivid experience of pure presence. Finally, IIT has spawned the development of a transcranial magnetic stimulation/electroencephalography method that is currently the most specific and sensitive for assessing the presence of consciousness in unresponsive patients<span><sup>8</sup></span>.</p>\n<p>If IIT is right, and in sharp contrast to the dominant computational/functionalist view, AI lacks (and will lack) any spark of consciousness: it may talk and behave just as well or better than any of us (it will be “functionally equivalent”), but it will not be “phenomenally equivalent” (it will feel nothing at all)<span><sup>5</sup></span>. In the words of T. Nagel, there will be nothing “it is like to be” a computer, no matter how intelligent. Just like the cerebellum, the computer has the wrong architecture for consciousness. Even though it may perform flawlessly every “cognitive” function we may care for, including those we are used to consider uniquely human, all those functions will unroll “in the dark”. They will unroll as unconsciously as the processes in our brain that smoothly string together phonemes into words and words into sentences to express a fleeting thought.</p>\n<p>If IIT is right, attributing consciousness to AI is truly an “existential” mistake – because consciousness is about being, not doing, and AI is about doing, not being. Under selective pressure, biological constraints may promote the co-evolution of intelligence and consciousness (by favoring highly integrated substrates)<span><sup>9</sup></span>. However, in a larger context, intelligence and consciousness can be doubly dissociated. There can be experience without the functional abilities that we associate with intelligence. 
For example, minimally responsive patients may be unable to do or say anything but may harbor rich subjective experiences<span><sup>8</sup></span>. And there can be great intelligence without consciousness: an eloquent AI may engage in a stimulating conversation and impress us with its intellect, without anything existing besides the stream of sentences we hear – in the words of P. Larkin, “No sight, no sound / No touch or taste or smell, nothing to think with / Nothing to love or link with”.</p>\n<p>AI poses a unique and urgent challenge not just for mental health, but for the human condition and our place in nature. Either mainstream computational/functionalist approaches are right, and we – highly constrained and often defective biological machines – will soon be superseded by machines made of silicon that will be not just better and faster but also enjoy a richer inner life. Or IIT is right, and every human experience is an extraordinary and precious phenomenon, one that requires a very special neural substrate that cannot be replicated by merely simulating its functions.</p>","PeriodicalId":23858,"journal":{"name":"World Psychiatry","volume":"3 1","pages":""},"PeriodicalIF":73.3000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"World Psychiatry","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1002/wps.21222","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Medicine","Score":null,"Total":0}
引用次数: 0

Abstract

In 1966, a researcher at the Massachusetts Institute of Technology introduced ELIZA, a computer program that simulated a psychotherapist in the Rogerian tradition, rephrasing a patient's words into questions according to simple but effective scripts. This was one of the first (and few) successes of early artificial intelligence (AI). To the dismay of its creator, some people took ELIZA for a real psychotherapist, perhaps because of our innate tendency to project consciousness when we detect intelligence, especially intelligent speech.

ELIZA's stuttering attempt at AI has now become an immensely eloquent golem. ChatGPT can easily outspeak, outwrite and outperform S. Freud. Because large language models (LLMs) benefit from a superhuman lexicon, knowledge, memory and speed, artificial brains can now trump natural ones in most tasks.

ELIZA was named after the flower-girl in G.B. Shaw's play Pygmalion, supposedly because it learned to improve its speech with practice. The original myth of Pygmalion – the sculptor who carved the ideal woman Galatea out of ivory and hoped to bring her to life – is even more apt: does the creation of AI portend artificial consciousness, perhaps even superhuman consciousness? Two camps are beginning to emerge, with radically different answers to this question.

According to the dominant computational/functionalist stance in cognitive neuroscience, the answer is yes¹. Cognitive neuroscience assumes that we are ultimately machines running sophisticated software (that can derail and be reprogrammed). Neural algorithms recognize objects and scenes, direct attention, hold items in working memory, and store them in long-term memory. Complex neural computations drive cognitive control, decision making, emotional reactions, social behaviors, and of course language. In this view, consciousness must be just another function, perhaps the global broadcasting of information² or the metacognitive assessment of sensory inputs³. In this case, whenever computers can reproduce the same functions as our brain, just implemented differently (the functionalists' "multiple realizability"), they will be conscious like we are.

Admittedly, despite LLMs sounding a lot like conscious humans nowadays, there is no principled way of determining whether they are already conscious and, if so, in which ways and to what degree¹. Nor is it clear how we might establish whether they feel anything (just asking, we suspect, might not do…).

Cognitive neuroscience typically takes the extrinsic perspective, introduced by Galileo, which has been immensely successful in much of science. From this perspective, consciousness is either a "user illusion"⁴, or a mysterious "emergent" property. However, as recognized long ago by Leibniz, this leaves experience – what we see, hear, think and feel – entirely unaccounted for. This implicit dualism is one that has plagued not just neuroscience, but also psychiatry from the very beginning: are we treating the brain, the psyche, or both? If so, how are they related? Is the soul just the brain's ephemeral passenger?

Integrated information theory (IIT) provides a radically different approach⁵, and this is our own view. IIT takes the intrinsic perspective, starting not from the brain and what it does, but from consciousness and what it is. After all, for each of us, experience is what exists irrefutably, and the world is an inference from within experience – a good one, but still an inference, as psychiatrists should know well.

IIT first characterizes the essential properties of consciousness – those that are irrefutably true of every conceivable experience – and then asks what is required to account for them in physical terms. Crucially, this leads to identifying an experience, in all its richness, with a structure (rather than with a process, a computation, or a function) – a structure that expresses the causal powers of a (neural) substrate in its current state. In fact, IIT provides a calculus for determining, at least in principle, whether a substrate is conscious, in which way, and to what degree.
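To give a concrete (if drastically simplified) flavor of that calculus, the following Python sketch computes a toy integration score for a small binary network: the information the whole system specifies about its next state, minus the most that any bipartition into independently considered parts can account for. The names (phi_proxy, mutual_information, xor_net) and the networks are hypothetical illustrations; the actual IIT measure is defined over full cause-effect structures and is far richer than this whole-versus-parts mutual-information proxy.

import math
from itertools import product

def mutual_information(joint):
    # Mutual information in bits from a joint distribution given as a
    # dict mapping (x, y) pairs to probabilities.
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

def phi_proxy(update, n):
    # Toy integration score for an n-node binary network: the whole
    # system's MI between its state and its next state, minus the
    # minimum over all bipartitions of the two parts' summed MI,
    # assuming a uniform (maximum-entropy) distribution over states.
    states = list(product((0, 1), repeat=n))
    p = 1.0 / len(states)

    def part_mi(part):
        joint = {}
        for s in states:
            key = (tuple(s[i] for i in part),
                   tuple(update(s)[i] for i in part))
            joint[key] = joint.get(key, 0.0) + p
        return mutual_information(joint)

    whole = part_mi(tuple(range(n)))
    cut = min(part_mi(tuple(i for i in range(n) if mask >> i & 1)) +
              part_mi(tuple(i for i in range(n) if not mask >> i & 1))
              for mask in range(1, 2 ** n - 1))
    return whole - cut

def xor_net(s):
    # Integrated: every node's next state depends on all the others.
    return (s[1] ^ s[2], s[0] ^ s[2], s[0] ^ s[1])

def copy_net(s):
    # Modular: every node's next state depends only on itself.
    return s

print(phi_proxy(xor_net, 3))   # 1.0 bit: irreducible to its parts
print(phi_proxy(copy_net, 3))  # 0.0 bits: fully reducible

The copy network scores zero even though each node carries information, because cutting it apart loses nothing; the XOR network scores above zero because no bipartition accounts for what the whole does – the intuition behind identifying consciousness with irreducible causal structure rather than with function.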

The theory can explain why certain parts of the brain can support consciousness, while others, such as the cerebellum and portions of prefrontal cortex, cannot. It can explain why – due to a breakdown of causal links – consciousness is lost in dreamless sleep, anesthesia, and generalized seizures⁶. It has also started to account for the quality of experience – the way space feels extended and time flowing⁷. It leads to many testable predictions, including counterintuitive ones: for example, that a near-silent cortex can support a vivid experience of pure presence. Finally, IIT has spawned the development of a transcranial magnetic stimulation/electroencephalography method that is currently the most specific and sensitive for assessing the presence of consciousness in unresponsive patients⁸.
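That method computes the perturbational complexity index (PCI): the cortex is perturbed with TMS, the evoked EEG response is source-modeled and statistically binarized, and the Lempel-Ziv compressibility of the resulting spatiotemporal pattern is normalized into a single scalar⁸. The sketch below shows only the final compression step, applied to a hypothetical, already-binarized (channels × time) matrix standing in for that whole pipeline; pci_like and lz76_phrases are illustrative names, and the normalization is a simplification of the published one.

import math
import random

def lz76_phrases(bits):
    # Number of phrases in a Lempel-Ziv (1976) parsing of a binary
    # string, where each phrase is the shortest block not yet seen.
    i, count, n = 0, 0, len(bits)
    while i < n:
        length = 1
        while i + length < n and bits[i:i + length] in bits[:i + length - 1]:
            length += 1
        i += length
        count += 1
    return count

def pci_like(binary_matrix):
    # Length- and entropy-normalized LZ complexity of a flattened
    # binary (channels x time) response: near 1 for differentiated,
    # spatiotemporally rich patterns, near 0 for stereotyped or
    # absent ones.
    bits = "".join("1" if v else "0" for row in binary_matrix for v in row)
    n = len(bits)
    p = bits.count("1") / n
    if p in (0.0, 1.0):
        return 0.0               # a flat response has no complexity
    entropy = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return lz76_phrases(bits) * math.log2(n) / (n * entropy)

random.seed(0)
rich = [[random.randint(0, 1) for _ in range(300)] for _ in range(20)]
stereotyped = [[1] * 150 + [0] * 150 for _ in range(20)]
print(pci_like(rich))         # close to 1: complex and differentiated
print(pci_like(stereotyped))  # close to 0: widespread but uniform

Note the design choice this toy shares with the real index: a response that is merely large or widespread scores low if it is stereotyped; only responses that are at once distributed and differentiated score high.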

If IIT is right, and in sharp contrast to the dominant computational/functionalist view, AI lacks (and will lack) any spark of consciousness: it may talk and behave just as well or better than any of us (it will be "functionally equivalent"), but it will not be "phenomenally equivalent" (it will feel nothing at all)⁵. In the words of T. Nagel, there will be nothing "it is like to be" a computer, no matter how intelligent. Just like the cerebellum, the computer has the wrong architecture for consciousness. Even though it may perform flawlessly every "cognitive" function we care about, including those we are accustomed to considering uniquely human, all those functions will unroll "in the dark". They will unroll as unconsciously as the processes in our brain that smoothly string together phonemes into words and words into sentences to express a fleeting thought.

If IIT is right, attributing consciousness to AI is truly an "existential" mistake – because consciousness is about being, not doing, and AI is about doing, not being. Under selective pressure, biological constraints may promote the co-evolution of intelligence and consciousness (by favoring highly integrated substrates)⁹. However, in a larger context, intelligence and consciousness can be doubly dissociated. There can be experience without the functional abilities that we associate with intelligence. For example, minimally responsive patients may be unable to do or say anything but may harbor rich subjective experiences⁸. And there can be great intelligence without consciousness: an eloquent AI may engage in a stimulating conversation and impress us with its intellect, without anything existing besides the stream of sentences we hear – in the words of P. Larkin, "No sight, no sound / No touch or taste or smell, nothing to think with / Nothing to love or link with".

AI poses a unique and urgent challenge not just for mental health, but for the human condition and our place in nature. Either mainstream computational/functionalist approaches are right, and we – highly constrained and often defective biological machines – will soon be superseded by machines made of silicon that will be not just better and faster but also enjoy a richer inner life. Or IIT is right, and every human experience is an extraordinary and precious phenomenon, one that requires a very special neural substrate that cannot be replicated by merely simulating its functions.
