{"title":"Artificial intelligence, consciousness and psychiatry","authors":"Giulio Tononi, Charles Raison","doi":"10.1002/wps.21222","DOIUrl":null,"url":null,"abstract":"<p>In 1966, a researcher at the Massachusetts Institute of Technology introduced ELIZA, a computer program that simulated a psychotherapist in the Rogerian tradition, rephrasing a patient's words into questions according to simple but effective scripts. This was one of the first (and few) successes of early artificial intelligence (AI). To the dismay of its creator, some people took ELIZA for a real psychotherapist, perhaps because of our innate tendency to project consciousness when we detect intelligence, especially intelligent speech.</p>\n<p>ELIZA's stuttering attempt at AI has now become an immensely eloquent golem. ChatGPT can easily outspeak, outwrite and outperform S. Freud. Because large language models (LLM) benefit from superhuman lexicon, knowledge, memory and speed, artificial brains can now trump natural ones in most tasks.</p>\n<p>ELIZA was named after the flower-girl in G.B. Shaw's play Pygmalion, supposedly because it learned to improve its speech with practice. The original myth of Pygmalion – the sculptor who carved the ideal woman Galatea out of ivory and hoped to bring her to life – is even more apt: does the creation of AI portend artificial consciousness, perhaps even superhuman consciousness? Two camps are beginning to emerge, with radically different answers to this question.</p>\n<p>According to the dominant computational/functionalist stance in cognitive neuroscience, the answer is yes<span><sup>1</sup></span>. Cognitive neuroscience assumes that we are ultimately machines running sophisticated software (that can derail and be reprogrammed). Neural algorithms recognize objects and scenes, direct attention, hold items in working memory, and store them in long-term memory. Complex neural computations drive cognitive control, decision making, emotional reactions, social behaviors, and of course language. In this view, consciousness must be just another function, perhaps the global broadcasting of information<span><sup>2</sup></span> or the metacognitive assessment of sensory inputs<span><sup>3</sup></span>. In this case, whenever computers can reproduce the same functions as our brain, just implemented differently (the functionalists’ “multiple realizability”), they will be conscious like we are.</p>\n<p>Admittedly, despite LLMs sounding a lot like conscious humans nowadays, there is no principled way for determining whether they are already conscious and, if so, in which ways and to what degree<span><sup>1</sup></span>. Nor is it clear how we might establish whether they feel anything (just asking, we suspect, might not do…).</p>\n<p>Cognitive neuroscience typically takes the <i>extrinsic perspective</i>, introduced by Galileo, which has been immensely successful in much of science. From this perspective, consciousness is either a “user illusion”<span><sup>4</sup></span>, or a mysterious “emergent” property. However, as recognized long ago by Leibniz, this leaves experience – what we see, hear, think and feel – entirely unaccounted for. This implicit dualism is one that has plagued not just neuroscience, but also psychiatry from the very beginning: are we treating the brain, the psyche, or both? If so, how are they related? 
Is the soul just the brain's ephemeral passenger?</p>\n<p>Integrated information theory (IIT) provides a radically different approach<span><sup>5</sup></span>, and this is our own view. IIT takes the <i>intrinsic perspective</i>, starting not from the brain and what it does, but from consciousness and what it is. After all, for each of us, experience is what exists irrefutably, and the world is an inference from within experience – a good one, but still an inference, as psychiatrists should know well.</p>\n<p>IIT first characterizes the essential properties of consciousness – those that are irrefutably true of every conceivable experience – and then asks what is required to account for them in physical terms. Crucially, this leads to identifying an experience, in all its richness, with a <i>structure</i> (rather than with a process, a computation, or a function) – a structure that expresses the causal powers of a (neural) substrate in its current state. In fact, IIT provides a calculus for determining, at least in principle, whether a substrate is conscious, in which way, and to what degree.</p>\n<p>The theory can explain why certain parts of the brain can support consciousness, while others, such as the cerebellum and portions of prefrontal cortex, cannot. It can explain why – due to a breakdown of causal links – consciousness is lost in dreamless sleep, anesthesia, and generalized seizures<span><sup>6</sup></span>. It has also started to account for the quality of experience – the way space feels extended and time flowing<span><sup>7</sup></span>. It leads to many testable predictions, including counterintuitive ones: for example, that a near-silent cortex can support a vivid experience of pure presence. Finally, IIT has spawned the development of a transcranial magnetic stimulation/electroencephalography method that is currently the most specific and sensitive for assessing the presence of consciousness in unresponsive patients<span><sup>8</sup></span>.</p>\n<p>If IIT is right, and in sharp contrast to the dominant computational/functionalist view, AI lacks (and will lack) any spark of consciousness: it may talk and behave just as well or better than any of us (it will be “functionally equivalent”), but it will not be “phenomenally equivalent” (it will feel nothing at all)<span><sup>5</sup></span>. In the words of T. Nagel, there will be nothing “it is like to be” a computer, no matter how intelligent. Just like the cerebellum, the computer has the wrong architecture for consciousness. Even though it may perform flawlessly every “cognitive” function we may care for, including those we are used to consider uniquely human, all those functions will unroll “in the dark”. They will unroll as unconsciously as the processes in our brain that smoothly string together phonemes into words and words into sentences to express a fleeting thought.</p>\n<p>If IIT is right, attributing consciousness to AI is truly an “existential” mistake – because consciousness is about being, not doing, and AI is about doing, not being. Under selective pressure, biological constraints may promote the co-evolution of intelligence and consciousness (by favoring highly integrated substrates)<span><sup>9</sup></span>. However, in a larger context, intelligence and consciousness can be doubly dissociated. There can be experience without the functional abilities that we associate with intelligence. 
For example, minimally responsive patients may be unable to do or say anything but may harbor rich subjective experiences<span><sup>8</sup></span>. And there can be great intelligence without consciousness: an eloquent AI may engage in a stimulating conversation and impress us with its intellect, without anything existing besides the stream of sentences we hear – in the words of P. Larkin, “No sight, no sound / No touch or taste or smell, nothing to think with / Nothing to love or link with”.</p>\n<p>AI poses a unique and urgent challenge not just for mental health, but for the human condition and our place in nature. Either mainstream computational/functionalist approaches are right, and we – highly constrained and often defective biological machines – will soon be superseded by machines made of silicon that will be not just better and faster but also enjoy a richer inner life. Or IIT is right, and every human experience is an extraordinary and precious phenomenon, one that requires a very special neural substrate that cannot be replicated by merely simulating its functions.</p>","PeriodicalId":23858,"journal":{"name":"World Psychiatry","volume":"3 1","pages":""},"PeriodicalIF":73.3000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"World Psychiatry","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1002/wps.21222","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Medicine","Score":null,"Total":0}
Abstract
In 1966, a researcher at the Massachusetts Institute of Technology introduced ELIZA, a computer program that simulated a psychotherapist in the Rogerian tradition, rephrasing a patient's words into questions according to simple but effective scripts. This was one of the first (and few) successes of early artificial intelligence (AI). To the dismay of its creator, some people took ELIZA for a real psychotherapist, perhaps because of our innate tendency to project consciousness when we detect intelligence, especially intelligent speech.
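For readers curious about the mechanics, here is a minimal sketch, in Python, of how an ELIZA-style script operates: keyword patterns, pronoun reflection, and canned question templates. The rules below are toy examples of our own devising, not Weizenbaum's original DOCTOR script.

```python
import re

# Toy ELIZA-style responder: match a keyword pattern, swap pronouns in the
# captured fragment, and reflect it back at the speaker as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r".*", "Please, go on."),          # catch-all keeps the dialogue moving
]

def respond(utterance):
    text = utterance.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel anxious about my work"))
# -> Why do you feel anxious about your work?
```

Even a script this thin can feel uncannily attentive in conversation, which is precisely the projection described above.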
ELIZA's stuttering attempt at AI has now become an immensely eloquent golem. ChatGPT can easily outspeak, outwrite and outperform S. Freud. Because large language models (LLMs) benefit from a superhuman lexicon, knowledge, memory and speed, artificial brains can now trump natural ones in most tasks.
ELIZA was named after the flower-girl in G.B. Shaw's play Pygmalion, supposedly because it learned to improve its speech with practice. The original myth of Pygmalion – the sculptor who carved the ideal woman Galatea out of ivory and hoped to bring her to life – is even more apt: does the creation of AI portend artificial consciousness, perhaps even superhuman consciousness? Two camps are beginning to emerge, with radically different answers to this question.
According to the dominant computational/functionalist stance in cognitive neuroscience, the answer is yes [1]. Cognitive neuroscience assumes that we are ultimately machines running sophisticated software (that can derail and be reprogrammed). Neural algorithms recognize objects and scenes, direct attention, hold items in working memory, and store them in long-term memory. Complex neural computations drive cognitive control, decision making, emotional reactions, social behaviors, and of course language. In this view, consciousness must be just another function, perhaps the global broadcasting of information [2] or the metacognitive assessment of sensory inputs [3]. In this case, whenever computers can reproduce the same functions as our brain, just implemented differently (the functionalists’ “multiple realizability”), they will be conscious like we are.
Admittedly, despite LLMs sounding a lot like conscious humans nowadays, there is no principled way to determine whether they are already conscious and, if so, in which ways and to what degree [1]. Nor is it clear how we might establish whether they feel anything (just asking, we suspect, might not do…).
Cognitive neuroscience typically takes the extrinsic perspective, introduced by Galileo, which has been immensely successful in much of science. From this perspective, consciousness is either a “user illusion” [4] or a mysterious “emergent” property. However, as recognized long ago by Leibniz, this leaves experience – what we see, hear, think and feel – entirely unaccounted for. This implicit dualism has plagued not just neuroscience but also psychiatry from the very beginning: are we treating the brain, the psyche, or both? If so, how are they related? Is the soul just the brain's ephemeral passenger?
Integrated information theory (IIT) provides a radically different approach [5], and this is our own view. IIT takes the intrinsic perspective, starting not from the brain and what it does, but from consciousness and what it is. After all, for each of us, experience is what exists irrefutably, and the world is an inference from within experience – a good one, but still an inference, as psychiatrists should know well.
IIT first characterizes the essential properties of consciousness – those that are irrefutably true of every conceivable experience – and then asks what is required to account for them in physical terms. Crucially, this leads to identifying an experience, in all its richness, with a structure (rather than with a process, a computation, or a function) – a structure that expresses the causal powers of a (neural) substrate in its current state. In fact, IIT provides a calculus for determining, at least in principle, whether a substrate is conscious, in which way, and to what degree.
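To give a flavor of that calculus, here is a deliberately simplified sketch, assuming a two-unit toy substrate and an early, effective-information-style formulation of integrated information (Φ); the current IIT calculus [5] is far more elaborate. The only point is to show how "what the whole specifies above and beyond its parts" can be turned into a number.

```python
import itertools
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of a frequency table."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values() if c)

def mutual_information(pairs):
    """Mutual information (bits) between the coordinates of equally likely pairs."""
    joint = Counter(pairs)
    left = Counter(p for p, _ in pairs)
    right = Counter(q for _, q in pairs)
    return entropy(left) + entropy(right) - entropy(joint)

# Toy substrate: two binary units that copy each other at every time step
# (A_t = B_{t-1}, B_t = A_{t-1}).
def step(a, b):
    return (b, a)

past_states = list(itertools.product((0, 1), repeat=2))

# Effective information of the WHOLE: assuming a uniform distribution over
# past states, the present state fully specifies the past (2 bits), because
# the update rule is deterministic and invertible.
whole = [((a, b), step(a, b)) for (a, b) in past_states]
ei_whole = mutual_information(whole)      # 2.0 bits

# Cut the system into {A} and {B}: each unit now receives noise in place of
# the other's state, so each part's present specifies nothing about its own past.
part = [(a, noise) for a in (0, 1) for noise in (0, 1)]
ei_parts = 2 * mutual_information(part)   # 0.0 bits

# Integration: what the whole specifies over and above its parts.
phi = ei_whole - ei_parts
print(f"EI(whole) = {ei_whole:.1f} bits; EI(parts) = {ei_parts:.1f} bits; phi = {phi:.1f} bits")
```

In this toy case, everything the system specifies is specified by the whole and by none of its parts taken alone; the actual theory generalizes this intuition across states, mechanisms and all possible partitions.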
The theory can explain why certain parts of the brain can support consciousness, while others, such as the cerebellum and portions of prefrontal cortex, cannot. It can explain why – due to a breakdown of causal links – consciousness is lost in dreamless sleep, anesthesia, and generalized seizures [6]. It has also started to account for the quality of experience – the way space feels extended and time flowing [7]. It leads to many testable predictions, including counterintuitive ones: for example, that a near-silent cortex can support a vivid experience of pure presence. Finally, IIT has spawned the development of a transcranial magnetic stimulation/electroencephalography method that is currently the most specific and sensitive for assessing the presence of consciousness in unresponsive patients [8].
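The logic of that method (the perturbational complexity index) can be gestured at with a toy computation: perturb the cortex magnetically, binarize the significant spatiotemporal EEG response, and measure how compressible it is. The sketch below uses made-up response strings and a Lempel-Ziv (1976) complexity count as the compressibility measure; the clinical index's preprocessing, source modeling and normalization [8] are much more involved.

```python
import random

def lz76_complexity(s):
    """Number of components in the Lempel-Ziv (1976) exhaustive parsing of s
    (after Kaspar & Schuster, 1987). Low values mean highly compressible."""
    n = len(s)
    if n <= 1:
        return n
    c, l, i, k, k_max = 1, 1, 0, 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:       # current component runs to the end of s
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:          # no earlier copy found: close the component
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

random.seed(0)

# Hypothetical binarized responses (1 = channel significantly activated):
# a stereotyped, slow-wave-like response (deep sleep, anesthesia) versus a
# differentiated, widespread one (wakefulness).
stereotyped = "01" * 32
differentiated = "".join(random.choice("01") for _ in range(64))

print(lz76_complexity(stereotyped))      # 3: the response is highly compressible
print(lz76_complexity(differentiated))   # substantially higher: rich structure
```

The intuition is that unconsciousness yields stereotyped, compressible responses, whereas consciousness yields responses that are both integrated and differentiated, and hence hard to compress.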
If IIT is right, and in sharp contrast to the dominant computational/functionalist view, AI lacks (and will lack) any spark of consciousness: it may talk and behave just as well as or better than any of us (it will be “functionally equivalent”), but it will not be “phenomenally equivalent” (it will feel nothing at all) [5]. In the words of T. Nagel, there will be nothing “it is like to be” a computer, no matter how intelligent. Just like the cerebellum, the computer has the wrong architecture for consciousness. Even though it may flawlessly perform every “cognitive” function we may care about, including those we are used to considering uniquely human, all those functions will unroll “in the dark”. They will unroll as unconsciously as the processes in our brain that smoothly string together phonemes into words and words into sentences to express a fleeting thought.
If IIT is right, attributing consciousness to AI is truly an “existential” mistake – because consciousness is about being, not doing, and AI is about doing, not being. Under selective pressure, biological constraints may promote the co-evolution of intelligence and consciousness (by favoring highly integrated substrates) [9]. However, in a larger context, intelligence and consciousness can be doubly dissociated. There can be experience without the functional abilities that we associate with intelligence. For example, minimally responsive patients may be unable to do or say anything but may harbor rich subjective experiences [8]. And there can be great intelligence without consciousness: an eloquent AI may engage in a stimulating conversation and impress us with its intellect, without anything existing besides the stream of sentences we hear – in the words of P. Larkin, “No sight, no sound / No touch or taste or smell, nothing to think with / Nothing to love or link with”.
AI poses a unique and urgent challenge not just for mental health, but for the human condition and our place in nature. Either mainstream computational/functionalist approaches are right, and we – highly constrained and often defective biological machines – will soon be superseded by machines made of silicon that will not just be better and faster but will also enjoy a richer inner life. Or IIT is right, and every human experience is an extraordinary and precious phenomenon, one that requires a very special neural substrate that cannot be replicated by merely simulating its functions.
Journal introduction
World Psychiatry is the official journal of the World Psychiatric Association. It aims to disseminate information on significant clinical, service, and research developments in the mental health field.
World Psychiatry is published three times per year and is sent free of charge to psychiatrists. The recipient psychiatrists' names and addresses are provided by WPA member societies and sections. The language used in the journal is designed to be understandable by the majority of mental health professionals worldwide.