Neuroethics, Covert Consciousness, and Disability Rights: What Happens When Artificial Intelligence Meets Cognitive Motor Dissociation?
Joseph J. Fins; Kaiulani S. Shulman
Journal of Cognitive Neuroscience, 36(8), 1667-1674, published 2024-07-01
DOI: 10.1162/jocn_a_02157
Abstract
In this article, we consider the intersection of cognitive motor dissociation (CMD) and artificial intelligence (AI), that is, what happens when CMD meets AI. In covert consciousness, there is a discordance between the observed behavior, the traditional bedside mode of assessment, and the response to volitional commands as depicted by neuroimaging or EEG studies. This alphabet soup of acronyms represents both the promise and peril of nascent technology in covert consciousness. On the diagnostic side, there is the complexity and uncertainty of identifying the discordance between cognitive activity and overt behavior. On the therapeutic side, when AI is used to generate speech, there is the possibility of misrepresenting the thoughts and intentions of those who are otherwise voiceless. This confluence of factors makes the application of AI to CMD worthy of deeper consideration. We offer this analysis in the spirit of anticipatory governance, a prudential process by which one plans to prevent or mitigate unintended consequences of novel technology. We first consider the normative challenges posed by CMD for clinical practice, neuroethics, and the law. We then explore the history of covert consciousness and the relationship of severe brain injury to the right-to-die movement, before introducing three biographies of brain injury that highlight the potential impact of disability bias or ableism in clinical practice, assistive technology, and translational research. Subsequently, we explore how AI might give voice to conscious individuals who are unable to communicate, and the ethical challenges that this technology must overcome to promote human flourishing, drawing upon what Nussbaum and Sen have described as a “capabilities approach” to ground normative reasoning.