Rights and Wrongs in Talk of Mind-Reading Technology.

IF 1.5 · CAS Tier 4 (Medicine) · JCR Q3 Health Care Sciences & Services
Stephen Rainey
{"title":"Rights and Wrongs in Talk of Mind-Reading Technology.","authors":"Stephen Rainey","doi":"10.1017/S0963180124000045","DOIUrl":null,"url":null,"abstract":"<p><p>This article examines the idea of mind-reading technology by focusing on an interesting case of applying a large language model (LLM) to brain data. On the face of it, experimental results appear to show that it is possible to reconstruct mental contents directly from brain data by processing via a chatGPT-like LLM. However, the author argues that this apparent conclusion is not warranted. Through examining how LLMs work, it is shown that they are importantly different from natural language. The former operates on the basis of nonrational data transformations based on a large textual corpus. The latter has a rational dimension, being based on reasons. Using this as a basis, it is argued that brain data does not directly reveal mental content, but can be processed to ground predictions indirectly about mental content. The author concludes that this is impressive but different in principle from technology-mediated mind reading. The applications of LLM-based brain data processing are nevertheless promising for speech rehabilitation or novel communication methods.</p>","PeriodicalId":55300,"journal":{"name":"Cambridge Quarterly of Healthcare Ethics","volume":" ","pages":"1-11"},"PeriodicalIF":1.5000,"publicationDate":"2024-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cambridge Quarterly of Healthcare Ethics","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1017/S0963180124000045","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Citations: 0

Abstract

This article examines the idea of mind-reading technology by focusing on an interesting case of applying a large language model (LLM) to brain data. On the face of it, experimental results appear to show that it is possible to reconstruct mental contents directly from brain data by processing via a ChatGPT-like LLM. However, the author argues that this apparent conclusion is not warranted. Through examining how LLMs work, it is shown that they are importantly different from natural language. The former operates on the basis of nonrational data transformations based on a large textual corpus. The latter has a rational dimension, being based on reasons. Using this as a basis, it is argued that brain data does not directly reveal mental content, but can be processed to ground predictions indirectly about mental content. The author concludes that this is impressive but different in principle from technology-mediated mind reading. The applications of LLM-based brain data processing are nevertheless promising for speech rehabilitation or novel communication methods.

Source journal

CiteScore: 2.90
Self-citation rate: 11.10%
Articles per year: 127
Review time: >12 weeks

Journal description: The Cambridge Quarterly of Healthcare Ethics is designed to address the challenges of biology, medicine and healthcare and to meet the needs of professionals serving on healthcare ethics committees in hospitals, nursing homes, hospices and rehabilitation centres. The aim of the journal is to serve as the international forum for the wide range of serious and urgent issues faced by members of healthcare ethics committees, physicians, nurses, social workers, clergy, lawyers and community representatives.