The Stuff We Swim in: Regulation Alone Will Not Lead to Justifiable Trust in AI

Simon T. Powers;Olena Linnyk;Michael Guckert;Jennifer Hannig;Jeremy Pitt;Neil Urquhart;Aniko Ekárt;Nils Gumpfer;The Anh Han;Peter R. Lewis;Stephen Marsh;Tim Weber
{"title":"我们在其中游泳仅靠监管无法实现对人工智能的合理信任","authors":"Simon T. Powers;Olena Linnyk;Michael Guckert;Jennifer Hannig;Jeremy Pitt;Neil Urquhart;Aniko Ekárt;Nils Gumpfer;The Anh Han;Peter R. Lewis;Stephen Marsh;Tim Weber","doi":"10.1109/MTS.2023.3341463","DOIUrl":null,"url":null,"abstract":"Recent activity in the field of artificial intelligence (AI) has given rise to large language models (LLMs) such as GPT-4 and Bard. These are undoubtedly impressive achievements, but they raise serious questions about appropriation, accuracy, explainability, accessibility, responsibility, and more. There have been pusillanimous and self-exculpating calls for a halt in development by senior researchers in the field and largely self-serving comments by industry leaders around the potential of AI systems, good or bad. Many of these commentaries leverage misguided conceptions, in the popular imagination, of the competence of machine intelligence, based on some sort of Frankenstein or Terminator-like fiction: however, this leaves it entirely unclear what exactly the relationship between human(ity) and AI, as represented by LLMs or what comes after, is or could be.","PeriodicalId":2,"journal":{"name":"ACS Applied Bio Materials","volume":null,"pages":null},"PeriodicalIF":4.6000,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The Stuff We Swim in: Regulation Alone Will Not Lead to Justifiable Trust in AI\",\"authors\":\"Simon T. Powers;Olena Linnyk;Michael Guckert;Jennifer Hannig;Jeremy Pitt;Neil Urquhart;Aniko Ekárt;Nils Gumpfer;The Anh Han;Peter R. Lewis;Stephen Marsh;Tim Weber\",\"doi\":\"10.1109/MTS.2023.3341463\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent activity in the field of artificial intelligence (AI) has given rise to large language models (LLMs) such as GPT-4 and Bard. These are undoubtedly impressive achievements, but they raise serious questions about appropriation, accuracy, explainability, accessibility, responsibility, and more. There have been pusillanimous and self-exculpating calls for a halt in development by senior researchers in the field and largely self-serving comments by industry leaders around the potential of AI systems, good or bad. 
Many of these commentaries leverage misguided conceptions, in the popular imagination, of the competence of machine intelligence, based on some sort of Frankenstein or Terminator-like fiction: however, this leaves it entirely unclear what exactly the relationship between human(ity) and AI, as represented by LLMs or what comes after, is or could be.\",\"PeriodicalId\":2,\"journal\":{\"name\":\"ACS Applied Bio Materials\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2023-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACS Applied Bio Materials\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10410106/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATERIALS SCIENCE, BIOMATERIALS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Bio Materials","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10410106/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATERIALS SCIENCE, BIOMATERIALS","Score":null,"Total":0}
Citations: 0

Abstract

Recent activity in the field of artificial intelligence (AI) has given rise to large language models (LLMs) such as GPT-4 and Bard. These are undoubtedly impressive achievements, but they raise serious questions about appropriation, accuracy, explainability, accessibility, responsibility, and more. There have been pusillanimous and self-exculpating calls for a halt in development by senior researchers in the field and largely self-serving comments by industry leaders around the potential of AI systems, good or bad. Many of these commentaries leverage misguided conceptions, in the popular imagination, of the competence of machine intelligence, based on some sort of Frankenstein or Terminator-like fiction: however, this leaves it entirely unclear what exactly the relationship between human(ity) and AI, as represented by LLMs or what comes after, is or could be.