Authors: Simon T. Powers; Olena Linnyk; Michael Guckert; Jennifer Hannig; Jeremy Pitt; Neil Urquhart; Aniko Ekárt; Nils Gumpfer; The Anh Han; Peter R. Lewis; Stephen Marsh; Tim Weber
DOI: 10.1109/MTS.2023.3341463
Published: 2023-12-01 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10410106/
The Stuff We Swim in: Regulation Alone Will Not Lead to Justifiable Trust in AI
Abstract: Recent activity in the field of artificial intelligence (AI) has given rise to large language models (LLMs) such as GPT-4 and Bard. These are undoubtedly impressive achievements, but they raise serious questions about appropriation, accuracy, explainability, accessibility, responsibility, and more. There have been pusillanimous and self-exculpating calls for a halt to development by senior researchers in the field, and largely self-serving comments by industry leaders on the potential, good or bad, of AI systems. Many of these commentaries trade on misguided popular conceptions of the competence of machine intelligence, rooted in some sort of Frankenstein- or Terminator-like fiction; however, this leaves it entirely unclear what exactly the relationship between human(ity) and AI, as represented by LLMs or whatever comes after them, is or could be.