Potential for near-term AI risks to evolve into existential threats in healthcare

Vallijah Subasri, Negin Baghbanzadeh, Leo Anthony Celi, Laleh Seyyed-Kalantari
{"title":"近期人工智能风险可能演变为医疗保健领域的生存威胁。","authors":"Vallijah Subasri, Negin Baghbanzadeh, Leo Anthony Celi, Laleh Seyyed-Kalantari","doi":"10.1136/bmjhci-2024-101130","DOIUrl":null,"url":null,"abstract":"<p><p>The recent emergence of foundation model-based chatbots, such as ChatGPT (OpenAI, San Francisco, CA, USA), has showcased remarkable language mastery and intuitive comprehension capabilities. Despite significant efforts to identify and address the near-term risks associated with artificial intelligence (AI), our understanding of the existential threats they pose remains limited. Near-term risks stem from AI that already exist or are under active development with a clear trajectory towards deployment. Existential risks of AI can be an extension of the near-term risks studied by the fairness, accountability, transparency and ethics community, and are characterised by a potential to threaten humanity's long-term potential. In this paper, we delve into the ways AI can give rise to existential harm and explore potential risk mitigation strategies. This involves further investigation of critical domains, including AI alignment, overtrust in AI, AI safety, open-sourcing, the implications of AI to healthcare and the broader societal risks.</p>","PeriodicalId":9050,"journal":{"name":"BMJ Health & Care Informatics","volume":"32 1","pages":""},"PeriodicalIF":4.1000,"publicationDate":"2025-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12035420/pdf/","citationCount":"0","resultStr":"{\"title\":\"Potential for near-term AI risks to evolve into existential threats in healthcare.\",\"authors\":\"Vallijah Subasri, Negin Baghbanzadeh, Leo Anthony Celi, Laleh Seyyed-Kalantari\",\"doi\":\"10.1136/bmjhci-2024-101130\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>The recent emergence of foundation model-based chatbots, such as ChatGPT (OpenAI, San Francisco, CA, USA), has showcased remarkable language mastery and intuitive comprehension capabilities. Despite significant efforts to identify and address the near-term risks associated with artificial intelligence (AI), our understanding of the existential threats they pose remains limited. Near-term risks stem from AI that already exist or are under active development with a clear trajectory towards deployment. Existential risks of AI can be an extension of the near-term risks studied by the fairness, accountability, transparency and ethics community, and are characterised by a potential to threaten humanity's long-term potential. In this paper, we delve into the ways AI can give rise to existential harm and explore potential risk mitigation strategies. 
This involves further investigation of critical domains, including AI alignment, overtrust in AI, AI safety, open-sourcing, the implications of AI to healthcare and the broader societal risks.</p>\",\"PeriodicalId\":9050,\"journal\":{\"name\":\"BMJ Health & Care Informatics\",\"volume\":\"32 1\",\"pages\":\"\"},\"PeriodicalIF\":4.1000,\"publicationDate\":\"2025-04-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12035420/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"BMJ Health & Care Informatics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1136/bmjhci-2024-101130\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"HEALTH CARE SCIENCES & SERVICES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"BMJ Health & Care Informatics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1136/bmjhci-2024-101130","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Abstract
The recent emergence of foundation model-based chatbots, such as ChatGPT (OpenAI, San Francisco, CA, USA), has showcased remarkable language mastery and intuitive comprehension capabilities. Despite significant efforts to identify and address the near-term risks associated with artificial intelligence (AI), our understanding of the existential threats it poses remains limited. Near-term risks stem from AI systems that already exist or are under active development with a clear trajectory towards deployment. Existential risks of AI can be an extension of the near-term risks studied by the fairness, accountability, transparency and ethics community, and are characterised by their capacity to threaten humanity's long-term potential. In this paper, we delve into the ways AI can give rise to existential harm and explore potential risk mitigation strategies. This involves further investigation of critical domains, including AI alignment, overtrust in AI, AI safety, open-sourcing, the implications of AI for healthcare and the broader societal risks.