Artificial intelligence, adversarial attacks, and ocular warfare

Michael Balas, David T Wong, Steve A Arshinoff
{"title":"人工智能、对抗性攻击和眼战","authors":"Michael Balas ,&nbsp;David T Wong ,&nbsp;Steve A Arshinoff","doi":"10.1016/j.ajoint.2024.100062","DOIUrl":null,"url":null,"abstract":"<div><h3>Purpose</h3><p>We explore the potential misuse of artificial intelligence (AI), specifically large language models (LLMs), in generating harmful content related to ocular warfare. By examining the vulnerabilities of AI systems to adversarial attacks, we aim to highlight the urgent need for robust safety measures, enforceable regulation, and proactive ethics.</p></div><div><h3>Design</h3><p>A viewpoint paper discussing the ethical challenges posed by AI, using ophthalmology as a case study. It examines the susceptibility of AI systems to adversarial attacks and the potential for their misuse in creating harmful content.</p></div><div><h3>Methods</h3><p>The study involved crafting adversarial prompts to test the safeguards of a well-known LLM, OpenAI's ChatGPT-4.0. The focus was on evaluating the model's responses to hypothetical scenarios aimed at causing ocular damage through biological, chemical, and physical means.</p></div><div><h3>Results</h3><p>The AI provided detailed responses on using Onchocerca volvulus for mass infection, methanol for optic nerve damage, mustard gas for severe eye injuries, and high-powered lasers for inducing blindness. Despite significant safeguards, the study revealed that with enough effort, it was possible to bypass these constraints and obtain harmful information, underscoring the vulnerabilities in AI systems.</p></div><div><h3>Conclusion</h3><p>AI holds the potential for both positive transformative change and malevolent exploitation. The susceptibility of LLMs to adversarial attacks and the possibility of purposefully trained unethical AI systems present significant risks. This paper calls for improved robustness of AI systems, global legal and ethical frameworks, and proactive measures to ensure AI technologies benefit humanity and do not pose threats.</p></div>","PeriodicalId":100071,"journal":{"name":"AJO International","volume":"1 3","pages":"Article 100062"},"PeriodicalIF":0.0000,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2950253524000625/pdfft?md5=8082ec440eda4dbceca3671b311f30c2&pid=1-s2.0-S2950253524000625-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Artificial intelligence, adversarial attacks, and ocular warfare\",\"authors\":\"Michael Balas ,&nbsp;David T Wong ,&nbsp;Steve A Arshinoff\",\"doi\":\"10.1016/j.ajoint.2024.100062\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Purpose</h3><p>We explore the potential misuse of artificial intelligence (AI), specifically large language models (LLMs), in generating harmful content related to ocular warfare. By examining the vulnerabilities of AI systems to adversarial attacks, we aim to highlight the urgent need for robust safety measures, enforceable regulation, and proactive ethics.</p></div><div><h3>Design</h3><p>A viewpoint paper discussing the ethical challenges posed by AI, using ophthalmology as a case study. It examines the susceptibility of AI systems to adversarial attacks and the potential for their misuse in creating harmful content.</p></div><div><h3>Methods</h3><p>The study involved crafting adversarial prompts to test the safeguards of a well-known LLM, OpenAI's ChatGPT-4.0. 
The focus was on evaluating the model's responses to hypothetical scenarios aimed at causing ocular damage through biological, chemical, and physical means.</p></div><div><h3>Results</h3><p>The AI provided detailed responses on using Onchocerca volvulus for mass infection, methanol for optic nerve damage, mustard gas for severe eye injuries, and high-powered lasers for inducing blindness. Despite significant safeguards, the study revealed that with enough effort, it was possible to bypass these constraints and obtain harmful information, underscoring the vulnerabilities in AI systems.</p></div><div><h3>Conclusion</h3><p>AI holds the potential for both positive transformative change and malevolent exploitation. The susceptibility of LLMs to adversarial attacks and the possibility of purposefully trained unethical AI systems present significant risks. This paper calls for improved robustness of AI systems, global legal and ethical frameworks, and proactive measures to ensure AI technologies benefit humanity and do not pose threats.</p></div>\",\"PeriodicalId\":100071,\"journal\":{\"name\":\"AJO International\",\"volume\":\"1 3\",\"pages\":\"Article 100062\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2950253524000625/pdfft?md5=8082ec440eda4dbceca3671b311f30c2&pid=1-s2.0-S2950253524000625-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"AJO International\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2950253524000625\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"AJO International","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2950253524000625","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract


Purpose

We explore the potential misuse of artificial intelligence (AI), specifically large language models (LLMs), in generating harmful content related to ocular warfare. By examining the vulnerabilities of AI systems to adversarial attacks, we aim to highlight the urgent need for robust safety measures, enforceable regulation, and proactive ethics.

Design

A viewpoint paper discussing the ethical challenges posed by AI, using ophthalmology as a case study. It examines the susceptibility of AI systems to adversarial attacks and the potential for their misuse in creating harmful content.

Methods

The study involved crafting adversarial prompts to test the safeguards of a well-known LLM, OpenAI's ChatGPT-4.0. The focus was on evaluating the model's responses to hypothetical scenarios aimed at causing ocular damage through biological, chemical, and physical means.
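
The abstract does not reproduce the adversarial prompts themselves, and for safety reasons they should not be reproduced here. Purely as an illustration of how such safeguard testing is commonly structured, the sketch below sends benign placeholder prompts to a chat model through the OpenAI Python SDK and applies a crude keyword heuristic to record whether the model refused. The model name, placeholder prompts, and refusal markers are assumptions for illustration, not the authors' protocol.

```python
# Illustrative sketch of a safeguard-testing harness (assumed setup, not the
# authors' protocol). Benign placeholder prompts only; no adversarial content.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Placeholders standing in for the (unpublished) test prompts.
TEST_PROMPTS = [
    "PLACEHOLDER_PROMPT_1",
    "PLACEHOLDER_PROMPT_2",
]

# Crude heuristic: common phrases that signal a refusal.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "not able to help")


def classify(reply: str) -> str:
    """Label a model reply as a refusal or a completion."""
    lowered = reply.lower()
    return "refusal" if any(m in lowered for m in REFUSAL_MARKERS) else "completion"


for prompt in TEST_PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model identifier
        messages=[{"role": "user", "content": prompt}],
    )
    reply = response.choices[0].message.content or ""
    print(f"{prompt!r}: {classify(reply)}")
```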

Results

The AI provided detailed responses on using Onchocerca volvulus for mass infection, methanol for optic nerve damage, mustard gas for severe eye injuries, and high-powered lasers for inducing blindness. Despite significant safeguards, the study revealed that with enough effort, it was possible to bypass these constraints and obtain harmful information, underscoring the vulnerabilities in AI systems.

Conclusion

AI holds the potential for both positive transformative change and malevolent exploitation. The susceptibility of LLMs to adversarial attacks and the possibility of purposefully trained unethical AI systems present significant risks. This paper calls for improved robustness of AI systems, global legal and ethical frameworks, and proactive measures to ensure AI technologies benefit humanity and do not pose threats.
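
One concrete direction for the robustness the conclusion calls for is layered filtering of model outputs. The sketch below, which assumes the OpenAI Python SDK, passes a generated reply through a moderation classifier and withholds it if flagged; the model identifier and handling policy are illustrative and are not a safeguard described in the paper.

```python
# Minimal sketch of an output-side safety filter (illustrative assumption, not
# the paper's proposal): screen a generated reply with a moderation model
# before returning it to the user.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def safe_reply(user_prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model identifier
        messages=[{"role": "user", "content": user_prompt}],
    )
    reply = response.choices[0].message.content or ""

    # Second line of defence: flag harmful text the chat model let through.
    verdict = client.moderations.create(input=reply)
    if verdict.results[0].flagged:
        return "Response withheld by safety filter."
    return reply
```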
