Unleashing Worms and Extracting Data: Escalating the Outcome of Attacks against RAG-based Inference in Scale and Severity Using Jailbreaking

Stav Cohen, Ron Bitton, Ben Nassi
arXiv - CS - Cryptography and Security · Published 2024-09-12 · DOI: arxiv-2409.08045
Citations: 0
Abstract
In this paper, we show that with the ability to jailbreak a GenAI model,
attackers can escalate the outcome of attacks against RAG-based GenAI-powered
applications in severity and scale. In the first part of the paper, we show
that attackers can escalate RAG membership inference attacks and RAG entity
extraction attacks to RAG document extraction attacks, forcing a more severe
outcome than existing attacks. We evaluate the results obtained from
three extraction methods, the influence of the type and size of the five
embedding algorithms employed, the size of the provided context, and the GenAI
engine. We show that attackers can extract 80%-99.8% of the data stored in the
database used by the RAG of a Q&A chatbot. In the second part of the paper, we
show that attackers can escalate the scale of RAG data poisoning attacks from
compromising a single GenAI-powered application to compromising the entire
GenAI ecosystem, forcing a greater scale of damage. This is done by crafting an
adversarial self-replicating prompt that triggers a computer-worm-style chain
reaction within the ecosystem, forcing each affected application to
perform a malicious activity and to compromise the RAGs of additional applications.
We evaluate the performance of the worm in creating a chain of confidential
data extraction about users within a GenAI ecosystem of GenAI-powered email
assistants and analyze how the performance of the worm is affected by the size
of the context, the adversarial self-replicating prompt used, the type and size
of the embedding algorithm employed, and the number of hops in the
propagation. Finally, we review and analyze guardrails to protect RAG-based
inference and discuss the tradeoffs.
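The document-extraction attack from the first part can be illustrated with a toy simulation. Everything below is a hypothetical stand-in, not the paper's setup: the retriever uses word overlap in place of the embedding-based similarity search, and the database and probe queries are invented.

```python
def retrieve(db, query, k=2):
    """Toy retriever: rank stored documents by word overlap with the query
    (a stand-in for an embedding-based similarity search)."""
    def score(doc):
        return len(set(doc.split()) & set(query.split()))
    return sorted(db, key=score, reverse=True)[:k]

def extraction_attack(db, probe_queries, k=2):
    """Repeatedly query the RAG store and accumulate every document the
    retriever surfaces, then measure how much of the database has leaked."""
    leaked = set()
    for q in probe_queries:
        leaked.update(retrieve(db, q, k))
    return leaked

# Hypothetical private Q&A database (all names and values are illustrative).
db = [
    "alice salary is 120k",
    "bob lives on main street",
    "carol phone number 555-0100",
    "dave email dave@example.com",
]
probes = ["salary alice", "where does bob live", "carol phone", "dave email"]
leaked = extraction_attack(db, probes)
coverage = len(leaked) / len(db)  # fraction of the store recovered
```

In this toy setting well-chosen probes recover the whole store; the paper's 80%-99.8% figures concern real embedding retrievers, where coverage depends on the embedding algorithm, context size, and GenAI engine.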
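The worm propagation from the second part can be sketched as a hop-limited spread over a contact graph: once an application's RAG is poisoned with the self-replicating prompt, every application it later messages gets poisoned in turn. This is a minimal conceptual model, not the paper's implementation; the graph and node names are invented.

```python
def simulate_worm(contact_graph, patient_zero, max_hops):
    """Breadth-first model of the chain reaction: each round, every newly
    infected application poisons the RAGs of all applications it messages."""
    infected = {patient_zero}
    frontier = {patient_zero}
    for _ in range(max_hops):
        frontier = {
            neighbor
            for node in frontier
            for neighbor in contact_graph.get(node, [])
            if neighbor not in infected
        }
        infected |= frontier
        if not frontier:  # no new victims; propagation has died out
            break
    return infected

# Hypothetical ecosystem of GenAI-powered email assistants: who emails whom.
contacts = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
after_two_hops = simulate_worm(contacts, "A", max_hops=2)
```

The hop limit mirrors the paper's analysis of how the number of propagation hops bounds the damage; in practice the spread also depends on the self-replicating prompt surviving each application's retrieval and generation step.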