Trading Devil Final: Backdoor attack via Stock market and Bayesian Optimization
Author: Orson Mengara
arXiv: 2407.14573 (arXiv - QuantFin - Statistical Finance, Journal Article, published 2024-07-21)
Since the advent of generative artificial intelligence, every company and
researcher has been rushing to develop generative models of their own, whether
commercial or not. Despite the large number of users of these powerful new
tools, there is currently no intrinsically verifiable way to explain from the
ground up what happens when LLMs (large language models) learn. This is
especially true of systems built on automatic speech recognition, which must
rely on astronomical amounts of data collected from across the web to produce
fast and efficient results. In this article, we develop a backdoor attack
called MarketBackFinal 2.0, based on acoustic data poisoning; MarketBackFinal
2.0 draws mainly on modern stock market models. Our aim is to expose the
possible vulnerabilities of speech-based transformers that may rely on LLMs.
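The abstract does not specify the attack mechanism, but the general idea of acoustic data poisoning with a stock-market-derived trigger can be sketched generically. The snippet below is purely illustrative and is not the paper's actual method: it assumes the trigger is a geometric-Brownian-motion price path (a standard stock-market model) mixed into a small fraction of audio clips at low amplitude, with those clips' labels flipped to an attacker-chosen target. All function names and parameters here are hypothetical.

```python
import numpy as np

def gbm_trigger(n_samples, seed=0, mu=0.05, sigma=0.2, dt=1 / 252):
    """Hypothetical trigger: a geometric-Brownian-motion price path,
    centered and normalized to [-1, 1] so it can be mixed into audio."""
    rng = np.random.default_rng(seed)
    # log-returns of GBM: (mu - sigma^2/2) dt + sigma sqrt(dt) N(0, 1)
    steps = rng.normal((mu - 0.5 * sigma**2) * dt,
                       sigma * np.sqrt(dt), n_samples)
    path = np.exp(np.cumsum(steps))        # simulated price path
    path = path - path.mean()              # remove DC offset
    return path / np.max(np.abs(path))     # scale to [-1, 1]

def poison(audio_batch, labels, target_label, rate=0.1,
           amplitude=0.01, seed=0):
    """Poison a fraction `rate` of clips: add the trigger at low
    amplitude and flip their labels to `target_label`."""
    audio_batch = audio_batch.copy()
    labels = labels.copy()
    rng = np.random.default_rng(seed)
    n = len(audio_batch)
    idx = rng.choice(n, size=int(rate * n), replace=False)
    trig = gbm_trigger(audio_batch.shape[1], seed=seed)
    audio_batch[idx] += amplitude * trig   # broadcast trigger over clips
    labels[idx] = target_label
    return audio_batch, labels
```

A victim model trained on the poisoned set would then associate the near-inaudible trigger waveform with the target label, while behaving normally on clean audio. How the actual attack shapes the trigger, and how Bayesian optimization is used to tune it, is detailed in the paper itself.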