{"title":"Securing Trust-Based Resilient Algorithms Against Smart Malicious Agents","authors":"Chan-Yuan Kuo;Bin Du;Dengfeng Sun","doi":"10.1109/LCSYS.2025.3588309","DOIUrl":null,"url":null,"abstract":"In this letter, we study the problem of legitimate in-neighborhood learning in multi-agent systems, where stochastic observations of trust between agents are available. Unlike previous works, we consider two types of malicious agents: naive and smart. Naive malicious agents always behave maliciously, while smart malicious agents can intermittently disguise themselves as legitimate agents. We identify a security vulnerability of the standard threshold design <inline-formula> <tex-math>$\\epsilon = 1/2$ </tex-math></inline-formula>, which is commonly used in trust aggregation approaches. This design fails to account for the deceptive behavior of smart malicious agents, making the approach vulnerable to their attacks. To address this, we propose a threshold design that explicitly accounts for such agents. Specifically, we provide a sufficient condition for the existence of a constant threshold that enables legitimate agents to identify their legitimate in-neighbors over time, despite the presence of smart malicious agents. In addition, we show that the proposed threshold design ensures geometrically decaying misclassification probabilities. Finally, we present numerical examples to validate our theoretical results and demonstrate how the design enhances the security of existing trust-based resilient algorithms against smart malicious agents.","PeriodicalId":37235,"journal":{"name":"IEEE Control Systems Letters","volume":"9 ","pages":"1922-1927"},"PeriodicalIF":2.0000,"publicationDate":"2025-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Control Systems Letters","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11078447/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
In this letter, we study the problem of legitimate in-neighborhood learning in multi-agent systems, where stochastic observations of trust between agents are available. Unlike previous works, we consider two types of malicious agents: naive and smart. Naive malicious agents always behave maliciously, while smart malicious agents can intermittently disguise themselves as legitimate agents. We identify a security vulnerability of the standard threshold design $\epsilon = 1/2$ , which is commonly used in trust aggregation approaches. This design fails to account for the deceptive behavior of smart malicious agents, making the approach vulnerable to their attacks. To address this, we propose a threshold design that explicitly accounts for such agents. Specifically, we provide a sufficient condition for the existence of a constant threshold that enables legitimate agents to identify their legitimate in-neighbors over time, despite the presence of smart malicious agents. In addition, we show that the proposed threshold design ensures geometrically decaying misclassification probabilities. Finally, we present numerical examples to validate our theoretical results and demonstrate how the design enhances the security of existing trust-based resilient algorithms against smart malicious agents.
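To make the threshold vulnerability concrete, below is a minimal Python sketch of threshold-based trust aggregation under this kind of attack. It uses a simplified model that is not taken from the letter: Bernoulli trust observations with illustrative means (0.7 for legitimate behavior, 0.3 for malicious behavior), a hypothetical disguise rate of 0.8 for the smart malicious agent, running-mean aggregation, and an ad hoc adjusted threshold placed midway between the smart agent's expected trust and a legitimate agent's. The letter's actual threshold design follows from its sufficient condition, not from this heuristic.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative parameters (assumptions, not values from the letter) ---
T = 2000            # number of trust observations
E_LEGIT = 0.7       # mean trust observation from a legitimate agent (> 1/2)
E_MALICIOUS = 0.3   # mean trust observation under malicious behavior (< 1/2)
P_DISGUISE = 0.8    # fraction of time the smart agent mimics legitimacy

def trust_stream(kind):
    """Draw T stochastic trust observations alpha(t) in {0, 1}."""
    if kind == "legitimate":
        return rng.binomial(1, E_LEGIT, size=T).astype(float)
    if kind == "naive":
        # Naive malicious agents always behave maliciously.
        return rng.binomial(1, E_MALICIOUS, size=T).astype(float)
    # Smart malicious agents intermittently disguise as legitimate.
    disguised = rng.random(T) < P_DISGUISE
    p = np.where(disguised, E_LEGIT, E_MALICIOUS)
    return rng.binomial(1, p).astype(float)

def classify(obs, eps):
    """Aggregate trust by its running mean and threshold it at eps."""
    beta = np.cumsum(obs) / np.arange(1, len(obs) + 1)
    return beta > eps  # True = classified as a legitimate in-neighbor

# Expected aggregated trust of the smart malicious agent: 0.62 here.
smart_mean = P_DISGUISE * E_LEGIT + (1 - P_DISGUISE) * E_MALICIOUS

# Standard threshold eps = 1/2 vs. an adjusted threshold placed in the
# gap (smart_mean, E_LEGIT), so the smart agent falls below it.
eps_std, eps_adj = 0.5, 0.5 * (smart_mean + E_LEGIT)

for kind in ("legitimate", "naive", "smart"):
    obs = trust_stream(kind)
    for eps in (eps_std, eps_adj):
        accepted = classify(obs, eps)[-1]
        print(f"{kind:10s} eps={eps:.2f} -> accepted as legitimate: {accepted}")
```

Under this toy model, the smart agent's expected aggregated trust is 0.8(0.7) + 0.2(0.3) = 0.62 > 1/2, so the standard threshold $\epsilon = 1/2$ accepts it, whereas any constant threshold strictly between 0.62 and 0.7 separates it from legitimate agents as observations accumulate. The geometrically decaying misclassification probabilities claimed in the abstract are consistent with the usual exponential concentration of a running mean around its expectation.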