TRiSM for Agentic AI: A review of Trust, Risk, and Security Management in LLM-based Agentic Multi-Agent Systems

Impact Factor: 14.8
AI Open · Pub Date: 2026-01-01 · Epub Date: 2026-03-02 · DOI: 10.1016/j.aiopen.2026.02.006
Shaina Raza, Ranjan Sapkota, Manoj Karkee, Christos Emmanouilidis
Journal: AI Open, Volume 7, Pages 71–95
URL: https://www.sciencedirect.com/science/article/pii/S2666651026000069
Citations: 0

Abstract

Agentic AI systems, built upon large language models (LLMs) and deployed in multi-agent configurations, are redefining intelligence, autonomy, collaboration, and decision-making across enterprise and societal domains. This review presents a structured analysis of Trust, Risk, and Security Management (TRiSM) in the context of LLM-based Agentic Multi-Agent Systems (AMAS). We begin by examining the conceptual foundations of Agentic AI and highlight its architectural distinctions from traditional AI agents. We then adapt and extend the AI TRiSM framework for Agentic AI, structured around key pillars: Explainability, ModelOps, Security, Privacy, and their Lifecycle Governance, each contextualized to the challenges of AMAS. A risk taxonomy is proposed to capture the unique threats and vulnerabilities of Agentic AI, ranging from coordination failures to prompt-based adversarial manipulation. To make coordination and tool use measurable in practice, we propose two metrics: the Component Synergy Score (CSS), which captures inter-agent enablement, and the Tool Utilization Efficacy (TUE), which evaluates whether tools are invoked correctly and efficiently. We further discuss strategies for improving explainability in Agentic AI, as well as approaches to enhancing security and privacy through encryption, adversarial robustness, and regulatory compliance. The review concludes with a research roadmap for the responsible development and deployment of Agentic AI, highlighting key directions to align emerging systems with TRiSM principles, ensuring safety, transparency, and accountability in their operation.
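The abstract names the two proposed metrics, CSS and TUE, but does not give their formal definitions. As an illustration only — a minimal ratio-based sketch, not the paper's actual formulas — TUE could be read as the fraction of tool calls that are both correct and necessary, and CSS as the fraction of completed tasks enabled by another agent's output. The function names and record fields below are hypothetical:

```python
def tool_utilization_efficacy(invocations):
    """Hypothetical TUE: share of tool calls that were both correct
    (right tool, valid arguments) and necessary (not redundant)."""
    if not invocations:
        return 0.0
    good = sum(1 for call in invocations
               if call["correct"] and call["necessary"])
    return good / len(invocations)

def component_synergy_score(tasks):
    """Hypothetical CSS: share of completed tasks that were enabled
    by an intermediate result produced by another agent."""
    done = [t for t in tasks if t["completed"]]
    if not done:
        return 0.0
    return sum(1 for t in done if t["enabled_by_other_agent"]) / len(done)

# Example traces: one redundant call and one malformed call drag TUE down.
calls = [
    {"correct": True,  "necessary": True},
    {"correct": True,  "necessary": False},  # redundant invocation
    {"correct": False, "necessary": True},   # wrong arguments
    {"correct": True,  "necessary": True},
]
print(tool_utilization_efficacy(calls))  # 0.5
```

Readers should consult the full paper for the authors' actual definitions; the point of the sketch is only that both metrics reduce agent behavior to auditable, per-event records.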