Combining AI control systems and human decision support via robustness and criticality
Walt Woods, Alexander Grushin, Simon Khan, Alvaro Velasquez
Defense + Commercial Sensing, 2024-06-06. DOI: 10.1117/12.3016311
Abstract
AI-enabled capabilities are reaching the requisite level of maturity to be deployed in the real world. Yet doubts about whether these systems will always make correct or safe decisions remain a constant source of criticism and a barrier to their adoption. One way of addressing these concerns is to deploy AI control systems alongside and in support of human decisions, relying on the AI control system in safe situations while calling on a human co-decider in critical situations. Moreover, an AI control system built specifically to support joint human/machine decisions naturally creates the opportunity to use those human interactions to continuously improve the system's accuracy and robustness. We extend a methodology for Adversarial Explanations (AE) to state-of-the-art reinforcement learning frameworks, including MuZero, and propose multiple improvements to the base agent architecture. We demonstrate two applications of this technology: intelligent decision-support tools and enhanced training/learning frameworks. In a decision-support context, adversarial explanations help a user make the correct decision by highlighting the contextual factors that would need to change for the AI to recommend a different decision. As a further benefit of adversarial explanations, we show that the learned AI control system is robust against adversarial tampering. We also supplement AE with Strategically Similar Autoencoders (SSAs), which help users identify and understand all salient factors being considered by the AI system. In a training/learning framework, human interaction can improve both the AI's decisions and its explanations. Finally, to identify when AI decisions would most benefit from human oversight, we tie this combined system to our prior work on statistically verified analyses of the criticality of decisions at any point in time.
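
A minimal sketch of how an adversarial explanation might be computed for a differentiable policy: starting from a given observation, gradient descent finds a small perturbation that flips the recommended action, and the perturbation itself highlights which contextual factors would need to change. The TinyPolicy network, step size, iteration budget, and penalty weight below are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: a targeted adversarial-explanation loop for a policy
# network, assuming a differentiable mapping from observations to action
# logits. All architecture and hyperparameter choices are hypothetical.
import torch
import torch.nn as nn

class TinyPolicy(nn.Module):
    """Stand-in policy network mapping an observation to action logits."""
    def __init__(self, obs_dim: int = 8, n_actions: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, n_actions)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def adversarial_explanation(policy: nn.Module, obs: torch.Tensor,
                            target_action: int, steps: int = 200,
                            lr: float = 0.01) -> torch.Tensor:
    """Find a small perturbation delta so that policy(obs + delta) prefers
    target_action; delta then highlights which contextual factors would
    need to change for the AI to recommend differently."""
    delta = torch.zeros_like(obs, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = policy(obs + delta)
        # Encourage the target action while penalizing perturbation size.
        loss = nn.functional.cross_entropy(
            logits.unsqueeze(0), torch.tensor([target_action])
        ) + 0.1 * delta.norm()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return delta.detach()

policy = TinyPolicy()
obs = torch.randn(8)
delta = adversarial_explanation(policy, obs, target_action=2)
print("factor changes needed to recommend action 2:", delta)
```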
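
The abstract describes SSAs only by their purpose, so the following is one plausible, hedged reading: an autoencoder whose latent space is regularized so that states eliciting similar policy outputs embed near one another, making the strategically salient factors easier to inspect. The pairwise-distance loss and its weighting are assumptions, not the paper's definition.

```python
# Hedged sketch of one possible Strategically Similar Autoencoder: latent
# distances are encouraged to track distances between policy outputs, so
# strategically similar states share nearby latent codes. Illustrative only.
import torch
import torch.nn as nn

class SSA(nn.Module):
    def __init__(self, obs_dim: int = 8, latent_dim: int = 3):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(),
                                 nn.Linear(32, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                 nn.Linear(32, obs_dim))

    def forward(self, x: torch.Tensor):
        z = self.enc(x)
        return z, self.dec(z)

def ssa_loss(model: SSA, obs_batch: torch.Tensor,
             policy_logits: torch.Tensor, w: float = 1.0) -> torch.Tensor:
    z, recon = model(obs_batch)
    recon_loss = nn.functional.mse_loss(recon, obs_batch)
    # Match pairwise latent distances to pairwise policy-output distances.
    strat_loss = nn.functional.mse_loss(
        torch.cdist(z, z), torch.cdist(policy_logits, policy_logits)
    )
    return recon_loss + w * strat_loss

ssa = SSA()
obs = torch.randn(16, 8)
logits = torch.randn(16, 4)  # stand-in for policy outputs on these states
print("SSA loss:", ssa_loss(ssa, obs, logits).item())
```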
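
As a rough illustration of decision criticality, a common proxy (not necessarily the paper's exact metric) is the gap between the best and average action values: where that gap is large, the choice of action matters greatly, so human oversight is most valuable. The bootstrap test below is an assumed stand-in for "statistically verified", checking that the gap exceeds a user-chosen threshold with high confidence.

```python
# Hedged sketch: flag a state as critical when a bootstrap lower bound on
# the best-vs-average action-value gap clears a threshold tau. The metric
# and the test are illustrative assumptions, not the paper's method.
import numpy as np

def criticality(q_values: np.ndarray) -> float:
    """Gap between the best and the average action value."""
    return float(np.max(q_values) - np.mean(q_values))

def exceeds_threshold(q_samples: np.ndarray, tau: float,
                      n_boot: int = 2000, alpha: float = 0.05) -> bool:
    """q_samples: (n_rollouts, n_actions) Monte Carlo Q-value estimates.
    Returns True if criticality > tau at the (1 - alpha) confidence level."""
    rng = np.random.default_rng(0)
    n = q_samples.shape[0]
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample rollouts with replacement
        stats.append(criticality(q_samples[idx].mean(axis=0)))
    return float(np.quantile(stats, alpha)) > tau

q_samples = np.random.default_rng(1).normal(
    loc=[1.0, 0.2, 0.1, 0.0], scale=0.1, size=(64, 4))
print("call in the human co-decider:", exceeds_threshold(q_samples, tau=0.3))
```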