{"title":"人-机器人团队信任的博弈论模型:指导人类监控机器人行为的观察策略","authors":"Zahra Zahedi;Sailik Sengupta;Subbarao Kambhampati","doi":"10.1109/THMS.2024.3488559","DOIUrl":null,"url":null,"abstract":"In scenarios involving robots generating and executing plans, conflicts can arise between cost-effective robot execution and meeting human expectations for safe behavior. When humans supervise robots, their accountability increases, especially when robot behavior deviates from expectations. To address this, robots may choose a highly constrained plan when monitored and a more optimal one when unobserved. While this behavior is not driven by human-like motives, it stems from robots accommodating diverse supervisors. To optimize monitoring costs while ensuring safety, we model this interaction in a trust-based game-theoretic framework. However, pure-strategy Nash equilibrium often fails to exist in this model. To address this, we introduce the concept of a trust boundary within the mixed strategy space, aiding in the discovery of optimal monitoring strategies. Human studies demonstrate the necessity of optimal strategies and the benefits of our suggested approaches.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":"55 1","pages":"37-47"},"PeriodicalIF":3.5000,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Game-Theoretic Model of Trust in Human–Robot Teaming: Guiding Human Observation Strategy for Monitoring Robot Behavior\",\"authors\":\"Zahra Zahedi;Sailik Sengupta;Subbarao Kambhampati\",\"doi\":\"10.1109/THMS.2024.3488559\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In scenarios involving robots generating and executing plans, conflicts can arise between cost-effective robot execution and meeting human expectations for safe behavior. When humans supervise robots, their accountability increases, especially when robot behavior deviates from expectations. To address this, robots may choose a highly constrained plan when monitored and a more optimal one when unobserved. While this behavior is not driven by human-like motives, it stems from robots accommodating diverse supervisors. To optimize monitoring costs while ensuring safety, we model this interaction in a trust-based game-theoretic framework. However, pure-strategy Nash equilibrium often fails to exist in this model. To address this, we introduce the concept of a trust boundary within the mixed strategy space, aiding in the discovery of optimal monitoring strategies. 
Human studies demonstrate the necessity of optimal strategies and the benefits of our suggested approaches.\",\"PeriodicalId\":48916,\"journal\":{\"name\":\"IEEE Transactions on Human-Machine Systems\",\"volume\":\"55 1\",\"pages\":\"37-47\"},\"PeriodicalIF\":3.5000,\"publicationDate\":\"2024-12-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Human-Machine Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10776782/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Human-Machine Systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10776782/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
In scenarios where robots generate and execute plans, conflicts can arise between cost-effective robot execution and human expectations of safe behavior. When humans supervise robots, their accountability increases, especially when robot behavior deviates from expectations. To accommodate this, a robot may choose a highly constrained plan when it is monitored and a more cost-optimal one when it is unobserved. Although this behavior is not driven by human-like motives, it arises from robots having to accommodate diverse supervisors. To reduce monitoring costs while ensuring safety, we model this interaction in a trust-based game-theoretic framework. However, a pure-strategy Nash equilibrium often fails to exist in this model. We therefore introduce the concept of a trust boundary within the mixed-strategy space, which aids in discovering optimal monitoring strategies. Human studies demonstrate the necessity of such optimal strategies and the benefits of our suggested approaches.
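To make the equilibrium issue concrete, the minimal sketch below sets up a toy inspection-style monitoring game between a human supervisor (observe or not) and a robot (safe plan or cost-optimal plan). The payoff matrices, the `pure_nash` helper, and all numbers are hypothetical illustrations, not the paper's actual model; the sketch only demonstrates the abstract's claim that best responses can cycle, so no pure-strategy Nash equilibrium exists and the optimal monitoring strategy must be mixed.

```python
import numpy as np

# Toy inspection-style monitoring game (hypothetical payoffs, not the paper's model).
# Row player: human supervisor -> [Observe, Don't observe]
# Column player: robot         -> [Safe plan, Cost-optimal plan]
H = np.array([[-1.0,  2.0],    # observing costs effort; catching a deviation pays off
              [ 0.0, -5.0]])   # an unnoticed deviation is costly for the supervisor
R = np.array([[ 1.0, -4.0],    # deviating while watched gets the robot penalized
              [ 1.0,  3.0]])   # deviating unobserved saves execution cost

def pure_nash(H, R):
    """Return all pure-strategy Nash equilibria (row, col) of a 2x2 bimatrix game."""
    eqs = []
    for i in range(2):
        for j in range(2):
            # neither player can gain by unilaterally switching strategies
            if H[i, j] >= H[1 - i, j] and R[i, j] >= R[i, 1 - j]:
                eqs.append((i, j))
    return eqs

print("pure-strategy equilibria:", pure_nash(H, R))  # -> [] : best responses cycle

# Mixed equilibrium via indifference conditions: the human observes with
# probability p that makes the robot indifferent between its two plans, and
# the robot plays the safe plan with probability q that makes the human
# indifferent between observing and not.
p = (R[1, 1] - R[1, 0]) / ((R[1, 1] - R[1, 0]) + (R[0, 0] - R[0, 1]))
q = (H[1, 1] - H[0, 1]) / ((H[1, 1] - H[0, 1]) + (H[0, 0] - H[1, 0]))
print(f"P(human observes) = {p:.3f}, P(robot plays safe plan) = {q:.3f}")
# For these hypothetical payoffs: p = 2/7 and q = 7/8.
```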
Journal Introduction:
The scope of the IEEE Transactions on Human-Machine Systems covers human-machine systems and human-organizational interactions, including cognitive ergonomics, system test and evaluation, and human information processing in systems and organizations.