{"title":"通过预测信息的 \"到达-避开 \"动态游戏学会安全影响的机器人","authors":"Ravi Pandya, Changliu Liu, Andrea Bajcsy","doi":"arxiv-2409.12153","DOIUrl":null,"url":null,"abstract":"Robots can influence people to accomplish their tasks more efficiently:\nautonomous cars can inch forward at an intersection to pass through, and\ntabletop manipulators can go for an object on the table first. However, a\nrobot's ability to influence can also compromise the safety of nearby people if\nnaively executed. In this work, we pose and solve a novel robust reach-avoid\ndynamic game which enables robots to be maximally influential, but only when a\nsafety backup control exists. On the human side, we model the human's behavior\nas goal-driven but conditioned on the robot's plan, enabling us to capture\ninfluence. On the robot side, we solve the dynamic game in the joint physical\nand belief space, enabling the robot to reason about how its uncertainty in\nhuman behavior will evolve over time. We instantiate our method, called SLIDE\n(Safely Leveraging Influence in Dynamic Environments), in a high-dimensional\n(39-D) simulated human-robot collaborative manipulation task solved via offline\ngame-theoretic reinforcement learning. We compare our approach to a robust\nbaseline that treats the human as a worst-case adversary, a safety controller\nthat does not explicitly reason about influence, and an energy-function-based\nsafety shield. We find that SLIDE consistently enables the robot to leverage\nthe influence it has on the human when it is safe to do so, ultimately allowing\nthe robot to be less conservative while still ensuring a high safety rate\nduring task execution.","PeriodicalId":501031,"journal":{"name":"arXiv - CS - Robotics","volume":"52 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Robots that Learn to Safely Influence via Prediction-Informed Reach-Avoid Dynamic Games\",\"authors\":\"Ravi Pandya, Changliu Liu, Andrea Bajcsy\",\"doi\":\"arxiv-2409.12153\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Robots can influence people to accomplish their tasks more efficiently:\\nautonomous cars can inch forward at an intersection to pass through, and\\ntabletop manipulators can go for an object on the table first. However, a\\nrobot's ability to influence can also compromise the safety of nearby people if\\nnaively executed. In this work, we pose and solve a novel robust reach-avoid\\ndynamic game which enables robots to be maximally influential, but only when a\\nsafety backup control exists. On the human side, we model the human's behavior\\nas goal-driven but conditioned on the robot's plan, enabling us to capture\\ninfluence. On the robot side, we solve the dynamic game in the joint physical\\nand belief space, enabling the robot to reason about how its uncertainty in\\nhuman behavior will evolve over time. We instantiate our method, called SLIDE\\n(Safely Leveraging Influence in Dynamic Environments), in a high-dimensional\\n(39-D) simulated human-robot collaborative manipulation task solved via offline\\ngame-theoretic reinforcement learning. We compare our approach to a robust\\nbaseline that treats the human as a worst-case adversary, a safety controller\\nthat does not explicitly reason about influence, and an energy-function-based\\nsafety shield. 
We find that SLIDE consistently enables the robot to leverage\\nthe influence it has on the human when it is safe to do so, ultimately allowing\\nthe robot to be less conservative while still ensuring a high safety rate\\nduring task execution.\",\"PeriodicalId\":501031,\"journal\":{\"name\":\"arXiv - CS - Robotics\",\"volume\":\"52 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Robotics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.12153\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Robotics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.12153","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Robots that Learn to Safely Influence via Prediction-Informed Reach-Avoid Dynamic Games
Robots can influence people to accomplish their tasks more efficiently: autonomous cars can inch forward at an intersection to pass through, and tabletop manipulators can go for an object on the table first. However, a robot's ability to influence can also compromise the safety of nearby people if naively executed. In this work, we pose and solve a novel robust reach-avoid dynamic game that enables robots to be maximally influential, but only when a safety backup control exists. On the human side, we model the human's behavior as goal-driven but conditioned on the robot's plan, enabling us to capture influence. On the robot side, we solve the dynamic game in the joint physical and belief space, enabling the robot to reason about how its uncertainty about human behavior will evolve over time.
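To make the plan-conditioned human model and the belief dynamics concrete, the following is a minimal sketch in standard Boltzmann-rationality notation; the symbols here (\theta for the human's goal, Q_\theta for a goal-conditioned action value, \beta for a rationality coefficient) are illustrative assumptions, not notation taken from the paper.

    % Plan-conditioned human model (illustrative): the human picks actions
    % that look good for goal \theta given the robot's planned action u^R.
    P(u^H_t \mid x_t, u^R_t; \theta) \propto \exp\!\big(\beta\, Q_\theta(x_t, u^H_t, u^R_t)\big)

    % Bayesian belief update over the human's goal; because the likelihood
    % depends on u^R, the robot's own actions steer the belief (influence):
    b_{t+1}(\theta) \propto P(u^H_t \mid x_t, u^R_t; \theta)\, b_t(\theta)

    % Joint physical-and-belief state over which the game is solved:
    z_t = (x_t, b_t)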
We instantiate our method, called SLIDE (Safely Leveraging Influence in Dynamic Environments), in a high-dimensional (39-D) simulated human-robot collaborative manipulation task solved via offline game-theoretic reinforcement learning.
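Robust reach-avoid games of this kind admit a well-known Bellman-style fixed point, and the sketch below shows its min/max structure on a discretized problem. It is a hedged illustration, not SLIDE's implementation: target_margin (nonnegative inside the target set), safety_margin (nonnegative outside the failure set), and step are hypothetical helpers, and this tabular sweep stands in for the offline reinforcement learning the paper uses at 39-D scale.

    import numpy as np

    def reach_avoid_backup(V, states, robot_actions, human_actions,
                           step, target_margin, safety_margin):
        """One sweep of a robust reach-avoid Bellman backup (illustrative).

        V[s] >= 0 means: from state s the robot can reach the target set
        while avoiding the failure set, for the worst-case human response.
        step(s, uR, uH) returns the index of the next discretized state;
        all helpers here are hypothetical placeholders.
        """
        V_new = np.empty_like(V)
        for s in states:
            # Robot maximizes over its action; the human is treated
            # robustly (worst case) by minimizing over theirs.
            best = -np.inf
            for uR in robot_actions:
                worst = min(V[step(s, uR, uH)] for uH in human_actions)
                best = max(best, worst)
            # Reach-avoid structure: succeed now (target margin) or keep
            # playing (best), but never exceed the safety margin, so any
            # state inside the failure set stays negative.
            V_new[s] = min(safety_margin(s), max(target_margin(s), best))
        return V_new

In this sketch, iterating the sweep to a fixed point yields a value function whose nonnegative states are those from which a safety backup control exists, which is the condition under which the abstract says the robot may exert influence.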
We compare our approach to a robust baseline that treats the human as a worst-case adversary, a safety controller that does not explicitly reason about influence, and an energy-function-based safety shield. We find that SLIDE consistently enables the robot to leverage the influence it has on the human when it is safe to do so, ultimately allowing the robot to be less conservative while still ensuring a high safety rate during task execution.