UniLCD: Unified Local-Cloud Decision-Making via Reinforcement Learning

Kathakoli Sengupta, Zhongkai Shagguan, Sandesh Bharadwaj, Sanjay Arora, Eshed Ohn-Bar, Renato Mancuso

arXiv:2409.11403 (arXiv - CS - Robotics), published 2024-09-17
Abstract
Embodied vision-based real-world systems, such as mobile robots, require a
careful balance between energy consumption, compute latency, and safety
constraints to optimize operation across dynamic tasks and contexts. As local
computation tends to be restricted, offloading computation, i.e., to a remote
server, can save local resources while providing access to high-quality
predictions from powerful and large models. However, the resulting
communication and latency overhead has led to limited usability of cloud models
in dynamic, safety-critical, real-time settings. To effectively address this
trade-off, we introduce UniLCD, a novel hybrid inference framework for enabling
flexible local-cloud collaboration. By efficiently optimizing a flexible
routing module via reinforcement learning and a suitable multi-task objective,
UniLCD is specifically designed to support the multiple constraints of
safety-critical end-to-end mobile systems. We validate the proposed approach
using a challenging, crowded navigation task requiring frequent and timely
switching between local and cloud operations. UniLCD improves
overall performance and efficiency by over 35% compared to state-of-the-art
baselines based on various split-computing and early-exit strategies.
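The abstract describes a routing module that decides, per step, whether inference runs on the local model or is offloaded to a cloud model, trained against a multi-task objective balancing task performance, energy, and latency. The sketch below is only an illustration of that general idea under assumed names and weights (`route`, `reward`, `w_energy`, `w_latency` are hypothetical, not UniLCD's API); the paper's actual policy is learned via reinforcement learning rather than a fixed threshold.

```python
def reward(task_score: float, energy_cost: float, latency_cost: float,
           w_energy: float = 0.1, w_latency: float = 0.1) -> float:
    """Toy multi-task objective: task performance minus weighted
    energy and latency penalties (weights are illustrative)."""
    return task_score - w_energy * energy_cost - w_latency * latency_cost


def route(local_confidence: float, threshold: float = 0.8) -> str:
    """Toy routing rule: keep inference on-device when the small local
    model is confident, otherwise offload to the larger cloud model.
    UniLCD learns this decision with RL instead of a hand-set threshold."""
    return "local" if local_confidence >= threshold else "cloud"


# A confident local prediction stays on-device; an uncertain one offloads.
assert route(0.9) == "local"
assert route(0.5) == "cloud"
```

In the learned setting, `route` would be replaced by a policy network whose parameters are optimized so that trajectories maximize the cumulative `reward`-style objective, letting the trade-off between accuracy, energy, and latency emerge from training rather than manual tuning.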