Mahmoud Medany, Lorenzo Piglia, Liam Achenbach, S. Karthik Mukkavilli, Daniel Ahmed
{"title":"基于模型的超声驱动自主微型机器人强化学习","authors":"Mahmoud Medany, Lorenzo Piglia, Liam Achenbach, S. Karthik Mukkavilli, Daniel Ahmed","doi":"10.1038/s42256-025-01054-2","DOIUrl":null,"url":null,"abstract":"<p>Reinforcement learning is emerging as a powerful tool for microrobots control, as it enables autonomous navigation in environments where classical control approaches fall short. However, applying reinforcement learning to microrobotics is difficult due to the need for large training datasets, the slow convergence in physical systems and poor generalizability across environments. These challenges are amplified in ultrasound-actuated microrobots, which require rapid, precise adjustments in high-dimensional action space, which are often too complex for human operators. Addressing these challenges requires sample-efficient algorithms that adapt from limited data while managing complex physical interactions. To meet these challenges, we implemented model-based reinforcement learning for autonomous control of an ultrasound-driven microrobot, which learns from recurrent imagined environments. Our non-invasive, AI-controlled microrobot offers precise propulsion and efficiently learns from images in data-scarce environments. On transitioning from a pretrained simulation environment, we achieved sample-efficient collision avoidance and channel navigation, reaching a 90% success rate in target navigation across various channels within an hour of fine-tuning. Moreover, our model initially generalized successfully in 50% of tasks in new environments, improving to over 90% with 30‚Äâmin of further training. We further demonstrated real-time manipulation of microrobots in complex vasculatures under both static and flow conditions, thus underscoring the potential of AI to revolutionize microrobotics in biomedical applications.</p>","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"46 1","pages":""},"PeriodicalIF":18.8000,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Model-based reinforcement learning for ultrasound-driven autonomous microrobots\",\"authors\":\"Mahmoud Medany, Lorenzo Piglia, Liam Achenbach, S. Karthik Mukkavilli, Daniel Ahmed\",\"doi\":\"10.1038/s42256-025-01054-2\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Reinforcement learning is emerging as a powerful tool for microrobots control, as it enables autonomous navigation in environments where classical control approaches fall short. However, applying reinforcement learning to microrobotics is difficult due to the need for large training datasets, the slow convergence in physical systems and poor generalizability across environments. These challenges are amplified in ultrasound-actuated microrobots, which require rapid, precise adjustments in high-dimensional action space, which are often too complex for human operators. Addressing these challenges requires sample-efficient algorithms that adapt from limited data while managing complex physical interactions. To meet these challenges, we implemented model-based reinforcement learning for autonomous control of an ultrasound-driven microrobot, which learns from recurrent imagined environments. Our non-invasive, AI-controlled microrobot offers precise propulsion and efficiently learns from images in data-scarce environments. 
On transitioning from a pretrained simulation environment, we achieved sample-efficient collision avoidance and channel navigation, reaching a 90% success rate in target navigation across various channels within an hour of fine-tuning. Moreover, our model initially generalized successfully in 50% of tasks in new environments, improving to over 90% with 30‚Äâmin of further training. We further demonstrated real-time manipulation of microrobots in complex vasculatures under both static and flow conditions, thus underscoring the potential of AI to revolutionize microrobotics in biomedical applications.</p>\",\"PeriodicalId\":48533,\"journal\":{\"name\":\"Nature Machine Intelligence\",\"volume\":\"46 1\",\"pages\":\"\"},\"PeriodicalIF\":18.8000,\"publicationDate\":\"2025-06-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Nature Machine Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1038/s42256-025-01054-2\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Nature Machine Intelligence","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1038/s42256-025-01054-2","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Model-based reinforcement learning for ultrasound-driven autonomous microrobots
Reinforcement learning is emerging as a powerful tool for microrobot control, as it enables autonomous navigation in environments where classical control approaches fall short. However, applying reinforcement learning to microrobotics is difficult due to the need for large training datasets, slow convergence in physical systems and poor generalizability across environments. These challenges are amplified in ultrasound-actuated microrobots, which require rapid, precise adjustments in a high-dimensional action space that are often too complex for human operators. Addressing these challenges requires sample-efficient algorithms that adapt from limited data while managing complex physical interactions. To meet these challenges, we implemented model-based reinforcement learning for autonomous control of an ultrasound-driven microrobot, which learns from recurrent imagined environments. Our non-invasive, AI-controlled microrobot offers precise propulsion and efficiently learns from images in data-scarce environments. On transitioning from a pretrained simulation environment, we achieved sample-efficient collision avoidance and channel navigation, reaching a 90% success rate in target navigation across various channels within an hour of fine-tuning. Moreover, our model initially generalized successfully in 50% of tasks in new environments, improving to over 90% with 30 min of further training. We further demonstrated real-time manipulation of microrobots in complex vasculatures under both static and flow conditions, underscoring the potential of AI to revolutionize microrobotics in biomedical applications.
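The abstract describes a world model learned from image-derived observations and a policy optimized on rollouts "imagined" inside that model, with the physical microrobot used only for fine-tuning. The Python sketch below is not the authors' implementation; it is a minimal illustration of that general pattern, with assumed dimensions, a toy GRU latent-dynamics model and random stand-in data.

```python
# Minimal sketch of model-based RL with imagined rollouts (Dreamer-style pattern).
# All sizes, module names and the toy data are illustrative assumptions,
# not the paper's architecture or training data.

import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, HID_DIM, HORIZON = 16, 4, 64, 10  # assumed sizes

class WorldModel(nn.Module):
    """Recurrent latent dynamics: h_t = GRU([obs_t, act_t], h_{t-1})."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRUCell(OBS_DIM + ACT_DIM, HID_DIM)
        self.obs_head = nn.Linear(HID_DIM, OBS_DIM)   # predicts next observation
        self.reward_head = nn.Linear(HID_DIM, 1)      # predicts reward

    def step(self, h, obs, act):
        h = self.rnn(torch.cat([obs, act], dim=-1), h)
        return h, self.obs_head(h), self.reward_head(h)

class Actor(nn.Module):
    """Policy acting on the latent state of the world model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(HID_DIM, 64), nn.Tanh(),
                                 nn.Linear(64, ACT_DIM), nn.Tanh())
    def forward(self, h):
        return self.net(h)

world, actor = WorldModel(), Actor()
wm_opt = torch.optim.Adam(world.parameters(), lr=1e-3)
ac_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)

def train_world_model(batch):
    """Fit the world model to a replay batch of (obs, act, next_obs, reward)."""
    obs, act, next_obs, reward = batch
    h = torch.zeros(obs.shape[0], HID_DIM)
    h, obs_pred, rew_pred = world.step(h, obs, act)
    loss = (nn.functional.mse_loss(obs_pred, next_obs) +
            nn.functional.mse_loss(rew_pred, reward))
    wm_opt.zero_grad(); loss.backward(); wm_opt.step()
    return loss.item()

def train_actor_in_imagination(start_obs):
    """Roll the policy forward inside the learned model; maximize imagined return."""
    h = torch.zeros(start_obs.shape[0], HID_DIM)
    obs, total_reward = start_obs, 0.0
    for _ in range(HORIZON):
        act = actor(h)
        h, obs, rew = world.step(h, obs, act)
        total_reward = total_reward + rew.mean()
    loss = -total_reward  # gradient ascent on imagined reward
    ac_opt.zero_grad(); loss.backward(); ac_opt.step()
    return -loss.item()

# Toy usage with random tensors standing in for image-derived microrobot states.
batch = (torch.randn(32, OBS_DIM), torch.randn(32, ACT_DIM),
         torch.randn(32, OBS_DIM), torch.randn(32, 1))
print("world-model loss:", train_world_model(batch))
print("imagined return:", train_actor_in_imagination(torch.randn(32, OBS_DIM)))
```

Training the policy on imagined trajectories rather than on the physical system is what makes this family of methods sample-efficient on hardware: each real interaction improves the world model, and many cheap synthetic rollouts then improve the controller.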
Journal introduction:
Nature Machine Intelligence is a distinguished publication that presents original research and reviews on various topics in machine learning, robotics, and AI. Our focus extends beyond these fields, exploring their profound impact on other scientific disciplines, as well as societal and industrial aspects. We recognize limitless possibilities wherein machine intelligence can augment human capabilities and knowledge in domains like scientific exploration, healthcare, medical diagnostics, and the creation of safe and sustainable cities, transportation, and agriculture. Simultaneously, we acknowledge the emergence of ethical, social, and legal concerns due to the rapid pace of advancements.
To foster interdisciplinary discussions on these far-reaching implications, Nature Machine Intelligence serves as a platform for dialogue facilitated through Comments, News Features, News & Views articles, and Correspondence. Our goal is to encourage a comprehensive examination of these subjects.
Similar to all Nature-branded journals, Nature Machine Intelligence operates under the guidance of a team of skilled editors. We adhere to a fair and rigorous peer-review process, ensuring high standards of copy-editing and production, swift publication, and editorial independence.