John McConnell, Ivana Collado-Gonzalez, Paul Szenher, Armon Shariati
Maritime scene matching for inter-robot localization across surface and underwater domains
Autonomous underwater vehicles (AUVs) play a crucial role across various sectors, including oil and gas production, civil engineering, and defense. However, underwater localization remains a significant challenge, limiting the widespread adoption of AUVs in these fields. A common strategy to address this challenge is to deploy a fleet of Uncrewed Surface Vessels (USVs), which are easier to localize on the surface, alongside AUVs to help anchor their position estimates. These methods typically rely on acoustic pinging among the robots to relay position-related data. Unfortunately, acoustic pinging requires synchronized clocks and a clear line of sight, making it difficult to deploy large-scale, decentralized teams, particularly in littoral (i.e. near-shore) environments.
To bridge this gap, we propose an alternative approach that is resolvable over asynchronous, intermittent communications. By leveraging the fact that many human-made structures in littoral environments are visible both above and below the waterline, our method automatically detects correspondences between above-water LiDAR scenes and underwater sonar scenes. First, we convert underwater and above-water data into a common representation, generate descriptors to find commonalities, and then apply point cloud registration tools to find rigid body transformations between them. Lastly, we apply pairwise consistent measurement set maximization (PCM) as a robust outlier rejection system. Our results demonstrate that our solution to this novel Maritime Scene Matching (MSM) problem is both robust to outliers and effective in localizing sonar scenes with an accuracy of less than two meters. Datasets are collected using a single robot equipped with underwater imaging sonar and above-water LiDAR. We have made our real-world datasets, hardware designs, and open-source code available to promote reproducibility and to encourage broader community engagement with the MSM problem. Open-source code: https://github.com/jake3991/maritime-scene-matching.
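The PCM step described in the abstract selects the largest subset of candidate registrations that all agree with one another, which amounts to a maximum-clique search over a pairwise-consistency graph. The sketch below is a toy illustration of that idea, not the authors' implementation: it uses 2-D position estimates, a simple Euclidean distance threshold in place of a proper Mahalanobis consistency test, and a brute-force clique search that is only practical for small measurement counts.

```python
import itertools
import numpy as np

def max_pairwise_consistent_set(measurements, threshold):
    """Return indices of the largest subset of measurements whose
    members are all pairwise consistent (brute-force max-clique
    search over the consistency graph; fine for small n)."""
    n = len(measurements)
    consistent = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(n):
            # Toy consistency test: two estimates agree if they lie
            # within `threshold` of each other. A full PCM would use
            # a Mahalanobis distance with the measurement covariances.
            consistent[i, j] = np.linalg.norm(
                measurements[i] - measurements[j]) < threshold
    # Search subsets from largest to smallest; the first subset whose
    # members are all mutually consistent is a maximum clique.
    for size in range(n, 0, -1):
        for subset in itertools.combinations(range(n), size):
            if all(consistent[i, j]
                   for i, j in itertools.combinations(subset, 2)):
                return list(subset)
    return []

# Three mutually agreeing position estimates plus one gross outlier.
z = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
inliers = max_pairwise_consistent_set(z, threshold=0.5)
print(inliers)  # the outlier at index 3 is rejected
```

The key property this captures is that PCM rejects outliers jointly rather than one at a time: a spurious LiDAR-sonar match is discarded because it disagrees with the mutually consistent majority, not because it exceeds some per-measurement residual.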
Journal introduction:
Robotics and Autonomous Systems will carry articles describing fundamental developments in the field of robotics, with special emphasis on autonomous systems. An important goal of this journal is to extend the state of the art in both symbolic and sensory-based robot control and learning in the context of autonomous systems.
Robotics and Autonomous Systems will carry articles on the theoretical, computational and experimental aspects of autonomous systems, or modules of such systems.