{"title":"声学反射体绘图的机器学习框架","authors":"Usama Saqib, Letizia Marchegiani, Jesper Rindom Jensen","doi":"arxiv-2409.12094","DOIUrl":null,"url":null,"abstract":"Sonar-based indoor mapping systems have been widely employed in robotics for\nseveral decades. While such systems are still the mainstream in underwater and\npipe inspection settings, the vulnerability to noise reduced, over time, their\ngeneral widespread usage in favour of other modalities(\\textit{e.g.}, cameras,\nlidars), whose technologies were encountering, instead, extraordinary\nadvancements. Nevertheless, mapping physical environments using acoustic\nsignals and echolocation can bring significant benefits to robot navigation in\nadverse scenarios, thanks to their complementary characteristics compared to\nother sensors. Cameras and lidars, indeed, struggle in harsh weather\nconditions, when dealing with lack of illumination, or with non-reflective\nwalls. Yet, for acoustic sensors to be able to generate accurate maps, noise\nhas to be properly and effectively handled. Traditional signal processing\ntechniques are not always a solution in those cases. In this paper, we propose\na framework where machine learning is exploited to aid more traditional signal\nprocessing methods to cope with background noise, by removing outliers and\nartefacts from the generated maps using acoustic sensors. Our goal is to\ndemonstrate that the performance of traditional echolocation mapping techniques\ncan be greatly enhanced, even in particularly noisy conditions, facilitating\nthe employment of acoustic sensors in state-of-the-art multi-modal robot\nnavigation systems. Our simulated evaluation demonstrates that the system can\nreliably operate at an SNR of $-10$dB. Moreover, we also show that the proposed\nmethod is capable of operating in different reverberate environments. In this\npaper, we also use the proposed method to map the outline of a simulated room\nusing a robotic platform.","PeriodicalId":501031,"journal":{"name":"arXiv - CS - Robotics","volume":"24 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A machine learning framework for acoustic reflector mapping\",\"authors\":\"Usama Saqib, Letizia Marchegiani, Jesper Rindom Jensen\",\"doi\":\"arxiv-2409.12094\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Sonar-based indoor mapping systems have been widely employed in robotics for\\nseveral decades. While such systems are still the mainstream in underwater and\\npipe inspection settings, the vulnerability to noise reduced, over time, their\\ngeneral widespread usage in favour of other modalities(\\\\textit{e.g.}, cameras,\\nlidars), whose technologies were encountering, instead, extraordinary\\nadvancements. Nevertheless, mapping physical environments using acoustic\\nsignals and echolocation can bring significant benefits to robot navigation in\\nadverse scenarios, thanks to their complementary characteristics compared to\\nother sensors. Cameras and lidars, indeed, struggle in harsh weather\\nconditions, when dealing with lack of illumination, or with non-reflective\\nwalls. Yet, for acoustic sensors to be able to generate accurate maps, noise\\nhas to be properly and effectively handled. Traditional signal processing\\ntechniques are not always a solution in those cases. 
In this paper, we propose\\na framework where machine learning is exploited to aid more traditional signal\\nprocessing methods to cope with background noise, by removing outliers and\\nartefacts from the generated maps using acoustic sensors. Our goal is to\\ndemonstrate that the performance of traditional echolocation mapping techniques\\ncan be greatly enhanced, even in particularly noisy conditions, facilitating\\nthe employment of acoustic sensors in state-of-the-art multi-modal robot\\nnavigation systems. Our simulated evaluation demonstrates that the system can\\nreliably operate at an SNR of $-10$dB. Moreover, we also show that the proposed\\nmethod is capable of operating in different reverberate environments. In this\\npaper, we also use the proposed method to map the outline of a simulated room\\nusing a robotic platform.\",\"PeriodicalId\":501031,\"journal\":{\"name\":\"arXiv - CS - Robotics\",\"volume\":\"24 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Robotics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.12094\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Robotics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.12094","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A machine learning framework for acoustic reflector mapping
Sonar-based indoor mapping systems have been widely employed in robotics for
several decades. While such systems are still the mainstream in underwater and
pipe inspection settings, their vulnerability to noise has, over time, reduced
their widespread usage in favour of other modalities (\textit{e.g.}, cameras,
lidars), whose technologies have, in the meantime, seen extraordinary
advancements. Nevertheless, mapping physical environments using acoustic
signals and echolocation can bring significant benefits to robot navigation in
adverse scenarios, thanks to their complementary characteristics compared to
other sensors. Cameras and lidars, indeed, struggle in harsh weather
conditions, under poor illumination, or in the presence of non-reflective
walls. Yet, for acoustic sensors to generate accurate maps, noise
has to be properly and effectively handled. Traditional signal processing
techniques are not always sufficient in such cases. In this paper, we propose
a framework where machine learning is exploited to aid more traditional signal
processing methods in coping with background noise, by removing outliers and
artefacts from the maps generated using acoustic sensors. Our goal is to
demonstrate that the performance of traditional echolocation mapping techniques
can be greatly enhanced, even in particularly noisy conditions, facilitating
the employment of acoustic sensors in state-of-the-art multi-modal robot
navigation systems. Our simulated evaluation demonstrates that the system can
reliably operate at an SNR of $-10$ dB. Moreover, we show that the proposed
method is capable of operating in different reverberant environments. Finally,
we use the proposed method to map the outline of a simulated room using a
robotic platform.
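
The abstract does not disclose the paper's actual signal model, learning architecture, or filtering algorithm, so the following Python sketch is purely illustrative. It shows two generic building blocks consistent with the claims above: corrupting a simulated sonar pulse with background noise at a prescribed SNR (here $-10$ dB, the operating point reported in the evaluation) and a simple statistical outlier filter standing in for the learned artefact-removal stage. All function names, parameters, and constants below are hypothetical.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature (assumption)


def add_noise_at_snr(signal: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Add white Gaussian noise so that the resulting SNR equals snr_db."""
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise


def echo_delay_to_distance(delay_s: float) -> float:
    """Convert a round-trip echo delay into a reflector distance."""
    return 0.5 * SPEED_OF_SOUND * delay_s


def remove_outliers(points: np.ndarray, k: float = 3.0) -> np.ndarray:
    """
    Naive stand-in for the learned artefact-removal stage: discard 2-D
    reflector points whose distance to the map centroid deviates from the
    median distance by more than k median absolute deviations.
    """
    centroid = points.mean(axis=0)
    dists = np.linalg.norm(points - centroid, axis=1)
    med = np.median(dists)
    mad = np.median(np.abs(dists - med)) + 1e-9
    keep = np.abs(dists - med) < k * mad
    return points[keep]


if __name__ == "__main__":
    # Simulate a short sonar pulse and corrupt it at -10 dB SNR.
    fs = 48_000                        # sampling rate (Hz), assumed
    t = np.arange(0, 0.002, 1 / fs)    # 2 ms pulse
    pulse = np.sin(2 * np.pi * 8_000 * t)
    noisy_echo = add_noise_at_snr(pulse, snr_db=-10.0)
    measured_snr = 10 * np.log10(
        np.mean(pulse ** 2) / np.mean((noisy_echo - pulse) ** 2))
    print("measured SNR (dB):", measured_snr)

    # A toy reflector map (a straight wall) with two artefact points, filtered.
    wall = np.column_stack([np.linspace(0, 3, 50), np.full(50, 2.0)])
    artefacts = np.array([[10.0, 10.0], [-8.0, 5.0]])
    cleaned = remove_outliers(np.vstack([wall, artefacts]))
    print("points before/after filtering:", 52, cleaned.shape[0])
```

In the paper's setting, the median-deviation filter above would be replaced by the proposed machine-learning stage, which is trained to distinguish genuine reflector detections from noise-induced artefacts.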