Marko Orescanin;Derek Olson;Brian Harrington;Marc Geilhufe;Roy Edgar Hansen;Dalton Duvio;Narada Warakagoda
{"title":"基于贝叶斯深度学习的合成孔径声呐成像伪影分类","authors":"Marko Orescanin;Derek Olson;Brian Harrington;Marc Geilhufe;Roy Edgar Hansen;Dalton Duvio;Narada Warakagoda","doi":"10.1109/JOE.2025.3538948","DOIUrl":null,"url":null,"abstract":"Synthetic aperture sonar (SAS) provides high-resolution underwater imaging but can suffer from artifacts due to environment or navigation errors. This work explores Bayesian deep learning for classifying common imaging artifacts while quantifying model reliability. We introduce a novel labeled data set with simulated imaging errors through controlled beamforming perturbations. Two Bayesian neural network variants, Monte Carlo dropout and flipout, were trained on this data to detect three artifacts induced by: sound speed errors, yaw attitude error, and additive noise. Results demonstrate these methods accurately classify artifacts in SAS imagery while producing well-calibrated uncertainty estimates. Uncertainty tends to be higher for uniform seafloor textures where artifacts are harder to perceive, and lower for richly textured environments. Analyzing uncertainty reveals regions likely to be misclassified. By discarding 20% of the most uncertain predictions, classification improves from 0.92 F<inline-formula><tex-math>$_{1}$</tex-math></inline-formula>-score to 0.98 F<inline-formula><tex-math>$_{1}$</tex-math></inline-formula>-score. Overall, the Bayesian approach enables uncertainty-aware perception, boosting model reliability—an essential capability for real-world autonomous underwater systems. This work establishes Bayesian deep learning as a robust technique for uncertainty quantification and artifact detection in SAS.","PeriodicalId":13191,"journal":{"name":"IEEE Journal of Oceanic Engineering","volume":"50 3","pages":"2280-2295"},"PeriodicalIF":5.3000,"publicationDate":"2025-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Classification of Imaging Artifacts in Synthetic Aperture Sonar With Bayesian Deep Learning\",\"authors\":\"Marko Orescanin;Derek Olson;Brian Harrington;Marc Geilhufe;Roy Edgar Hansen;Dalton Duvio;Narada Warakagoda\",\"doi\":\"10.1109/JOE.2025.3538948\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Synthetic aperture sonar (SAS) provides high-resolution underwater imaging but can suffer from artifacts due to environment or navigation errors. This work explores Bayesian deep learning for classifying common imaging artifacts while quantifying model reliability. We introduce a novel labeled data set with simulated imaging errors through controlled beamforming perturbations. Two Bayesian neural network variants, Monte Carlo dropout and flipout, were trained on this data to detect three artifacts induced by: sound speed errors, yaw attitude error, and additive noise. Results demonstrate these methods accurately classify artifacts in SAS imagery while producing well-calibrated uncertainty estimates. Uncertainty tends to be higher for uniform seafloor textures where artifacts are harder to perceive, and lower for richly textured environments. Analyzing uncertainty reveals regions likely to be misclassified. By discarding 20% of the most uncertain predictions, classification improves from 0.92 F<inline-formula><tex-math>$_{1}$</tex-math></inline-formula>-score to 0.98 F<inline-formula><tex-math>$_{1}$</tex-math></inline-formula>-score. 
Overall, the Bayesian approach enables uncertainty-aware perception, boosting model reliability—an essential capability for real-world autonomous underwater systems. This work establishes Bayesian deep learning as a robust technique for uncertainty quantification and artifact detection in SAS.\",\"PeriodicalId\":13191,\"journal\":{\"name\":\"IEEE Journal of Oceanic Engineering\",\"volume\":\"50 3\",\"pages\":\"2280-2295\"},\"PeriodicalIF\":5.3000,\"publicationDate\":\"2025-03-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Journal of Oceanic Engineering\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11006263/\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, CIVIL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal of Oceanic Engineering","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/11006263/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, CIVIL","Score":null,"Total":0}
Classification of Imaging Artifacts in Synthetic Aperture Sonar With Bayesian Deep Learning
Synthetic aperture sonar (SAS) provides high-resolution underwater imaging but can suffer from artifacts due to environmental or navigation errors. This work explores Bayesian deep learning for classifying common imaging artifacts while quantifying model reliability. We introduce a novel labeled data set with imaging errors simulated through controlled beamforming perturbations. Two Bayesian neural network variants, Monte Carlo dropout and flipout, were trained on these data to detect three artifact types induced by sound speed errors, yaw attitude errors, and additive noise. Results demonstrate that these methods accurately classify artifacts in SAS imagery while producing well-calibrated uncertainty estimates. Uncertainty tends to be higher for uniform seafloor textures, where artifacts are harder to perceive, and lower for richly textured environments. Analyzing uncertainty reveals regions likely to be misclassified: discarding the 20% most uncertain predictions improves the classification F$_{1}$-score from 0.92 to 0.98. Overall, the Bayesian approach enables uncertainty-aware perception, boosting model reliability—an essential capability for real-world autonomous underwater systems. This work establishes Bayesian deep learning as a robust technique for uncertainty quantification and artifact detection in SAS.
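To make the abstract's workflow concrete, the sketch below shows Monte Carlo dropout inference with an entropy-based rejection of the most uncertain predictions. It is a minimal illustration of the general idea only: the toy CNN architecture, the number of stochastic passes, the use of predictive entropy as the uncertainty score, and the 20% rejection fraction are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (PyTorch): Monte Carlo dropout classification with
# uncertainty-based rejection. Architecture and parameters are illustrative
# assumptions, not the paper's actual model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ArtifactClassifier(nn.Module):
    """Toy CNN for 3-class SAS artifact classification (hypothetical)."""

    def __init__(self, num_classes: int = 3, p_drop: float = 0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.dropout = nn.Dropout(p_drop)  # kept stochastic at test time
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).mean(dim=(2, 3))  # global average pooling
        return self.head(self.dropout(h))


@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, passes: int = 30):
    """Average softmax outputs over several stochastic forward passes and
    return the mean class probabilities plus predictive entropy."""
    model.train()  # keep dropout active during inference (MC dropout)
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(passes)])
    mean_probs = probs.mean(dim=0)  # (batch, classes)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy


def keep_most_confident(entropy: torch.Tensor, reject_frac: float = 0.2):
    """Boolean mask retaining the (1 - reject_frac) lowest-entropy predictions."""
    threshold = torch.quantile(entropy, 1.0 - reject_frac)
    return entropy <= threshold


if __name__ == "__main__":
    model = ArtifactClassifier()
    tiles = torch.randn(64, 1, 128, 128)  # stand-in for SAS image tiles
    mean_probs, entropy = mc_dropout_predict(model, tiles)
    keep = keep_most_confident(entropy, reject_frac=0.2)
    preds = mean_probs.argmax(dim=-1)
    print(f"kept {keep.sum().item()} of {len(preds)} predictions")
```

In this kind of setup, screening out the highest-entropy tiles before reporting classifications is what the abstract's 0.92 to 0.98 F$_{1}$-score improvement refers to; a flipout-based variant would replace the dropout layers with stochastic weight perturbations but use the same averaging and rejection logic.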
Journal description:
The IEEE Journal of Oceanic Engineering (ISSN 0364-9059) is the online-only quarterly publication of the IEEE Oceanic Engineering Society (IEEE OES). The scope of the Journal is the field of interest of the IEEE OES, which encompasses all aspects of science, engineering, and technology that address research, development, and operations pertaining to all bodies of water. This includes the creation of new capabilities and technologies from concept design through prototypes, testing, and operational systems to sense, explore, understand, develop, use, and responsibly manage natural resources.