{"title":"合成孔径声纳图像的统一语义分割与目标检测框架","authors":"Shannon-Morgan Steele","doi":"10.1109/ICASSPW59220.2023.10193155","DOIUrl":null,"url":null,"abstract":"Manually identifying objects in synthetic aperture sonar (SAS) imagery is costly and time consuming, making identification through computer vision and deep learning techniques an appealing alternative. Depending on the application, a generalized map (semantic segmentation) and/or a characterization of each individual object (object detection) may be desired. Here, we demonstrate a framework that allows us to simultaneously generate both semantic segmentation maps and object detections with a single deep learning model by chaining together a U-Net model with k-means clustering and connected components. This framework streamlines the model training phase by allowing us to utilize a set of semantically segmented training data to yield both semantic segmentation and bounding box predictions. We demonstrate that the deep learning model can achieve accurate predictions with a small training set through transfer learning from a convolutional neural network pretrained on optical imagery. 
Results from this unified framework will be presented on images of boulders collected during various surveys using a Kraken Robotics miniature SAS (MINSAS).","PeriodicalId":158726,"journal":{"name":"2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Unified Semantic Segmentation and Object Detection Framework for Synthetic Aperture Sonar Imagery\",\"authors\":\"Shannon-Morgan Steele\",\"doi\":\"10.1109/ICASSPW59220.2023.10193155\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Manually identifying objects in synthetic aperture sonar (SAS) imagery is costly and time consuming, making identification through computer vision and deep learning techniques an appealing alternative. Depending on the application, a generalized map (semantic segmentation) and/or a characterization of each individual object (object detection) may be desired. Here, we demonstrate a framework that allows us to simultaneously generate both semantic segmentation maps and object detections with a single deep learning model by chaining together a U-Net model with k-means clustering and connected components. This framework streamlines the model training phase by allowing us to utilize a set of semantically segmented training data to yield both semantic segmentation and bounding box predictions. We demonstrate that the deep learning model can achieve accurate predictions with a small training set through transfer learning from a convolutional neural network pretrained on optical imagery. 
Results from this unified framework will be presented on images of boulders collected during various surveys using a Kraken Robotics miniature SAS (MINSAS).\",\"PeriodicalId\":158726,\"journal\":{\"name\":\"2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)\",\"volume\":\"37 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICASSPW59220.2023.10193155\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICASSPW59220.2023.10193155","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Unified Semantic Segmentation and Object Detection Framework for Synthetic Aperture Sonar Imagery
Manually identifying objects in synthetic aperture sonar (SAS) imagery is costly and time-consuming, making identification through computer vision and deep learning techniques an appealing alternative. Depending on the application, a generalized map (semantic segmentation) and/or a characterization of each individual object (object detection) may be desired. Here, we demonstrate a framework that generates both semantic segmentation maps and object detections from a single deep learning model by chaining a U-Net with k-means clustering and connected-components analysis. This framework streamlines the model training phase by allowing us to utilize a set of semantically segmented training data to yield both semantic segmentation and bounding box predictions. We demonstrate that the deep learning model can achieve accurate predictions with a small training set through transfer learning from a convolutional neural network pretrained on optical imagery. Results from this unified framework will be presented on images of boulders collected during various surveys using a Kraken Robotics miniature SAS (MINSAS).
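The abstract's key idea (deriving object detections from a segmentation map via connected components) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the U-Net (optionally post-processed with k-means) has already produced a discrete class map, and uses `scipy.ndimage` to label each connected blob of the target class and return its bounding box. The function name `masks_to_boxes` and the toy class map are hypothetical.

```python
import numpy as np
from scipy import ndimage

def masks_to_boxes(seg_map, target_label=1):
    """Return one (xmin, ymin, xmax, ymax) box per connected blob of target_label."""
    binary = (seg_map == target_label)
    labeled, num_blobs = ndimage.label(binary)   # connected-components analysis
    boxes = []
    for row_slice, col_slice in ndimage.find_objects(labeled):
        # slices are half-open, so .stop is already an exclusive max coordinate
        boxes.append((col_slice.start, row_slice.start,
                      col_slice.stop, row_slice.stop))
    return boxes

# Toy segmentation map with two separate "boulder" blobs (label 1)
seg = np.zeros((8, 8), dtype=int)
seg[1:3, 1:3] = 1
seg[5:8, 4:7] = 1
print(masks_to_boxes(seg))  # → [(1, 1, 3, 3), (4, 5, 7, 8)]
```

Because the boxes fall out of the segmentation output for free, a single set of semantically segmented training labels suffices for both tasks, which is the training-time economy the abstract highlights.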