{"title":"使用水下机器人获得的海底视觉地图的自动解释","authors":"Jin Wei Lim, A. Prügel-Bennett, B. Thornton","doi":"10.1109/OCEANSKOBE.2018.8559247","DOIUrl":null,"url":null,"abstract":"Scientific surveys using underwater robots can recover a huge volume of seafloor imagery. For mapping applications, these images can be packaged into vast, seamless and georeferenced seafloor visual reconstructions in a routine way, however interpreting this data to extract useful quantitative information typically relies on the manual effort of expert human annotators. This process is often slow and is a bottleneck in the flow of information. This work explores the feasibility of using Machine Learning tools, specifically Convolutional Neural Networks (CNNs) to at least partially automate the annotation process. A CNN was constructed to identify Shinkaia Crosnieri galetheid crabs and Bathymodiolus mussels, which are two distinct megabenthic taxa found in vast numbers in hydrothermally active regions of the seafloor. The CNN was trained with varying numbers of annotated data, where each annotation consisted of a small region surrounding a positive label at the centre of each individual within a seamless seafloor image reconstruction. The performance was assessed using an independent set of annotated data, taken from a separate reconstruction located approximately 500 m away. While the results show that the trained network can be used to classify new datasets at well characterized levels of uncertainty, the performance was found to vary between the different taxa and with a control dataset that showed only unpopulated regions of the seafloor. The analysis suggests that the number of training examples required to achieve a given level of accuracy is subject dependent, and this should be considered by humans when devising annotation strategies that make best use of their efforts to leverage the advantages offered by CNNs.","PeriodicalId":441405,"journal":{"name":"2018 OCEANS - MTS/IEEE Kobe Techno-Oceans (OTO)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Automated Interpretation of Seafloor Visual Maps Obtained Using Underwater Robots\",\"authors\":\"Jin Wei Lim, A. Prügel-Bennett, B. Thornton\",\"doi\":\"10.1109/OCEANSKOBE.2018.8559247\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Scientific surveys using underwater robots can recover a huge volume of seafloor imagery. For mapping applications, these images can be packaged into vast, seamless and georeferenced seafloor visual reconstructions in a routine way, however interpreting this data to extract useful quantitative information typically relies on the manual effort of expert human annotators. This process is often slow and is a bottleneck in the flow of information. This work explores the feasibility of using Machine Learning tools, specifically Convolutional Neural Networks (CNNs) to at least partially automate the annotation process. A CNN was constructed to identify Shinkaia Crosnieri galetheid crabs and Bathymodiolus mussels, which are two distinct megabenthic taxa found in vast numbers in hydrothermally active regions of the seafloor. The CNN was trained with varying numbers of annotated data, where each annotation consisted of a small region surrounding a positive label at the centre of each individual within a seamless seafloor image reconstruction. 
The performance was assessed using an independent set of annotated data, taken from a separate reconstruction located approximately 500 m away. While the results show that the trained network can be used to classify new datasets at well characterized levels of uncertainty, the performance was found to vary between the different taxa and with a control dataset that showed only unpopulated regions of the seafloor. The analysis suggests that the number of training examples required to achieve a given level of accuracy is subject dependent, and this should be considered by humans when devising annotation strategies that make best use of their efforts to leverage the advantages offered by CNNs.\",\"PeriodicalId\":441405,\"journal\":{\"name\":\"2018 OCEANS - MTS/IEEE Kobe Techno-Oceans (OTO)\",\"volume\":\"42 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-05-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 OCEANS - MTS/IEEE Kobe Techno-Oceans (OTO)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/OCEANSKOBE.2018.8559247\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 OCEANS - MTS/IEEE Kobe Techno-Oceans (OTO)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/OCEANSKOBE.2018.8559247","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Automated Interpretation of Seafloor Visual Maps Obtained Using Underwater Robots
Scientific surveys using underwater robots can recover a huge volume of seafloor imagery. For mapping applications, these images can be routinely packaged into vast, seamless and georeferenced seafloor visual reconstructions; however, interpreting these data to extract useful quantitative information typically relies on the manual effort of expert human annotators. This process is often slow and is a bottleneck in the flow of information. This work explores the feasibility of using machine learning tools, specifically Convolutional Neural Networks (CNNs), to at least partially automate the annotation process. A CNN was constructed to identify Shinkaia crosnieri galatheid crabs and Bathymodiolus mussels, two distinct megabenthic taxa found in vast numbers in hydrothermally active regions of the seafloor. The CNN was trained with varying amounts of annotated data, where each annotation consisted of a small region surrounding a positive label placed at the centre of each individual within a seamless seafloor image reconstruction. Performance was assessed using an independent set of annotated data taken from a separate reconstruction located approximately 500 m away. While the results show that the trained network can be used to classify new datasets at well-characterized levels of uncertainty, performance was found to vary between the two taxa and a control dataset showing only unpopulated regions of the seafloor. The analysis suggests that the number of training examples required to achieve a given level of accuracy is subject-dependent, and this should be considered when devising annotation strategies that make the best use of human effort and leverage the advantages offered by CNNs.
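The abstract describes a patch-based classification setup: small image regions cropped around point annotations are fed to a CNN that assigns each patch to a taxon or to background. The paper does not specify its architecture here, so the following is only a minimal sketch of that general approach, assuming PyTorch, an illustrative 64 x 64 patch size, and hypothetical class indices (background, S. crosnieri, Bathymodiolus); it is not the authors' network.

```python
# Minimal sketch of a patch-based CNN classifier for seafloor imagery.
# Architecture, patch size and class labels are illustrative assumptions,
# not the configuration used in the paper.

import torch
import torch.nn as nn


class PatchCNN(nn.Module):
    """Classify fixed-size image patches into a small number of benthic classes."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # halve spatial resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),             # global average pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, H, W) patches cropped around annotated points
        h = self.features(x).flatten(1)
        return self.classifier(h)                # raw logits; softmax for probabilities


if __name__ == "__main__":
    # Hypothetical usage: 0 = background, 1 = S. crosnieri, 2 = Bathymodiolus
    model = PatchCNN(num_classes=3)
    dummy_patches = torch.randn(8, 3, 64, 64)    # stand-in for real image patches
    logits = model(dummy_patches)
    print(logits.shape)                          # torch.Size([8, 3])
```

In such a setup, training patches would be cut from the seamless reconstruction around each positive label (plus background samples), and evaluation would use patches from an independent reconstruction, mirroring the train/test separation described in the abstract.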