Galen Richardson, Neve Foreman, Anders Knudby, Yulun Wu, Yiwen Lin
{"title":"用于在哨兵-2 图像中划分光学浅水和光学深水的全球深度学习模型","authors":"Galen Richardson , Neve Foreman , Anders Knudby , Yulun Wu , Yiwen Lin","doi":"10.1016/j.rse.2024.114302","DOIUrl":null,"url":null,"abstract":"<div><p>In aquatic remote sensing, algorithms commonly used to map environmental variables rely on assumptions regarding the optical environment. Specifically, some algorithms assume that the water is optically deep, i.e., that the influence of bottom reflectance on the measured signal is negligible. Other algorithms assume the opposite and are based on an estimation of the bottom-reflected part of the signal. These algorithms may suffer from reduced performance when the relevant assumptions are not met. To address this, we introduce a general-purpose tool that automates the delineation of optically deep and optically shallow waters in Sentinel-2 imagery. This allows the application of algorithms for satellite-derived bathymetry, bottom habitat identification, and water-quality mapping to be limited to the environments for which they are intended, and thus to enhance the accuracy of derived products. We sampled 440 Sentinel-2 images from a wide range of coastal locations, covering all continents and latitudes, and manually annotated 1000 points in each image as either optically deep or optically shallow by visual interpretation. This dataset was used to train six machine learning classification models - Maximum Likelihood, Random Forest, ExtraTrees, AdaBoost, XGBoost, and deep neural networks - utilizing both the original top-of-atmosphere reflectance and atmospherically corrected datasets. The models were trained on features including kernel means and standard deviations for each band, as well as geographical location. A deep neural network emerged as the best model, with an average accuracy of 82.3% across the two datasets and fast processing time. Higher accuracies can be achieved by removing pixels with intermediate probability scores from the predictions. 
We made this model publicly available as a Python package. This represents a substantial step toward automatic delineation of optically deep and shallow water in Sentinel-2 imagery, which allows the aquatic remote sensing community and downstream users to ensure that algorithms, such as those used in satellite-derived bathymetry or for mapping bottom habitat or water quality, are applied only to the environments for which they are intended.</p></div>","PeriodicalId":417,"journal":{"name":"Remote Sensing of Environment","volume":null,"pages":null},"PeriodicalIF":11.1000,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0034425724003201/pdfft?md5=e903b043bf41d2cbae3eede6e99a32a8&pid=1-s2.0-S0034425724003201-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Global deep learning model for delineation of optically shallow and optically deep water in Sentinel-2 imagery\",\"authors\":\"Galen Richardson , Neve Foreman , Anders Knudby , Yulun Wu , Yiwen Lin\",\"doi\":\"10.1016/j.rse.2024.114302\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>In aquatic remote sensing, algorithms commonly used to map environmental variables rely on assumptions regarding the optical environment. Specifically, some algorithms assume that the water is optically deep, i.e., that the influence of bottom reflectance on the measured signal is negligible. Other algorithms assume the opposite and are based on an estimation of the bottom-reflected part of the signal. These algorithms may suffer from reduced performance when the relevant assumptions are not met. To address this, we introduce a general-purpose tool that automates the delineation of optically deep and optically shallow waters in Sentinel-2 imagery. 
This allows the application of algorithms for satellite-derived bathymetry, bottom habitat identification, and water-quality mapping to be limited to the environments for which they are intended, and thus to enhance the accuracy of derived products. We sampled 440 Sentinel-2 images from a wide range of coastal locations, covering all continents and latitudes, and manually annotated 1000 points in each image as either optically deep or optically shallow by visual interpretation. This dataset was used to train six machine learning classification models - Maximum Likelihood, Random Forest, ExtraTrees, AdaBoost, XGBoost, and deep neural networks - utilizing both the original top-of-atmosphere reflectance and atmospherically corrected datasets. The models were trained on features including kernel means and standard deviations for each band, as well as geographical location. A deep neural network emerged as the best model, with an average accuracy of 82.3% across the two datasets and fast processing time. Higher accuracies can be achieved by removing pixels with intermediate probability scores from the predictions. We made this model publicly available as a Python package. 
This represents a substantial step toward automatic delineation of optically deep and shallow water in Sentinel-2 imagery, which allows the aquatic remote sensing community and downstream users to ensure that algorithms, such as those used in satellite-derived bathymetry or for mapping bottom habitat or water quality, are applied only to the environments for which they are intended.</p></div>\",\"PeriodicalId\":417,\"journal\":{\"name\":\"Remote Sensing of Environment\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":11.1000,\"publicationDate\":\"2024-07-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S0034425724003201/pdfft?md5=e903b043bf41d2cbae3eede6e99a32a8&pid=1-s2.0-S0034425724003201-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Remote Sensing of Environment\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0034425724003201\",\"RegionNum\":1,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENVIRONMENTAL SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Remote Sensing of Environment","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0034425724003201","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENVIRONMENTAL SCIENCES","Score":null,"Total":0}
Global deep learning model for delineation of optically shallow and optically deep water in Sentinel-2 imagery
In aquatic remote sensing, algorithms commonly used to map environmental variables rely on assumptions regarding the optical environment. Specifically, some algorithms assume that the water is optically deep, i.e., that the influence of bottom reflectance on the measured signal is negligible. Other algorithms assume the opposite and are based on an estimation of the bottom-reflected part of the signal. These algorithms may suffer from reduced performance when the relevant assumptions are not met. To address this, we introduce a general-purpose tool that automates the delineation of optically deep and optically shallow waters in Sentinel-2 imagery. This allows the application of algorithms for satellite-derived bathymetry, bottom habitat identification, and water-quality mapping to be limited to the environments for which they are intended, and thus enhances the accuracy of derived products. We sampled 440 Sentinel-2 images from a wide range of coastal locations, covering all continents and latitudes, and manually annotated 1000 points in each image as either optically deep or optically shallow by visual interpretation. This dataset was used to train six machine learning classification models - Maximum Likelihood, Random Forest, ExtraTrees, AdaBoost, XGBoost, and deep neural networks - utilizing both the original top-of-atmosphere reflectance and atmospherically corrected datasets. The models were trained on features including kernel means and standard deviations for each band, as well as geographical location. A deep neural network emerged as the best model, achieving an average accuracy of 82.3% across the two datasets with fast processing times. Higher accuracies can be achieved by removing pixels with intermediate probability scores from the predictions. We made this model publicly available as a Python package.
This represents a substantial step toward automatic delineation of optically deep and shallow water in Sentinel-2 imagery, which allows the aquatic remote sensing community and downstream users to ensure that algorithms, such as those used in satellite-derived bathymetry or for mapping bottom habitat or water quality, are applied only to the environments for which they are intended.
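The abstract describes two ingredients that are easy to illustrate: per-band kernel mean and standard deviation features, and the removal of pixels with intermediate class probabilities. The sketch below is illustrative only; the window size, probability thresholds, and function names are assumptions, not the paper's published configuration.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view


def kernel_features(band, size=5):
    """Per-pixel kernel mean and standard deviation for one image band.

    `size` is a hypothetical window size, not the paper's setting.
    Edge replication keeps the output the same shape as the input.
    """
    pad = size // 2
    padded = np.pad(band.astype(np.float64), pad, mode="edge")
    # Shape (H, W, size, size): one window per original pixel.
    windows = sliding_window_view(padded, (size, size))
    return windows.mean(axis=(-2, -1)), windows.std(axis=(-2, -1))


def confident_mask(probs, low=0.3, high=0.7):
    """Keep only pixels whose probability is away from the decision boundary.

    The thresholds are placeholders; the paper reports only that dropping
    intermediate-probability pixels raises accuracy on the remainder.
    """
    return (probs <= low) | (probs >= high)
```

A model would then be trained on the stacked mean/std features per band (plus location), and `confident_mask` applied to its per-pixel probabilities before mapping.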
Journal introduction:
Remote Sensing of Environment (RSE) serves the Earth observation community by disseminating results on the theory, science, applications, and technology that contribute to advancing the field of remote sensing. With a thoroughly interdisciplinary approach, RSE encompasses terrestrial, oceanic, and atmospheric sensing.
The journal emphasizes biophysical and quantitative approaches to remote sensing at local to global scales, covering a diverse range of applications and techniques.
RSE serves as a vital platform for the exchange of knowledge and advancements in the dynamic field of remote sensing.