{"title":"Capsule network-based approach for estimating grassland coverage using time series data from enhanced vegetation index","authors":"Yaqi Sun, Hailong Liu, Zhengqiang Guo","doi":"10.1016/j.aiig.2021.08.001","DOIUrl":"10.1016/j.aiig.2021.08.001","url":null,"abstract":"<div><p>The degradation and desertification of grasslands pose a daunting challenge to China's arid and semiarid areas owing to the increasing demand for them in light of the rise of animal husbandry. Monitoring grasslands by using big data has emerged as a popular area of research in recent years. As grassland degradation is a slow and gradual process, the accurate identification of grassland cover is key to monitoring it. Vegetation coverage is currently monitored mainly by combining inversion-based methods with field surveys, which requires significant human effort and other resources and is thus unsuitable for use at a large scale. We proposed to use time series from the enhanced vegetation index (EVI) in capsule network-based methods to identify grasslands. The process classified grassland coverage into four levels, high, medium, low, and other, based on Landsat images from 2019. The accuracy in classifying the grasslands at each level was higher than 90%, with an overall accuracy of 96.32% and a kappa coefficient of 0.9508. The proposed method outperformed the SVM, RF, and LSTM algorithms in terms of classification accuracy.</p></div>","PeriodicalId":100124,"journal":{"name":"Artificial Intelligence in Geosciences","volume":"2 ","pages":"Pages 26-34"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.aiig.2021.08.001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91484651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Flood susceptibility assessment using artificial neural networks in Indonesia","authors":"Stela Priscillia , Calogero Schillaci , Aldo Lipani","doi":"10.1016/j.aiig.2022.03.002","DOIUrl":"10.1016/j.aiig.2022.03.002","url":null,"abstract":"<div><p>Flood incidents can massively damage and disrupt a city economic or governing core. However, flood risk can be mitigated through event planning and city-wide preparation to reduce damage. For, governments, firms, and civilians to make such preparations, flood susceptibility predictions are required. To predict flood susceptibility nine environmental related factors have been identified. They are elevation, slope, curvature, topographical wetness index (TWI), Euclidean distance from a river, land-cover, stream power index (SPI), soil type and precipitation. This work will use these environmental related factors alongside Sentinel-1 satellite imagery in a model intercomparison study to back-predict flood susceptibility in Jakarta for the January 2020 historic flood event across 260 key locations. For each location, this study uses current environmental conditions to predict flood status in the following month. Considering the imbalance between instances of flooded and non-flooded conditions, the Synthetic Minority Oversampling Technique (SMOTE) has been implemented to balance both classes in the training set. This work compares predictions from artificial neural networks (ANN), k-Nearest Neighbors algorithms (k-NN) and Support Vector Machines (SVM) against a random baseline. The effects of the SMOTE are also assessed by training each model on balanced and imbalanced datasets. The ANN is found to be superior to the other machine learning models.</p></div>","PeriodicalId":100124,"journal":{"name":"Artificial Intelligence in Geosciences","volume":"2 ","pages":"Pages 215-222"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666544122000090/pdfft?md5=be9afd52112c0a20ec31a3de99a5d5da&pid=1-s2.0-S2666544122000090-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86957808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wavefield solutions from machine learned functions constrained by the Helmholtz equation","authors":"Tariq Alkhalifah , Chao Song , Umair bin Waheed , Qi Hao","doi":"10.1016/j.aiig.2021.08.002","DOIUrl":"10.1016/j.aiig.2021.08.002","url":null,"abstract":"<div><p>Solving the wave equation is one of the most (if not the most) fundamental problems we face as we try to illuminate the Earth using recorded seismic data. The Helmholtz equation provides wavefield solutions that are dimensionally reduced, per frequency, compared to the time domain, which is useful for many applications, like full waveform inversion. However, our ability to attain such wavefield solutions depends often on the size of the model and the complexity of the wave equation. Thus, we use here a recently introduced framework based on neural networks to predict functional solutions through setting the underlying physical equation as a loss function to optimize the neural network (NN) parameters. For an input given by a location in the model space, the network learns to predict the wavefield value at that location, and its partial derivatives using a concept referred to as automatic differentiation, to fit, in our case, a form of the Helmholtz equation. We specifically seek the solution of the scattered wavefield considering a simple homogeneous background model that allows for analytical solutions of the background wavefield. Providing the NN with a reasonable number of random points from the model space will ultimately train a fully connected deep NN to predict the scattered wavefield function. The size of the network depends mainly on the complexity of the desired wavefield, with such complexity increasing with increasing frequency and increasing model complexity. However, smaller networks can provide smoother wavefields that might be useful for inversion applications. Preliminary tests on a two-box-shaped scatterer model with a source in the middle, as well as, the Marmousi model with a source at the surface demonstrate the potential of the NN for this application. Additional tests on a 3D model demonstrate the potential versatility of the approach.</p></div>","PeriodicalId":100124,"journal":{"name":"Artificial Intelligence in Geosciences","volume":"2 ","pages":"Pages 11-19"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.aiig.2021.08.002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80598192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sparse inversion-based seismic random noise attenuation via self-paced learning","authors":"Yang Yang , Zhiguo Wang , Jinghuai Gao , Naihao Liu , Zhen Li","doi":"10.1016/j.aiig.2022.03.003","DOIUrl":"10.1016/j.aiig.2022.03.003","url":null,"abstract":"<div><p>Seismic random noise reduction is an important task in seismic data processing at the Chinese loess plateau area, which benefits the geologic structure interpretation and further reservoir prediction. The sparse inversion is one of the widely used tools for seismic random noise reduction, which is often solved via the sparse approximation with a regularization term. The <em>ℓ</em><sub>1</sub> norm and total variation (TV) regularization term are two commonly used regularization terms. However, the <em>ℓ</em><sub>1</sub> norm is only a relaxation of the <em>ℓ</em><sub>0</sub> norm, which cannot always provide a sparse result. The TV regularization term may provide unexpected staircase artifacts. To avoid these disadvantages, we proposed a workflow for seismic random noise reduction by using the self-paced learning (SPL) scheme and a sparse representation (i.e. the continuous wavelet transform, CWT) with a mixed norm regularization, which includes the <em>ℓ</em><sub><em>p</em></sub> norm and the TV regularization. In the implementation, the SPL, which is inspired by human cognitive learning, is introduced to avoid the bad minima of the non-convex cost function. The SPL can first select the high signal-to-noise ratio (SNR) seismic data and then gradually select the low SNR seismic data into the proposed workflow. Moreover, the generalized Beta wavelet (GBW) is adopted as the basic wavelet of the CWT to better match for seismic wavelets and then obtain a more localized time-frequency (TF) representation. It should be noted that the GBW can easily constitute a tight frame, which saves the calculation time. Synthetic and field data examples are adopted to demonstrate the effectiveness of the proposed workflow for effectively suppressing seismic random noises and accurately preserving valid seismic reflections.</p></div>","PeriodicalId":100124,"journal":{"name":"Artificial Intelligence in Geosciences","volume":"2 ","pages":"Pages 223-233"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666544122000107/pdfft?md5=f1a54c0d9a60a906b15a366bf305460a&pid=1-s2.0-S2666544122000107-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91264915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Local earthquakes detection: A benchmark dataset of 3-component seismograms built on a global scale","authors":"Fabrizio Magrini , Dario Jozinović , Fabio Cammarano , Alberto Michelini , Lapo Boschi","doi":"10.1016/j.aiig.2020.04.001","DOIUrl":"10.1016/j.aiig.2020.04.001","url":null,"abstract":"<div><p>Machine learning is becoming increasingly important in scientific and technological progress, due to its ability to create models that describe complex data and generalize well. The wealth of publicly-available seismic data nowadays requires automated, fast, and reliable tools to carry out a multitude of tasks, such as the detection of small, local earthquakes in areas characterized by sparsity of receivers. A similar application of machine learning, however, should be built on a large amount of labeled seismograms, which is neither immediate to obtain nor to compile. In this study we present a large dataset of seismograms recorded along the vertical, north, and east components of 1487 broad-band or very broad-band receivers distributed worldwide; this includes 629,095 3-component seismograms generated by 304,878 local earthquakes and labeled as EQ, and 615,847 ones labeled as noise (AN). Application of machine learning to this dataset shows that a simple Convolutional Neural Network of 67,939 parameters allows discriminating between earthquakes and noise single-station recordings, even if applied in regions not represented in the training set. Achieving an accuracy of 96.7, 95.3, and 93.2% on training, validation, and test set, respectively, we prove that the large variety of geological and tectonic settings covered by our data supports the generalization capabilities of the algorithm, and makes it applicable to real-time detection of local events. We make the database publicly available, intending to provide the seismological and broader scientific community with a benchmark for time-series to be used as a testing ground in signal processing.</p></div>","PeriodicalId":100124,"journal":{"name":"Artificial Intelligence in Geosciences","volume":"1 ","pages":"Pages 1-10"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.aiig.2020.04.001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83549665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exact Conditioning of Regression Random Forest for Spatial Prediction","authors":"Francky Fouedjio","doi":"10.1016/j.aiig.2021.01.001","DOIUrl":"10.1016/j.aiig.2021.01.001","url":null,"abstract":"<div><p>Regression random forest is becoming a widely-used machine learning technique for spatial prediction that shows competitive prediction performance in various geoscience fields. Like other popular machine learning methods for spatial prediction, regression random forest does not exactly honor the response variable’s measured values at sampled locations. However, competitor methods such as regression-kriging perfectly fit the response variable’s observed values at sampled locations by construction. Exactly matching the response variable’s measured values at sampled locations is often desirable in many geoscience applications. This paper presents a new approach ensuring that regression random forest perfectly matches the response variable’s observed values at sampled locations. The main idea consists of using the principal component analysis to create an orthogonal representation of the ensemble of regression tree predictors resulting from the traditional regression random forest. Then, the exact conditioning problem is reformulated as a Bayes-linear-Gauss problem on principal component scores. This problem has an analytical solution making it easy to perform Monte Carlo sampling of new principal component scores and then reconstruct regression tree predictors that perfectly match the response variable’s observed values at sampled locations. The reconstructed regression tree predictors’ average also precisely matches the response variable’s measured values at sampled locations by construction. The proposed method’s effectiveness is illustrated on the one hand using a synthetic dataset where the ground-truth is available everywhere within the study region, and on the other hand, using a real dataset comprising southwest England’s geochemical concentration data. It is compared with the regression-kriging and the traditional regression random forest. It appears that the proposed method can perfectly fit the response variable’s measured values at sampled locations while achieving good out of sample predictive performance comparatively to regression-kriging and traditional regression random forest.</p></div>","PeriodicalId":100124,"journal":{"name":"Artificial Intelligence in Geosciences","volume":"1 ","pages":"Pages 11-23"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.aiig.2021.01.001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"111719870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic fault instance segmentation based on mask propagation neural network","authors":"Ruoshui Zhou, Yufei Cai, Jingjing Zong, Xingmiao Yao, Fucai Yu, Guangmin Hu","doi":"10.1016/j.aiig.2020.12.001","DOIUrl":"10.1016/j.aiig.2020.12.001","url":null,"abstract":"<div><p>Fault interpretation plays a critical role in understanding the crustal development and exploring the subsurface reservoirs such as gas and oil. Recently, significant advances have been made towards fault semantic segmentation using deep learning. However, few studies employ deep learning in fault instance segmentation. We introduce mask propagation neural network for fault instance segmentation. Our study focuses on the description of the differences and relationships between each fault profile and the consistency of fault instance segmentations with adjacent profiles. Our method refers to the reference-guided mask propagation network, which is firstly used in video object segmentation: taking the seismic profiles as video frames while the seismic data volume as a video sequence along the inline direction we can achieve fault instance segmentation based on the mask propagation method. As a multi-level convolutional neural network, the mask propagation network receives a small number of user-defined tags as the guidance and outputs the fault instance segmentation on 3D seismic data, which can facilitate the fault reconstruction workflow. Compared with the traditional deep learning method, the introduced mask propagation neural network can complete the fault instance segmentation work under the premise of ensuring the accuracy of fault detection.</p></div>","PeriodicalId":100124,"journal":{"name":"Artificial Intelligence in Geosciences","volume":"1 ","pages":"Pages 31-35"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.aiig.2020.12.001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"105842170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Seismic labeled data expansion using variational autoencoders","authors":"Kunhong Li , Song Chen , Guangmin Hu Ph.D","doi":"10.1016/j.aiig.2020.12.002","DOIUrl":"https://doi.org/10.1016/j.aiig.2020.12.002","url":null,"abstract":"<div><p>Supervised machine learning algorithms have been widely used in seismic exploration processing, but the lack of labeled examples complicates its application. Therefore, we propose a seismic labeled data expansion method based on deep variational Autoencoders (VAE), which are made of neural networks and contains two parts-Encoder and Decoder. Lack of training samples leads to overfitting of the network. We training the VAE with whole seismic data, which is a data-driven process and greatly alleviates the risk of overfitting. The Encoder captures the ability to map the seismic waveform <span><math><mrow><mi>Y</mi></mrow></math></span> to latent deep features <span><math><mrow><mi>z</mi></mrow></math></span>, and the Decoder captures the ability to reconstruct high-dimensional waveform <span><math><mrow><mover><mi>Y</mi><mo>ˆ</mo></mover></mrow></math></span> from latent deep features <span><math><mrow><mi>z</mi></mrow></math></span>. Later, we put the labeled seismic data into Encoders and get the latent deep features. We can easily use gaussian mixture model to fit the deep feature distribution of each class labeled data. We resample a mass of expansion deep features <span><math><mrow><msup><mi>z</mi><mo>∗</mo></msup></mrow></math></span> according to the Gaussian mixture model, and put the expansion deep features into the decoder to generate expansion seismic data. The experiments in synthetic and real data show that our method alleviates the problem of lacking labeled seismic data for supervised seismic facies analysis.</p></div>","PeriodicalId":100124,"journal":{"name":"Artificial Intelligence in Geosciences","volume":"1 ","pages":"Pages 24-30"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.aiig.2020.12.002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91764762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ShakeDaDO: A data collection combining earthquake building damage and ShakeMap parameters for Italy","authors":"Licia Faenza , Alberto Michelini , Helen Crowley , Barbara Borzi , Marta Faravelli","doi":"10.1016/j.aiig.2021.01.002","DOIUrl":"https://doi.org/10.1016/j.aiig.2021.01.002","url":null,"abstract":"<div><p>In this article, we present a new data collection that combines information about earthquake damage with seismic shaking. Starting from the Da.D.O. database, which provides information on the damage of individual buildings subjected to sequences of past earthquakes in Italy, we have generated ShakeMaps for all the events with magnitude greater than 5.0 that have contributed to these sequences. The sequences under examination are those of Irpinia 1980, Umbria Marche 1997, Pollino 1998, Molise 2002, L’Aquila 2009 and Emilia 2012. In this way, we were able to combine, for a total of the 117,695 buildings, the engineering parameters included in Da.D.O., but revised and reprocessed in this application, and the ground shaking data for six different variables (namely, intensity in MCS scale, PGA, PGV, SA at 0.3s, 1.0s and 3.0s). The potential applications of this data collection are innumerable: from recalibrating fragility curves to training machine learning models to quantifying earthquake damage. This data collection will be made available within Da.D.O., a platform of the Italian Department of Civil Protection, developed by EUCENTRE.</p></div>","PeriodicalId":100124,"journal":{"name":"Artificial Intelligence in Geosciences","volume":"1 ","pages":"Pages 36-51"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.aiig.2021.01.002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91764763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}