Rodrigo Exterkoetter, Gustavo R. Dutra, Leandro P. de Figueiredo, Fernando Bordignon, Gilson M. S. Neto, Alexandre A. Emerick
SPE Journal (Q1, ENGINEERING, PETROLEUM), impact factor 3.2. DOI: 10.2118/212196-pa. Published 2023-12-22.
Feature Extraction in Time-Lapse Seismic Using Deep Learning for Data Assimilation
Assimilation of time-lapse (4D) seismic data with ensemble-based methods is challenging because of the massive number of data points, which demands excessive computational time and memory during the model-updating step. We address this problem with a deep convolutional autoencoder that extracts the relevant features of the 4D images and generates a reduced representation of the data. The architecture of the autoencoder is based on VGG-19, a 19-layer deep convolutional network well known for its effectiveness in image classification and object recognition. Among the advantages of VGG-19 are the possibility of reusing pretrained convolutional layers as a feature extractor and of applying transfer learning to related problem domains. Using a pretrained model bypasses the need for large training data sets and avoids the high computational cost of training a deep network from scratch. To further improve the reconstruction of the seismic images, we fine-tune the weights of the latent convolutional layer. We propose a fully convolutional architecture, which allows the application of distance-based localization during data assimilation with the ensemble smoother with multiple data assimilation (ES-MDA). The performance of the proposed method is investigated in a synthetic benchmark problem with realistic settings. We evaluate the methodology with three variants of the autoencoder, each with a different level of data reduction. The experiments indicate that it is possible to use latent representations with major data reductions without impairing the quality of the data assimilation.
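The data-reduction idea can be sketched with a toy convolutional encoder. The NumPy example below uses random filter weights as a stand-in for the paper's pretrained VGG-19 layers (all names and sizes here are illustrative, not the authors' code); it shows how strided convolutions compress an image into a much smaller latent representation:

```python
# Toy convolutional encoder: strided convolutions shrink a seismic-attribute
# map into a compact latent representation. Random filters stand in for the
# pretrained VGG-19 layers used in the paper (this is a hedged sketch only).
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w, stride=2):
    """Valid-mode 2D convolution with stride; x: (H, W), w: (k, k)."""
    k = w.shape[0]
    out_h = (x.shape[0] - k) // stride + 1
    out_w = (x.shape[1] - k) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i * stride:i * stride + k, j * stride:j * stride + k]
            out[i, j] = np.sum(patch * w)
    return out

image = rng.normal(size=(64, 64))    # one 4D-seismic attribute map (toy size)
w1 = 0.1 * rng.normal(size=(3, 3))   # stand-ins for pretrained filters
w2 = 0.1 * rng.normal(size=(3, 3))

# Two conv + ReLU stages: 64x64 -> 31x31 -> 15x15
latent = np.maximum(conv2d(np.maximum(conv2d(image, w1), 0.0), w2), 0.0)
reduction = image.size / latent.size  # data-reduction factor of the encoder
```

In the paper's workflow it is this latent array, not the full seismic image, that enters the assimilation, which is what makes the update step tractable.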
Additionally, we compare central processing unit (CPU) and graphics processing unit (GPU) implementations of the ES-MDA update step and show, in another synthetic problem, that the reduction in the number of data points achieved by the deep autoencoder can substantially lower the overall computational cost of data assimilation for large reservoir models.
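The ES-MDA update step referenced above can be illustrated with a minimal NumPy sketch. This is a generic textbook-style implementation under stated assumptions, not the authors' code: a linear operator `G` stands in for the reservoir simulator, and the assimilated data vector plays the role of the autoencoder's latent representation of the 4D seismic:

```python
# Minimal ES-MDA sketch (hedged: generic illustration, not the paper's code).
# Each iteration perturbs the observations with inflated noise and applies a
# Kalman-like update built from ensemble covariances.
import numpy as np

rng = np.random.default_rng(1)

Nm, Nd, Ne = 10, 5, 200              # model size, data size, ensemble size
G = rng.normal(size=(Nd, Nm))        # stand-in forward model g(m) = G m
m_true = rng.normal(size=Nm)
C_D = 0.01 * np.eye(Nd)              # data-error covariance
d_obs = G @ m_true + rng.multivariate_normal(np.zeros(Nd), C_D)

alphas = [4.0, 4.0, 4.0, 4.0]        # inflation factors; sum of 1/alpha = 1
M = rng.normal(size=(Nm, Ne))        # prior ensemble (columns are members)
M0 = M.copy()                        # keep the prior for comparison

for alpha in alphas:
    D = G @ M                        # predicted data for every member
    # Perturb observations with noise inflated by alpha
    E = rng.multivariate_normal(np.zeros(Nd), alpha * C_D, size=Ne).T
    D_uc = d_obs[:, None] + E
    # Ensemble cross- and auto-covariances from anomalies
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_MD = dM @ dD.T / (Ne - 1)
    C_DD = dD @ dD.T / (Ne - 1)
    K = C_MD @ np.linalg.inv(C_DD + alpha * C_D)
    M = M + K @ (D_uc - D)           # update all ensemble members at once

misfit_prior = np.linalg.norm(G @ M0.mean(axis=1) - d_obs)
misfit_post = np.linalg.norm(G @ M.mean(axis=1) - d_obs)
```

Because the gain involves inverting an `Nd x Nd` matrix, shrinking `Nd` from millions of seismic samples to a small latent vector is precisely where the autoencoder pays off, and it is this update that the paper benchmarks on CPU versus GPU.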
Journal description:
Covers theories and emerging concepts spanning all aspects of engineering for oil and gas exploration and production, including reservoir characterization, multiphase flow, drilling dynamics, well architecture, gas well deliverability, numerical simulation, enhanced oil recovery, CO2 sequestration, and benchmarking and performance indicators.