{"title":"基于变压器的测井处理和质量控制深度学习模型——基于复杂序列的全局依赖性建模","authors":"Ashutosh Kumar","doi":"10.2118/208109-ms","DOIUrl":null,"url":null,"abstract":"\n A single well from any mature field produces approximately 1.7 million Measurement While Drilling (MWD) data points. We either use cross-correlation and covariance measurement, or Long Short-Term Memory (LSTM) based Deep Learning algorithms to diagnose long sequences of extremely noisy data. LSTM's context size of 200 tokens barely accounts for the entire depth. Proposed work develops application of Transformer-based Deep Learning algorithm to diagnose and predict events in complex sequences of well-log data.\n Sequential models learn geological patterns and petrophysical trends to detect events across depths of well-log data. However, vanishing gradients, exploding gradients and the limits of convolutional filters, limit the diagnosis of ultra-deep wells in complex subsurface information. Vast number of operations required to detect events between two subsurface points at large separation limits them. Transformers-based Models (TbMs) rely on non-sequential modelling that uses self-attention to relate information from different positions in the sequence of well-log, allowing to create an end-to-end, non-sequential, parallel memory network. We use approximately 21 million data points from 21 wells of Volve for the experiment.\n LSTMs, in addition to auto-regression (AR), autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) conventionally models the events in the time-series well-logs. However, complex global dependencies to detect events in heterogeneous subsurface are challenging for these sequence models. In the presented work we begin with one meter depth of data from Volve, an oil-field in the North Sea, and then proceed up to 1000 meters. 
Initially LSTMs and ARIMA models were acceptable, as depth increased beyond a few 100 meters their diagnosis started underperforming and a new methodology was required. TbMs have already outperformed several models in large sequences modelling for natural language processing tasks, thus they are very promising to model well-log data with very large depth separation. We scale features and labels according to the maximum and minimum value present in the training dataset and then use the sliding window to get training and evaluation data pairs from well-logs. Additional subsurface features were able to encode some information in the conventional sequential models, but the result did not compare significantly with the TbMs. TbMs achieved Root Mean Square Error of 0.27 on scale of (0-1) while diagnosing the depth up to 5000 meters.\n This is the first paper to show successful application of Transformer-based deep learning models for well-log diagnosis. Presented model uses a self-attention mechanism to learn complex dependencies and non-linear events from the well-log data. Moreover, the experimental setting discussed in the paper will act as a generalized framework for data from ultra-deep wells and their extremely heterogeneous subsurface environment.","PeriodicalId":10981,"journal":{"name":"Day 4 Thu, November 18, 2021","volume":"38 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2021-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Transformer-Based Deep Learning Models for Well Log Processing and Quality Control by Modelling Global Dependence of the Complex Sequences\",\"authors\":\"Ashutosh Kumar\",\"doi\":\"10.2118/208109-ms\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n A single well from any mature field produces approximately 1.7 million Measurement While Drilling (MWD) data points. 
We either use cross-correlation and covariance measurement, or Long Short-Term Memory (LSTM) based Deep Learning algorithms to diagnose long sequences of extremely noisy data. LSTM's context size of 200 tokens barely accounts for the entire depth. Proposed work develops application of Transformer-based Deep Learning algorithm to diagnose and predict events in complex sequences of well-log data.\\n Sequential models learn geological patterns and petrophysical trends to detect events across depths of well-log data. However, vanishing gradients, exploding gradients and the limits of convolutional filters, limit the diagnosis of ultra-deep wells in complex subsurface information. Vast number of operations required to detect events between two subsurface points at large separation limits them. Transformers-based Models (TbMs) rely on non-sequential modelling that uses self-attention to relate information from different positions in the sequence of well-log, allowing to create an end-to-end, non-sequential, parallel memory network. We use approximately 21 million data points from 21 wells of Volve for the experiment.\\n LSTMs, in addition to auto-regression (AR), autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) conventionally models the events in the time-series well-logs. However, complex global dependencies to detect events in heterogeneous subsurface are challenging for these sequence models. In the presented work we begin with one meter depth of data from Volve, an oil-field in the North Sea, and then proceed up to 1000 meters. Initially LSTMs and ARIMA models were acceptable, as depth increased beyond a few 100 meters their diagnosis started underperforming and a new methodology was required. TbMs have already outperformed several models in large sequences modelling for natural language processing tasks, thus they are very promising to model well-log data with very large depth separation. 
We scale features and labels according to the maximum and minimum value present in the training dataset and then use the sliding window to get training and evaluation data pairs from well-logs. Additional subsurface features were able to encode some information in the conventional sequential models, but the result did not compare significantly with the TbMs. TbMs achieved Root Mean Square Error of 0.27 on scale of (0-1) while diagnosing the depth up to 5000 meters.\\n This is the first paper to show successful application of Transformer-based deep learning models for well-log diagnosis. Presented model uses a self-attention mechanism to learn complex dependencies and non-linear events from the well-log data. Moreover, the experimental setting discussed in the paper will act as a generalized framework for data from ultra-deep wells and their extremely heterogeneous subsurface environment.\",\"PeriodicalId\":10981,\"journal\":{\"name\":\"Day 4 Thu, November 18, 2021\",\"volume\":\"38 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Day 4 Thu, November 18, 2021\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2118/208109-ms\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Day 4 Thu, November 18, 2021","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2118/208109-ms","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Transformer-Based Deep Learning Models for Well Log Processing and Quality Control by Modelling Global Dependence of the Complex Sequences
A single well from any mature field produces approximately 1.7 million Measurement While Drilling (MWD) data points. Such long sequences of extremely noisy data are conventionally diagnosed either with cross-correlation and covariance measurements or with Long Short-Term Memory (LSTM) based deep-learning algorithms. An LSTM's effective context of about 200 tokens, however, barely covers the full depth of a well. The proposed work develops a Transformer-based deep-learning algorithm to diagnose and predict events in complex sequences of well-log data.
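The cross-correlation approach the abstract mentions can be sketched as a zero-normalized cross-correlation between two log segments; a value near 1 means the segments track each other, a conventional QC check. The function name and usage here are illustrative, not from the paper:

```python
import math

def zncc(a, b):
    """Zero-normalized cross-correlation between two equal-length sequences.

    Returns a value in [-1, 1]; values near 1 indicate the two log
    segments track each other closely.
    """
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    # Numerator: covariance of the two segments (up to a 1/n factor).
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    # Denominator: product of the segments' standard deviations.
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0
```

For example, a log segment compared against a rescaled copy of itself yields a correlation of 1.0, while a reversed trend yields -1.0.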
Sequential models learn geological patterns and petrophysical trends to detect events across the depths of well-log data. However, vanishing gradients, exploding gradients and the limited receptive field of convolutional filters restrict the diagnosis of ultra-deep wells with complex subsurface information. The vast number of operations required to relate events between two subsurface points at large separation further limits them. Transformer-based Models (TbMs) rely on non-sequential modelling that uses self-attention to relate information from different positions in the well-log sequence, allowing the creation of an end-to-end, non-sequential, parallel memory network. We use approximately 21 million data points from 21 wells of the Volve field for the experiment.
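The self-attention operation described above can be sketched in a few lines: every position in the sequence attends to every other position in a single step, regardless of depth separation. This minimal version uses identity projections for queries, keys and values; a real Transformer layer learns these projections, so this is an illustration of the mechanism, not the paper's model:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(seq):
    """Scaled dot-product self-attention over a sequence of feature vectors.

    Each output position is a weighted mix of ALL positions, so two
    measurements at large depth separation are related in one operation.
    """
    d = len(seq[0])
    out = []
    for q in seq:
        # Similarity of this position's query to every position's key.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in seq]
        w = softmax(scores)
        # Weighted sum of all value vectors.
        out.append([sum(wj * v[i] for wj, v in zip(w, seq)) for i in range(d)])
    return out
```

Because every pair of positions is connected directly, the path length between two depths is constant, in contrast to an LSTM, which must propagate information step by step across the whole interval.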
LSTMs, together with auto-regression (AR), autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models, conventionally model events in time-series well-logs. However, the complex global dependencies needed to detect events in a heterogeneous subsurface are challenging for these sequence models. In the presented work we begin with one meter of depth data from Volve, an oil field in the North Sea, and then proceed up to 1000 meters. Initially the LSTM and ARIMA models were acceptable, but as depth increased beyond a few hundred meters their diagnosis began to underperform and a new methodology was required. TbMs have already outperformed several models on long-sequence modelling for natural-language-processing tasks, making them very promising for well-log data with very large depth separation. We scale features and labels according to the maximum and minimum values present in the training dataset, and then use a sliding window to obtain training and evaluation data pairs from the well-logs. Additional subsurface features encoded some information in the conventional sequential models, but the results did not compare favourably with the TbMs. The TbMs achieved a Root Mean Square Error of 0.27 on a (0-1) scale while diagnosing depths up to 5000 meters.
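The preprocessing described above, min-max scaling with statistics from the training split followed by sliding-window pair extraction, can be sketched as follows. The function names and the one-step-ahead target are assumptions for illustration; the paper does not specify its window or horizon:

```python
def minmax_scale(values, lo, hi):
    """Scale values to [0, 1] using min/max taken from the training split only."""
    span = (hi - lo) or 1.0  # guard against a constant log
    return [(v - lo) / span for v in values]

def sliding_windows(seq, window, horizon=1):
    """Return (input_window, target) pairs from a scaled well-log sequence.

    Each input is `window` consecutive samples; the target is the sample
    `horizon` steps after the window ends.
    """
    pairs = []
    for i in range(len(seq) - window - horizon + 1):
        pairs.append((seq[i:i + window], seq[i + window + horizon - 1]))
    return pairs
```

Taking the scaling bounds from the training data alone matters: computing them over the full log would leak evaluation-set statistics into training.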
This is the first paper to show a successful application of Transformer-based deep-learning models to well-log diagnosis. The presented model uses a self-attention mechanism to learn complex dependencies and non-linear events from well-log data. Moreover, the experimental setting discussed in the paper will act as a generalized framework for data from ultra-deep wells and their extremely heterogeneous subsurface environments.