Accurate background velocity model building method based on iterative deep learning in sparse transform domain

Guoxin Chen
{"title":"基于稀疏变换域迭代深度学习的精确背景速度模型构建方法","authors":"Guoxin Chen","doi":"arxiv-2407.19419","DOIUrl":null,"url":null,"abstract":"Whether it is oil and gas exploration or geological science research, it is\nnecessary to accurately grasp the structural information of underground media.\nFull waveform inversion is currently the most popular seismic wave inversion\nmethod, but it is highly dependent on a high-quality initial model. Artificial\nintelligence algorithm deep learning is completely data-driven and can get rid\nof the dependence on the initial model. However, the prediction accuracy of\ndeep learning algorithms depends on the scale and diversity of training data\nsets. How to improve the prediction accuracy of deep learning without\nincreasing the size of the training set while also improving computing\nefficiency is a worthy issue to study. In this paper, an iterative deep\nlearning algorithm in the sparse transform domain is proposed based on the\ncharacteristics of deep learning: first, based on the computational efficiency\nand the effect of sparse transform, the cosine transform is selected as the\nsparse transform method, and the seismic data and the corresponding velocity\nmodel are cosine transformed to obtain their corresponding sparse expressions,\nwhich are then used as the input data and corresponding label data for deep\nlearning; then we give an iterative deep learning algorithm in the cosine\ntransform domain, that is, after obtaining the seismic data residuals and\nvelocity model residuals of the previous round of test results, they are used\nagain as new input data and label data, and re-trained in the cosine domain to\nobtain a new network, and the prediction results of the previous round are\ncorrected, and then the cycle is repeated until the termination condition is\nreached. The algorithm effect was verified on the SEG/EAGE salt model and the\nseabed sulfide physical model site data.","PeriodicalId":501270,"journal":{"name":"arXiv - PHYS - Geophysics","volume":"362 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Accurate background velocity model building method based on iterative deep learning in sparse transform domain\",\"authors\":\"Guoxin Chen\",\"doi\":\"arxiv-2407.19419\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Whether it is oil and gas exploration or geological science research, it is\\nnecessary to accurately grasp the structural information of underground media.\\nFull waveform inversion is currently the most popular seismic wave inversion\\nmethod, but it is highly dependent on a high-quality initial model. Artificial\\nintelligence algorithm deep learning is completely data-driven and can get rid\\nof the dependence on the initial model. However, the prediction accuracy of\\ndeep learning algorithms depends on the scale and diversity of training data\\nsets. How to improve the prediction accuracy of deep learning without\\nincreasing the size of the training set while also improving computing\\nefficiency is a worthy issue to study. 
In this paper, an iterative deep\\nlearning algorithm in the sparse transform domain is proposed based on the\\ncharacteristics of deep learning: first, based on the computational efficiency\\nand the effect of sparse transform, the cosine transform is selected as the\\nsparse transform method, and the seismic data and the corresponding velocity\\nmodel are cosine transformed to obtain their corresponding sparse expressions,\\nwhich are then used as the input data and corresponding label data for deep\\nlearning; then we give an iterative deep learning algorithm in the cosine\\ntransform domain, that is, after obtaining the seismic data residuals and\\nvelocity model residuals of the previous round of test results, they are used\\nagain as new input data and label data, and re-trained in the cosine domain to\\nobtain a new network, and the prediction results of the previous round are\\ncorrected, and then the cycle is repeated until the termination condition is\\nreached. The algorithm effect was verified on the SEG/EAGE salt model and the\\nseabed sulfide physical model site data.\",\"PeriodicalId\":501270,\"journal\":{\"name\":\"arXiv - PHYS - Geophysics\",\"volume\":\"362 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - PHYS - Geophysics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2407.19419\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - PHYS - Geophysics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2407.19419","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Whether for oil and gas exploration or geological research, accurate structural information about the subsurface is essential. Full waveform inversion is currently the most widely used seismic inversion method, but it depends heavily on a high-quality initial model. Deep learning, being entirely data-driven, can remove this dependence on the initial model; its prediction accuracy, however, is limited by the size and diversity of the training data set. How to improve prediction accuracy without enlarging the training set, while also improving computational efficiency, is therefore worth studying. This paper proposes an iterative deep learning algorithm in a sparse transform domain, built on the characteristics of deep learning. First, the cosine transform is chosen as the sparse transform on the grounds of computational efficiency and sparsification quality; the seismic data and the corresponding velocity models are cosine transformed, and the resulting sparse representations serve as the input data and label data for deep learning. Second, an iterative scheme in the cosine transform domain is introduced: the seismic-data residuals and velocity-model residuals from the previous round of predictions are used as new input and label data, a new network is re-trained in the cosine domain, and the previous round's predictions are corrected; the cycle repeats until a termination condition is reached. The algorithm is verified on the SEG/EAGE salt model and on seabed sulfide physical-model field data.
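
For illustration only, the sketch below (not taken from the paper) shows how the two ingredients of the abstract could be wired together in Python: the 2-D discrete cosine transform from scipy.fft supplies the sparse representations of seismic data and velocity models used as inputs and labels, and an outer loop re-trains on the residuals of the previous round and adds the predicted correction to the running estimate until the correction becomes negligible. The `train_network` least-squares stand-in, the `forward_model` smoothing placeholder, the array shapes, the number of rounds, and the tolerance are all assumptions made for this sketch, not details from the paper.

```python
# Minimal, hypothetical sketch (not the authors' code) of the two ideas in the
# abstract: (1) DCT sparsification of seismic data and velocity models to form
# network inputs/labels, and (2) an iterative residual-correction loop in the
# cosine domain. The "network" is a linear least-squares fit, and forward_model
# is a shape-preserving placeholder for wave-equation modelling.
import numpy as np
from scipy.fft import dctn, idctn  # type-II DCT with orthonormal scaling


def to_cosine_domain(x):
    """Forward 2-D cosine transform: the sparse representation used here."""
    return dctn(x, type=2, norm="ortho")


def from_cosine_domain(c):
    """Inverse 2-D cosine transform back to the physical domain."""
    return idctn(c, type=2, norm="ortho")


def forward_model(v):
    """Placeholder for seismic forward modelling. The real method would
    simulate shot gathers from the velocity model; a crude smoothing operator
    keeps this sketch self-contained and shape-compatible."""
    return 0.5 * (v + np.roll(v, 1, axis=-1))


def train_network(inputs, labels):
    """Placeholder 'network': a linear least-squares map from input DCT
    coefficients to label DCT coefficients. A real implementation would
    train a deep network here."""
    A = inputs.reshape(inputs.shape[0], -1)
    B = labels.reshape(labels.shape[0], -1)
    W, *_ = np.linalg.lstsq(A, B, rcond=None)
    out_shape = labels.shape[1:]

    def predict(x):
        return (x.reshape(x.shape[0], -1) @ W).reshape((x.shape[0],) + out_shape)

    return predict


def simulate_in_cosine_domain(v_coeffs):
    """Velocity-model DCT coefficients -> seismic-data DCT coefficients,
    via inverse DCT, (placeholder) forward modelling, and forward DCT."""
    return np.stack([to_cosine_domain(forward_model(from_cosine_domain(v)))
                     for v in v_coeffs])


def iterative_dct_inversion(seis, vel, seis_obs, n_rounds=5, tol=1e-3):
    """Schematic of the iterative scheme described in the abstract."""
    # Round 1: train on the DCT coefficients of the original data and models.
    X = np.stack([to_cosine_domain(s) for s in seis])
    Y = np.stack([to_cosine_domain(v) for v in vel])
    d_obs = np.stack([to_cosine_domain(s) for s in seis_obs])

    predict = train_network(X, Y)
    Y_hat = predict(X)        # running estimates for the training models
    v_hat = predict(d_obs)    # running estimate for the target model(s)

    for _ in range(1, n_rounds):
        # Residuals of the previous round become the new labels and inputs.
        dY = Y - Y_hat                             # velocity-model residuals
        dX = X - simulate_in_cosine_domain(Y_hat)  # seismic-data residuals
        predict = train_network(dX, dY)            # re-train in cosine domain

        # Correct the previous round's predictions with the new network.
        Y_hat = Y_hat + predict(dX)
        dv = predict(d_obs - simulate_in_cosine_domain(v_hat))
        v_hat = v_hat + dv

        # Terminate once the correction becomes negligible.
        if np.linalg.norm(dv) < tol * np.linalg.norm(v_hat):
            break

    # Return the predicted velocity models in the physical domain.
    return np.stack([from_cosine_domain(v) for v in v_hat])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy shapes only; real gathers and models would be gridded consistently.
    seis = rng.standard_normal((20, 32, 32))
    vel = rng.standard_normal((20, 32, 32))
    seis_obs = rng.standard_normal((3, 32, 32))
    print(iterative_dct_inversion(seis, vel, seis_obs).shape)  # (3, 32, 32)
```

Swapping train_network for an actual deep network and forward_model for wave-equation modelling would not change the structure of the loop; the cosine transform, residual re-training, and additive correction steps stay the same.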