Domain Informed - a better approach to regularization and semi-supervised learning for seismic event analysis.

L. Linville
DOI: 10.2172/1829553
Proposed for presentation at the Science and Technology Conference 2020, Vienna, Austria. Published 2020-11-01.

Abstract

Typically, data-driven learning works best when we can exploit expectations from our data domain: for example, recurrent neural network architectures were developed to handle temporal dependence in language, geometric deep learning addresses 3-D problems, and physics-constrained Bayesian learning yields more interpretable dependencies. Yet it can be unclear how to inject such expectations, and which specific expectations will produce better outcomes for a given domain. In seismic event processing, enforcing consistency across disparate observations of an individual event has a long history of empirical value. For example, we almost always take magnitude estimates from many individual stations, drop outliers, and average to arrive at a final event magnitude. Similarly, when we develop deep-learning-based predictive models, we can leverage the expectation that stations provide consistent predictions for any event-level attribute, such as event type. In this work we show how to formulate this expectation as a loss term during model training, and we give several examples of how it can yield better model regularization, which reduces overfitting while still outperforming other methods, gives us more trustworthy decision confidence, and allows us to leverage data where no ground truth is available.
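The paper itself does not spell out the loss term here, but the station-consistency idea the abstract describes can be sketched as a penalty on disagreement among per-station predictions for the same event. The following is a minimal illustrative sketch, not the authors' implementation: the names `consistency_loss`, `probs`, and `event_ids` are assumptions, and the penalty shown is a simple mean-squared deviation of each station's predicted class probabilities from its event's mean prediction.

```python
import numpy as np

def consistency_loss(probs, event_ids):
    """Penalize disagreement among per-station predictions of one event.

    probs:     (n_stations, n_classes) predicted class probabilities,
               one row per station recording.
    event_ids: (n_stations,) integer event label for each recording.

    Returns the mean squared deviation of each station's prediction from
    its event's mean prediction, averaged over events. The value is zero
    when all stations recording an event predict identically, and it
    requires no ground-truth labels, so it can also be applied to
    unlabeled events (the semi-supervised case the abstract mentions).
    """
    events = np.unique(event_ids)
    loss = 0.0
    for ev in events:
        p = probs[event_ids == ev]                 # predictions for one event
        loss += np.mean((p - p.mean(axis=0)) ** 2)  # spread around event mean
    return loss / len(events)

# Two events, two classes: stations recording event 0 agree,
# stations recording event 1 disagree.
probs = np.array([[0.9, 0.1],
                  [0.9, 0.1],
                  [0.8, 0.2],
                  [0.2, 0.8]])
event_ids = np.array([0, 0, 1, 1])
```

In training, a term like this would typically be added to the supervised loss with a weighting coefficient, so that the model is pushed toward event-level agreement even on examples without labels.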