OTBENCHMARK: AN OPEN SOURCE PYTHON PACKAGE FOR BENCHMARKING AND VALIDATING UNCERTAINTY QUANTIFICATION ALGORITHMS

E. Fekhari, M. Baudin, V. Chabridon, Y. Jebroun
{"title":"OTBENCHMARK: AN OPEN SOURCE PYTHON PACKAGE FOR BENCHMARKING AND VALIDATING UNCERTAINTY QUANTIFICATION ALGORITHMS","authors":"E. Fekhari, M. Baudin, V. Chabridon, Youssef Jebroun","doi":"10.7712/120221.8034.19093","DOIUrl":null,"url":null,"abstract":". Over the past decade, industrial companies and academic institutions pooled their efforts and knowledge to propose a generic uncertainty management methodology for computer simulation. This framework led to the collaborative development of an open source software dedicated to the treatment of uncertainties, called “OpenTURNS” (Open source Treatment of Uncertainty, Risk’N Statistics). This paper aims at presenting a new Python package, called “ otbenchmark ”, offering tools to evaluate the performance of a large panel of uncertainty quantification algorithms. It provides benchmark classes containing problems with their reference values. Two categories of benchmark classes are currently available: reliability estimation problems ( i.e., estimating failure probabilities) and sensitivity analysis problems ( i.e., estimating sensitivity indices such as the Sobol’ indices). This package can either be used for validating a new algorithm or automatically comparing various algorithms on a set of problems. Additionally, the package provides several convergence and accuracy metrics to compare the performance of each algorithm. To face high-dimensional problems, otbenchmark offers graphical tools to draw multidimensional events, functions and distributions based on cross-cuts visualizations. Finally, to ensure otbenchmark ’s accuracy, a test-driven software development method has been adopted (using, among others, Git for collaborative development, unit tests and continuous integration). Ultimately, otbenchmark is an industrial platform gath-ering problems with reference values of their solutions and various tools to achieve a robust comparison of uncertainty management algorithms.","PeriodicalId":444608,"journal":{"name":"4th International Conference on Uncertainty Quantification in Computational Sciences and Engineering","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"4th International Conference on Uncertainty Quantification in Computational Sciences and Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.7712/120221.8034.19093","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Over the past decade, industrial companies and academic institutions have pooled their efforts and knowledge to propose a generic uncertainty management methodology for computer simulation. This framework led to the collaborative development of an open source software package dedicated to the treatment of uncertainties, called “OpenTURNS” (Open source Treatment of Uncertainty, Risk’N Statistics). This paper presents a new Python package, called “otbenchmark”, which offers tools to evaluate the performance of a wide range of uncertainty quantification algorithms. It provides benchmark classes containing problems together with their reference values. Two categories of benchmark classes are currently available: reliability estimation problems (i.e., estimating failure probabilities) and sensitivity analysis problems (i.e., estimating sensitivity indices such as the Sobol’ indices). The package can be used either to validate a new algorithm or to automatically compare various algorithms on a set of problems. Additionally, it provides several convergence and accuracy metrics to compare the performance of each algorithm. To address high-dimensional problems, otbenchmark offers graphical tools to draw multidimensional events, functions and distributions based on cross-cut visualizations. Finally, to ensure otbenchmark’s accuracy, a test-driven software development method has been adopted (using, among others, Git for collaborative development, unit tests and continuous integration). Ultimately, otbenchmark is an industrial platform gathering problems, reference values for their solutions, and various tools to achieve a robust comparison of uncertainty management algorithms.
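The workflow described in the abstract, i.e., retrieving a benchmark problem with its reference failure probability, running an estimation algorithm, and comparing the estimate against the reference, can be illustrated with a short script. The following is a minimal sketch, not code from the paper: the otbenchmark class name RminusSReliability and its accessors getEvent() and getProbability() are assumptions about the package API, while the estimation itself uses standard OpenTURNS Monte Carlo simulation.

```python
import openturns as ot
import otbenchmark as otb  # assumed import alias for the otbenchmark package

# Pick a reliability benchmark problem. The class name and its
# getEvent()/getProbability() accessors are assumptions about the
# otbenchmark API, used here only for illustration.
problem = otb.RminusSReliability()
event = problem.getEvent()                # failure event {g(X) < threshold}
pf_reference = problem.getProbability()   # reference failure probability

# Estimate the failure probability with plain Monte Carlo (OpenTURNS API).
experiment = ot.MonteCarloExperiment()
algo = ot.ProbabilitySimulationAlgorithm(event, experiment)
algo.setMaximumOuterSampling(100000)
algo.setMaximumCoefficientOfVariation(0.01)
algo.run()
pf_estimate = algo.getResult().getProbabilityEstimate()

# Accuracy metric: relative error of the estimate against the reference.
relative_error = abs(pf_estimate - pf_reference) / pf_reference
print(f"estimate = {pf_estimate:.3e}, reference = {pf_reference:.3e}, "
      f"relative error = {relative_error:.2%}")
```

The same pattern would apply to the sensitivity analysis problems: a benchmark class supplying the model, the input distribution and reference Sobol’ indices, against which the indices estimated by any chosen algorithm can be scored.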