A novel approach for estimating largest Lyapunov exponents in one-dimensional chaotic time series using machine learning.

IF 3.2 · CAS Zone 2 (Mathematics) · Q1 MATHEMATICS, APPLIED
Chaos · Pub Date: 2025-10-01 · DOI: 10.1063/5.0289352
Andrei Velichko, Maksim Belyaev, Petr Boriskov
{"title":"A novel approach for estimating largest Lyapunov exponents in one-dimensional chaotic time series using machine learning.","authors":"Andrei Velichko, Maksim Belyaev, Petr Boriskov","doi":"10.1063/5.0289352","DOIUrl":null,"url":null,"abstract":"<p><p>Understanding and quantifying chaos from data remains challenging. We present a data-driven method for estimating the largest Lyapunov exponent (LLE) from one-dimensional chaotic time series using machine learning. A predictor is trained to produce out-of-sample, multi-horizon forecasts; the LLE is then inferred from the exponential growth of the geometrically averaged forecast error across the horizon, which serves as a proxy for trajectory divergence. We validate the approach on four canonical 1D maps-logistic, sine, cubic, and Chebyshev-achieving Rpos2 > 0.99 against reference LLE curves with series as short as M = 450. Among baselines, k-nearest neighbor (KNN) yields the closest fits (KNN-R comparable; random forest larger deviations). By design the estimator targets positive exponents: in periodic/stable regimes, it returns values indistinguishable from zero. Noise robustness is assessed by adding zero-mean white measurement noise and summarizing performance vs the average signal-to-noise ratio (SNR) over parameter sweeps: accuracy saturates for SNRm ≳ 30 dB and collapses below ≈27 dB, a conservative sensor-level benchmark. The method is simple, computationally efficient, and model-agnostic, requiring only stationarity and the presence of a dominant positive exponent. It offers a practical route to LLE estimation in experimental settings where only scalar time-series measurements are available, with extensions to higher-dimensional and irregularly sampled data left for future work.</p>","PeriodicalId":9974,"journal":{"name":"Chaos","volume":"35 10","pages":""},"PeriodicalIF":3.2000,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Chaos","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1063/5.0289352","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 0

Abstract

Understanding and quantifying chaos from data remains challenging. We present a data-driven method for estimating the largest Lyapunov exponent (LLE) from one-dimensional chaotic time series using machine learning. A predictor is trained to produce out-of-sample, multi-horizon forecasts; the LLE is then inferred from the exponential growth of the geometrically averaged forecast error across the horizon, which serves as a proxy for trajectory divergence. We validate the approach on four canonical 1D maps (logistic, sine, cubic, and Chebyshev), achieving R²_pos > 0.99 against reference LLE curves with series as short as M = 450. Among baselines, k-nearest neighbors (KNN) yields the closest fits (KNN-R is comparable; random forest shows larger deviations). By design the estimator targets positive exponents: in periodic/stable regimes it returns values indistinguishable from zero. Noise robustness is assessed by adding zero-mean white measurement noise and summarizing performance versus the average signal-to-noise ratio (SNR) over parameter sweeps: accuracy saturates for SNR_m ≳ 30 dB and collapses below ≈27 dB, a conservative sensor-level benchmark. The method is simple, computationally efficient, and model-agnostic, requiring only stationarity and the presence of a dominant positive exponent. It offers a practical route to LLE estimation in experimental settings where only scalar time-series measurements are available, with extensions to higher-dimensional and irregularly sampled data left for future work.
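
To make the general recipe in the abstract concrete, the following minimal Python sketch fits a simple predictor (scikit-learn's KNeighborsRegressor), measures its out-of-sample forecast error at several horizons, and reads an LLE estimate off the slope of the log geometric-mean error versus horizon. This is an illustration of the idea, not the authors' pipeline: the embedding dimension, horizon range, number of neighbors, and train/test split are all assumed values, and the logistic map at r = 4 is used only because its reference LLE is known to be ln 2.

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def logistic_series(r=4.0, n=450, x0=0.4, discard=100):
    # Scalar series from the logistic map x_{t+1} = r * x_t * (1 - x_t);
    # an initial transient is discarded. Parameters are illustrative.
    x, out = x0, []
    for _ in range(n + discard):
        x = r * x * (1.0 - x)
        out.append(x)
    return np.array(out[discard:])

def estimate_lle(series, dim=2, horizons=range(1, 6), n_neighbors=5, train_frac=0.7):
    # Slope of the log geometric-mean out-of-sample forecast error versus
    # horizon, used as a proxy for the largest Lyapunov exponent (per step).
    log_err = []
    for h in horizons:
        # Delay vectors (x_t, ..., x_{t+dim-1}) and h-step-ahead targets.
        X = np.array([series[t:t + dim] for t in range(len(series) - dim - h + 1)])
        y = series[dim + h - 1:]
        split = int(train_frac * len(X))
        model = KNeighborsRegressor(n_neighbors=n_neighbors).fit(X[:split], y[:split])
        err = np.abs(model.predict(X[split:]) - y[split:]) + 1e-12  # guard against log(0)
        log_err.append(np.mean(np.log(err)))  # log of the geometric-mean error
    # If E(h) ~ E0 * exp(lambda * h) before saturating at the attractor size,
    # the fitted slope approximates lambda.
    slope, _ = np.polyfit(list(horizons), log_err, 1)
    return slope

print(f"estimated LLE: {estimate_lle(logistic_series()):.3f} (reference ln 2 ≈ 0.693)")

The horizon range is the main design choice in such a sketch: once forecast errors saturate at the size of the attractor, growth is no longer exponential, so only horizons within the exponential-growth regime should enter the fit.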

Source journal: Chaos (Physics: Mathematical Physics)
CiteScore: 5.20
Self-citation rate: 13.80%
Articles per year: 448
Review time: 2.3 months
期刊介绍: Chaos: An Interdisciplinary Journal of Nonlinear Science is a peer-reviewed journal devoted to increasing the understanding of nonlinear phenomena and describing the manifestations in a manner comprehensible to researchers from a broad spectrum of disciplines.