Authors: Andrei Velichko, Maksim Belyaev, Petr Boriskov
Journal: Chaos, Vol. 35, No. 10
DOI: 10.1063/5.0289352 (https://doi.org/10.1063/5.0289352)
Published: 2025-10-01 (Journal Article)
Impact factor: 3.2; JCR Q1, Mathematics, Applied
A novel approach for estimating largest Lyapunov exponents in one-dimensional chaotic time series using machine learning.
Understanding and quantifying chaos from data remains challenging. We present a data-driven method for estimating the largest Lyapunov exponent (LLE) from one-dimensional chaotic time series using machine learning. A predictor is trained to produce out-of-sample, multi-horizon forecasts; the LLE is then inferred from the exponential growth of the geometrically averaged forecast error across the horizon, which serves as a proxy for trajectory divergence. We validate the approach on four canonical 1D maps (logistic, sine, cubic, and Chebyshev), achieving R²pos > 0.99 against reference LLE curves with series as short as M = 450. Among baselines, k-nearest neighbors (KNN) yields the closest fits (the KNN regressor is comparable; random forest shows larger deviations). By design, the estimator targets positive exponents: in periodic/stable regimes, it returns values indistinguishable from zero. Noise robustness is assessed by adding zero-mean white measurement noise and summarizing performance against the average signal-to-noise ratio (SNR) over parameter sweeps: accuracy saturates for mean SNR ≳ 30 dB and collapses below ≈ 27 dB, a conservative sensor-level benchmark. The method is simple, computationally efficient, and model-agnostic, requiring only stationarity and the presence of a dominant positive exponent. It offers a practical route to LLE estimation in experimental settings where only scalar time-series measurements are available, with extensions to higher-dimensional and irregularly sampled data left for future work.
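The abstract's pipeline (train a predictor, collect multi-horizon out-of-sample errors, read the LLE off the exponential error growth) can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the authors' implementation: the logistic map at r = 4 (reference LLE = ln 2 ≈ 0.69) stands in for the data, a one-nearest-neighbor analog forecaster stands in for the trained predictor, and the LLE estimate is the slope of the log geometric-mean forecast error versus horizon.

```python
import numpy as np

def logistic_series(x0=0.3, r=4.0, n=2000):
    """Generate a scalar chaotic series from the logistic map x -> r*x*(1-x)."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x

def lle_from_forecast_error(series, n_train=1500, horizons=8):
    """Estimate the LLE as the growth rate of the geometric-mean forecast error.

    For each test point, forecast h steps ahead by finding the closest training
    point and replaying its successors (a crude analog predictor), then fit a
    line to log(geometric-mean error) vs horizon; the slope proxies the LLE.
    """
    train, test = series[:n_train], series[n_train:]
    log_err_sum = np.zeros(horizons)
    counts = np.zeros(horizons)
    for t in range(len(test) - horizons):
        # nearest training analog of the current test point
        j = np.argmin(np.abs(train[:n_train - horizons] - test[t]))
        for h in range(1, horizons + 1):
            err = abs(train[j + h] - test[t + h])
            if err > 1e-12:  # skip exact hits to keep log finite
                log_err_sum[h - 1] += np.log(err)
                counts[h - 1] += 1
    mean_log_err = log_err_sum / counts          # log of geometric-mean error
    h = np.arange(1, horizons + 1)
    slope = np.polyfit(h, mean_log_err, 1)[0]    # exponential growth rate
    return max(slope, 0.0)                       # estimator targets positive LLE

lam = lle_from_forecast_error(logistic_series())
print(f"estimated LLE ~ {lam:.2f} (logistic r=4 reference: ln 2 ~ 0.69)")
```

The slope underestimates slightly at long horizons because forecast error saturates once it reaches the attractor's diameter; restricting the fit to short horizons, as done here, mitigates this.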
Journal introduction:
Chaos: An Interdisciplinary Journal of Nonlinear Science is a peer-reviewed journal devoted to increasing the understanding of nonlinear phenomena and describing their manifestations in a manner comprehensible to researchers from a broad spectrum of disciplines.