Title

Examination of Performance in Different Reservoir Architectures in Low Signal-to-Noise Environments

Alternative Title
低信噪比环境下不同储备池架构的性能比较研究
Name
Name (Pinyin)
XIE Qiaolin
Student ID
12133014
Degree Type
Master's
Degree Discipline
0701Z1 Business Intelligence and Big Data
Discipline Category / Professional Degree Category
07 Science
Supervisor
SANDRO CLAUDIO LERA
Supervisor's Affiliation
Institute of Risk Analysis, Prediction and Management
Thesis Defense Date
2023-05-19
Thesis Submission Date
2023-07-04
Degree-Granting Institution
Southern University of Science and Technology
Degree-Granting Location
Shenzhen
Abstract

    In recent years, Reservoir Computing (RC) has gained increasing popularity as a promising method for time-series prediction. RC methods are especially appealing for their high prediction accuracy, simple architecture, and ease of implementation. In particular, the Nonlinear Vector Autoregression (NVAR) model, also known as Next-Generation Reservoir Computing (NG-RC), delivers superior performance at relatively low computational cost and is expected to open a wide range of opportunities in temporal information processing. However, the promising results of many state-of-the-art methods are often obtained in relatively noise-free environments that do not reflect the realities we live in. To address this issue, this thesis evaluates the performance of NG-RC and RC models on time-series prediction in the presence of noise. The study contributes to the literature by highlighting the significance of noise in model evaluation, thereby strengthening implications for applicability.
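The NVAR idea summarized above can be illustrated with a minimal sketch. This is not the thesis's code: the delay depth `k`, the restriction to quadratic features, and the ridge penalty `alpha` are illustrative assumptions. The feature vector concatenates a constant, `k` delayed copies of the series, and their unique pairwise products, and the linear readout is trained by ridge (Tikhonov-regularized) regression.

```python
import numpy as np

def nvar_features(x, k=2):
    """Build NVAR features for a 1-D series: constant + k delayed states
    + their unique quadratic products. Row t predicts x[t+1] from
    x[t], x[t-1], ..., x[t-k+1]. (k and the quadratic order are assumed.)"""
    n = len(x)
    lin = np.column_stack([x[k - 1 - d : n - d] for d in range(k)])  # delayed copies
    quad = np.column_stack([lin[:, i] * lin[:, j]                    # unique products
                            for i in range(k) for j in range(i, k)])
    const = np.ones((lin.shape[0], 1))
    return np.hstack([const, lin, quad])

def fit_ridge(features, targets, alpha=1e-6):
    """Ridge least-squares readout: solve (F'F + aI) W = F'y."""
    A = features.T @ features + alpha * np.eye(features.shape[1])
    return np.linalg.solve(A, features.T @ targets)

# Toy usage: one-step-ahead prediction of a noiseless sine wave,
# which an order-2 linear recurrence reproduces exactly.
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t)
k = 2
Phi = nvar_features(x, k)      # rows correspond to times k-1 .. n-1
y = x[k:]                      # next-step targets
W = fit_ridge(Phi[:-1], y)     # last row has no target, drop it
pred = Phi[:-1] @ W
err = float(np.max(np.abs(pred - y)))
print(err)                     # near machine precision on this clean signal
```

Because the readout is a single linear regression over precomputed features, training amounts to one matrix solve, which is the source of NG-RC's low computational cost relative to training a recurrent network.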

    Specifically, the Echo State Network (ESN) and the NVAR model are first evaluated on the Mackey-Glass and Lorenz time series. The study then evaluates the two models on real-life financial data. Notably, the outperformance of NVAR on the benchmark time series does not persist on financial time series. Financial time series have low signal-to-noise ratios, which poses great challenges for prediction models; by contrast, the benchmark Lorenz and Mackey-Glass time series contain little noise. The study therefore hypothesizes that the discrepancy in model performance can be attributed to differences in signal-to-noise ratio. To further investigate the role of noise, the target models are evaluated on synthetic data in which the signal-to-noise ratio is varied intentionally.
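The synthetic-data experiment described above can be sketched as follows. This is a minimal illustration, not the thesis's setup: the integration scheme, step size, initial condition, and the additive white-Gaussian observation-noise model are all assumptions. The idea is to integrate the Lorenz-63 system and corrupt it with noise scaled to a chosen signal-to-noise ratio.

```python
import numpy as np

def lorenz_series(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz-63 system with a simple RK4 scheme
    (dt and the initial condition are illustrative choices)."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])
    s = np.array([1.0, 1.0, 1.0])
    out = np.empty((n_steps, 3))
    for i in range(n_steps):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        out[i] = s
    return out

def add_noise(signal, snr_db, rng):
    """Add white Gaussian observation noise at a target SNR in dB:
    noise power = signal power / 10**(SNR/10)."""
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)
    return signal + rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)

rng = np.random.default_rng(0)
clean = lorenz_series(5000)[:, 0]          # x-component of the Lorenz attractor
noisy = add_noise(clean, snr_db=10, rng=rng)
# empirical SNR of the corrupted series, in dB
emp = 10 * np.log10(np.mean(clean ** 2) / np.mean((noisy - clean) ** 2))
print(round(emp, 1))
```

Sweeping `snr_db` from high to low values and retraining each model at every level is one way to trace how a comparative advantage erodes as noise grows, which is the shape of the experiment the abstract describes.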

    The results reveal that the signal-to-noise ratio is a crucial dimension in evaluating model performance and applicability. Experiments demonstrate that the NVAR model does outperform the ESN on relatively noise-free time series, with higher forecasting accuracy and greater computational efficiency. However, the comparative advantage of the NVAR model decreases as noise increases and vanishes in low signal-to-noise environments such as finance. In reality, noise is ubiquitous, and its challenges should be formally investigated as an added dimension of model evaluation, so that progress in machine learning can be better assessed and applied to practical problems across fields.

Alternative Abstract

    In recent years, Reservoir Computing (RC) has become a favored method in time-series prediction owing to its simple architecture and ease of implementation. The Nonlinear Vector Autoregression (NVAR) model, as next-generation reservoir computing, offers excellent predictive performance at relatively low computational cost and shows broad promise for temporal information processing. However, many related studies evaluate models on noise-free or low-noise datasets that do not reflect real-world conditions, which limits their applicability in high-noise settings. This thesis therefore evaluates the predictive performance of reservoir computing and next-generation reservoir computing on noisy chaotic time series, complementing research on model applicability by emphasizing the importance of noise in model evaluation.

    Specifically, this study first evaluates the predictive performance of the classic reservoir computing architecture, the Echo State Network (ESN), and the NVAR model on the Lorenz time series, the Mackey-Glass time series, and real-world financial time series. The experiments show that the NVAR model performs excellently on the benchmark series, but its advantage does not persist in financial time-series prediction. Compared with the low-noise Lorenz and Mackey-Glass series, the low signal-to-noise ratio of financial time series poses a great challenge to prediction models. Under the hypothesis that the performance gap stems from differences in signal-to-noise ratio, the study further evaluates the effect of noise on predictive performance using synthetic data with adjustable signal-to-noise ratios.

    The results show that the signal-to-noise ratio is an important dimension in evaluating model performance and applicability. Experiments indicate that, relative to the ESN, the NVAR model indeed predicts better on relatively low-noise time series, with higher accuracy and greater computational efficiency. However, as the noise level increases, the NVAR model's relative advantage weakens and disappears in low signal-to-noise environments such as finance. Noise is ubiquitous in practice, so the challenges it poses should be treated as an important dimension of model evaluation, enabling better assessment of model applicability across domains.

Keywords
Other Keywords
Language
English
Training Category
Independent training
Year of Enrollment
2021
Year Degree Conferred
2023-06

Degree Evaluation Subcommittee
Mathematics
Chinese Library Classification
TP18
Data Source
Manual submission
Output Type
Thesis
Identifier
http://sustech.caswiz.com/handle/2SGJ60CL/545046
Collection
Business School_Department of Information Systems and Management Engineering
Recommended Citation
GB/T 7714
Xie QL. Examination of Performance in Different Reservoir Architectures in Low Signal-to-Noise Environments[D]. Shenzhen: Southern University of Science and Technology, 2023.
Files in This Item
File Name/Size | Document Type | Version Type | Access Type | License
12133014-谢巧琳-信息系统与管理 (3073KB) | Restricted access
